
Volume 6 : Issue 2. Dec. 2008 Listed in Ulrich's International Periodicals Directory, USA

ISSN No. : 0974-5513

Institute of Management Studies

Dehradun

ims

Information Technology

Research Papers

a bi-annual Journal

An Improved form of LSB Based Steganography
Mamta Juneja, Neeru

Intricacies of Fault-Tolerant Scheduling for Precedence and Deadline Constrained Tasks
Rakesh Kumar Bansal, Kawaljeet Singh and Savina Bansal

Analysis of schemes between Channel Assignment for GSM Networks
Sudan Jha, Bikram Keshari Ratha

Image Segmentation: Objective Evaluation and a New Approach
Silky Dhawan, Akshay Girdhar

Functional Role of Remote Sensing in Studies of Land-Use & Land-Cover Change
Aditya Kumar Gupta

Impact of Internal Dynamics on Quality of Open Source Software
Kumud K. Arora

Reducing Overheads in Non-Blocking Three Phase Commit Protocol
Shishir Kumar, Sonali Barvey

Effect of Exponential Temperature Variation on Vibration of Orthotropic Rectangular Plate with Linearly Thickness Variation in Both Directions
Dr. A.K. Gupta, Subodh Kumar

Study of performance issues of TCP Reno and Vegas
Dinesh C. Dobhal, Dr. D. Pant and Kumar Manoj

Mobile Ad-hoc Network Protocols (A comparative analysis of some selected protocols)
Ms. Garima Verma

Design and Analysis of New Searching Algorithm
Dr. Vinod Kumar, Dr. S.C. Agarwal and Sanjeev Kumar Sharma


Copyright © 2008 Institute of Management Studies, Dehradun.

All rights reserved.

No part of this publication may be reproduced or transmitted in any form or by any means, or stored in any retrieval system of any nature without prior written permission. Application for permission for other use of copyright material including permission to reproduce extracts in other published works shall be made to the publishers. Full acknowledgment of author, publishers and source must be given.

The Editorial Board invites original, unpublished contributions in the form of articles, case studies, research papers, and book reviews.

The views expressed in the articles are those of the contributors and not necessarily of the Editorial Board or the Institute.

Although every care has been taken to avoid errors or omissions, this publication is being sold on the condition and understanding that information given in this journal is merely for reference and must not be taken as having authority of or binding in any way on the authors, editors, publishers and sellers who do not owe any responsibility for any damage or loss to any person, a purchaser of this publication or not, for the result of any action taken on the basis of this work. All disputes are subject to Dehradun jurisdiction only.

Pragyaan : Information Technology

Volume 6 : Issue 2. December 2008

Patron: Shri Naveen Agarwal, Chairman, IMS Society, Dehradun

Chief Editor: Dr Pawan K Aggarwal, Director, Institute of Management Studies, Dehradun

Editor: Monika Chauhan, Senior Lecturer, IMS, Dehradun

Editorial Advisory Board

Dr. D P Goyal, Director, Institute of Management Education, Sahibabad

Dr. Bansidhar Majhi, Head, CSE, National Institute of Technology, Rourkela

Dr. R K Sharma, Dean, Computer Science, Thapar University, Patiala

Ganesh Sivaraman, Product Marketing Manager, Mobile Software & Marketing, Nokia

Dr. Shishir Kumar, Prof. & Head, CSE, Jaypee Institute of Engineering and Technology, Guna

Dr. Sameer Saran, Scientist, Deptt. of Geoinformatics, Indian Institute of Remote Sensing, Dehradun

Dr. Hardeep Singh, Prof. & Head, Computer Application, Guru Nanak Dev University, Amritsar

Dinesh Tashildar, Asstt. Manager, Network & System, Cognizant Technologies Pvt. Ltd.

Dr. G P Sahu, Asstt. Prof., School of Management Studies, Moti Lal Nehru National Institute of Technology, Allahabad

Prof. I Husain, Deptt. of Mathematics, Jaypee Institute of Engineering & Technology, Guna

Dr. Rajendra Kumar Gartia, Deptt. of Mathematics, Sambalpur University, Orissa

Prof. Rajiv Saxena, Head, ECE, Jaypee Institute of Engineering & Technology, Guna

Dr. Durgesh Pant, Head, Deptt. of Comp. Applications, Kumaon University, Nainital

Dr. D S Hooda, Head, Deptt. of Mathematics, Jaypee Institute of Engineering & Technology, Guna

Dr. Nipur, Head, Deptt. of Comp. Applications, Kanya Gurukul Mahavidyalaya, Dehradun

Dr. Shishir Kumar, Head, CSE, Jaypee Institute of Engineering & Technology, Guna

Dr. R K Sharma, Dean, Comp. Science, Thapar University, Patiala

Dr. Sameer Saran, Scientist, Indian Institute of Remote Sensing, Dehradun

Dr. Saurabh Pal, Deptt. of Comp. Applications, VBS Purvanchal University, Jaunpur

Panel of Referees

S Dimri, Head, Deptt. of Comp. Applications, GEIT University, Dehradun

Dr. Vipin Tyagi, Asstt. Prof., CSE, Jaypee Institute of Engineering & Technology, Guna

Dr. Shailendra Mishra, Head, IT, Dehradun Institute of Technology, Dehradun

Dr. K C Joshi, Deptt. of IT & Management, MJP Rohilkhand University, Bareilly

Prof. R Sukesh Kumar, Deptt. of CSE, Birla Institute of Technology, Mesra

Prof. K R Pardasani, Head, Mathematics & Comp. Applications, Maulana Azad National Institute of Technology, Bhopal

Prof. P K Panigrahi, Indian Institute of Science Research, Kolkata (Constituent of IIT, Kharagpur)

Prof. R C Chakraborty, Former Director, DRDO, DTRL

Dr. Somnath Tripathi, Deptt. of CSE, IIT Patna

Dr. Ravinder Singh, Deptt. of CSE, MJP Rohilkhand University, Bareilly



From the Chief Editor

We take immense pleasure in presenting the December 2008 issue of Pragyaan: Information Technology.

Pragyaan: Information Technology continues to gain appreciation and accolades, as it provides a platform that stimulates and guides the intellectual quests of IT scholars. Beginning December 2008, we have incorporated the following major qualitative changes in our journal:

1. Award of ISSN No. 0974- 5513 for our publication from NISCAIR, New Delhi.

2. Listing with prestigious Ulrich's International Periodicals Directory, USA.

3. Empanelment of external referees comprising eminent scholars.

This issue of Pragyaan: Information Technology contains articles pertaining to IT applications in the fields of Mobile Computing, Image Processing, Computer Networks, Remote Sensing and Software development. We feel sure that these articles will provide new insights into their respective areas of research.

We would like to express our gratitude to our valued contributors for their scholarly contributions to the journal. Appreciation is due to the Editorial Advisory Board, the panel of Referees and the Management of the institute for their constant guidance and support. Many faculty members of the Institute from Faculty of IT provided the necessary editorial support that resulted in enhanced reader friendliness of many of the articles, and Ms. Monika Chauhan diligently prepared the manuscript for the press. We are extremely thankful to all of them. We are also thankful to those who facilitated quality printing of this journal.

We continue our endeavor to harness the intellectual capital of scholars and practitioners of IT and to bring our readers value-added products.

We have tried our best to put together the scholarly contributions coherently. Suggestions from our valued readers for adding value to our Journal are solicited.

Dr. Pawan K Aggarwal, Director, IMS, Dehradun

Pragyaan : Information Technology

Volume 6 : Issue 2. December 2008

An Improved form of LSB Based Steganography 1
Mamta Juneja, Neeru

Intricacies of Fault-Tolerant Scheduling for Precedence and Deadline Constrained Tasks 5
Rakesh Kumar Bansal, Kawaljeet Singh and Savina Bansal

Analysis of schemes between Channel Assignment for GSM Networks 11
Sudan Jha, Bikram Keshari Ratha

Image Segmentation: Objective Evaluation and a New Approach 16
Silky Dhawan, Akshay Girdhar

Functional Role of Remote Sensing in Studies of Land-Use & Land-Cover Change 22
Aditya Kumar Gupta

Impact of Internal Dynamics on Quality of Open Source Software 27
Kumud K. Arora

Reducing Overheads in Non-Blocking Three Phase Commit Protocol 33
Shishir Kumar, Sonali Barvey

Effect of Exponential Temperature Variation on Vibration of Orthotropic Rectangular Plate with Linearly Thickness Variation in Both Directions 39
Dr. A.K. Gupta, Subodh Kumar

Study of performance issues of TCP Reno and Vegas 46
Dinesh C. Dobhal, Dr. D. Pant and Kumar Manoj

Mobile Ad-hoc Network Protocols (A comparative analysis of some selected protocols) 51
Ms. Garima Verma

Design and Analysis of New Searching Algorithm 58
Dr. Vinod Kumar, Dr. S.C. Agarwal and Sanjeev Kumar Sharma

CONTENTS

Research Papers


Mamta Juneja*, Neeru**

*Asstt. Professor, CSE Deptt., RBIEBT, Sahauran
**Student (M.Tech), Instrumentation, PU, Chandigarh

An Improved form of LSB Based Steganography

ABSTRACT

Steganography is the process of hiding a message in another cover medium. Various techniques have been proposed and implemented. Image-based steganography uses images as the cover media. LSB is the simplest technique used in this field. An improvement over LSB is random pixel manipulation, which uses a stego-key to randomize the data to be hidden. This paper proposes a new technique that takes advantage of the 24 bits in each pixel of an RGB image: the two least significant bits of one of the channels are used to indicate the existence of data in the other two channels. The paper then focuses on security and capacity measures to evaluate the proposed technique.

Keywords: Steganography, LSB, RGB Bitmaps, Pixel Indicator Algorithm.

Introduction

Steganography is the art and science of hiding information by embedding messages within other, seemingly harmless messages. Steganography (literally meaning 'covered writing') dates back to ancient Greece, where common practices included etching messages in wooden tablets and covering them with wax, or tattooing a shaved messenger's head, letting his hair grow back, and then shaving it again when he arrived at his contact point. Different steganographic techniques employ invisible inks, microdots, character arrangement, digital signatures, covert channels, and spread spectrum communications.

Image-steganography-based techniques require two files: the cover medium and the data to be hidden [3, 7, 8]. When combined, the cover image and the embedded message make a stego-image [3]. One of the simplest techniques is LSB, where the least significant bit of each pixel is replaced by one bit of the secret until the secret message finishes [2, 4, 5]. The risk of information being uncovered is relatively high, as such an approach is susceptible to all 'sequential scanning' based techniques [1].
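The sequential LSB replacement described above can be sketched in a few lines of Python; this is a minimal illustration with hypothetical helper names, not the paper's implementation:

```python
def embed_lsb(pixels, bits):
    """Replace the LSB of each byte, in scan order, with one message bit."""
    out = list(pixels)
    for i, b in enumerate(bits):        # stops when the secret finishes
        out[i] = (out[i] & 0xFE) | b    # clear the LSB, then set it to b
    return out

def extract_lsb(pixels, n_bits):
    """Read the LSBs back in the same scan order."""
    return [p & 1 for p in pixels[:n_bits]]

cover = [200, 13, 77, 54, 91, 120, 33, 6]
secret = [1, 0, 1, 1]
stego = embed_lsb(cover, secret)
assert extract_lsb(stego, len(secret)) == secret
# Every byte changes by at most 1, but the bits sit in a fixed, predictable
# scan order -- which is why sequential-scanning steganalysis can recover them.
```

Each cover byte is perturbed by at most one intensity level, yet the embedding order is fully predictable, which illustrates the weakness noted above.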

The random pixel manipulation technique attempts to overcome this problem: the pixels used to hide data are chosen in a random fashion based on a stego-key. However, this key must be shared between the communicating entities, and some synchronization between them is required when changing the key [1], which puts more overhead on the system. Another technique is the Stego Color Cycle, which uses RGB images and hides the data in the different channels; that is, it keeps cycling the hidden data between the Red, Green and Blue channels. The main problem with this technique is that hiding the data in the channels is done in a systematic way, so discovering the data in the first few pixels makes discovery of the whole technique easy. StegoPRNG is another technique that uses RGB images. Here, a PRNG is used to select some pixels of the cover image, and the secret is then hidden in the Blue channel of the selected pixels. Again, this technique has the problem of managing the key, and a capacity problem, since it uses only the Blue channel out of the three available channels [6]. Our suggested technique tries to solve the problems of the previous two techniques by using one of the channels as an indicator of data existence in the other two channels. The indicator is set randomly.

Requirements

Designing any algorithm that is used for data hiding should take into consideration the following aspects:

• Perceptual Transparency: the hiding process should be performed in a way that does not raise any suspicion in an eavesdropper.

• Capacity: the amount of data that can be hidden without perceptibly changing the medium.

• Robustness.

Pixel Indicator Technique

In this technique, RGB images are used. The technique uses the two least significant bits of one of the channels (Red, Green or Blue) as an indicator of data existence in the other two channels. The indicator bits are set randomly in the channel. The following table shows the relation between the indicator and the hidden data inside the other channels:

Table 1: Meaning of indicator values.

Indicator | Ch 1                  | Ch 2
00        | No hidden data        | No hidden data
01        | 2 bits of hidden data | No hidden data
10        | No hidden data        | 2 bits of hidden data
11        | 2 bits of hidden data | 2 bits of hidden data

The indicators are chosen in sequence: in the first pixel, Red is the indicator, while Green is channel 1 and Blue is channel 2; in the second pixel, Green is the indicator, while Red is channel 1 and Blue is channel 2; in the third pixel, Blue is the indicator, while Red is channel 1 and Green is channel 2. The sequence of the algorithm is flowcharted in Figure 1. The recovery algorithm stops once the length of the secret, which is stored in the image, is reached.
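Under the assumptions above, the recovery loop of Figure 2 can be sketched as follows. Pixels are represented as (R, G, B) byte tuples and the indicator rotates Red, Green, Blue per pixel; the function and variable names here are ours, not the paper's:

```python
# (indicator, channel 1, channel 2) index triples for pixels 1, 2, 3, ...
ROLES = [(0, 1, 2), (1, 0, 2), (2, 0, 1)]

def recover(pixels, n_bits):
    """Extract n_bits hidden bits, steered by the indicator's 2 LSBs."""
    bits, remaining = [], n_bits
    for i, px in enumerate(pixels):
        if remaining <= 0:
            break
        ind, c1, c2 = ROLES[i % 3]
        flag = px[ind] & 0b11                        # 2 LSBs of the indicator
        if flag in (0b01, 0b11):                     # data in channel 1
            bits += [(px[c1] >> 1) & 1, px[c1] & 1]
            remaining -= 2
        if flag in (0b10, 0b11):                     # data in channel 2
            bits += [(px[c2] >> 1) & 1, px[c2] & 1]
            remaining -= 2
        # flag == 0b00: no hidden data, just move to the next pixel
    return bits[:n_bits]

# One pixel whose Red indicator reads '11': both Green and Blue carry 2 bits.
assert recover([(0b11, 0b10, 0b01)], 4) == [1, 0, 0, 1]
```

A '11' indicator thus yields 4 bits from a single pixel, while '00' yields none, matching Table 1.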

Figure 1: Hiding Process flowchart

Figure 2: Recovery Process flowchart. Starting from the second row of the cover image, the length of the hidden message (stored in the first 8 bytes of the first row) is read into a variable, remaining. While remaining > 0, the 2 LSBs of the indicator channel of the current pixel are checked: if equal to '00', no data is extracted and the next pixel is examined; if '01', 2 bits of data are extracted from the 2 LSBs of channel 1 (remaining = remaining - 2); if '10', 2 bits are extracted from the 2 LSBs of channel 2 (remaining = remaining - 2); if '11', 2 bits are extracted from channel 1 and 2 bits from channel 2 (remaining = remaining - 4). The process ends when remaining reaches 0.




Modeling and Simulation

This section of the paper describes the method that was used to test the algorithm and the results.

A BMP image of size 512 × 384 was used to hide a text message of 11,733 characters (93,864 bits). The test was performed 10 times. A frequency-analysis histogram was computed to check the change in the cover image, and the number of pixels used in each run to hide the data was recorded.
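The frequency-analysis check can be sketched as follows: build per-intensity histograms of one channel before and after LSB-only changes and compare them. Synthetic random data stands in for the 512 × 384 cover here, and the pixel count and threshold are illustrative assumptions:

```python
import random

random.seed(1)
W, H = 512, 384
channel = [random.randrange(256) for _ in range(W * H)]   # one colour channel

# Flip only the LSBs of a prefix of "used" pixels, as LSB embedding would.
used = 46_932
stego = [(v & 0xFE) | random.getrandbits(1) for v in channel[:used]] \
        + channel[used:]

def hist(values):
    counts = [0] * 256
    for v in values:
        counts[v] += 1
    return counts

h_before, h_after = hist(channel), hist(stego)
max_shift = max(abs(a - b) for a, b in zip(h_before, h_after))
# LSB flips only move counts between adjacent intensity bins (2k <-> 2k+1),
# so each bin shifts by a tiny amount relative to ~196,608 samples.
assert sum(h_before) == sum(h_after) == W * H
assert max_shift < 200
```

Because every change is confined to adjacent even/odd intensity bins, the histogram envelope stays visually unchanged, which is what the per-channel comparisons below examine.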

Figure 3 shows the original picture before the hiding process; Figure 4 shows the picture after the hiding process algorithm. Figures 5 and 6 show the histogram of the Red channel pre and post the hiding process. Figures 7 and 8 show the histogram of the Green channel pre and post the hiding process. Figures 9 and 10 show the histogram of the Blue channel pre and post the hiding process.

Figure 3: BMP image of size 512X384 that was used to hide the data.

Figure 4: The BMP image after the hiding process was performed to hide a text message of 11,733 characters length.

Figure 5: Histogram of the red channel from the original image.

Figure 6: Histogram of the red channel in the modified image.

Comparison Conclusions

Comparing the two histograms of the red channel before and after the modification, the change is minimal and cannot be detected by the naked eye.

Figure 7: Histogram of the green channel in the original image

Institute of Management Studies, Dehradun


Figure 8: Histogram of the green channel in the modified image

Figure 9: Histogram of blue channel in the original image

Figure 10: Histogram of the Blue channel of the modified image

Again, comparing the two histograms of the green channel before and after the modification, the change is minimal and cannot be detected by the naked eye.

However, changes can be detected in the histogram of the blue channel, which indicates that most of the changes were made in the blue channel. This is just a sample of one run; since the channel choice in our technique is random, another run might give a better distribution among the three channels.

Table 2 shows that, of the image's 196,608 pixels, only about a fifth were used to hide the text.

Table 2: Number of pixels required to hide the data in each run




ABSTRACT

Fault tolerance is a crucial requirement for real time systems working on time critical applications. The issue of real time fault tolerant scheduling, which guarantees timely completion of a set of tasks in spite of failures in the system, has largely been studied assuming no dependencies among the tasks. However, the well-known primary-backup approach that works well for independent tasks may not be directly applicable to precedence constrained tasks, as shown in this paper through a recently reported work. Understanding the intricacies of fault-tolerant scheduling for precedence constrained tasks shall be useful for the better design of scheduling algorithms in this area.

Key words: Fault-tolerant scheduling, real-time, algorithm, reliability, precedence constrained tasks

Introduction

A real time system is required to complete the job assigned and deliver results within a specified time frame called the deadline. When a real time job with a specified deadline arrives in the system, scheduling involves ordering the tasks of the job and allocating resources in such a way that certain performance parameters are met or optimized. Reliability is a relatively more vital requirement for a real time scheduler, which generally pertains to mission critical applications in avionics, medical treatment, defence, and hazardous industrial applications. As a result, the issue of fault tolerant scheduling, which guarantees the timely completion of tasks in spite of the failure of one or more processors in the system, is of great significance. This issue caught the attention of researchers in the late 1980s and early 1990s [3][13][14][17]; it was shown that the timely fault tolerant (TFT) scheduling problem is hard to solve, even in the simple case when there are more than two processors and the tasks are independent and share a common deadline [18].

Intricacies of Fault-Tolerant Scheduling for Precedence and Deadline Constrained Tasks

Rakesh Kumar Bansal*, Kawaljeet Singh**, Savina Bansal**

*Deptt. of ECE, GZS College of Engg. & Tech., Bathinda (Pb) - 151 001, India
**University Computer Centre, Punjabi University, Patiala (Pb), India
[email protected], [email protected], [email protected]

The most commonly employed fault-tolerant technique is the Primary/Backup approach [1][13][14][17][19]. This scheme works fine as long as the tasks are assumed independent: scheduling the backup copy of a task on a processor other than the one where its primary is scheduled makes the scheduler 1-fault tolerant. The primary and backup copies may run concurrently [2] or sequentially (the start time of the backup copy is greater than the finish time of its primary copy) on two different processors [7][17][19]. However, if the job to be scheduled comprises precedence constrained sub-tasks (a 'child' task cannot execute before its 'parent' task finishes), fault tolerance needs to be considered afresh.

Recently, some works with generalized computation and communication costs have been reported for arbitrary task graphs [9, 10], giving a quite complex O(v^4) scheduling algorithm; however, real time deadline constraints have not been considered in their work. In a more generalized recent work [19], a reliability-aware 1-FT algorithm is reported for heterogeneous platforms and its performance evaluated in comparison to other algorithms [5][18] for precedence and deadline constrained tasks. The reported e-FRD algorithm works on the traditional primary-backup approach with backup overloading [1] to further improve the scheduling efficiency.


However, while analyzing and studying this algorithm, it was observed that in the presence of precedence among tasks, the fault tolerance issue becomes more complex than for independent tasks. In this work, we describe an example task graph instance that is scheduled by the e-FRD fault tolerant algorithm but whose schedule is actually not feasible.

The article is organized as follows: the next section deals with the system and task modelling used for the scheduler; Section 3 gives the basics of the e-FRD algorithm for a better understanding of the work; Section 4 describes the example case for which this algorithm fails to be 1-fault tolerant; Section 5 concludes the work with some final remarks and discussion.

System And Task Modelling

A real time job with dependent tasks can be modeled by a Directed Acyclic Graph (DAG) T = (V, E), where V = {v_1, v_2, ..., v_n} is a set of n real time tasks, assumed to be non-preemptive, and E is a set of weighted and directed edges representing precedence constraints and communication among tasks (Fig 1). An edge (v_i, v_j) indicates a message transmitted from task v_i to task v_j. When any random processor in the system fails, it takes a certain amount of time, denoted δ, to detect and handle the fault. To tolerate permanent (or transient) faults in one processor, a primary-backup (PB) technique is applied that uses two identical copies, primary (v_i^P) and backup (v_i^B), of any task v_i and executes them sequentially on two different processors.

Fig 1: A typical DAG representing the task model; each node v_i is labeled ((w_i1, w_i2, w_i3), d_i), e.g. ((6, 18, 8), 72)

The heterogeneous multiprocessor computing system consists of a set P = {p_1, p_2, ..., p_m} of m fully-connected heterogeneous processors. Processors communicate through message passing, and the communication time/cost between two tasks assigned to the same processor is assumed to be zero. Computational heterogeneity is modeled by a function W : V x P -> Z+, which represents the execution time of each task on each processor in the system. Thus, w_ij denotes the execution time of task v_i on processor p_j. Links are assumed to be homogeneous and reliable for the sake of explicit and specific comparison.

Given a task v_i ∈ V, let d_i, ST(v_i^P) and FT(v_i^P) denote the deadline (the maximum time by which the task should finish its execution) and the scheduled start and finish times of v_i's primary copy, whereas ST(v_i^B) and FT(v_i^B) represent the same for v_i's backup copy, respectively. p(v_i) denotes the processor to which v_i is allocated. These parameters are subject to the constraint ST(v_i^P) ≤ d_i − w_ij, where p(v_i^P) = p_j and v_i ∈ V (d_i^P = d_i^B = d_i here). Further, let X be an m by n binary matrix corresponding to a schedule, in which the primary copies of the n tasks are assigned to the m processors. Element x_ij equals 1 if and only if v_i's primary copy has been assigned to processor p_j; otherwise x_ij = 0. Likewise, let X^B denote an m by n binary allocation matrix of the backup copies, in which an element x_ij^B is 1 if and only if the backup copy of v_i has been assigned to p_j; otherwise x_ij^B equals 0.
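This task model can be sketched minimally in code; the Task class and latest_start helper below are our hypothetical names, not the paper's, and encode only the node label ((w_i1, ..., w_im), d_i) and the constraint ST(v_i^P) ≤ d_i − w_ij.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Task:
    """A DAG node v_i: per-processor execution times (w_i1, ..., w_im)
    and deadline d_i, matching the node label of Fig 1."""
    w: tuple
    deadline: int

def latest_start(task, j):
    """Latest feasible start time of the primary copy on processor p_j,
    from the constraint ST(v_i^P) <= d_i - w_ij."""
    return task.deadline - task.w[j]

# The node shown in Fig 1 is labeled ((6, 18, 8), 72):
v1 = Task(w=(6, 18, 8), deadline=72)
```

For the Fig 1 node, the primary copy may start no later than time 66, 54, or 64 on processors p_1, p_2, or p_3, respectively.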

Reliability Overview

Under the assumption of independent failures of processors with a constant failure rate, the reliability of a processor p_i in a time interval t is given by exp(−λ_i t), where λ_i (1 ≤ i ≤ m) is p_i's failure rate in a vector of failure rates Λ = (λ_1, λ_2, ..., λ_m), with m being the number of processors in the system. The state of the system is represented by a random variable K which takes values in {0, 1, 2, ..., m}. More precisely, K = 0 means that no processor permanently fails, and K = i (1 ≤ i ≤ m) signifies that the i-th processor encounters a permanent failure. The probability for K is determined by equation (1), where τ_i is the schedule length of processor p_i:

Pr[K = k] = ∏_{i=1}^{m} exp(−λ_i τ_i), for k = 0
Pr[K = k] = [1 − exp(−λ_k τ_k)] × ∏_{i=1, i≠k}^{m} exp(−λ_i τ_i), otherwise    (1)
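Equation (1) translates directly into code. The sketch below is illustrative only; pr_K and its argument names are our assumptions.

```python
import math

def pr_K(k, lam, tau):
    """Pr[K = k] per equation (1): k = 0 means no permanent failure,
    k >= 1 means exactly the k-th processor fails within its schedule
    length tau[k - 1]."""
    m = len(lam)
    if k == 0:
        return math.prod(math.exp(-l * t) for l, t in zip(lam, tau))
    fail_k = 1.0 - math.exp(-lam[k - 1] * tau[k - 1])
    others_survive = math.prod(math.exp(-lam[i] * tau[i])
                               for i in range(m) if i != k - 1)
    return fail_k * others_survive
```

Note that summing Pr[K = k] over k = 0, ..., m gives a value just short of 1; the residue is exactly the probability of multiple simultaneous failures, which the at-most-one-failure assumption excludes.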


Reliability heterogeneity is implied in the variation of the computation time and failure rate of different processors. Let R(Λ, X, X^B, T) denote the system reliability for a given schedule (X, X^B), a set Λ of processor failure rates, and a job T. The system reliability equals the probability that all tasks can be successfully completed even in the presence of one processor's hardware/software faults. Under the assumption that no more than one processor permanently fails in the current system, that is, Σ_{i=0}^{m} Pr(K = i) = 1, two kinds of reliability need to be derived, namely: (1) R_0(Λ, X, X^B, T), the reliability when every processor is operational, and (2) R_k(Λ, X, X^B, T), the reliability when exactly the k-th processor fails:

R(Λ, X, X^B, T) = Pr(K = 0) × R_0(Λ, X, X^B, T) + Σ_{k=1}^{m} [Pr(K = k) × R_k(Λ, X, X^B, T)]    (2)

R_0(Λ, X, T) = ∏_{j=1}^{m} ∏_{i=1}^{n} exp(−λ_j x_ij w_ij)    (3)

R_k(Λ, X, X^B, T) = [∏_{j=1, j≠k}^{m} ∏_{i=1}^{n} exp(−λ_j x_ij w_ij)] × [∏_{j=1, j≠k}^{m} ∏_{i=1}^{n} exp(−λ_j x_ik x_ij^B w_ij)], where 1 ≤ k ≤ m    (4)
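Equations (2)-(4) can be combined in a short sketch. The flat lists assign_p/assign_b (processor index of each task's primary and backup copy) are a hypothetical encoding of the matrices X and X^B, and all function names below are ours, not the paper's.

```python
import math

def system_reliability(lam, tau, assign_p, assign_b, W):
    """Overall 1-FT system reliability per equations (2)-(4).

    lam[j]: failure rate of processor j; tau[j]: schedule length of j;
    assign_p[i] / assign_b[i]: processor index of task i's primary / backup;
    W[i][j]: execution time w_ij of task i on processor j.
    """
    m = len(lam)

    def surv(j, t):
        # Probability that processor j stays operational for time t.
        return math.exp(-lam[j] * t)

    def r_k(k):
        # Eq. (3) when k == -1 (no failure), eq. (4) when processor k failed.
        r = 1.0
        for i, j in enumerate(assign_p):
            if j != k:                        # primaries on fault-free processors
                r *= surv(j, W[i][j])
        for i, jb in enumerate(assign_b):
            if assign_p[i] == k:              # backups of the failed primaries
                r *= surv(jb, W[i][jb])       # jb != k in any 1-TFT schedule
        return r

    def pr_K(k):
        # Eq. (1): probability of no failure (k == -1) or of exactly k failing.
        if k == -1:
            return math.prod(surv(j, tau[j]) for j in range(m))
        return (1.0 - surv(k, tau[k])) * math.prod(
            surv(j, tau[j]) for j in range(m) if j != k)

    # Eq. (2): weight each conditional reliability by its failure probability.
    return sum(pr_K(k) * r_k(k) for k in range(-1, m))
```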

As the communication links are assumed fault-free, the term R_k(Λ, X, X^B, T), 0 ≤ k ≤ m, represents the system reliability in case of the k-th processor's failure; otherwise it would be the product of the processor reliability and the link reliability, as derived in [19].

The expression within the first pair of brackets in equation (4) represents the probability that the tasks whose primary copies are on fault-free processors are operational during the course of execution. Similarly, the expression in the second pair of brackets is the probability that the backup copies of tasks, whose primary copies reside on the failed processor, are operational during the execution of these backup copies.

Overview of e-FRD Algorithm

In the background of the discussion in Section 2, we now provide a brief overview of the e-FRD algorithm [19] for the sake of completeness. The e-FRD algorithm works in two parts: first it schedules the primary copies of the tasks in the DAG and then their backup copies, the tasks being initially sorted in non-decreasing order of their deadlines. The candidate task is allocated to the processor giving maximum system reliability. So, in the first part, all the primary copies, if scheduled (i.e. with deadline constraints met), will tend to maximize the overall reliability of the system. The start and finish times of tasks are calculated keeping in view the precedence and deadline constraints, which allow a task to start execution only after all of its predecessors have executed and communicated their messages (if any) to the candidate task, and the task must be able to fulfill its specified deadline, i.e. FT(v_i^P) or FT(v_i^B) ≤ d_i. In the second phase, the backup copies of the tasks are scheduled, though with some restrictions on the choice of processors, so as to generate a 1-fault-tolerant schedule. Candidate tasks are chosen from the sorted queue (as in part 1) and scheduled on processors chosen as per the following propositions.

Proposition 1: A schedule is 1-TFT iff ∀v_i ∈ V: (p(v_i^P) ≠ p(v_i^B)) ∧ (ST(v_i^B) ≥ FT(v_i^P) + δ) ∧ (FT(v_i^B) ≤ d_i). (Proof in [19])

Proposition 2: A backup copy v_i^B can overlap with other backup copies on the same processor if their primary copies are allocated to different processors. (Proof in [19])


Proposition 3: If v_j is schedule-preceding v_i (i.e. ST(v_i) ≥ FT(v_j)), then v_i^B and v_j^P cannot be allocated to the same processor. (Proof in [19])

The pseudo code of the algorithm is given in Fig 2.

eFRD algorithm
Step 1:
1. Sort the tasks in the DAG in non-decreasing order of their deadlines, taking into consideration the precedence constraints, and put them in an ordered list OL.
2. for each task v_i in OL, following the order:
3.     schedule its primary copy on a processor honoring the deadline constraint and giving maximum reliability to the overall system;
4.     if no proper processor is available then return (FAIL);
5.     update processor and link information;
   end for
Step 2:
6. for each task v_i in OL, following the order:
7.     schedule its backup copy on a feasible processor that honors the deadline and gives maximum system reliability;
8.     if no proper processor is available then return (FAIL);
9.     update information on processors and links;
   end for
10. return (SUCCEED)

Fig 2: Pseudo code for the e-FRD algorithm
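The two-phase structure of this pseudo code can be sketched as follows. This is a toy illustration, not the full e-FRD algorithm: the feasible and reliability_gain callables are hypothetical stand-ins for the deadline/precedence checks and the reliability computation of the steps above.

```python
def efrd_sketch(tasks, m, feasible, reliability_gain):
    """Toy sketch of the eFRD two-phase loop.

    `tasks` is OL, already sorted by non-decreasing deadline.
    `feasible(task, proc, copy)` and `reliability_gain(task, proc)` are
    caller-supplied stand-ins for the deadline/precedence checks and the
    reliability criterion used when picking a processor.
    """
    schedule = {}
    for copy in ("primary", "backup"):       # Step 1, then Step 2
        for t in tasks:                      # follow the order of OL
            cands = [p for p in range(m) if feasible(t, p, copy)]
            if copy == "backup":
                # 1-FT restriction: the backup must avoid its primary's processor.
                cands = [p for p in cands if p != schedule[(t, "primary")]]
            if not cands:
                return None                  # FAIL
            schedule[(t, copy)] = max(cands, key=lambda p: reliability_gain(t, p))
    return schedule                          # SUCCEED
```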

Deviation from Fault Tolerance

In this section, we take up an example case study to describe the situation when the given algorithm may fail to generate a feasible schedule with a processor failure.

An Example Case Study:

The DAG under consideration is shown in Fig 3, with n = 6, m = 3, and the processor failure rates λ_1, λ_2, and λ_3 as given in Fig 3. The scheduling order for the primary and backup copies is <1, 2, 3, 4, 5, 6>. In the first step, the primary copies of all six tasks get scheduled; in the second phase, all the backup copies are scheduled on the processors resulting in maximum overall reliability of the system (Fig 4).

Fig 3: Example DAG for the case study, with each node v_i labeled ((w_i1, w_i2, w_i3), d_i)


Tables 1 and 2 provide the schedule details for better understanding. It may be seen from Table 2 that in case processor 1 or processor 3 fails, the algorithm is still able to generate a feasible schedule using backup copies, although with an increased schedule length (= 203). However, in case P2 fails, some of the tasks shall not be able to execute as per the schedule generated by the algorithm, as explained in Table 2.

Now, in spite of complying with all the above propositions and theorems, the schedule is still not 1-FT. The reason is that these propositions have been designed keeping in view the independent nature of tasks.

Fig 4: Schedule generated by e-FRD algorithm for example DAG of Fig 3


Page 13: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Institute of Management Studies, Dehradun

Reliability heterogeneity is implied in the variation of computation time and failure rate of different

Bprocessors. Let R(? , X, X , T ) denote the system B reliability for a given schedule X and X , a set ? of

processors’ failure rates, and a job T. The system reliability equals the probability that all tasks can be successfully completed even in the presence of one processor’s hardware/software faults. Under the assumption that no more than one processor permanently fails in the current system, that is,

∑=

==m

i

ik0

1)Pr(

),,,(0 TXXR BΛth

, two kinds of reliabilities need to be

derived, namely: (1), the reliability when every processor is operational , and (2) the reliability when exactly the k processor fails is expressed as : .),,,(k TXXR BΛ

+Λ×==Λ BB TXXRKTXXR 0 ),,,()0Pr(),,,(

∑=

Λ×=m

k

Bk TXXRkK1

)],,,,()[Pr( (2)

(4)

(3)

[ ]

[ ] 1 where)(exp[

)exp()exp(

)exp()exp(),,,(

)exp(),,(

,1 1

,1 1

,1 1,1 1

1 1

0

mk xxxw

wxxwx

wxxwxTXXR

wxTXR

m

kjj

n

i

Bijikijijj

m

kjj

n

iij

Bijikjijijj

m

kjj

n

iij

Bijikj

m

kjj

n

iijijj

Bk

m

j

n

iijijj

≤≤+−=

−×−=

−×

−=Λ=

−=Λ=

∏ ∏

∏ ∏

∏ ∏∏ ∏

∏∏

≠= =

≠= =

≠= =≠= =

= =

λ

λλ

λλ

λ

As the communication links are assumed faultfree so the term represents the system

threliability in case of k processor failure, ,else it shall be the product of processor reliability andlink reliability as derived in [19].

The expression within first pair of brackets inequation (4) represents the probability that taskswhose primary copies are on fault free processor areoperational during the course of execution. Similarly,the expression in the second pair of brackets is theprobability that backup copies of tasks, whoseprimary copies reside on failed processor, areoperational during execution of these backup copies.

Overview of e-FRD AlgorithmIn the background of discussion in Section 2,

we now provide a brief overview of the e-FRDalgorithm [19] for the sake of completeness. The eFRD algorithm works in two parts: Firstly itschedules the primary copies of tasks in the DAGand then their backup copies, which are initiallysorted in non-decreasing order of their deadlines.The candidate task is allocated to the processor

giving maximum system reliability. So, in the first part, all the primary copies, if scheduled (i.e. with deadline constraints met), tend to maximize the overall reliability of the system. Start and finish times of tasks are calculated keeping in view the precedence and deadline constraints, which allow a task v_i to start execution only after all of its predecessors have executed and communicated their messages (if any) to it, and require the task to fulfill its specified deadline, i.e. FT(v_i^P) ≤ d_i or FT(v_i^B) ≤ d_i. In the second phase, the backup copies of the tasks are scheduled, though with some restrictions on the choice of processors, so as to generate a 1-fault-tolerant schedule. Candidate tasks are chosen from the sorted queue (as in part 1) and scheduled on processors chosen as per the following:

Proposition: A schedule is 1-TFT if, for every task, the primary and backup copies reside on different processors and the backup starts only after the primary finishes, i.e.

∀ v_i ∈ V: (p(v_i^P) ≠ p(v_i^B)) ∧ (ST(v_i^B) ≥ FT(v_i^P) + δ);

a backup copy can overlap with other backup copies on the same processor if their primary copies are allocated to different processors. (Proof in [19])
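The start-time rule of the first phase (a task starts only after all of its predecessors have finished and delivered their messages) can be sketched as follows; the helper names and the zero-cost local message are assumptions for illustration.

```python
def earliest_start(task, preds, finish, comm, same_proc):
    """Earliest start time of `task` under precedence constraints.

    preds: ids of the task's predecessors
    finish[p]: finish time of predecessor p's scheduled copy
    comm[(p, task)]: message delay from p, charged only when the two
    tasks run on different processors (same_proc(p, task) decides this)
    """
    if not preds:
        return 0.0  # entry task: may start immediately
    return max(finish[p] + (0.0 if same_proc(p, task) else comm[(p, task)])
               for p in preds)
```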

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

A task v_i is said to be schedule-preceding a task v_j if both run on the same processor and ST(v_j) ≥ FT(v_i).


eFRD algorithm

Step 1:
1. Sort the tasks in the DAG in non-decreasing order of their deadlines, taking precedence constraints into consideration, and put them in an ordered list OL.
2. for each task v_i in OL, following the order:
3.     schedule its primary copy on a processor honoring the deadline constraint and giving maximum reliability to the overall system;
4.     if no proper processor is available then return (FAIL);
5.     update processor and link information;
   end for

Step 2:
6. for each task v_i in OL, following the order:
7.     schedule its backup copy on a feasible processor that honors the deadline and gives maximum system reliability;
8.     if no proper processor is available then return (FAIL);
9.     update information on processors and links;
   end for
10. return (SUCCEED)

Theorem: If the backup copy v_i^B is schedule-preceding the primary copy v_j^P, then v_i^B and v_j^B cannot be allocated to the same processor. (Proof in [19]). The pseudo code of the algorithm is given in Fig 2.

Fig 2: Pseudo code for the e-FRD algorithm
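A minimal executable sketch of the two-phase loop of Fig 2 is given below. It keeps only the deadline check and the reliability-maximizing processor choice; precedence constraints, communication delays and the backup-overlapping rules of the full e-FRD algorithm are deliberately omitted, and all names are illustrative.

```python
import math

def efrd_like_schedule(tasks, lam):
    """Two-phase primary/backup scheduling skeleton.

    tasks: list of (deadline, exec_times), exec_times[j] = time on proc j
    lam[j]: failure rate of processor j
    Returns (primary, backup), each mapping task index -> (ST, FT, proc),
    or None when some copy cannot meet its deadline (FAIL).
    """
    m = len(lam)
    free = [0.0] * m                       # next free instant per processor
    order = sorted(range(len(tasks)), key=lambda i: tasks[i][0])
    primary, backup = {}, {}
    for phase in (primary, backup):
        for i in order:
            deadline, w = tasks[i]
            best = None
            for j in range(m):
                if phase is backup and primary[i][2] == j:
                    continue               # backup must avoid the primary's processor
                st = free[j]
                if phase is backup:
                    st = max(st, primary[i][1])   # backup starts after primary finishes
                ft = st + w[j]
                rel = math.exp(-lam[j] * w[j])    # reliability of running v_i on proc j
                if ft <= deadline and (best is None or rel > best[0]):
                    best = (rel, st, ft, j)
            if best is None:
                return None                # FAIL: no feasible processor
            _, st, ft, j = best
            phase[i] = (st, ft, j)
            free[j] = ft
    return primary, backup
```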

Deviation from Fault Tolerance

In this section, we take up an example case study to describe a situation in which the given algorithm may fail to generate a feasible schedule under a processor failure.

An Example Case Study:

The DAG under consideration is shown in Fig 3, with n = 6, m = 3, and processor failure rates λ1 = 6.2×10⁻¹, λ2 = 3.5×10⁻¹, and λ3 = 2.5×10⁻¹.

The scheduling order for the primary and backup copies is <1, 2, 3, 4, 5, 6>. In the first step, the primary copies of all six tasks get scheduled, and in the second phase all backup copies are scheduled on processors, resulting in maximum overall reliability of the system (Fig 4).

Fig 3: Example DAG for the case study, with v_i = ((w_i1, w_i2, w_i3), d_i)


Tables 1 and 2 provide the schedule details for better understanding. It may be seen from Table 2 that if processor P1 or P3 fails, the algorithm is still able to generate a feasible schedule using backup copies, although with an increased schedule length (= 203). However, if P2 fails, some of the tasks shall not be able to execute as per the schedule generated by the algorithm, as explained in Table 2.

Now, in spite of complying with all the above propositions and theorems, the schedule is still not 1-FT. The reason is that these propositions have been designed keeping in view the independent nature of

Fig 4: Schedule generated by e-FRD algorithm for example DAG of Fig 3


Intricacies of Fault-Tolerant Scheduling for Precedence...



Proc failure | Schedule length | Remarks
P1 | 203 | Fault tolerant, feasible schedule
P2 | - | Infeasible schedule: the primary of v3 shall not execute, as its predecessor v2^B is not message-preceding v3^P; instead, v3^B shall execute on P1. But then, since v3^B is not schedule-preceding v4^P, and v4^B lies on the failed processor, v4 shall fail to execute, thereby making the schedule infeasible and the algorithm non-fault-tolerant.
P3 | 203 | Fault tolerant, feasible schedule

Table 2: 1-fault-tolerance analysis of the schedule produced for the case study

tasks, but when the execution is highly precedence-constrained, these need to be modified accordingly. Proposition 4 and Theorem 2, which decide upon the feasible processors for scheduling backup copies, are necessary but not sufficient to guarantee a 1-FT schedule of the DAG.

Node | Primary (ST, FT, Proc) | Backup (ST, FT, Proc)
V1 | (0, 7, p3) | (7, 28, p1)
V2 | (0, 3, p2) | (3, 7, p1)
V3 | (11, 47, p3) | (61, 108, p1)
V4 | (54, 61, p1) | (115, 154, p2)
V5 | (47, 68, p3) | (108, 135, p1)
V6 | (68, 84, p3) | (154, 203, p2)

Table 1: Scheduling details for the example DAG
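The schedule of Table 1 makes the paper's point easy to check mechanically: a naive per-task test (does every task still have a copy on a surviving processor?) passes for every single processor failure, even though Table 2 shows the P2 case to be infeasible once the precedence between v2 and v3 is enforced. The sketch below simply encodes Table 1; the function name is illustrative.

```python
# Schedule read off Table 1: task -> (primary (ST, FT, proc), backup (ST, FT, proc))
schedule = {
    1: ((0, 7, 3), (7, 28, 1)),
    2: ((0, 3, 2), (3, 7, 1)),
    3: ((11, 47, 3), (61, 108, 1)),
    4: ((54, 61, 1), (115, 154, 2)),
    5: ((47, 68, 3), (108, 135, 1)),
    6: ((68, 84, 3), (154, 203, 2)),
}

def surviving_copies(failed_proc):
    """Copies left to each task when processor `failed_proc` fails."""
    return {t: [c for c in copies if c[2] != failed_proc]
            for t, copies in schedule.items()}

# The naive check passes for all three single failures, yet the schedule
# is not 1-FT under a P2 failure once precedence (v2 -> v3) is enforced.
for p in (1, 2, 3):
    assert all(len(c) >= 1 for c in surviving_copies(p).values())
```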

Discussion and Conclusions

Fault tolerance is a vital design issue for real-time systems, as it guarantees faithful operation of the system in spite of failures in the underlying processing components. In this work, we have discussed the intricacies of fault-tolerant scheduling for precedence-constrained task graphs with reference to one such recently reported algorithm. The algorithm does not remain 1-fault-tolerant, as is claimed in the literature. There is a need to re-examine the fault-tolerance issues for precedence-constrained task graphs in the light of the case study shown in this work. The authors are presently working on this issue of non-fault-tolerance of real-time scheduling of precedence-constrained task graphs.

References

1. B Mirle et al., “Simulation of fault-tolerant scheduling on real-time multiprocessor systems using primary-backup overloading”, Tech. Report UH-CS-06-04, Univ. of Houston, USA

2. S Balaji, L Jenkins, L M Patnaik and P S Goel, “Workload redistribution for fault tolerance in hard real-time distributed computing systems”, Proc. IEEE Fault-Tolerant Computing Symp. (FTCS-19), pp 366-383, 1989

3. Bertossi and L Mancini, “Scheduling algorithms for fault-tolerance in hard-real-time systems”, Real-Time Systems Journal, Vol 7, No 3, pp 229-245, 1994

4. Bertossi, L Mancini, and F Rossini, “Fault-tolerant rate-monotonic first-fit scheduling in hard real-time systems”, IEEE Trans. on Parallel and Distributed Systems, Vol 10, pp 934-945, 1999

5. A Girault, C Lavarenne, M Sighireanu, and Y Sorel, “Generation of Fault-Tolerant Static Scheduling for Real-Time Embedded Systems with Multi-Point Links”, IEEE Workshop on Fault-Tolerant Parallel Systems, USA, Apr 2001

6. C Gong, R Melhem, and R Gupta, “Loop transformations for fault detection in regular loops on massively parallel systems”, IEEE Trans Parallel and Distributed systems, Vol 7, No 12, pp 1238-1249, Dec 1996

7. S Ghosh, R Melhem and D Mosse, “Fault-tolerance through scheduling of aperiodic tasks in hard real-time multiprocessor systems”, IEEE TPDS, Vol 8, 272-284, 1997

Institute of Management Studies, Dehradun


8. D Gu, D J Rosenkrantz, and S S Ravi, “Construction and analysis of fault secure multiprocessor schedules”, Proc. 21st Int'l Symp. on Fault-Tolerant Computing (FTCS-21), pp 120-127, Jun 1991

9. K Hashimoto, T Tsuchiya and T Kikuno, “A new approach to realizing fault-tolerant scheduling using task duplication in multiprocessor systems”, J Systems & Software, Vol 53, No 2, pp 159-171, 2000

10. K Hashimoto, T Tsuchiya and T Kikuno, “Effective scheduling of duplicated tasks for fault tolerance in multiprocessor systems,” IEICE Trans Inf & Syst, Vol E85-D, No 3, pp 525-534, Mar 2002

11. P Jalote, Fault-Tolerance in Distributed Systems, PH, Englewood Cliffs, NJ, 1994

12. B Kruatrachue and T.G. Lewis, “Grain size determination for parallel processing”, IEEE Software, pp. 23-32, Jan.1988

13. C M Krishna and K C Shin, “On scheduling tasks with a quick recovery from failure”, IEEE Transactions on Computers, Vol C-35, No 5, pp 448-454, May 1986.

14. A L Liestman and R H Campbell, “A fault tolerant scheduling problem”, IEEE Transactions on Software Engineering, Vol 12, No 11, pp. 1089-1095, 1986.

15. G Manimaran and C S R Murthy, “An efficient and dynamic scheduling algorithm for multi-processor real-time systems”, IEEE TPDS, Vol 9, No 3, pp 312-319, 1998.

16. M Naedele, “Fault tolerant real time scheduling under execution time constraints”, TIK report 76, ETH Zurich, TIL, 1999

17. Y Oh and S H Son, “An algorithm for real-time fault-tolerant scheduling in multiprocessor systems”, 4th Euromicro Workshop on Real-Time Systems, 1992.

18. Y Oh and S H Son, “Scheduling Real-Time Tasks for Dependability”, Journal of Operational Research Society, Vol 48, No 6, pp 629-639, Jun 1997

19. X Qin and H Jiang, “A novel fault-tolerant scheduling algorithm for precedence constrained tasks in real-time heterogeneous systems”, Journal of Parallel Computing, Vol 32, No. 5-6, pp 331-356, Jun 2006.



Page 16: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

ABSTRACT

The channel assignment problem is an important problem in the mobile communications industry. The primary objective of any channel assignment scheme is to find the minimum frequency bandwidth required to serve a given traffic demand distribution within the mobile network. Besides, the minimum channel reuse distance must also be considered, in order to avoid call interference within the same cell or in adjacent cells.

This paper reviews the dynamic channel assignment (DCA) scheme in terms of its principles, entities and the concepts involved. DCA offers the flexibility of using any channel in any cell, as long as the interference levels are below a specified threshold. This added flexibility results in a lower blocking probability, or a higher Erlang capacity. With cell sizes diminishing in the next generation of cellular systems (i.e. 3G and 4G cellular systems), micro- and pico-cells are likely to be dominant. It would then be more efficient for the base stations to allocate channels without regard to the neighboring base stations.

Keywords: Channelized cellular systems, fixed channel assignment, centralized dynamic channel assignment, distributed dynamic channel assignment.

Introduction

The size of a cellular system can be described in terms of the number of available channels, or the number of users the system can support. The total number of channels made available to a system depends on the allocated spectrum and the bandwidth of each channel. The available frequency spectrum is limited while the number of mobile users is increasing day by day; hence the channels must be reused as much as viable to increase the system capacity. Managing radio resources in cellular systems has always been a vital aspect of system design, due to the limited availability of resources.

The job of a channel assignment scheme [5] is to allocate channels to cells or mobiles in such a way as to minimize: a) the probability that incoming calls are blocked, b) the probability that ongoing calls are dropped, and c) the probability that the interference-to-signal ratio (I/S) of any call goes beyond a pre-specified threshold value.

Analysis of schemes between Channel Assignment for GSM Networks

Sudan Jha*, Bikram Keshari Ratha**

*Deptt. of Information Technology, Krupajal Engineering College, [email protected]
**Deptt. of Computer Science & Application, Utkal University, Bhubaneswar, [email protected]

The channel assignment schemes can in general be classified into three strategies: fixed channel assignment (FCA) [1,2], dynamic channel assignment (DCA) [1,3], and hybrid channel assignment (HCA) [1]. In FCA, a set of channels is permanently allocated to each cell, based on a pre-estimated traffic concentration. In DCA, there is no permanent assignment of channels to cells. Rather, the entire set of available channels is accessible to all the cells, and the channels are assigned on a call-by-call basis in a dynamic manner. One of the goals in DCA is to develop a channel assignment strategy that minimizes the total number of blocked calls. The FCA scheme is uncomplicated, but it does not adapt to varying traffic conditions and user dispersal.

Besides, frequency planning becomes more complicated in a micro-cellular environment, as it is based on precise knowledge of traffic and interference conditions. These flaws are overcome by DCA, but FCA outperforms most known DCA schemes under heavy load conditions [2]. To overcome the shortcomings of FCA and DCA, HCA was


proposed which combines the features of both FCA and DCA techniques.

In the next section, variants of FCA and DCA will be discussed.

Variants of FCA and DCA

FCA Variants

Channel borrowing and non borrowing schemes

In channel borrowing schemes [4], a cell can borrow a channel from a neighboring cell only if there is no free channel in the initial cell. After use, the borrowed channel returns to its original cell. Well-known schemes in the channel borrowing category [4] are simple borrowing (SB), Borrow from the Richest (BFR) and Borrow First Available (BFA). In SB, if after the primary channel assignment in each cell there is no free channel for a new call, the needed channel can be borrowed from a neighboring cell. In the BFR scheme, the channel is borrowed from the neighboring cell that holds the maximum number of free (borrowable) channels compared to the other neighboring cells. The BFA scheme, which performs no optimization when borrowing, is the simplest, because the borrowing is based on the first available channel [4]. When the cell channels are divided into two groups, standard channels and borrowable channels, we have the Simple Complex Channel Borrowing Scheme (SCCB). In Borrowing with Channel Ordering (BCO), channel priorities are used to define the borrowing order.
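The BFR and BFA selection rules can be sketched as follows; the data layout and function name are illustrative, and the interference and channel-locking checks of a real borrowing scheme are omitted.

```python
def borrow_channel(cell, neighbors, strategy="BFR"):
    """Pick a channel to borrow for `cell` from its neighbors.

    cell / each neighbor: dict with a 'free' list of channel ids.
    BFR borrows from the neighbor with the most free channels;
    BFA simply takes the first available channel found.
    Returns (neighbor_index, channel), or None if borrowing is
    unnecessary or impossible.
    """
    if cell["free"]:
        return None                       # no need to borrow
    if strategy == "BFR":
        candidates = [(i, n) for i, n in enumerate(neighbors) if n["free"]]
        if not candidates:
            return None
        i, n = max(candidates, key=lambda c: len(c[1]["free"]))
        return (i, n["free"][0])
    # BFA: first available channel, no optimization
    for i, n in enumerate(neighbors):
        if n["free"]:
            return (i, n["free"][0])
    return None
```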

Other FCA variants are Sharing with Bias (SHB), Channel Assignment with Borrowing and Reassignment (CABR), and Ordered Dynamic Channel Assignment with Rearrangement (ODCA) [11]. Non-borrowing schemes are typically based on hand-off strategies for situations where an available channel is difficult to locate.

DCA Variants

As mentioned earlier, centralized and distributed versions are the main variants of the DCA scheme.

Centralized DCA schemes

In the centralized approach [1], all requests for channel assignment are forwarded to a channel controller that has access to system-wide channel usage information. The central controller then assigns the channel so as to maintain the essential signal quality.
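A minimal sketch of such a centralized controller, assuming it only enforces a minimum reuse distance between co-channel cells (the signal-quality bookkeeping of a real controller is omitted, and all names are illustrative):

```python
def assign_channel(cell, channels, in_use, reuse_distance, distance):
    """Centralized DCA: grant the first channel whose co-channel users
    are all at least `reuse_distance` away from `cell`.

    in_use: dict channel -> set of cells currently using it
    distance(a, b): distance between two cells (the controller is
    assumed to have system-wide knowledge of the layout and usage)
    Returns a channel id, or None (call blocked).
    """
    for ch in channels:
        if all(distance(cell, other) >= reuse_distance
               for other in in_use.get(ch, ())):
            in_use.setdefault(ch, set()).add(cell)   # record the grant
            return ch
    return None
```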


(n ≠ m) that have a user positioned at P_n




threshold. If no such channel is available, then the call at P_m is blocked. However, admitting the call at P_m on channel k could cause the I/S at the base station of some cell n (n ≠ m), with a user at position P_n using channel k, to go above the threshold. In such cases, the user at P_n undergoes an intracellular handoff, whereby it is treated as a new call in cell n, or the user will try to find some other channel to continue the call. If no such channel is available, then the call at P_n is dropped.
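The admission logic just described (block the new call if its own I/S exceeds the threshold; otherwise hand off any co-channel users that the admission pushes above the threshold) can be sketched as follows, with the I/S model supplied by the caller; all names are illustrative.

```python
def admit_call(new_user, cochannel_users, i_over_s, theta):
    """Admission check for one channel in a cell.

    cochannel_users: set of users already on this channel in other cells
    i_over_s(user, others): interference-to-signal ratio seen by `user`
    when `others` transmit on the same channel (model supplied by caller)
    theta: the I/S admission threshold
    Returns None if the new call must be blocked, otherwise the list of
    existing users that now need an intracellular handoff.
    """
    if i_over_s(new_user, cochannel_users) > theta:
        return None                        # call blocked
    # admitting the call may push existing co-channel users above theta
    return [u for u in cochannel_users
            if i_over_s(u, (cochannel_users - {u}) | {new_user}) > theta]
```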

DCA Strategy Requirements

The requirements of a good DCA strategy [7] are twofold. First, in order to maximize carrier exploitation, the DCA strategy should maximize the reuse of the various carriers; i.e., during the dynamic system evolution, the DCA strategy should maintain the maximum packing of the carriers. In this respect, an ideal DCA (IDCA) is one which, at any time, succeeds in satisfying any carrier request of any cell, provided that the request set is compatible with the frequency reuse constraints. This means that the IDCA strategy achieves the best performance (in terms of blocking and dropping probability) among the DCA strategies.

Second, the DCA strategy implementation should require minimum information exchange (e.g. among BTS's) in order to minimize signaling overhead. In this respect, the IDCA cannot be practically implemented, since it could require even the real-time reconfiguration of the carrier-to-cell assignment in the whole cellular network, thus entailing a tremendous signaling overhead.

Impacts of DCA Strategies on GSM Networks

Two major problems have to be addressed when studying the actual implementation of the considered DCA strategies in GSM networks. First, each base transceiver station (BTS) must notify, in real time, a proper number of BTS's (depending on the selected DCA strategy) of the carrier acquisitions/releases it performs. In a real scenario, a certain propagation delay is needed to convey this information to the appropriate BTS's, which triggers major contention problems in carrier acquisitions [8].

Second, the implementation of the DCA strategies requires that each BTS be equipped with N transceivers: one is firmly tuned to the BCCH carrier, while the remaining N-1 must be tunable to the correct frequency (within the whole cellular network band) selected by the considered DCA algorithm. In addition to the above issues, it should also be noted that, in order to support additional carriers to serve traffic peaks, extra signaling capacity must be provided for those carriers, e.g., in terms of SDCCH (Stand-alone Dedicated Control Channel) and PAGCH (Packet Access Grant Channel) channels.

Implementation of the Information Exchanges

To implement the information exchanges between BTS's, a method described in [9] can be effectively exploited [10]; the idea is to exploit an already realized GSM procedure in order to convey the information on the acquired/released carriers from a BTS to the other ones [via the appropriate BSC's and mobile switching centers (MSC's)].

Let TM denote the maximum time period which a BTS m requires in order to inform the appropriate BTS's inside the re-use distance of BTS m of a carrier acquisition/release. Then, the asynchronous technique proposed in [8] can be easily adjusted to avoid the contention problems entailed by the finite propagation delay.

Whenever a BTS acquires a new carrier, it waits for a time equal to 2TM prior to actually using the acquired carrier. If a contention occurs, i.e., the BTS is informed during this time that another cell inside the re-use distance has acquired the same carrier, all of the competing cells must release that carrier and attempt the acquisition of a new one; in order to avoid repeated acquisition contentions, the new carrier has to be chosen according to an appropriate algorithm (e.g., re-acquiring the earlier carrier only with a minimal probability). This method introduces a delay, from the carrier acquisition stimulus to the actual utilization of the acquired carrier, which is lower bounded (by 2TM) but not upper bounded.
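The asynchronous rule can be sketched as follows; the carrier sets, the contention-notification set, and the de-weighted retry are simplified stand-ins of ours for the real signaling described in [8]:

```python
import random

# Sketch of the asynchronous carrier-acquisition rule described above.
# T_M, the contention model, and the retry weighting are illustrative assumptions.

def acquire_carrier(free_carriers, contended, t_m=1.0, last_carrier=None, rng=random):
    """Pick a carrier, wait 2*T_M, and release it if a contention notification
    arrives in the meantime; retry, de-weighting the carrier that just failed."""
    wait = 2 * t_m  # lower bound on the acquisition-to-use delay
    candidates = list(free_carriers)
    while candidates:
        # favour carriers other than the one that just caused a contention
        weights = [0.1 if c == last_carrier else 1.0 for c in candidates]
        carrier = rng.choices(candidates, weights=weights, k=1)[0]
        if carrier not in contended:   # no conflicting acquisition heard within 2*T_M
            return carrier, wait
        candidates.remove(carrier)     # contention: release and try another carrier
        last_carrier = carrier
        wait += 2 * t_m                # the delay grows; it is not upper bounded
    return None, wait                  # no carrier could be acquired

carrier, delay = acquire_carrier({"c1", "c2", "c3"}, contended={"c1"})
```

The growing `wait` variable mirrors the text: each contention adds another 2TM, so the total delay is lower bounded by 2TM but has no upper bound.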

An alternative is the synchronous technique proposed in [13], which can be applied if we consider a network in which the BTS's are synchronized; note that such BTS synchronization does not require synchronization on the radio interface, and can be rather inaccurate (a plesiochronous system can be considered). In this methodology, the time axis is divided, in a recurring fashion, into time frames of duration Tframe. Each time frame is subdivided into L time slots s(k) (k = 1, …, L), each of duration TM (i.e., Tframe = L·TM); L is the maximum cardinality of the standard interference neighborhoods in the cellular network. Each BTS is linked to a time slot s(k) according to the rule that BTS's in the same interference neighborhood are assigned different time slots. Each BTS can acquire carriers only at the beginning of its associated time slot, i.e., every Tframe seconds. In this way, when a BTS m attempts a carrier acquisition, the other BTS's inside the re-use distance (i.e., in the same interference neighborhood) are not allowed to acquire carriers, and carrier acquisition conflicts cannot occur. This method introduces a delay, from the carrier acquisition stimulus to the actual utilization of the acquired carrier, which is upper bounded by Tframe and can be kept to a few seconds or less.
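The slot assignment underlying the synchronous technique is essentially a coloring of the interference-neighborhood graph. A minimal sketch, with hypothetical BTS names and a greedy coloring (one simple choice, not necessarily the procedure used in [13]):

```python
# Sketch of the synchronous scheme: BTS's in the same interference
# neighborhood get distinct slots (a graph-coloring problem), and each BTS may
# acquire carriers only at the start of its own slot. Names are illustrative.

def assign_slots(neighbors):
    """Greedy coloring: give each BTS the smallest slot index not used by any
    BTS inside its interference neighborhood."""
    slots = {}
    for bts in sorted(neighbors):
        used = {slots[n] for n in neighbors[bts] if n in slots}
        slot = 0
        while slot in used:
            slot += 1
        slots[bts] = slot
    return slots

def next_acquisition_time(now, slot, t_m, L):
    """Earliest start of this BTS's slot: slots of length T_M repeat every
    T_frame = L * T_M seconds."""
    t_frame = L * t_m
    offset = slot * t_m
    k = max(0, -(-(now - offset) // t_frame))  # ceiling division
    return offset + k * t_frame

# Three mutually interfering BTS's must all get different slots.
slots = assign_slots({"A": {"B", "C"}, "B": {"A", "C"}, "C": {"A", "B"}})
```

With L = 3 and TM = 1 s, slot 1 recurs at t = 1, 4, 7, …, so a BTS holding slot 1 that decides to acquire a carrier at t = 2 must wait until t = 4: the acquisition delay is bounded by Tframe.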

Hardware Requirements

Fig. 1. Principle scheme of the BTS transmission. BB = base band, IF = intermediate frequency, RF = radio frequency.

Fig. 1 [7] shows the principle scheme of a BTS transmission system, highlighting the transmission chains [each associated with a distinct data source Si (i = 1, …, N)], which are combined in the coupling device (combiner). The major implementation difficulty comes precisely from the combiner: the power emitted by each amplifier must not be fed back into the others, and inter-modulation effects must be avoided. Two major techniques already exist to deal with this problem.

• The “cavity coupling” technique [14] implies the use of cavity filters, each tuned to a particular carrier (channel filtering), inserted at the input of the combiner. The key disadvantage of this technique is that the frequency tuning involves moving mechanical parts; since fairly frequent real-time frequency tunings are required by a DCA implementation, this technique is unquestionably not feasible for DCA purposes.

• The “hybrid coupling” technique [14] does not imply the use of channel filtering. In this case, the combiner itself must prevent the power emitted by one amplifier from being fed back to the others. Unfortunately, this type of combiner (which has to face heavy inter-modulation problems) introduces a large power loss which rapidly increases with the number of combined signals. Nevertheless, this technique appears appropriate for DCA purposes. It should also be noted that frequency hopping requires tunable transceivers; however, frequency hopping can be performed at BB by simply switching each data source among the various IF/RF transmission chains, which remain firmly tuned to the carriers allocated to the cell.

Conclusion

In this paper, the need for and the characteristics of a good channel assignment strategy have been discussed. Some variants of FCA and DCA, the admission control strategies for centralized and distributed DCA systems, and the requirements for a good DCA strategy have also been described. The problems in the implementation of a DCA strategy on GSM networks, and suitable solutions for each, have been stated. It can reasonably be argued that future cellular networks will ultimately adopt the DCA strategy.

References

1. M. Zhang and T. S. Yum, “Comparisons of Channel Assignment Strategies in Cellular Mobile Telephone Systems”, IEEE Transactions on Vehicular Technology, vol. 38, no. 4, pp. 211-215, 1989.

2. W. K. Lai and G. C. Coghill, “Channel Assignment through Evolutionary Optimization”, IEEE Transactions on Vehicular Technology, vol. 45, no. 1, pp. 91-96, 1996.

3. L. J. Cimini and G. J. Foschini, “Distributed Algorithms for Dynamic Channel Allocation in Microcellular Systems”, IEEE Vehicular Technology Conference, pp. 641-644, 1992.

4. I. Katzela and M. Naghshineh, “Channel Assignment Schemes for Cellular Mobile Telecommunication Systems: A Comprehensive Survey”, IEEE Personal Communications, pp. 10-31, 1996.

5. P. M. Papazoglou, D. A. Karras, and R. C. Papademetriou, “Dynamic Channel Assignment Simulation System for Large Scale Cellular Telecommunications”.

6. P. Cherriman, F. Romiti and L. Hanzo, “Channel Allocation for Third-Generation Mobile Radio Systems”, ACTS'98, vol. 1, pp. 255-261, 1998.

7. F. Delli Priscoli, N. P. Magnani, V. Palestini, and F. Sestini, “Application of Dynamic Channel Allocation Strategies to the GSM Cellular Network”, IEEE Journal on Selected Areas in Communications, vol. 15, no. 8, October 1997.

8. F. Delli Priscoli, “The Asynchronous Technique for Carrier Acquisition Coordination”, IEEE J. Select. Areas Commun. (Special Issue on Mobile and Wireless Computing Networks), vol. 13, pp. 908-912, June 1995.

9. ETSI, ETS GSM 08.08, May 1994.

10. N. P. Magnani, “Implementation Aspects for the Introduction of Dynamic Resource Allocation Strategies in the GSM System”, CSELT Int. Rep., 1997.

11. S. Jordan, “Resource Allocation in Wireless Networks”, Journal of High Speed Networks, vol. 5, no. 1, pp. 23-24, 1996.

12. S. Anand, A. Sridharan, and K. N. Sivarajan, “Performance Analysis of Channelized Cellular Systems with Dynamic Channel Allocation”, IEEE Transactions on Vehicular Technology, vol. 52, no. 4, July 2003.

13. “The Geometric Dynamic Channel Allocation Strategy for High Traffic FDM/TDMA Mobile Communications Networks”, in Proc. 1994 Int. Zurich Seminar on Digital Communications, Springer-Verlag, 1994, pp. 33-44.

14. M. Mouly and M. B. Pautet, “The GSM System for Mobile Communications”.


Image Segmentation: Objective Evaluation and a New Approach

Silky Dhawan*, Akshay Girdhar**

*Lecturer, Deptt. of CSE, NDEC, Ludhiana. [email protected] 97812-84014 **Asstt. Professor, Deptt. of IT, NDEC, Ludhiana. [email protected] 98724-61620

ABSTRACT

Image segmentation is the problem of partitioning an image into its constituent components. Typically, the effectiveness of a new algorithm is demonstrated only by presenting a few segmented images, and evaluation is otherwise left to the subjective judgment of the reader. In this paper, existing segmentation techniques are compared with a proposed segmentation method using quantitative similarity measures. The proposed technique works by considering the color features of the image. The resultant segmented image is compared with a manually segmented image, also known as the 'ground truth' or 'gold standard'. Results are presented on images from the publicly available Berkeley segmentation data set. On the basis of the quantitative measures and experimental results, the proposed technique is slightly better than the region growing method but falls behind the other two techniques, thresholding and edge detection.

Keywords- Image Segmentation, Ground-Truth, Thresholding, Edge Detection.

Introduction

Segmentation techniques are used to automate the process of isolating the relevant objects within a digital image from the extraneous background. Segmentation is the process of partitioning an image into regions, i.e., groups of connected pixels with similar properties such as gray levels, colors, textures, motion characteristics (motion vectors), and edge continuity [2]. There are two approaches to segmentation: (a) region segmentation and (b) edge segmentation. In an edge-oriented scheme, the relevant object is identified by locating its outer edges. Such methods may be used, for example, for the segmentation of license plate numbers from an image of an automobile. In a region-oriented scheme, the relevant object is identified by accumulating neighboring pixels of similar intensities. This sort of method was developed, for example, to segment the soft tissue, lung and external regions in a tomographic image obtained from nuclear medicine imaging [7, 16].

Basic Segmentation Algorithms- Any segmentation algorithm must have the following two properties: (a) Capturing perceptually important groupings or regions, which often reflect global aspects of the image: clear definitions of the properties of a resulting segmentation are required, so that the method is easier to understand and so that different segmentation approaches can be compared.

(b) Be highly efficient, running in time nearly linear in the number of image pixels: segmentation methods should run in linear time with low constant factors so that they can be used in practice. Generally, segmentation methods are expected to run at speeds similar to edge detection or other low-level visual processing techniques. For example, a segmentation technique that runs at several frames per second can be used in video processing applications.

A large number of image segmentation techniques are available in the literature [7, 8, 3]. The following techniques are used for comparison with the proposed technique:

Thresholding- Taking the threshold of an image involves comparing each pixel value to some threshold value. If the value of the pixel is greater than the threshold value, the pixel is either preserved or set to some new value; if the pixel value is less than the threshold value, it is set to some background value that is less than the threshold. This technique has a significant position in various applications of image segmentation. The main advantage of this method is that it is very fast. On the other hand, its disadvantages are: (a) it depends on the possibility of defining a threshold that works well everywhere in the image, and (b) it requires region growing or some other segmentation technique if two objects have the same color [2, 4].
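A minimal sketch of this rule, on a toy grayscale array (the function name and the background value of 0 are illustrative choices of ours):

```python
# Minimal illustration of the thresholding rule described above, on a tiny
# grayscale image (a nested list standing in for a pixel array).

def threshold_image(image, t, background=0):
    """Keep pixels above the threshold t; push the rest to a background value
    below t."""
    return [[p if p > t else background for p in row] for row in image]

img = [[10, 200], [130, 90]]
print(threshold_image(img, t=128))  # -> [[0, 200], [130, 0]]
```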

Edge Detection- Edges characterize boundaries, and edge detection is therefore a problem of fundamental importance in image processing. Edge detection significantly reduces the amount of data and filters out useless information, while preserving the important structural properties of the image. Since edges are always important characteristics of an image, they indicate regions of higher spatial frequency. The two principal properties used for establishing the similarity of edge pixels in this kind of analysis are: (a) the strength of the response of the gradient operator used to produce the edge pixel, and (b) the direction of the gradient vector. The basic limitation of this method is that it fails on blurred images [2, 9].
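The two gradient-based cues (strength and direction) can be illustrated with simple central differences, a stand-in for a full gradient operator such as Sobel:

```python
import math

# Sketch of the two edge-pixel similarity cues mentioned above: gradient
# strength and gradient direction, here computed from central differences
# (a simplified stand-in for a Sobel or similar operator).

def gradient(image, x, y):
    """Return (magnitude, direction in radians) of the intensity gradient at
    an interior pixel (x, y)."""
    gx = (image[y][x + 1] - image[y][x - 1]) / 2.0
    gy = (image[y + 1][x] - image[y - 1][x]) / 2.0
    return math.hypot(gx, gy), math.atan2(gy, gx)

# A vertical step edge: the gradient points along +x with magnitude 50.
img = [[0, 50, 100]] * 3
mag, ang = gradient(img, 1, 1)
```

Edge pixels would then be grouped when both their gradient magnitudes and their gradient directions are sufficiently similar.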

Region Growing- This is a procedure that groups pixels or sub-regions into larger regions based on predefined criteria for growth. The basic idea is to start with a set of “seed” points, and from these seeds to grow regions by appending to each seed those neighboring pixels that have predefined properties similar to the seed, such as specific ranges of gray level or color. The main advantage of the region growing method is its easy implementation. However, its major disadvantages are: (a) the regions obtained depend strongly on the first pixel chosen and on the order in which the border pixels are examined, and (b) the results obtained after applying this segmentation method are very sensitive to the threshold value [2, 9].
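Seeded region growing can be sketched as a breadth-first traversal; the gray-level tolerance criterion used here is just one simple choice among the "predefined properties" mentioned above:

```python
from collections import deque

# Sketch of seeded region growing as described above: starting from a seed,
# absorb 4-connected neighbours whose gray level is within a tolerance of the
# seed value. The tolerance criterion is an illustrative choice.

def region_grow(image, seed, tol):
    h, w = len(image), len(image[0])
    sy, sx = seed
    region, queue = {seed}, deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in region \
                    and abs(image[ny][nx] - image[sy][sx]) <= tol:
                region.add((ny, nx))
                queue.append((ny, nx))
    return region

img = [[10, 12, 90],
       [11, 13, 95],
       [88, 92, 91]]
print(sorted(region_grow(img, (0, 0), tol=5)))  # -> [(0, 0), (0, 1), (1, 0), (1, 1)]
```

Note how the result hinges on the seed and the tolerance, mirroring disadvantages (a) and (b) above.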

Evaluation of Image Segmentation Algorithms- Evaluation methods are generally divided into two categories [15]: analytical methods and empirical methods. Analytical methods are based on principles and properties of the segmentation algorithms themselves. They also require some amount of prior knowledge that has been incorporated into the segmentation algorithms, such as processing complexity, efficiency, and processing strategy, and thus examine and assess the segmentation algorithms by analyzing them. Empirical methods indirectly judge a segmentation algorithm by applying it to test images and measuring the quality of the segmentation result. Empirical methods are further classified into two types: goodness methods and discrepancy methods. In empirical goodness methods, desirable properties of segmented images, often established according to human intuition about what conditions an "ideal segmentation" should satisfy, are measured by goodness parameters. Some of the goodness measures proposed in the literature are color uniformity, intra-region uniformity, inter-region uniformity, region shape, etc. Empirical discrepancy methods are based on the availability of a reference segmentation, also called the gold standard or ground truth. The performance of any segmentation algorithm can then be judged on the basis of the disparity between the actually segmented image and the ideally segmented image, i.e., the gold standard, which is the best expected result. The actual segmented image and the reference image are both obtained from the same input image. These methods try to determine how far the actually segmented image is from the reference image [1, 10].

Measures of Similarity- A measure makes the comparison by quantifying the agreement/similarity of the output segmentation with various manual segmentations against a scoring baseline. In the domain of image analysis, the comparison of two segmentations is difficult because image segmentation is inherently ill-defined: there is no single ground-truth label assignment that can be used for comparison. The desirable properties of a good measure are: (a) Accommodate refinement: it should accommodate refinements only in regions that human segmenters find ambiguous, and penalize differences in refinement elsewhere. Label refinement can be defined as differences in the pixel-level granularity of label assignments in the segmentation of a given image. (b) Non-degeneracy: it should not have degenerate cases where unrealistic input instances give abnormally high similarity values. There are generally two degenerate cases that give zero error, one pixel per segment and one segment for the whole image, and they adversely limit the use of such error functions. (c) No assumption about data generation: it should not assume equal cardinality of the labels or region sizes in the segmentation. (d) Comparable score: the measure should give scores that permit meaningful comparison between segmentations of different images and between different segmentations of the same image [11, 12, 13]. The measures used to compare the other segmentation techniques with the proposed one are: (a) Rand Index, (b) Simple Matching Coefficient, (c) Jaccard's Coefficient, and (d) Fowlkes-Mallows Index.

Proposed Technique- None of the developed segmentation algorithm is generally applicable to all

types of images and different algorithms are not equally suitable for a particular image. Thus, there is a need of an objective evaluation parameter that can numerically quantify the performance of that segmentation algorithm. The proposed algorithm has been implemented on a set of different natural images of sizes (160×240×3) or (240×160×3). The Figure 1.1 shows the original images taken from the publicly available data set of natural images [5, 17].

18

“star.jpg” “church.jpg” “pic1.jpg” “bird.jpg” “plane.jpg”

Figure 1.1: Original images taken from the Berkeley dataset.

In the proposed method, segmentation is based on color feature of the image and that color feature is used to segment different objects from one another or separate background from foreground object. The algorithm steps can be described as:

a) Consider a colored input image and specify the number of objects by selecting the respective pixel values of the objects from the image say N.

b) For all the selected pixels, its RGB value is calculated at that location.

c) The maximum value from selected RGB pixel is considered (say x).

d) For all other pixels in the image, assign x to them if they are having same RGB status as that of x.

e) Repeat Step (d) for N times and the resultant image is the segmented image.

After the completion of these steps, the segmented image is compared with the manual segmented image and values are calculated using the similarity measures whose values lie between the range 0 and 1. The flowchart for the proposed technique is shown in Figure 1.2.

Results and Conclusion- According to [6], it is probably meaningless to search for a best criterion for

Start

Select a (mxnx3) colored image

Number of segments

specified = n

Number of segments=2

Select the pixel values n times =x

Compare with other pixels

Assign the neighbor pixel value=x, if they have the

same RGB status as x

Shows the segmented image

No

Yes

n times

Figure 1.2: Flowchart for proposed segmentation method

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Page 23: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

background value that is less than the threshold. This technique occupies a significant position in many applications of image segmentation. The main advantage of this method is that it is very fast. On the other hand, the disadvantages of thresholding are: (a) it depends on the possibility of defining a threshold that works well everywhere in the image, and (b) it requires region growing or another segmentation technique if two objects have the same color [2, 4].
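
The global-thresholding idea described above can be sketched in a few lines; the function name, synthetic image, and threshold value 128 below are illustrative assumptions, not taken from the paper.

```python
def threshold_segment(image, t):
    """Label a pixel 1 (object) if its gray level exceeds the threshold t,
    else 0 (background)."""
    return [[1 if pixel > t else 0 for pixel in row] for row in image]

# A tiny synthetic grayscale image: a bright object on a dark background.
img = [
    [10, 12, 200, 210],
    [11, 13, 205, 220],
    [9, 14, 198, 215],
]
mask = threshold_segment(img, 128)   # object pixels fall in the bright right half
```

Speed comes from the single pass over the pixels; the sketch also makes disadvantage (a) visible, since one global `t` must suit the whole image.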

Edge Detection- Edges characterize object boundaries, so edge detection is a problem of fundamental importance in image processing. Edge detection significantly reduces the amount of data and filters out useless information while preserving the important structural properties of an image. Since edges are always important characteristics of an image, they correspond to its higher-frequency content. The two principal properties used to establish the similarity of edge pixels in this kind of analysis are: (a) the strength of the response of the gradient operator used to produce the edge pixel, and (b) the direction of the gradient vector. The basic limitation of this method is that it fails on blurred images [2, 9].
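
The two edge-pixel properties named above, gradient strength and gradient direction, can be illustrated with a Sobel operator; this pure-Python sketch and its test image are assumptions for illustration, not the paper's implementation.

```python
import math

def sobel(image):
    """Return gradient magnitude and direction (the two edge-linking cues)
    for each interior pixel of a grayscale image."""
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]   # horizontal-gradient kernel
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]   # vertical-gradient kernel
    h, w = len(image), len(image[0])
    mag = [[0.0] * w for _ in range(h)]
    ang = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            mag[y][x] = math.hypot(gx, gy)   # (a) strength of the gradient response
            ang[y][x] = math.atan2(gy, gx)   # (b) direction of the gradient vector
    return mag, ang

# A vertical step edge: the gradient responds strongly along the step.
img = [[0, 0, 9, 9] for _ in range(4)]
mag, ang = sobel(img)
```

On a blurred image the step would be smeared across several columns, weakening the response, which is exactly the limitation noted above.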

Region Growing- This is a procedure that groups pixels or sub-regions into larger regions based on predefined criteria for growth. The basic idea is to start with a set of "seed" points and grow regions from them by appending to each seed those neighboring pixels whose properties are similar to the seed, such as specific ranges of gray level or color. The main advantage of region growing is its easy implementation. Its major disadvantages are: (a) the regions obtained depend strongly on the first pixel chosen and on the order in which border pixels are examined, and (b) the results of this segmentation method are very sensitive to the threshold value [2, 9].
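
The seed-and-grow procedure can be sketched as a breadth-first search; the function, tolerance parameter `tol`, and toy image below are illustrative assumptions rather than the paper's code.

```python
from collections import deque

def region_grow(image, seed, tol):
    """Grow a region from `seed`, absorbing 4-connected neighbors whose
    gray level differs from the seed's by at most `tol`."""
    h, w = len(image), len(image[0])
    sy, sx = seed
    region = {(sy, sx)}
    frontier = deque([(sy, sx)])
    while frontier:
        y, x = frontier.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and (ny, nx) not in region
                    and abs(image[ny][nx] - image[sy][sx]) <= tol):
                region.add((ny, nx))      # growth criterion met: join the region
                frontier.append((ny, nx))
    return region

img = [
    [10, 11, 90, 91],
    [12, 10, 92, 90],
]
left = region_grow(img, (0, 0), tol=5)   # grows over the dark left half only
```

Changing `tol` or the seed position changes the recovered region, which is disadvantage (a) and (b) above made concrete.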

Evaluation of Image Segmentation Algorithms- Evaluation methods are generally divided into two categories [15]: Analytical Methods and Empirical Methods. Analytical Methods are based on principles and properties of the segmentation algorithms themselves. They require some prior knowledge of the algorithm, such as its processing complexity, efficiency, and processing strategy, and thus examine and assess segmentation algorithms by analyzing them. Empirical Methods judge a segmentation algorithm indirectly, by applying it to test images and measuring the quality of the segmentation results. Empirical Methods are further classified into two types: Goodness Methods and Discrepancy Methods. In empirical goodness methods, desirable properties of segmented images, often established according to human intuition about what an "ideal segmentation" should satisfy, are measured by goodness parameters. Goodness measures proposed in the literature include Color Uniformity, Intra-Region Uniformity, Inter-Region Uniformity, and Region Shape. Empirical discrepancy methods are based on the availability of a Reference Segmentation, also called a Gold Standard or Ground Truth. The performance of a segmentation algorithm is judged by the disparity between the actually segmented image and the ideally segmented image taken from the gold standard, which is the best expected result. Both the actual segmented image and the reference image are obtained from the same input image, and these methods try to determine how far the actual segmentation is from the reference [1, 10].

Measures of Similarity- A measure quantifies the agreement of an output segmentation with one or more manual segmentations against a scoring baseline. In image analysis, comparing two segmentations is difficult because image segmentation is inherently ill-defined: there is no single ground-truth label assignment to compare against. The desirable properties of a good measure are: (a) Accommodate refinement: it should tolerate refinements only in regions that human segmenters find ambiguous, and penalize differences in refinement elsewhere. Label refinement is a difference in the pixel-level granularity of label assignments in the segmentation of a given image. (b) Non-degeneracy: it should not have degenerate cases in which unrealistic inputs give abnormally high similarity. Two degenerate segmentations give zero error under naive measures - one pixel per segment, and one segment for the whole image - and they severely limit the usefulness of such error functions. (c) No assumption about data generation: it should not assume equal cardinality of the labels or equal region sizes in the segmentation. (d) Comparable score: the measure should give scores that permit meaningful comparison between segmentations of different images and between different segmentations of the same image [11, 12, 13]. The measures used to compare the other segmentation techniques with the proposed one are: (a) Rand Index, (b) Simple Matching Coefficient, (c) Jaccard's Coefficient, and (d) Fowlkes-Mallows Index.
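
Several of these measures can be computed from pair counts over the two labelings: for every pair of pixels, record whether the pair shares a segment in each segmentation. This sketch uses the standard pair-counting definitions of the Rand, Jaccard, and Fowlkes-Mallows indices; the function names and tiny flattened labelings are illustrative assumptions.

```python
from itertools import combinations
from math import sqrt

def pair_counts(labels_a, labels_b):
    """Count pixel pairs by co-assignment in two flattened segmentations."""
    a = b = c = d = 0
    for i, j in combinations(range(len(labels_a)), 2):
        same_a = labels_a[i] == labels_a[j]
        same_b = labels_b[i] == labels_b[j]
        if same_a and same_b:
            a += 1          # together in both segmentations
        elif same_a:
            b += 1          # together only in the first
        elif same_b:
            c += 1          # together only in the second
        else:
            d += 1          # apart in both
    return a, b, c, d

def rand_index(la, lb):
    a, b, c, d = pair_counts(la, lb)
    return (a + d) / (a + b + c + d)

def jaccard(la, lb):
    a, b, c, _ = pair_counts(la, lb)
    return a / (a + b + c)

def fowlkes_mallows(la, lb):
    a, b, c, _ = pair_counts(la, lb)
    return a / sqrt((a + b) * (a + c))

test_seg = [0, 0, 1, 1]              # flattened test segmentation
ref_seg = [0, 0, 0, 1]               # flattened reference segmentation
score = rand_index(test_seg, ref_seg)   # -> 0.5
```

All three scores lie in [0, 1], with 1 meaning the two segmentations agree on every pixel pair, which matches how the values in Table 1 are read.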

Proposed Technique- No segmentation algorithm yet developed is generally applicable to all types of images, and different algorithms are not equally suitable for a particular image. Thus, there is a need for an objective evaluation parameter that can numerically quantify the performance of a segmentation algorithm. The proposed algorithm has been implemented on a set of natural images of size (160×240×3) or (240×160×3). Figure 1.1 shows the original images, taken from a publicly available data set of natural images [5, 17].


Figure 1.1: Original images taken from the Berkeley dataset (“star.jpg”, “church.jpg”, “pic1.jpg”, “bird.jpg”, “plane.jpg”).

In the proposed method, segmentation is based on the color feature of the image: color is used to segment different objects from one another or to separate the background from the foreground objects. The algorithm steps are:

a) Take a colored input image and specify the number of objects, say N, by selecting one representative pixel for each object in the image.

b) For each selected pixel, compute its RGB value at that location.

c) Take the maximum value of the selected pixel's RGB channels (say x).

d) For every other pixel in the image, assign x to it if it has the same RGB status as the selected pixel.

e) Repeat steps (b)-(d) once for each of the N selected pixels; the resultant image is the segmented image.
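
Steps (a)-(e) can be sketched as follows. This is one possible reading of the paper's description, assuming "same RGB status" means an exact RGB match with the selected pixel; the function name and toy image are illustrative.

```python
def segment_by_color(image, seeds):
    """One interpretation of steps (a)-(e): for each selected seed pixel,
    take its RGB value, compute the maximum channel value x, and relabel
    every pixel with the same RGB value to x."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]   # unmatched pixels stay 0 (background)
    for sy, sx in seeds:                # step (a): N selected object pixels
        rgb = image[sy][sx]             # step (b): RGB value at that location
        x = max(rgb)                    # step (c): maximum channel value
        for y in range(h):              # step (d): relabel matching pixels
            for col in range(w):
                if image[y][col] == rgb:
                    out[y][col] = x
    return out                          # step (e): result after all N seeds

# Two flat-colored "objects" on a black background.
img = [
    [(200, 0, 0), (200, 0, 0), (0, 0, 0)],
    [(0, 0, 0), (0, 90, 30), (0, 90, 30)],
]
seg = segment_by_color(img, [(0, 0), (1, 1)])
```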

After these steps, the segmented image is compared with the manually segmented image, and scores are calculated using the similarity measures, whose values lie in the range 0 to 1. The flowchart for the proposed technique is shown in Figure 1.2.

[Flowchart: Start → Select an (m×n×3) colored image → Specify the number of segments n → Select the pixel values = x → Compare with other pixels → Assign neighboring pixels the value x if they have the same RGB status as x → Repeat n times → Show the segmented image]

Figure 1.2: Flowchart for proposed segmentation method

Results and Conclusion- According to [6], it is probably meaningless to search for a best criterion for


comparing segmentations, just as it is meaningless to search for the best clustering algorithm. Algorithms are "good" in as much as they match the task at hand. The results are therefore evaluated and compared both objectively and subjectively. Figure 1.3 shows the results of the proposed segmentation method. When the results of the proposed technique are compared with the manual hand segmentations, they are quite satisfactory, as shown in Figure 1.4, where the top row shows images taken from the Berkeley dataset of manual hand-segmented images and the bottom row shows the results of the proposed segmentation method.

Figure 1.3: Results of the proposed segmentation technique on the same set of original images

The experimental results for the similarity measures in Table 1 show that the proposed approach is slightly better than the region-growing-based segmentation method, while it falls behind the other two. Each similarity measure lies between 0 and 1: a value toward 0 means a low degree of similarity between the test image and the manual hand-segmented image, and a value toward 1 means a high degree of similarity. On the basis of these results, it is concluded that an image with fewer objects yields a higher-quality segmented image, and an image with higher differentiation between objects gives better segmentation results. The higher the intensity of a selected pixel in an object, the better the resulting segmented image. Likewise, if fewer objects are selected for separation from the rest, the resultant segmented image is of higher quality. For future work, it would be interesting to compare other widely used segmentation algorithms, such as Mean-shift and Normalized Cuts, with the ones presented here, and distance measures such as precision and recall could be used to rank the performance of the algorithms.

References

1. Cardoso, J.S. and Corte-Real, Luis, Toward a Generic Evaluation of Image Segmentation, IEEE Transactions On Image Processing, Vol. 14, No. 11, pp. 1773-1782 (2005).

2. Gonzalez, Rafael C. and Woods, Richard E., Digital Image Processing, 2nd ed., Pearson Prentice Hall, New Delhi, pp. 589-656 (2006).

3. Jain, A.K. and Dubes, Richard C., Algorithms for Clustering Data, Prentice Hall, New Jersey, pp. 1-19 (1988).

4. Liao, P.S., Chen, T.S. and Chung, P.C., A Fast Algorithm for Multilevel Thresholding, Journal of Information Science and Engineering, pp. 713-727 (2001).

5. Martin, D., Fowlkes, C., Tal, D. and Malik, J., A Database of Human Segmented Natural Images and its Application to Evaluating Segmentation Algorithms and Measuring Ecological Statistics, International Conference on Computer Vision (ICCV), Vol. 2, pp. 416-423 (2001).

Figure 1.4: Comparison of manual hand segmentations with proposed technique

6. Meila, M., Comparing Clusterings: An Axiomatic View, Proc. of 22nd International Conference on Machine Learning, Vol. 119, Germany, pp. 577-584 (2005).

7. Pal, S. K. and Pal, N. R., A Review on Image Segmentation Techniques, Pattern Recognition, India, Vol. 26, No. 9, pp. 1277-1294 (1993).

8. Pantofaru, C. and Hebert, M., A Comparison of Image Segmentation Algorithms, Technical Report, Robotics Institute, Carnegie Mellon University (2005).

9. Pavlidis, Theo and Liow, Y.T., Integrating Region Growing and Edge Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, New York, Vol. 12, No.3, pp. 225-233 (1990).

10. Udupa, Jayaram K., LeBlanc, R., Zhuge, Y., Imielinska, C., Schmidt, H., Currie, Leanne M., Hirsch, Bruce E. and Woodburn, J., A Framework for Evaluating Image Segmentation Algorithms, Elsevier, Computerized Medical Imaging and Graphics, France, Vol. 30, pp. 75-87 (2006).

11. Unnikrishnan, R. and Hebert, M., Measures of Similarity, IEEE Workshop on Applications of Computer Vision, Pittsburgh, Vol. 1, pp. 394-400 (2005).

12. Unnikrishnan, R., Pantofaru, C. and Hebert, M., A Measure for Objective Evaluation of Image Segmentation Algorithms, Proc. CVPR Workshop Empirical Evaluation Methods in Computer Vision, USA, (2005).

13. Unnikrishnan, R., Pantofaru, C. and Hebert, M., Toward Objective Evaluation of Image Segmentation Algorithms, IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 29, No. 6, pp. 929-944 (2007).

14. Yong, X., Feng, D. and Rongchun, Z., Optimal Selection of Image Segmentation Algorithms Based on Performance Prediction, Conferences in Research and Practice in Information Technology, Sydney, Vol. 36, pp. 1-4 (2004).

15. Zhang, Y.J., The Evolution of Evaluation for Image Segmentation, SCATI Journal on Evaluation of Treatments on Low Level System Vision, Beijing, pp. 1-44 (2004).

16. http://en.wikipedia.org/wiki/Segmentation_(image_processing).

17. The Berkeley Segmentation Dataset of natural images is available at: http://www.cs.berkeley.edu/projects/vision/bsds.


Image        Similarity Measure   Thresholding   Edge Detection   Region Growing   Proposed Technique
star.jpg     Rand_index           0.4887         0.4871           0.4939           0.5765
             Simple_coeff         0.8374         0.8266           0.7850           0.9309
             Jaccard_coeff        0.8361         0.8257           0.7812           0.9306
             FM_index             0.9115         0.9053           0.8801           0.9641
pic1.jpg     Rand_index           0.4916         0.4913           0.4992           0.5527
             Simple_coeff         0.8539         0.8535           0.9477           0.9536
             Jaccard_coeff        0.8584         0.8531           0.9466           0.9536
             FM_index             0.9246         0.9216           0.9726           0.9762
church.jpg   Rand_index           0.4930         0.4912           0.4939           0.5815
             Simple_coeff         0.9165         0.8964           0.9236           0.9342
             Jaccard_coeff        0.9152         0.8950           0.9223           0.9331
             FM_index             0.9557         0.9446           0.9596           0.9659
bird.jpg     Rand_index           0.4865         0.4875           0.4898           0.5727
             Simple_coeff         0.8486         0.8357           0.8106           0.9283
             Jaccard_coeff        0.8478         0.8345           0.8083           0.9282
             FM_index             0.9180         0.9104           0.8953           0.9630
plane.jpg    Rand_index           0.4994         0.4920           0.5014           0.5477
             Simple_coeff         0.9583         0.9019           0.9681           0.9584
             Jaccard_coeff        0.9576         0.9104           0.9674           0.9583
             FM_index             0.9783         0.9482           0.9823           0.9788

Table 1: Values of Similarity Measures on Proposed Technique


Functional Role of Remote Sensing in Studies of Land-Use & Land-Cover Change

Aditya Kumar Gupta
Asstt. Professor, Deptt. of Computer Applications, School of Management Sciences, Varanasi, [email protected]

ABSTRACT

As its own area of global change research, land-use and land-cover change focuses on the characteristics, causes, and consequences of shifts in vegetation and other types of land cover. This area of research recognizes that human activities affect most, if not all, "natural systems" and so incorporates anthropogenic disturbance into ecological studies. Land use is but one disturbance that affects any area on the Earth, and it occurs in the context of natural variability and disturbance. Land use alters land cover, thereby affecting ecosystem processes. Even without specifically altering land cover, land use affects ecosystem functioning through intensification of usage. Either type of change alters biophysical, biogeochemical, and hydrological states and processes.

Remote sensing measurements of spectral signatures provide data on spectral color, temperature, moisture content, and organic and inorganic composition, as well as the spatial properties of areal extent, geometry (size, shape, and texture), and position. Satellite-based remote sensing combined with limited field survey provides valuable information in a short time. The data generated from satellite imagery provide information on land-use and land-cover change under observation at any particular date, and can be processed quickly. Widespread application of remote sensing and GIS will help create a systematic and sharable database on related issues and answer the open questions of different stakeholders. This paper provides a holistic view of present and future applications of Remote Sensing and GIS technology for the study of land-use and land-cover change.

Introduction

Terrestrial ecology, the study of the land, depends on data derived from space-borne sensors. Not just limited to finite locations on the ground, terrestrial ecology integrates biospheric, atmospheric, and hydrologic processes at a variety of spatial scales. Increasing awareness of and appreciation for the insight that remote sensing techniques and data lend to these studies have expanded the role of satellite imagery in ecology. Indeed, remote sensing has been called essential "for addressing the role of ecological complexity in global processes" by Matson and Ustin. The power remote sensing offers ecological studies extends beyond the ability to map vegetation classes on the Earth's surface: functional variables can be derived from satellite data. The information ecology can then generate becomes integral to global change research. In particular, land-use/land-cover change research comprises both one of the main ecological applications of remote sensing and one of the primary foci of global change research. After a brief review of some fundamental ecological concepts, the role of remote sensing in studies of land-use/land-cover change will be considered.

Scaling Structure and Function in Ecological Studies

Structure and function comprise the two major components of ecological systems. Structure, concerned with pattern, deals with the physical aspects of land cover, such as vegetation-type distribution, canopy structure, and leaf and stem areas. Function describes the processes that transfer matter and energy in the system, such as gas fluxes and water exchange between the biosphere and atmosphere. Structure and function are linked. Due to the complexity of their linkages, however, ecological studies usually concentrate on one or the other. Yet to fully understand how ecosystems operate, the two need to be integrated. In general, variations in canopy structure, such as size


and spacing, usually accompany functional differences among vegetation types.

Scale presents another challenge for ecological studies. Both structural attributes and functional processes involve nonlinear components, so plot-level measurements cannot simply be aggregated; data need to be collected at scales coarser than the plot level in order to understand large-scale dynamics. Remote sensing addresses this concern by providing data products with broad spatial extent. The measurement taken by the sensor is the result of interactions of electromagnetic radiation with surface constituents, including the soil and vegetation layers. With knowledge of the particular characteristics of canopy and landscape structures, remote sensing also allows structure and function to be linked in ecological studies. For example, photosynthesis and carbon-allocation processes are constrained by the amount of incoming radiation, which is affected by the canopy configuration; in turn, ecosystem structure emerges from the collective functioning of vegetation components. These connections will be further explored in later sections.

The Global Change of Land Use and Land Cover

NASA's Earth Science Division has developed the Land Cover Land Use Change (LCLUC) Program for the purpose of monitoring changes in land cover and understanding the consequences of land-cover and land-use change for the continued provision of ecological goods and services. Land-use/land-cover change can affect biogeochemical cycling, biophysical processes, biodiversity, trace gas and particulate fluxes, and coastal zone conditions. Accordingly, the program needs both basic and applied research in ecological structure and function. The status of land cover inherently affects the vitality of ecosystems by virtue of the ecological processes its structure supports. Research, then, needs to examine the current distribution of cover types and their past and future conversion. Of special interest is the conversion of forests, due to the trace gas fluxes that occur in these systems.

Land use affects ecosystem processes in two further ways besides changing vegetative composition: intensification and degradation. For example, intensification can alter hydrological and biogeochemical cycles through such activities as irrigation and fertilization in agricultural systems.

The Roles of Remote Sensing in Land-Use/Land-Cover Change Studies

These coarse-scale concerns, and the repeated coverage necessary for studying processes of change, require the use of remote sensing for the retrieval of appropriate data. Remote sensing enters land-use/land-cover studies in three types of analyses: 1) spatial assessments through vegetation mapping and classification; 2) productivity assessments through vegetation indices; and 3) process studies with parameters specified by data derived from satellite imagery. The spatial assessment aspect of land-use/land-cover change requires repeated global inventories of land cover. Studies at the ecosystem and biome scales require canopy information, such as leaf area index (LAI) and the fraction of absorbed photosynthetically active radiation (fAPAR), for understanding ecosystem processes, such as gas exchange and nutrient cycling. To understand the consequences of cover-type change and land-use intensification, real data must be entered into realistic ecosystem process models. The accuracy of ecosystem process models is increased, then, by using remotely-sensed data to define or constrain plant allocation parameters. The data needed at large scales for a variety of modeling include vegetation cover type, structural parameters (e.g. effective LAI), and biophysical parameters (effective fAPAR and albedo). Improved spatial resolution of structure, such as live vegetation and litter, is needed to detect small changes. This article considers all three types of remote sensing applications.
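To make the link between satellite-derived biophysical parameters and process models concrete, the classic efficiency-factor (light-use-efficiency) formulation estimates production from fAPAR and incident radiation. A minimal sketch follows; the efficiency value and inputs are illustrative assumptions, not values from this article:

```python
# Light-use-efficiency (efficiency factor) production model:
#   NPP ~ epsilon * fAPAR * PAR
# where fAPAR can be derived from satellite imagery and PAR from
# meteorological data. All numbers here are illustrative only.

def npp_estimate(fapar, par_mj_m2, epsilon_g_mj=0.5):
    """Net primary production (g C / m^2) from absorbed radiation.

    fapar        -- fraction of absorbed photosynthetically active radiation (0-1)
    par_mj_m2    -- incident PAR over the period, MJ / m^2
    epsilon_g_mj -- conversion efficiency, g C per MJ absorbed (assumed value)
    """
    if not 0.0 <= fapar <= 1.0:
        raise ValueError("fAPAR must lie in [0, 1]")
    return epsilon_g_mj * fapar * par_mj_m2

print(npp_estimate(0.6, 1000.0))  # 0.5 * 0.6 * 1000 = 300.0 g C / m^2
```

The satellite data supply the fAPAR term; the efficiency term is the part that process models must parameterize per vegetation type.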

Monitoring Vegetation Change Through Mapping

The most established application of remote sensing in ecological studies is mapping patterns of the Earth's surface. Vegetation mapping allows quantification of the actual vegetation on the ground, providing ecological studies with realistic information instead of potential natural vegetation maps based on observed relationships of vegetation to climatic factors. This type of analysis is fundamental to the study of land cover and land use, and much land-use/land-cover research still focuses on such work. First, it provides the current spatial distribution of land cover and land use for any area that has been imaged by a sensor. Much of the global land-use classification is done with 1-kilometer (km) and 8-km AVHRR data.

Institute of Management Studies, Dehradun


Other sensors, especially Landsat TM and SPOT, are used for finer-scale classifications. Second, repeat coverage allows temporal studies of change. Three decades of satellite imagery now permit decadal studies of land-cover change, and these archives also serve as a baseline for future monitoring and change assessments. Change detection is accomplished by applying algorithms that quantify the magnitude and direction of change. Overall, assessments of land-use/land-cover change can be quantitative and comprehensive. This type of spatial analysis can also help to target study areas.
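Such a change-detection algorithm can be sketched as simple per-pixel differencing between two co-registered dates; the arrays and threshold below are illustrative stand-ins for real imagery:

```python
import numpy as np

def change_map(band_t1, band_t2, threshold=0.1):
    """Per-pixel change magnitude and direction between two dates.

    Returns (magnitude, direction), where direction is +1 for an
    increase, -1 for a decrease, and 0 for pixels whose change
    magnitude falls below the (illustrative) threshold.
    """
    diff = band_t2.astype(float) - band_t1.astype(float)
    magnitude = np.abs(diff)
    direction = np.sign(diff).astype(int)
    direction[magnitude < threshold] = 0
    return magnitude, direction

t1 = np.array([[0.30, 0.55], [0.20, 0.80]])   # e.g. a vegetation index at date 1
t2 = np.array([[0.31, 0.35], [0.50, 0.80]])   # the same band at date 2
mag, direc = change_map(t1, t2)
print(direc.tolist())  # [[0, -1], [1, 0]]
```

Real change-detection systems add radiometric normalization and co-registration steps before such differencing, but the magnitude-and-direction logic is the same.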

Classification techniques and algorithms, such as multi-temporal comparisons and decision trees, have been developed for creating land-cover type maps from satellite images. Classification methods continue to be refined for more specific identification of land-cover types. Especially important are classes representing conditions after landscape conversion, such as secondary-growth tropical forests that establish after deforestation events. Intensification and degradation cannot necessarily be derived from the spatial information in satellite imagery; other information is needed. When categories of intensity can be translated into a land-cover type, some satellite information is useful. For example, Landsat imagery can detect the presence of intensive logging, yet technique development is still required for analyzing intensification and long-term degradation.
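A decision-tree classification of the kind mentioned above reduces to a cascade of binary spectral tests. In this sketch the features, thresholds, and class names are all hypothetical, standing in for rules a real tree would learn from training data:

```python
# Toy decision-tree-style classifier over two spectral features:
# a vegetation index (NDVI) and a shortwave-infrared (SWIR) reflectance.
# Thresholds and classes are invented for illustration only.

def classify_pixel(ndvi, swir):
    if ndvi < 0.2:                      # little green vegetation
        return "water" if swir < 0.1 else "bare/urban"
    if ndvi < 0.5:                      # moderate vegetation signal
        return "grass/crop"
    return "forest"                     # dense canopy

pixels = [(0.05, 0.04), (0.10, 0.30), (0.35, 0.20), (0.80, 0.15)]
print([classify_pixel(n, s) for n, s in pixels])
# ['water', 'bare/urban', 'grass/crop', 'forest']
```

Algorithms such as CART learn the split variables and thresholds automatically from labeled samples rather than hand-coding them as here.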

Beyond assessing spatial changes, land-cover distribution and conversion maps can be used in the development and verification of biogeochemical and biophysical models, as well as in analyses of the effects of spatial patterns and conversion histories on ecosystem structure and biodiversity. The satellite-derived vegetation maps can be compared to the results of models for model verification. They can also serve as inputs to the model by prescribing the ecosystem type in the parameterization of biogeochemical models. The vegetation distribution maps can be used to describe land surface characteristics, such as roughness and resistance to CO2 and water vapor exchange, or ecosystem components, such as carbon-to-nitrogen (C:N) ratios. Through this type of modeling, the changes in land cover can be assessed since actual, instead of literature-derived, values are used.

Further studies can be completed by fusing remote sensing data with other data layers in a geographic information system (GIS). One modeling proposal for land-use/land-cover change studies integrates satellite and socioeconomic data into dynamic deforestation models to understand the characteristics and rates of deforestation, regrowth, and land-use transition. Another study considers alternative approaches to depicting land-cover heterogeneity and change through regional and global biosphere-atmosphere models.

Empirical Sources of Function: Vegetation Indices

Beyond spatial description of change, scientists are interested in the amount of change. Vegetation indices offer quantitative information about vegetation productivity based on spectral information found in satellite imagery. Essentially, the indices serve as a surrogate for vegetation components. Generally, they are useful for continental- to global-scale models, which require coarse resolution inputs. Often, AVHRR has been used to calculate vegetation indices for monitoring land cover and vegetation phenology due to its global daily coverage. Indices can also be calculated from the spectral bands of any other sensor.

The most widely used vegetation index, the normalized difference vegetation index (NDVI), relates near infra-red to visible red reflectances, (NIR - VIS)/(NIR + VIS), in order to take advantage of the differential reflectance characteristics of vegetation in these two spectra. The biological controls on this measure are foliage density and leaf chlorophyll content. NDVI, like most vegetation indices, yields a measure of "greenness", which is empirically related to vegetation structure and function through variables such as LAI, vegetation cover, above-ground biomass, photosynthetic efficiency, fAPAR, and stomatal conductance. These variables, in turn, can be linked to large-scale ecosystem net primary production by using an efficiency factor model. NDVI, though, is strongly affected by the reflectance of non-photosynthetic vegetation (such as woody stems), by soil characteristics, and by the sun and viewing angles of the sensor, so the index does not represent absolute ecophysiological attributes.
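The NDVI formula is straightforward to compute per pixel. A minimal NumPy sketch, with illustrative reflectance values:

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - RED) / (NIR + RED)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    denom = nir + red
    denom[denom == 0] = np.nan   # guard against division by zero
    return (nir - red) / denom

# Dense green vegetation reflects strongly in the NIR and absorbs red,
# so it scores high; bare soil reflects similarly in both and scores low.
print(ndvi([0.50, 0.25], [0.05, 0.20]))  # high for canopy, low for soil
```

The same two bands give the simple ratio, SR = NIR/RED, from which NDVI was derived.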

Multiple indices exist, all attempting to describe vegetative cover more faithfully. NDVI was based on the simple ratio (SR), which exploited the differences between vegetation reflectances in the near infra-red and visible red spectra: NIR/VIS. Other indices have
been developed in an attempt to account for the variables that affect the calculation of vegetation content. Several, including the optimized soil-adjusted vegetation index, the modified soil-adjusted vegetation index, and the transformed soil-adjusted vegetation index, specify the adjustment factor of the soil-adjusted vegetation index (SAVI) differently in an attempt to better account for substrate reflectance. Instead of relying on ratios, orthogonal indices depend on the existence of a "soil line" in spectral space; the most widely used is the greenness index, or green vegetation index, defined by the tasseled cap method.
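For reference, the base soil-adjusted index from which these variants derive inserts a soil-brightness correction factor L into the NDVI ratio (L = 0.5 is a common default; the reflectance values below are illustrative):

```python
def savi(nir, red, L=0.5):
    """Soil-adjusted vegetation index.

    A soil-brightness correction factor L is added to the NDVI
    denominator; the optimized/modified/transformed variants differ
    mainly in how this adjustment factor is specified.
    """
    return (nir - red) / (nir + red + L) * (1.0 + L)

# Over a sparsely vegetated, bright-soil pixel, SAVI damps the
# soil-driven part of the signal relative to plain NDVI.
nir, red = 0.30, 0.20
print(round((nir - red) / (nir + red), 3), round(savi(nir, red), 3))  # 0.2 0.15
```

As L approaches zero, SAVI reduces to NDVI; larger L values suppress substrate influence more strongly.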

All of these vegetation indices are correlated with vegetation cover, although to varying degrees in different environments. New techniques for extracting ecological variables from satellite imagery include combining NDVI with texture analysis to constrain LAI better and performing multiple regression analyses directly on spectral bands instead of using vegetation indices. Most likely, these empirical methods will never be exact, so quantitative methods also have been explored.

Quantitative Sources of Function: Radiative Transfer Model Inversion

The problem of identifying the causes and consequences of land-use/land-cover change, part of the LCLUC Program mission, requires that the functional aspects of ecosystems be examined. This involves incorporating land-use change into models that can couple land use with biogeochemical, biophysical, and atmospheric dynamics. New techniques in quantitative data retrieval allow the application of remote sensing to expand from a role of mostly spatial description to one in which functional relationships can be examined. The basic concept is to relate structural attributes retrieved from satellite imagery directly to functional variables. These ecological variables can then be incorporated into models in place of literature values in order to make more realistic assessments of the linkages between changes in canopy structure and biogeochemical processes. This technique holds promise for creating a general method of linking structure to function.

Biogeochemical models require the parameterization of plant carbon allocation, either from knowledge of plant phenological properties or from proxies derived from remotely-sensed data that constrain aboveground carbon pools via variables such as LAI or a non-photosynthetic vegetation index (NPVI). Accurate estimation of these variables will then strengthen biogeochemical models. These variables are derived from satellite imagery by the inversion of radiative transfer models, which use physical process algorithms to explain how large-scale structural and biophysical attributes affect canopy reflectance. By iteratively changing these variables and comparing the resultant modelled reflectance to an image reflectance, the structural variables driving the reflectance in the image can be derived. The model parameters are then forward-integrated to yield bulk biophysical properties such as fAPAR and albedo.
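The iterate-and-compare inversion described above can be sketched with a deliberately simplified one-parameter forward model. The Beer's-law mixing of leaf and soil reflectance here, and all constants, are invented stand-ins for a real radiative transfer code:

```python
import numpy as np

# Toy forward model: canopy reflectance as a Beer's-law mixture of a
# leaf and a soil signal, controlled by a single variable, LAI. A real
# radiative transfer model would also account for leaf angles, multiple
# scattering, and view geometry.
R_LEAF, R_SOIL, K = 0.45, 0.15, 0.5   # illustrative constants

def forward_reflectance(lai):
    gap = np.exp(-K * lai)                 # fraction of soil still visible
    return R_SOIL * gap + R_LEAF * (1.0 - gap)

def invert_lai(observed, lai_grid=np.linspace(0.0, 8.0, 801)):
    """Grid-search inversion: pick the LAI whose modelled reflectance
    best matches the observed image reflectance."""
    errors = (forward_reflectance(lai_grid) - observed) ** 2
    return lai_grid[np.argmin(errors)]

true_lai = 3.0
obs = forward_reflectance(true_lai)        # pretend this came from imagery
print(invert_lai(obs))                     # recovers ~3.0
```

Operational inversions replace the grid search with numerical optimization and invert several structural variables at once, which is why the constraints discussed below become necessary.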

This physically-based method also exploits the bi-directional reflectance distribution function (BRDF) of vegetation by explicitly incorporating multi-view-angle measurements into the process, allowing more accurate calculation of values. Vegetation reflects radiation anisotropically, depending on leaf, canopy, and landscape structural and compositional characteristics and on illumination and viewing angles. Multi-view-angle measurements of vegetation BRDF allow improved access to canopy structural characteristics (e.g. LAI) and the simultaneous retrieval of biophysical variables (e.g. fAPAR). The BRDF parameters are constrained either by specifying a possible range of values for a variable through theory and field measurements, or by finding relationships between variables that force them to covary systematically during model inversion. Leaf optical properties constrain the BRDF model inversions best since they are stable, whereas LAI and leaf angle distribution vary spatially, temporally, and within and among species. Studies of spectral behavior in AVHRR Bands 1 and 2 have found that green and senescent foliage should be treated differently in BRDF inversions. BRDF inversion techniques aim for applicability to new environments. The method can also be applied to standard techniques for improved information extraction. Multi-view-angle and geometric-optical inversions can help improve classification and mapping by backing out crown dimensions (geometry) and spacing (shadowing). Since these canopy characteristics are usually more stable than LAI or fAPAR, they provide a more concrete signal of vegetation type.

The Earth Observing System

The sensors brought on line as part of NASA's Earth Observing System aid the study of land-use/land-cover change immensely. NOAA's AVHRR
sensor had been one of the few instruments capable of acquiring off-nadir radiance measurements with adequate repeatability for BRDF model inversions. New sensors, though, have multiple viewing capacities. The high spectral resolution provided by the new sensors allows classifications to be checked for accuracy and refined further, both for cover types and for the chemical composition of plant canopies (including photosynthetic pigment components and nitrogen status). The 20 optical channels and multi-angle viewing capability of MODIS revolutionize measurements of key ecological variables. For example, a 250-meter global land-use/land-cover change product is available from the red and infra-red bands on MODIS, which provides information about vegetation, as outlined above. The BRDF inversion techniques benefit from shortwave infra-red channels that have increased the number of unique optical channels. MODIS increases the spatial and spectral resolution, and the MISR instrument the multiple-view-angle coverage, over what was already available from AVHRR.

Conclusions

Traditional and innovative techniques incorporate data derived from remote sensing into land-use/land-cover change studies. Vegetation classification and mapping continue to be improved, and the resulting data products permit the continued monitoring of land use and land cover. Efforts to understand the ecological consequences of land-use/land-cover change employ traditional (vegetation indices) and new (model inversion) techniques for deriving function from structure. These advancements only reinforce the importance of remote sensing as a component of ecological studies.

The increased interest in maintaining ecological goods and services while monitoring large-scale changes in land cover has increased the importance of remote sensing in ecological studies. The mandate to understand the nature and consequences of land-use/land-cover change requires advances in both basic and applied ecological research. Accordingly, research progress will both advance the intellectual status of ecological research and address societal needs.

References

1. Asner, G.P., C.A. Wessman, D.S. Schimel, and S. Archer. 1998. Variability in leaf and litter optical properties: Implications for BRDF model inversions using AVHRR, MODIS, and MISR. Remote Sensing of Environment, 63:243-257.

2. Crist, E.P. and R.J. Kauth. 1986. The Tasseled Cap de-mystified. Photogrammetric Engineering and Remote Sensing, 52:81-86.

3. DeFries, R. 1998. Characterizing land cover heterogeneity and land cover change from multisensor satellite data. Project Abstracts.

4. Earth Sciences Portal, NASA Goddard Space Flight Center.

5. Fischer, A., S. Louahala, P. Maisongrande, L. Kergoat, and G. Dedieu. 1996. Satellite data for monitoring, understanding and modelling of ecosystem functioning. In B. Walker and W. Steffen, eds. Global Change and Terrestrial Ecosystems. Cambridge University Press: New York. pp 566-591. ISBN: 0521578108

6. Green, K., and D. Weinstein. Automated Change Detection Using Remotely Sensed Data. Pacific Meridian Resources. unpublished manuscript.

7. NASA's Earth Observing System.

8. NASA's Earth Observing System Program Office.

9. NASA's Land Cover Land Use Change Program.

10. NASA's MISR Program.

11. NASA's MODIS Program.

26

Functional Role of Remote Sensing in Studies...

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Page 31: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

been developed in an attempt to account for the variables that affect the calculation of vegetation content. Other indicesthe optimized soil-adjusted vegetation index, the modified soil-adjusted vegetation index, and the transformed soil-adjusted vegetation index specify SAVI's adjustment factor differently in an attempt to better account for substrate reflectance. Instead of relying on ratios, orthogonal indices depend on the existence of a "soil line" in spectral space; the most widely used is the greenness index, or green vegetation index defined by the tasseled cap method.

All of these vegetation indices are correlated with vegetation cover, although to varying degrees in different environments. New techniques for extracting ecological variables from satellite imagery include combining NDVI with texture analysis to constrain LAI better and performing multiple regression analyses directly on spectral bands instead of using vegetation indices. Most likely, these empirical methods will never be exact, so quantitative methods also have been explored.

Quantitative Sources of Function: Radioactive Transfer Model Inversion

The problem of identifying the causes and consequences of land-use/land-cover change, part of the LCLUC Program mission, requires that the functional aspects of ecosystems be examined. This involves the incorporation of land-use change into models that can couple land use with biogeochemical, biophysical, and atmospheric dynamics. New techniques in quantitative data retrieval allow the application of remote sensing to expand from a role of mostly spatial description to one where functional relationships can be examined. The basic concept is to use the structural attributes that can be pulled from satellite imagery to relate directly to functional variables. These ecological variables can then be incorporated into models instead of literature values in order to make more realistic assessments of linkages between changes in canopy structure and biogeochemical processes. This technique holds promise for creating a general method of linking structure to function.

Biogeochemica l mode l s requi re the parameterization of plant carbon allocation either from knowledge of plant phonological properties or from proxies derived from remotely-sensed data that constrain aboveground carbon pools via variables such

as LAI or non-photosynthetic vegetation index (NPVI). Accurate estimation of these variables will then strengthen biogeochemical models. These variables are derived from satellite imagery by the inversion of radioactive transfer models. The radioactive transfer models use physical process algorithms to explain how large-scale structural and biophysical attributes affect canopy reflectance. By iteratively changing these variables and comparing the resultant reflectance from the model to an image reflectance, the structural variables driving the reflectance in the image can be derived. The model parameters are then forward-integrated to yield bulk biophysical properties such as fAPAR and albedo.

This physically-based method also exploits the bi-directional reflectance distribution function (BRDF) of vegetation by explicitly incorporating multi-view angle measurements into the process. This allows more accurate calculation of values. Vegetation reflects radiation anisotropically, depending on the leaf, canopy, and landscape structural and compositional characteristics and illumination and viewing angles. The multi-view angle measurements of vegetation BRDF will allow improved access to canopy structural characteristics (e.g. LAI) and the simultaneous retrieval of biophysical variables (e.g. fAPAR). The BRDF parameters are constrained either by specifying a possible range of values for a variable through theory and field measurements, or by finding relationships between variables that force them to systematically co vary during model inversion. Leaf optical properties constrain the BRDF model inversions best since they are stable, and LAI and leaf angle distribution vary spatially, temporally, and within and among species. Studies of spectral behavior in AVHRR Bands 1 and 2, have found that green and senescent foliage should be treated differently in BRDF inversions. BRDF inversion techniques aim for applicability to new environments. The method can also be applied to standard techniques for improved information extraction. Multi-view angle and geometric-optical inversions can help improve classification and mapping by backing out crown dimensions (geometry) and spacing (shadowing). Since these canopy characteristics are usually more stable than LAI or fear, they will provide a more concrete signal of vegetation type.

The Earth Observing System

The sensors brought on line as part of NASA's Earth Observing System aid the study of land-use/land-cover change immensely. NOAA's AVHRR sensor had been one of the few instruments capable of acquiring off-nadir radiance measurements with adequate repeatability for BRDF model inversions. New sensors, though, have multiple viewing capacities. The high spectral resolution provided by the new sensors allows the accuracy of classification into different cover types to be reviewed and refined, along with the chemical composition of plant canopies (including photosynthetic pigment components and nitrogen status). The 36 optical channels of MODIS and the multi-angle viewing capability of MISR revolutionize measurements of key ecological variables. For example, a 250-meter global land-use/land-cover change product is available from the red and infra-red bands on MODIS, providing information about vegetation as outlined above. The BRDF inversion techniques benefit from shortwave infra-red channels that increase the number of unique optical channels. MODIS thus improves on the spatial and spectral resolution, and MISR on the multiple-view-angle data, already available from AVHRR.

Institute of Management Studies, Dehradun

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Conclusions

Traditional and innovative techniques incorporate data derived from remote sensing into land-use/land-cover change studies. Vegetation classification and mapping continue to be improved. The resulting data products permit the continued monitoring of land use and land cover. Efforts to understand the ecological consequences of land-use/land-cover change employ traditional (vegetation indices) and new (model inversion) techniques for deriving function from structure. These advancements only reinforce the importance of remote sensing as a component of ecological studies.

The increased interest in maintaining ecological goods and services while monitoring large-scale changes in land cover has increased the importance of remote sensing in ecological studies. The mandate to understand the nature and consequences of land-use/land-cover change requires advances in both basic and applied ecological research. Accordingly, research progress will both advance the intellectual status of ecological research and address societal needs.

References

1. Asner, G.P., C.A. Wessman, D.S. Schimel, and S. Archer. 1998. Variability in leaf and litter optical properties: Implications for BRDF model inversions using AVHRR, MODIS, and MISR. Remote Sensing of Environment, 63:243-257.

2. Crist, E.P. and R.J. Kauth. 1986. The Tasseled Cap de-mystified. Photogrammetric Engineering and Remote Sensing, 52:81-86.

3. DeFries, R. 1998. Characterizing land cover heterogeneity and land cover change from multisensor satellite data. Project Abstracts.

4. Earth Sciences Portal, NASA Goddard Space Flight Center.

5. Fischer, A., S. Louahala, P. Maisongrande, L. Kergoat, and G. Dedieu. 1996. Satellite data for monitoring, understanding and modelling of ecosystem functioning. In B. Walker and W. Steffen, eds. Global Change and Terrestrial Ecosystems. Cambridge University Press: New York. pp 566-591. ISBN: 0521578108

6. Green, K., and D. Weinstein. Automated Change Detection Using Remotely Sensed Data. Pacific Meridian Resources. unpublished manuscript.

7. NASA's Earth Observing System

8. NASA's Earth Observing System Program Office.

9. NASA's Land Cover Land Use Change Program.

10. NASA's MISR Program

11. NASA's MODIS Program


Impact of Internal Dynamics on Quality of Open Source Software

Kumud K. Arora

ABSTRACT

Software is a critical and strategic asset in an organization's business. The challenge is to develop more complicated software products within the constraints of time and resources without sacrificing quality. Quality standards, methodologies and techniques have been continually proposed by researchers and used by software engineers in industry. Several measures of success/progress for an OSS project have been proposed in the literature: counts of system failure, i.e. counts of abnormal terminations and of non-conformance to user requirements [20]; number of downloads, page views, number of users, number of developers, re-use of code, bug-fix turnaround time, and development of the first stable release [21, 22, 20]. Due to the large number of product, project and people parameters that impact software development efforts, measurement of open source software quality is a complex undertaking. The ultimate perspective from which quality is measured is customer satisfaction, which is itself an intangible parameter, and no mature method or metrics have yet been developed to evaluate the quality of OSS.

In this article, some important software characteristics that contribute to quality in OSS are explored. Studies on OSS have mainly examined a small number of large, well-known and successful case studies [5]. Here, the prior studies on OSS quality-promoting factors are extended: more specifically, the attributes that quality-related interventions must possess to be feasibly adoptable in open source practice are analyzed. A model for the impact of parameters promoting quality in OSS is developed and empirically tested using data collected from OSS projects hosted at SourceForge. The sample includes a variety of projects, ranging from large to small, and from projects that are successful to projects that do not progress well. A few open source software criteria used for quality evaluation are: software size in terms of source lines of code (SLOC); percentage of lines of comments with respect to the number of lines of code (PerCM), which describes the self-descriptiveness of the code; number of releases; bug tracking; bug removal pattern; and software maturity index (SMI), an indication of the stability of the software product.
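Two of the criteria named above can be computed directly. The sketch below uses the standard Software Maturity Index formula from IEEE Std 982.1, SMI = (M_T - (F_a + F_c + F_d)) / M_T, where M_T is the number of modules in the current release and F_a, F_c, F_d are the modules added, changed and deleted; the counts fed to it are invented for illustration:

```python
def per_cm(comment_lines, code_lines):
    # PerCM: percentage of comment lines relative to lines of code,
    # a proxy for the self-descriptiveness of the source.
    return 100.0 * comment_lines / code_lines

def smi(modules_total, added, changed, deleted):
    # Software Maturity Index, SMI = (M_T - (F_a + F_c + F_d)) / M_T;
    # it approaches 1.0 as the product stabilises between releases.
    return (modules_total - (added + changed + deleted)) / modules_total

# Invented counts for one hypothetical release.
print(per_cm(150, 1000))            # 15.0
print(round(smi(120, 4, 6, 2), 2))  # 0.9
```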

Key Words: OSS (Open Source Software), OSSD (Open Source Software Development)

Introduction

In countries with growing economies, where massive IT deployments cannot have very high budgets, the Open Source Software (OSS) approach can be used as a rescue measure. With the rising costs of hardware, software and associated licensing fees, the IT community is forced to review its options for implementing innovative technology solutions other than proprietary systems. The alternative to proprietary systems coming to the forefront is Open Source Software. The concept of free software can be traced back to the Free Software Foundation, founded by Richard Stallman, a researcher at MIT, who developed and distributed software under the General Public License (GPL). OSS, a paradigm shift from the closed software product, is slowly revolutionizing the processes involved in the software development life cycle. Bruce Perens defined a set of guidelines, the Open Source Definition, adopted by the Open Source Initiative (OSI). OSS is a general term for a subculture or community that works in collaboration, using the internet as its medium of communication. The philosophy of Free Software is not about cost; the theory behind Free Software is that only when the code is available to read can innovation truly occur. OSS grants not only developers but also users, who are potential developers, the right to read and change the source code. OSS development has been conceptualized as a phenomenon at the community and team/group level. OSS provides an ongoing project to new developers. Developers, users and users turned developers form a community of practice informally bounded by their common interest and practice in a specific domain. Community members regularly interact with each other for knowledge sharing. Learning is one of the major motivational forces that attract software developers and users to participate in OSS development and to become members of the OSS community. Eric Raymond states that developers are attracted to open source development because it gives them an opportunity to demonstrate their ability: when a programmer's code gets accepted, it boosts their ego and they get recognized for their effort in the community. Peer recognition creates reputation, and a reputation as a good programmer is a great motivating factor [11]. The distinction between users and developers is very small. A strict hierarchical structure does not exist in OSS communities, yet the structure of OSS communities is not completely flat either.

Sr. Lecturer, Inderprastha Engineering College, Ghaziabad. Email: [email protected]

Although the evolution of an OSS system is not well planned, "giving users of a product access to its source code and the right to create derivative works allows them to help themselves, and encourages natural product evolution as well as preplanned product design [14]." Research has assessed the output of individual OSS projects and examined how the dynamics within OSS projects influence the development of the software product (e.g. [7, 25]). Some indicators applied to commercial software projects, e.g. being on time, on budget and meeting specifications, may not be easily applied in the OSS setting, as there may be no a priori budget, timeline or set of specifications [25]. Also, since OSS often depends on volunteer labor, the extent to which a project attracts and retains developers matters. Software quality is one of the most important metrics for the success of a software project. Barry Boehm defines software quality as "achieving high levels of user satisfaction, portability, maintainability, robustness and fitness for use" [20]. Jones refers to quality as "the absence of defects that would make software either stop completely or produce unacceptable results" [21]. However, these definitions of quality cannot be applied directly to OSS: unlike in closed software systems, user requirements are not formally available in OSS. The OSS phenomenon derives its strength by harnessing the active participation of core developers, the peripheral group and beta testers.

Issues for OSSD and OSS Dependability

The informal OSS approach contrasts with more formal software engineering processes. In OSS, tools are geared towards enhancing human collaboration and co-ordination during development activities [31][32], whereas software engineering has traditionally been oriented towards reducing and deskilling the human role of the developer through tool support and methods that automate the software construction task wherever possible. At the process level, the OSS approach is not subjected to the same negative external process constraints of time and budget that can often subtly undermine the development of dependable systems within an organizational setting. Furthermore, despite characterizations of the OSS approach as highly ad hoc and chaotic, OSS projects in many cases appear to be highly organized and provide tool support focused upon enhancing human collaboration, creativity, skill, and learning - considered vital in developing trustworthy systems. However, the five most important issues with Open Source Software Development are as follows:

a) User interface design: In OSSD the interface is often not intuitive, as open source developers believe in "programming for the self."

b) Documentation: In open source projects the documentation is usually intended as a general guide rather than a complete manual that can guide a novice user and help the user figure out how to do something. Open source software core documentation is found in Usenet articles, bulletin boards and chat logs.

c) Feature-centric development: In OSSD, it is the individual programmer who wants to add a feature to a project. With so much emphasis on features, the fundamental aspects of a programming project (like coding standards, security and project direction) sometimes go missing.

d) Programming for the self: Open source community members invest time only in building the features they themselves would want to use, rather than building for a specific audience.

e) Vulnerability to attacks: A potential problem particularly associated with OSS is vulnerability to attacks through the distribution of maliciously altered versions of software systems.




OSS Dependability

OSS dependability is concerned with how such systems can be designed and developed to provide an acceptable continuity of service when faults give rise to errors that may affect the expected delivery of service. The impairments of dependability are faults, errors, and failures. There exists a collection of methods and techniques to promote the ability to deliver a service on which reliance can be placed, and to establish confidence in the system's ability, that help accomplish: (a) fault prevention (preventing fault occurrence or introduction); (b) fault removal (reducing the number or seriousness of faults); (c) fault tolerance (providing a service that complies with the specification in spite of faults); and (d) fault forecasting (estimating the present number, future incidence, and consequences of faults).
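As a minimal illustration (not taken from the paper), fault tolerance in the sense of (c) can be sketched as a call that falls back to redundant replicas when the primary raises an error:

```python
def call_with_fallback(primary, replicas):
    # Fault tolerance via redundancy: try the primary service, then each
    # replica in turn, so a single fault need not become a service failure.
    for service in [primary, *replicas]:
        try:
            return service()
        except Exception:
            continue  # fault detected; mask it and try the next replica
    raise RuntimeError("all replicas failed")

def faulty():
    raise IOError("primary down")

print(call_with_fallback(faulty, [lambda: "ok"]))  # ok
```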

In the open-source development process, the large number of contributors gives an increased bug-finding ability through massive peer review of submitted source code. This was characterized by [Raymond, 1999] in his seminal paper evaluating the open-source software development approach as: "Given enough eyeballs, all bugs are shallow." This informal approach to using human diversity for bug finding was, however, exemplified much earlier by [Weinberg, 1971] in his "egoless programming" philosophy. Owing to the voluntary nature of open source, developers naturally "gravitate" to software development work they are interested and/or already knowledgeable in performing [Lang, 2000]. Combined with the reality that open-source developers are also users of the software they develop [Gacek et al., 2001], this reveals that open-source developers have an intrinsically enhanced understanding and knowledge of the user domain in which the software will be deployed.

Proposed framework depicting interrelation of internal dynamics that promote quality of OSS

OSS projects exhibit high quality despite the absence of defined users, requirements, costs or schedules. Research suggests many potential factors that may influence both development and usage success. OSS development has been conceptualized as a phenomenon at the community [2], organizational [10] and team/group level [8]. The work of Stewart [29] suggests a multifaceted understanding of success in OSS from both a development perspective (i.e. success in terms of attracting input and producing output) and a usage perspective (i.e. success in terms of user interest, adoption, and impact related to specific OSS projects). This kind of success may be indicated by such factors as the number of developers involved, the level of project activity (e.g. bug fixes, patches provided, new features and software releases), or project development status (e.g. alpha testing, beta testing, production, etc.) [3, 27]. A circular framework is proposed for the factors that affect/promote the quality of OSS. In this framework five factors are considered: (a) development team size, (b) language & OSSD tools familiarity, (c) bug fixing time, (d) documentation within the open source code, and (e) adaptive & perfective maintenance.

All these factors are interrelated to each other. An increase or decrease in any of the factors impacts other factors.

Figure 1. Framework of Internal dynamics of OSS

The proposed framework is based upon quantitative data analysis of the various open source projects in the sample. In the data collection stage, a few target open source projects were selected following SourceForge's "Most active" rank list as of May 2007. To rank projects according to activity, SourceForge.net uses an activity measure: it takes the number of downloads, the number of times a project is mentioned in the forums and other measures into consideration and combines them to form an activity index. Most F/OSS project data is available as a byproduct of development, maintenance, and system-use activities in F/OSS communities. Ten open source projects (A, B, C, D, E, F, G, H, I, K) were chosen for the sample, with development languages varying across C++, Java, C#, Python, Perl, JavaScript and PL/SQL. The topics of the projects vary from file sharing, instant messaging, gaming and ERP to security systems. The actual names of the projects are not revealed, following standard software engineering ethics [3].
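SourceForge's actual weighting scheme is not published; the sketch below only illustrates the general idea of normalising several measures and combining them into one activity index. The measure names, weights and project figures are assumptions:

```python
def activity_index(projects, weights=None):
    # Toy activity index: normalise each measure to [0, 1] across projects,
    # then combine with (assumed) weights. The real formula is not public.
    weights = weights or {"downloads": 0.5, "forum_mentions": 0.3, "commits": 0.2}
    def norm(metric):
        top = max(p[metric] for p in projects.values()) or 1
        return {name: p[metric] / top for name, p in projects.items()}
    normed = {m: norm(m) for m in weights}
    return {name: sum(w * normed[m][name] for m, w in weights.items())
            for name in projects}

# Hypothetical measures for two anonymised projects.
sample = {
    "A": {"downloads": 73_529_161, "forum_mentions": 900, "commits": 4000},
    "B": {"downloads": 11_584_335, "forum_mentions": 1200, "commits": 1500},
}
ranked = sorted(sample, key=lambda name: activity_index(sample)[name], reverse=True)
print(ranked)  # ['A', 'B']
```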

Impacts of Internal Dynamics on OSS Quality

The timeline of an open source project is often characterized by three distinct phases: initial development (Phase I); user-initiated change requests due to ad hoc partnering between the developers (Phase II); and establishment and re-factoring of the open source development project (Phase III). The quantitative analysis of the archival data of each project is correlated with the evolution of the project functionality through this three-phase timeline via the parameters discussed below.

Impact of Development Team Size

Since the FLOSS development process relies on contributions from active users as well as core developers, development team size reflects the size of this extended team, rather than just the core developers listed on the project page. This parameter can be attributed to the growing interest of OSS community members in the project. Requests for new features start the evolution of the framework, where the idea of the core developer progresses from the first stage of the project timeline to the next phase. The average number of software downloads for the sample projects is shown in the graph below:

Figure 2: Average number of downloads vs. timeline (reported values: 33,976,713; 73,529,161; 24,836,598; 11,584,335)

The success of most OSS projects is typically not judged by profits (although some companies do make profits with OSS); success is judged chiefly by how many people use the software.

Impact of Bug Fixing Time

Software reliability is a key quality factor [26]. Reliability factors are concerned with the behavior of the software: the extent to which it performs its intended functions with the required precision. Reliability has a significant effect on software quality, since user acceptability depends upon the software's ability to function correctly. A project's performance is highly dependent upon the lifespan of bug fixing, i.e. how long it takes to fix bugs from the time they are reported. More bug reports indicate a larger extended community, which also means more help is available to close bugs. The quantitative analysis of the archival data of the project is correlated with the evolution of project reliability through the three-phase timeline via the following parameters:
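The bug-fixing lifespan described here can be computed directly from tracker timestamps; the records below are invented for illustration:

```python
from datetime import datetime
from statistics import median

def fix_times_in_days(bugs):
    # Lifespan of bug fixing: elapsed days from report to closure.
    return [(closed - reported).days for reported, closed in bugs]

# Hypothetical tracker records: (reported, closed) timestamp pairs.
bugs = [
    (datetime(2007, 5, 1), datetime(2007, 5, 4)),
    (datetime(2007, 5, 2), datetime(2007, 5, 30)),
    (datetime(2007, 5, 10), datetime(2007, 5, 12)),
]
print(median(fix_times_in_days(bugs)))  # 3
```

Using the median rather than the mean keeps the measure robust against the few bugs that stay open for a very long time.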

Bug Reporting Location

Reliability assurance activities can be tailored and improved with an understanding of the locations where bugs are reported by users/developers and of their nature. A survey of the bugs reported for the chosen project sample found the following categories and reporting percentages:

Figure 4: Percentage of Bugs Reporting Location
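Given raw report locations, the percentages behind a figure like this can be tallied as follows (the categories and counts below are invented, since the original figure is not reproduced here):

```python
from collections import Counter

def reporting_percentages(reports):
    # Share of bug reports falling in each reporting location/category.
    counts = Counter(reports)
    return {loc: 100.0 * n / len(reports) for loc, n in counts.items()}

# Hypothetical locations for ten bug reports of one project.
reports = ["tracker"] * 6 + ["mailing list"] * 3 + ["forum"]
print(reporting_percentages(reports))
```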

Impact of Language & Tools familiarity

OSS work requires specific skills, and there is a limited pool of people with the knowledge and motivation to contribute productively, leading to potential competition among projects to attract developer effort. In the OSS context, although code is available for inspection, users may not have the necessary background knowledge to evaluate the inner workings and features of a software program before they install it; even if they do have the requisite skill, they may seek to minimize the cognitive effort involved in evaluation by relying on more easily interpreted cues. Familiarity with various OSSD tools helps contributors contribute effectively. A language with good support makes contributors' learning & usage


Page 35: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

OSS Dependability

OSS Dependability is concerned with how such systems can be designed and developed to provide an acceptable continuity of service in the event of such faults giving rise to errors that may affect the expected delivery of service. The impairments of dependability are concerned with faults, errors, and failures .There exists a collection of methods and techniques to promote the ability to deliver a service on which reliance can be placed, and to establish confidence in the system's ability to help accomplish (a) fault prevention (to prevent fault occurrence or introduction), (b) fault removal (to reduce the presence (number, or seriousness) of faults), (c) fault tolerance (to provide a service complying with the specification in spite of faults), and (d) fault forecasting (to estimate the present number, the future incidence, and the consequences of faults).

In open-source development process, since the number of contributors are more, so there is a increased bug finding ability through massive peer reviews of submitted source-code. This was also characterized by [Raymond, 1999] in his seminal paper on evaluating the open-source software development approach as: "Given enough eyeballs, all bugs are shallow".This informal approach to using human diversity for bug finding was, however, exemplified much earlier by [Weinberg, 1971] in his “ego less programming” philosophy. The voluntary nature of open-source, developers naturally “gravitate” to software development work they are naturally interested and/or already knowledgeable in performing [Lang, 2000]. This combined with the reality that open-source developers are also users of the software they develop [Gacek et al., 2001], reveals that open-source developers have an intrinsically enhanced understanding and knowledge of the user-domain in which the software will be deployed.

Proposed framework depicting interrelation of internal dynamics that promote quality of OSS

OSS projects exhibit high quality despite the absence of defined users, requirements, costs, or schedules. Research suggests many potential factors that may influence both development and usage success. OSS development has been conceptualized as a phenomenon at the community [2], organizational [10], and team/group [8] levels. The work of Stewart [29] suggests a multifaceted understanding of success in OSS from both a development perspective (i.e., success in terms of attracting input and producing output) and a usage perspective (i.e., success in terms of user interest, adoption, and impact related to specific OSS projects). This kind of success may be indicated by factors such as the number of developers involved, the level of project activity (e.g., bug fixes, patches provided, new features, and software releases), or project development status (e.g., alpha testing, beta testing, production) [3, 27]. A circular framework is proposed for the factors that affect or promote the quality of OSS. Five factors are considered: (a) development team size, (b) language and OSSD tools familiarity, (c) bug fixing time, (d) documentation within the open source code, and (e) adaptive and perfective maintenance.

All these factors are interrelated; an increase or decrease in any one of them impacts the others.

Figure 1. Framework of Internal dynamics of OSS

The proposed framework is based upon quantitative analysis of data from the sample of open source projects taken. In the data collection stage, a few target open source projects were selected from SourceForge's "Most Active" rank list as of May 2007. To rank projects by activity, SourceForge.net uses an activity measure: it takes the number of downloads, the number of times a project is mentioned in the forums, and other measures into consideration and combines them into an activity index. Most F/OSS project data is available as byproducts of development,


Institute of Management Studies, Dehradun

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

maintenance, and system-use activities in F/OSS communities. Ten open source projects (A, B, C, D, E, F, G, H, I, K) were chosen for the sample, with the development languages varying across C++, Java, C#, Python, Perl, JavaScript, and PL/SQL. The topics of the projects vary from file sharing, instant messaging, gaming, and ERP to security systems. The actual names of the projects are not revealed, following standard software engineering ethics [3].
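The activity index mentioned above can be sketched as a weighted combination of normalized activity measures. The weights, measure names, and project values below are purely illustrative assumptions, since SourceForge's actual formula is not published.

```python
def activity_index(downloads, forum_posts, commits, weights=(0.5, 0.3, 0.2)):
    """Combine activity measures (assumed already normalized to [0, 1])
    into a single index. Weights are illustrative, not SourceForge's."""
    w_d, w_f, w_c = weights
    return w_d * downloads + w_f * forum_posts + w_c * commits

# Hypothetical normalized (downloads, forum mentions, commits) per project.
projects = {
    "A": (0.9, 0.4, 0.7),
    "B": (0.2, 0.8, 0.5),
}
ranked = sorted(projects, key=lambda p: activity_index(*projects[p]), reverse=True)
print(ranked)  # → ['A', 'B']
```

Normalizing each measure first keeps a single dominant raw count (downloads are typically orders of magnitude larger than commits) from swamping the index.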

Impacts of Internal Dynamics on OSS Quality

The timeline of an open source project is often characterized by three distinct phases: the initial development phase (Phase I), user-initiated change requests due to ad hoc partnering between the developers (Phase II), and establishment and re-factoring of the open source development project (Phase III). The quantitative analysis of the project's archival data is correlated with the evolution of project functionality through this three-phased timeline via the parameters discussed below.

Impact of Development Team Size

Since the FLOSS development process relies on contributions from active users as well as core developers, development team size reflects the size of this extended team, rather than just the core developers listed on the project page. This parameter can be attributed to the growing interest of OSS community members in the project. Requests for new features start the evolution of the framework, as the idea of the core developer progresses from the first stage of the project timeline to the next phase. The average number of software downloads for the sample projects is shown in the graph below:

[Bar chart: average downloads per timeline period: 33,976,713; 73,529,161; 24,836,598; 11,584,335 (y-axis 0 to 80,000,000)]

Figure 2: Average number of downloads vs. timeline

The success of most OSS projects is typically judged not by profits (although some companies do make profits with OSS) but chiefly by how many people use the software.
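The phase-wise download averages plotted above reduce to a simple grouped mean. A sketch with hypothetical monthly download figures (the phase labels follow the three-phase timeline described earlier):

```python
# Monthly download counts for one hypothetical project, tagged by phase.
downloads = [
    ("Phase I", 120_000), ("Phase I", 150_000),
    ("Phase II", 900_000), ("Phase II", 1_100_000),
    ("Phase III", 2_400_000),
]

def average_by_phase(records):
    """Mean download count per timeline phase."""
    totals, counts = {}, {}
    for phase, n in records:
        totals[phase] = totals.get(phase, 0) + n
        counts[phase] = counts.get(phase, 0) + 1
    return {phase: totals[phase] / counts[phase] for phase in totals}

print(average_by_phase(downloads))
```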

Impact of Bug Fixing Time

Software reliability is a key factor in software quality [26]. Reliability factors are concerned with the behavior of the software: the extent to which it performs its intended functions with the required precision. Reliability has a significant effect on software quality, since user acceptability depends on the software's ability to function correctly. A project's performance is highly dependent on the lifespan of bug fixing, i.e., how long it takes to fix bugs from the time they are reported. More bug reports are indicative of a larger extended community, and also mean more help is available to close bugs. The quantitative analysis of the project's archival data is correlated with the evolution of project reliability through the three-phased timeline via the following parameters:

Bugs Reporting Location

Reliability assurance activities can be tailored and improved based on an understanding of the locations where bugs are reported by users and developers, and of their nature. A survey of the bugs reported for the chosen project sample yielded the following categories and reporting percentages:

Figure 4: Percentage of bugs by reporting location
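The bug-fixing lifespan parameter above can be derived from tracker timestamps as the time from report to closure. A sketch with hypothetical tracker data (the function name and record format are assumptions, not from the paper):

```python
from datetime import datetime

# (reported, closed) timestamps from a hypothetical bug tracker export.
bugs = [
    ("2007-05-01 09:00", "2007-05-03 17:00"),
    ("2007-05-02 12:00", "2007-05-02 18:00"),
]

def mean_fix_days(records, fmt="%Y-%m-%d %H:%M"):
    """Average bug-fixing lifespan in days (closed minus reported)."""
    spans = [
        (datetime.strptime(closed, fmt) - datetime.strptime(opened, fmt)).total_seconds()
        for opened, closed in records
    ]
    return sum(spans) / len(spans) / 86_400  # seconds per day

print(round(mean_fix_days(bugs), 2))  # → 1.29
```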

Impact of Language & Tools familiarity

OSS work requires specific skills, and there is a limited pool of people with the knowledge and motivation to contribute productively, leading to potential competition among projects to attract developer effort. In the OSS context, although the code is available for inspection, users may not have the necessary background knowledge to evaluate the inner workings and features of a program before they install it; even those with the requisite skill may seek to minimize the cognitive effort of evaluation by relying on more easily interpreted cues. Familiarity with the various OSSD tools helps contributors contribute effectively. A language with good support makes the contributors' learning and usage


process easier. The greater the familiarity of contributors with OSSD tools, the greater the OSS developer motivation, and the greater the development success of the project.

Impact of Documentation in Open Source Code

Qualities that make an OSS process appropriate include a high degree of responsiveness (release early, release often) and being inclusive, reliable, and coherent (understood). A large part of the OSS design process takes place in the discussion space and is archived in the documentation space. Tools are also used to extract relevant data from the large quantity of archived data generated by the design discussions. Documentation helps provide learners a "gentle slope" learning curve. Mailing lists and chat logs contain a wealth of information about a project. The availability of structured documentation along with the source code will enhance OSS usage success, which in turn will enhance the motivation of developers to contribute to a project and thereby have a positive effect on development success.

Impact of Perfective & Adaptive Maintenance

OSS development emphasizes the maintainability of the released software. Making software available on the Internet allows developers around the world to contribute code, adding new functionality (parallel development), improving existing functionality, and submitting bug fixes to the current release (parallel debugging). A well-known conjecture in modern software engineering is that external quality characteristics are correlated with internal quality characteristics. Measurement of the source code provides useful information for assessing its quality, predicting to some extent the external quality characteristics of the system. Project output affects user interest, and feedback from usage success to development success motivates OSS developers. More active projects are more popular: the greater the user interest in a project, the wider the audience for individual contributions and therefore the more visible the efforts of contributors.

Conclusions

The goals of open-source software development are not unique. It aims to sustain end-user confidence and goodwill while minimizing development and quality assurance costs. The OSSD process tries to limit regression errors, thereby avoiding breaking features or degrading performance relative to prior releases. Frequent "beta" releases, e.g., several times a month, are used to ensure consistent quality of OSS. The overall results from the facts stated above seem to indicate an acceptable level of OSS quality during the

development phase of the open source project. As far as the functionality of OSS is concerned, ad hoc partnering between developers and user-initiated change requests has proved its suitability and acceptability for systems with horizontal requirements. The "many eyeballs" claim also needs to be explored further with more quantitative studies. If structured documentation is made available to users along with the source code, it will promote OSS usage success among common users. Various other quality metrics may be used to explore improvement in the development success of open source projects. Though contributors continuously carry out perfective and adaptive maintenance, the evolution of the maintainability of OSS has to be investigated further.

In conclusion, open source software quality depends upon a variety of interdependent factors and remains an open issue. Research is ongoing to develop quality metrics for open source software to establish its reliability in contrast to proprietary software. Open source software continuously strives for better quality because its developers are also its users.

References

1. M. S. Elliott and W. Scacchi. Free software development: Cooperation and conflict in a virtual organizational culture. In S. Koch, editor, Free/Open Source Software Development. Idea Publishing, 2004.

2. L. Gasser and W. Scacchi. Continuous design of free/open source software: Workshop report and research agenda, October 2003. http://www.isrl.uiuc.edu/~gasser/papers/CD-OSS-prelim-report.pdf.

3. S. Koch and G. Schneider. Results from software engineering research into open source development projects using public data.

4. Libre Software Engineering tool repository. http://barba.dat.escet.urjc.es/index.php

5. A.Mockus, R. T. Fielding, and J. Herbsleb. Two case studies of open source software development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology, 11(3):138, July 2002.

6. G. Ripoche and L. Gasser. Scalable automatic extraction of process models for understanding F/OSS bug repair. In Proceedings of the International Conference on Software & Systems Engineering and their Applications (ICSSEA'03), Paris, France, December 2003.

Institute of Management Studies, Dehradun

31 "Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

7. John C. Georgas, Michael M. Gorlick, and Richard N. Taylor. Raging Incrementalism: Harnessing Change with Open Source Software. Institute for Software Research, University of California.

8. Raymond, E. S. The Cathedral and the Bazaar. O'Reilly & Associates, 2000.

9. Siraj A. Shaikh and Antonio Cerone. Towards a Quality Model for Open Source Software (OSS). International Institute for Software Technology (IIST), Macau SAR, China.

10. Nakakoji, K., Y. Yamamoto, Y. Nishinaka, K. Kishida, and Y. Ye. Evolution Patterns of Open-Source Software Systems and Communities. In Proceedings of the International Workshop on Principles of Software Evolution (IWPSE 2002), Orlando, FL, 2002, 76-85.

11. O'Reilly, T. Lessons from Open-Source Software Development. Communications of the ACM, 1999. 42(4): 33-37.

12. DiBona, C., S. Ockman, and M. Stone. eds. Open Sources: Voices from the Open Source Revolution. 1999, O'Reilly & Associates: Sebastopol, CA.

13. Vidyasagar Potdar and Elizabeth Chang. Open Source and Closed Source Software Development Methodologies. School of Information Systems, Curtin University of Technology, Perth, Australia.

14. Lyu, Michael R. (1996) Handbook of Software Engineering Reliability. McGraw-Hill.

15. The Open Source Initiative. Open source definition. www.opensource.org/docs/definition.php.

16. Stewart, K. J. and S. Gosain. An Exploratory Study of Ideology and Trust in Open Source Development Groups. In Proceedings of the 22nd International Conference on Information Systems, 2001, New Orleans, LA.

17. Scott H., Charles W., Plakosh D., and Jayatirtha A. Perspectives on Open Source Software. Software Engineering Institute, Pittsburgh, Nov 2001, p. 49.

18. Walt Scacchi. Is Open Source Software Development Faster, Better and Cheaper than Software Engineering? 23rd International Conference on Software Engineering, Toronto, Ontario, Canada, 2001.

19. Scacchi, W. Understanding Requirements for Developing Open Source Software Systems. IEE Proceedings - Software, 2002, 149(1): 24-39.

20. B. Boehm. Software Engineering Economics. IEEE Transactions on Software Engineering, Vol. 10, pp. 4-21, 1984.

21. C. L. Jones. A Process-Integrated Approach to Defect Prevention. IBM Systems Journal, vol. 24, pp. 150-167, 1985.

22. M. Godfrey and Q. Tu. Evolution in Open Source Software: A Case Study. Proc. International Conference on Software Maintenance, pp. 131-142, 2000.

23. J. Paulson, G. Succi, and A. Eberlein. An Empirical Study of Open-Source and Closed-Source Software Products. IEEE Transactions on Software Engineering, vol. 30, no. 4, pp. 246-256, 2004.

24. M. S. Elliott and W. Scacchi. Free software development: Cooperation and conflict in a virtual organizational culture. In S. Koch, editor, Free/Open Source Software Development. Idea Publishing, 2004.

25. R. J. Sandusky, L. Gasser, and G. Ripoche. Bug report networks: Varieties, strategies, and impacts in an OSS development community. In Proceedings of the ICSE/MSR Workshop, Edinburgh, Scotland, UK, 25 May 2004.

26. IEEE standard for a software quality metrics methodology. In IEEE Std 1061-1998, 1998.

27. Kemerer, C.F. Software Complexity and Software Maintenance: A Survey of Empirical Research. Annals of Software Engineering, 1, 1 (1995) 1-22.

28. Gorla, N., and Ramakrishnan, R. Effect of Software Structure Attributes on Software Development Productivity. Journal of Systems and Software, 36, 2 (1997) 191-199.

29. Katherine J. Stewart, University of Maryland, OSS Project Success: From Internal Dynamics to External Impact.

30. Bergquist, M. and J. Ljungberg, The Power of Gifts: Organizing Social Relationships in Open Source Communities, Information Systems Journal, 2001, 11(4): p. 305-320.

31. Reis, C., R. Pontin, and M. Fortes (2002). An Overview of the Software Engineering Process and Tools in the Mozilla Project. In Proceedings of the Open Source Software Development Workshop, Newcastle upon Tyne, UK, February 25-26, 2002, ed. C. Gacek, pp. 155-145.

32. Randell, B. (2000). Turing Memorial Lecture: Facing up to faults. The Computer Journal, Vol. 43. No. 2. pp 95-106.


ABSTRACT

It has been believed for many years that the three-phase commit protocol (3PC) has high communication overheads because of its greater message complexity, and therefore it is not commonly used; as a result, very little work has been done on the equally important issue of ensuring distributed transaction atomicity. In this paper we try to remove a few basic communication and logging overheads and propose a new model for the three-phase commit protocol.

Introduction

Most real-time applications are inherently distributed in nature. Earlier research in this field is mostly based on centralized databases, and therefore there has been relatively little development in the field of Distributed Real-Time Databases (DRTDBS). Earlier real-time databases mostly focused on reducing the number of missed deadlines, but rapid growth in this field has forced attention onto the different issues of transaction atomicity and timing constraints.

One of the major difficulties in meeting timing constraints is data access conflicts among transactions. A transaction is a unit of program execution; it begins with an execution stage and ends with a commit state. The commit state ensures transaction atomicity, and the execution stage works with the various sites involved in the process. During execution, two conflicts might occur: (1) execute-execute conflicts and (2) execute-commit conflicts. A good amount of work has been done to handle the former conflict, but not much has been done to handle the latter [1], [2]. The major difficulty in handling the execute-commit conflict is the commit policy: most of the time it takes long to complete, and it may result in a missed deadline, causing an adverse effect on the system's ability. Therefore, designing a good commit protocol is required not only for resilience to failure and speedy recovery but also for normal processing [3], [2].

Reducing Overheads in Non-Blocking Three Phase Commit Protocol

Shishir Kumar* and Sonali Barvey*

Background and Related Work

Distributed database systems implement a transaction commit protocol to ensure transaction atomicity, and many different commit protocols have been developed for this purpose.

I. Two Phase Commit (2PC) Protocol- As suggested by the name, it operates in two phases. In the first phase, called the voting phase, the master reaches a global decision (commit or abort) based on the local decisions of the cohorts. In the second phase, called the decision phase, the master conveys this decision to the cohorts. For its successful execution, the protocol assumes that each cohort of a transaction is able to provisionally perform the actions of the transaction in such a way that they can be undone if the transaction is eventually aborted. This is usually implemented by using logging mechanisms such as Write-Ahead Logging (WAL) [17], which maintain sequential histories of transaction actions in stable storage. The protocol also assumes that, if necessary, log records can be force-written, that is, written synchronously to stable storage. After receiving the WORKDONE message from all the cohorts participating in the distributed execution of the transaction, the master initiates the first phase of the commit protocol by

*Deptt. of CSE, Jaypee Institute of Engineering & Technology, A-B Road, Raghogarh, Guna (MP)


sending PREPARE (to commit) messages in parallel to all its cohorts. Each cohort that is ready to commit first force-writes a prepare log record to its local stable storage and then sends a YES vote to the master. At this stage, the cohort has entered a prepared state wherein it cannot unilaterally commit or abort the transaction but has to wait for the final decision from the master. On the other hand, each cohort that decides to abort force-writes an abort log record and sends a NO vote to the master. Since a NO vote acts like a veto, the cohort is permitted to unilaterally abort the transaction without waiting for the decision from the master. After the master receives votes from all its cohorts, the second phase of the protocol is initiated. If all the votes are YES, the master moves to a committing state by force-writing a commit log record and sending COMMIT messages to all its cohorts. Each cohort, upon receiving the COMMIT message, moves to the committing state, force-writes a commit log record, and sends an ACK message to the master. On the other hand, if the master receives even one NO vote, it moves to the aborting state by force-writing an abort log record and sends ABORT messages to those cohorts that are in the prepared state. These cohorts, after receiving the ABORT message, move to the aborting state, force-write an abort log record and send an ACK message to the master. Finally, the master, after receiving ACKs from all the prepared cohorts, writes an end log record and then forgets the transaction (by removing from virtual memory all information associated with the transaction) [3].
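The voting and decision phases described above can be sketched as a small in-memory model. This is a simplified illustration, not the paper's implementation: the class and function names are hypothetical, force-writes to stable storage are stood in for by list appends, and site failures and timeouts are omitted.

```python
class Cohort:
    def __init__(self, name, will_commit=True):
        self.name, self.will_commit = name, will_commit
        self.log = []                   # stands in for force-written stable storage

    def on_prepare(self):
        if self.will_commit:
            self.log.append("prepare")  # enter the prepared state, then vote
            return "YES"
        self.log.append("abort")        # veto: unilateral abort, no waiting
        return "NO"

    def on_decision(self, decision):
        self.log.append(decision)       # commit/abort record
        return "ACK"

def two_phase_commit(master_log, cohorts):
    # Phase 1 (voting): send PREPARE to all cohorts and collect votes.
    votes = [c.on_prepare() for c in cohorts]
    decision = "commit" if all(v == "YES" for v in votes) else "abort"
    master_log.append(decision)         # master force-writes its decision
    # Phase 2 (decision): convey the decision to prepared cohorts, await ACKs.
    prepared = [c for c in cohorts if c.log and c.log[-1] == "prepare"]
    acks = [c.on_decision(decision) for c in prepared]
    if len(acks) == len(prepared):
        master_log.append("end")        # master forgets the transaction
    return decision

log = []
print(two_phase_commit(log, [Cohort("c1"), Cohort("c2")]))  # → commit
```

Note that a cohort voting NO never receives the decision message: its NO acts as a veto, so it aborts unilaterally, exactly as in the protocol description above.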

II. Presumed Abort- As described above, the 2PC protocol requires transmission of several messages and writing or force-writing of several log records. A variant of the 2PC protocol, called presumed abort (PA) [16], tries to reduce these message and logging overheads by requiring all participants to follow an abort rule during failure recovery and in the no-information case. That is, if after coming up from a failure a site queries the master about the final outcome of a transaction and finds no information available with the master, the transaction is (correctly) assumed to have been

aborted. With this assumption, it is not necessary for cohorts to send ACK for ABORT messages from the master, or to force-write the abort record to the log. It is also not necessary for an aborting master to force-write the abort log record or to write an end log record. In short, the PA protocol behaves identically to 2PC for committing transactions, but has reduced message and logging overheads for aborted transactions [15], [3].

III. Presumed Commit- Another variant of 2PC, called presumed commit (PC) [11], is based on the observation that, in general, the number of committed transactions is much greater than the number of aborted transactions. In PC, overheads are reduced for committing transactions rather than aborted ones, by requiring all participants to follow, during failure recovery and in the no-information case, a commit rule. In this scheme, cohorts do not send ACKs for a commit decision sent from the master, and also do not force-write the commit log record. In addition, the master does not write an end log record. On the down side, however, the master is required to force-write a collecting log record before initiating the two-phase protocol. This log record contains the names of all the cohorts involved in executing that transaction. The above optimizations of 2PC have been implemented in a number of database products and standards [12], [13], [3].
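The no-information presumptions that distinguish PA and PC can be captured in a single recovery rule. The sketch below (hypothetical names) shows what a recovering cohort infers when the master has already forgotten a transaction.

```python
def recovery_outcome(master_state, txn_id, presumption):
    """Outcome a recovering cohort infers when querying the master.
    Under PA, no information means the transaction aborted;
    under PC, no information means it committed."""
    if txn_id in master_state:
        return master_state[txn_id]          # master still remembers the decision
    return "abort" if presumption == "PA" else "commit"

master_state = {"t1": "commit"}              # t2 was forgotten after its decision
print(recovery_outcome(master_state, "t2", "PA"))  # → abort
print(recovery_outcome(master_state, "t2", "PC"))  # → commit
```

It is precisely this presumption that lets each variant skip ACKs and some force-writes: forgetting the transaction early is safe because the no-information answer is defined in advance.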

IV. Three Phase Commit- A fundamental problem with all of the above protocols is that cohorts may become blocked in the event of a site failure and remain blocked until the failed site recovers.

For example, if the master fails after initiating the protocol but before conveying the decision to its cohorts, these cohorts will become blocked and remain so until the master recovers and informs them of the final decision. During the blocked period, the cohorts may continue to hold system resources such as locks on data items, making these unavailable to other transactions. These transactions, in turn, become blocked waiting for the resources to be relinquished, resulting in cascaded blocking. So, if the duration of the blocked period is significant, the outcome could be a major disruption of transaction processing activity. To address the blocking problem, a three phase commit (3PC) protocol was proposed in [10]. This protocol

Reducing Overheads in Non-Blocking Three Phase Commit Protocol

34 "Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Page 39: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

ABSTRACT

For many years it has been considered that the three phase commit protocol (3PC) has high communication overheads because of its greater message complexity, and therefore it is not commonly used; as a result, very little work has been done on the equally important issue of ensuring distributed transaction atomicity. In this paper we try to remove a few of these basic communication and logging overheads and propose a new model for the three phase commit protocol.

Introduction

Most real time applications are inherently distributed in nature. Earlier research in this field was mostly based on centralized databases, and therefore there has been little development in Distributed Real Time Database Systems (DRTDBS). Earlier real time databases focused mostly on reducing the number of missed deadlines, but rapid growth in this field has forced attention onto the distinct issues of transaction atomicity and timing constraints.

One of the major difficulties in meeting timing constraints is data access conflicts among transactions. A transaction is a unit of program execution; it begins with an execution stage and ends with a commit stage. The commit stage ensures transaction atomicity, while the execution stage involves the various participating sites. During execution, two kinds of conflict may occur: 1. execute-execute conflicts, and 2. execute-commit conflicts. A good amount of work has been done to handle the former, but not much to handle the latter [1], [2]. The major difficulty in handling the execute-commit conflict is the commit policy itself: commit processing often takes a long time to complete and may result in missing the deadline, which adversely affects the system's ability to meet timing constraints. Therefore, designing a good commit protocol is required not only for resilience to failure and speedy recovery but also for normal processing [3], [2].

Reducing Overheads in Non-Blocking Three Phase Commit Protocol

Shishir Kumar*, Sonali Barvey*

Background and Related work:-

Distributed database systems implement a transaction commit protocol to ensure transaction atomicity. Many different commit protocols have been developed for this purpose.

I. Two Phase Commit (2PC) Protocol- As suggested by the name, it operates in two phases. In the first phase, called the voting phase, the master reaches a global decision (commit or abort) based on the local decisions of the cohorts. In the second phase, called the decision phase, the master conveys this decision to the cohorts. For its successful execution, the protocol assumes that each cohort of a transaction is able to provisionally perform the actions of the transaction in such a way that they can be undone if the transaction is eventually aborted. This is usually implemented by using logging mechanisms such as Write-Ahead Logging (WAL) [17], which maintain sequential histories of transaction actions in stable storage. The protocol also assumes that, if necessary, log records can be force-written, that is, written synchronously to stable storage. After receiving the WORKDONE message from all the cohorts participating in the distributed execution of the transaction, the master initiates the first phase of the commit protocol by

*Deptt. of CSE, Jaypee Institute of Engineering & Technology, A-B Road, Raghogarh, Guna (MP)

33"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

sending PREPARE (to commit) messages in parallel to all its cohorts. Each cohort that is ready to commit first force-writes a prepare log record to its local stable storage and then sends a YES vote to the master. At this stage, the cohort has entered a prepared state wherein it cannot unilaterally commit or abort the transaction but has to wait for the final decision from the master. On the other hand, each cohort that decides to abort force-writes an abort log record and sends a NO vote to the master. Since a NO vote acts like a veto, the cohort is permitted to unilaterally abort the transaction without waiting for the decision from the master. After the master receives votes from all its cohorts, the second phase of the protocol is initiated. If all the votes are YES, the master moves to a committing state by force-writing a commit log record and sending COMMIT messages to all its cohorts. Each cohort, upon receiving the COMMIT message, moves to the committing state, force-writes a commit log record, and sends an ACK message to the master. On the other hand, if the master receives even one NO vote, it moves to the aborting state by force-writing an abort log record and sends ABORT messages to those cohorts that are in the prepared state. These cohorts, after receiving the ABORT message, move to the aborting state, force-write an abort log record and send an ACK message to the master. Finally, the master, after receiving ACKs from all the prepared cohorts, writes an end log record and then forgets the transaction (by removing from virtual memory all information associated with the transaction) [3].
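The 2PC message and logging sequence described above can be illustrated with a small single-process sketch. This is not code from the paper: the names Cohort, Master and the log-record strings are illustrative, and each append to a `log` list stands in for a (force-)write to stable storage.

```python
# Minimal sketch of the 2PC sequence described above (illustrative only).

class Cohort:
    def __init__(self, will_commit):
        self.will_commit = will_commit
        self.log = []                       # stand-in for stable storage

    def on_prepare(self):
        if self.will_commit:
            self.log.append("prepare")      # force-write prepare record
            return "YES"                    # enter prepared state
        self.log.append("abort")            # force-write abort record
        return "NO"                         # veto: unilateral abort

    def on_decision(self, decision):
        self.log.append(decision)           # force-write commit/abort record
        return "ACK"

class Master:
    def __init__(self, cohorts):
        self.cohorts = cohorts
        self.log = []

    def run(self):
        # Phase 1 (voting): PREPARE out, votes back.
        votes = [(c, c.on_prepare()) for c in self.cohorts]
        decision = "commit" if all(v == "YES" for _, v in votes) else "abort"
        self.log.append(decision)           # force-write decision record
        # Phase 2 (decision): an abort is sent only to prepared cohorts.
        targets = [c for c, v in votes if decision == "commit" or v == "YES"]
        if all(c.on_decision(decision) == "ACK" for c in targets):
            self.log.append("end")          # plain write, then forget
        return decision
```

For example, `Master([Cohort(True), Cohort(True)]).run()` yields a commit, while a single NO vote vetoes the transaction.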

II. Presumed Abort- As described above, the 2PC protocol requires transmission of several messages and the writing or force-writing of several log records. A variant of the 2PC protocol, called presumed abort (PA) [16], tries to reduce these message and logging overheads by requiring all participants to follow the abort rule during failure recovery and in the no-information case. That is, if after coming up from a failure a site queries the master about the final outcome of a transaction and finds no information available with the master, the transaction is (correctly) assumed to have been

aborted. With this assumption, it is not necessary for cohorts to send ACK for ABORT messages from the master, or to force-write the abort record to the log. It is also not necessary for an aborting master to force-write the abort log record or to write an end log record. In short, the PA protocol behaves identically to 2PC for committing transactions, but has reduced message and logging overheads for aborted transactions [15], [3].

III. Presumed Commit- Another variant of 2PC, called presumed commit (PC) [11], is based on the observation that, in general, the number of committed transactions is much larger than the number of aborted transactions. In PC, the overheads are reduced for committing transactions rather than aborted ones, by requiring all participants to follow the commit rule during failure recovery and in the no-information case. In this scheme, cohorts do not send ACKs for a commit decision sent from the master, and also do not force-write the commit log record. In addition, the master does not write an end log record. On the down side, however, the master is required to force-write a collecting log record before initiating the two-phase protocol. This log record contains the names of all the cohorts involved in executing that transaction. The above optimizations of 2PC have been implemented in a number of database products and standards [12], [13], [3].
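The two presumption rules can be sketched side by side: when a recovering site asks the master about a forgotten transaction, PA interprets the missing entry as abort and PC interprets it as commit. The `protocol_table` dict below is an illustrative stand-in for the master's state, not an API from any real system.

```python
# Hedged sketch of the no-information case under presumed abort / commit.

def outcome(protocol_table, txn, presumption):
    """Resolve a recovery-time inquiry; a missing entry takes the presumption."""
    return protocol_table.get(txn, presumption)

table = {"T1": "commit", "T2": "abort"}    # transactions still remembered
# T3 has already been forgotten: PA and PC read the silence differently.
assert outcome(table, "T3", "abort") == "abort"    # presumed abort
assert outcome(table, "T3", "commit") == "commit"  # presumed commit
assert outcome(table, "T1", "abort") == "commit"   # known outcomes unaffected
```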

IV. Three Phase Commit- A fundamental problem with all of the above protocols is that cohorts may become blocked in the event of a site failure and remain blocked until the failed site recovers.

For example, if the master fails after initiating the protocol but before conveying the decision to its cohorts, these cohorts will become blocked and remain so until the master recovers and informs them of the final decision. During the blocked period, the cohorts may continue to hold system resources such as locks on data items, making these unavailable to other transactions. These transactions, in turn, become blocked waiting for the resources to be relinquished, resulting in cascaded blocking. So, if the duration of the blocked period is significant, the outcome could be a major disruption of transaction processing activity. To address the blocking problem, a three phase commit (3PC) protocol was proposed in [10]. This protocol


Institute of Management Studies, Dehradun

achieves a non-blocking capability by inserting an extra phase, called the precommit phase, between the two phases of the 2PC protocol. In the precommit phase, a preliminary decision is reached regarding the fate of the transaction. The information made available to the participating sites as a result of this preliminary decision allows a global decision to be made despite a subsequent failure of the master site. Note, however, that the price of gaining non-blocking functionality is an increase in communication and logging overheads, since:

1) There is an extra round of message exchange between the master and the cohorts, and

2) Both the master and the cohorts have to force-write additional log records in the precommit phase. [3]
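The two extra costs listed above can be tallied for a committing transaction with n cohorts that all vote YES. This is a rough illustrative count following the protocol descriptions in this section, not a protocol implementation.

```python
# Rough tally of per-transaction costs for a commit with n YES-voting cohorts.

def cost_2pc(n):
    # PREPARE out, YES votes back, COMMIT out, ACKs back;
    # each cohort force-writes a prepare and a commit record.
    return {"messages": 4 * n, "cohort_force_writes": 2 * n}

def cost_3pc(n):
    # one extra PRECOMMIT/ack round and one extra force-write per cohort
    return {"messages": 6 * n, "cohort_force_writes": 3 * n}

# The overhead of the precommit phase for, say, 5 cohorts:
extra = {k: cost_3pc(5)[k] - cost_2pc(5)[k] for k in cost_2pc(5)}
```

The difference, 2n extra messages and n extra force-writes, is exactly the overhead the proposed model tries to avoid.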

Concepts of Shared Memory:-

Shared memory here builds on a memory management scheme that permits the physical address space of a process to be noncontiguous. This is achieved through paging, which avoids the considerable problem of fitting varying-sized memory chunks onto the backing store. In paging, physical memory is broken into fixed-size blocks called frames, and logical memory is broken into blocks of the same size called pages.

When a process is to be executed, its pages are loaded into any available memory frames from the backing store. Paging also offers the advantage of sharing common code among processes, a consideration that is particularly important in a time-sharing environment.
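The sharing of common code through paging can be shown with a toy sketch: two page tables map a logical page to the same physical frame, so a single copy serves both processes. Frame numbers and contents below are made up for illustration.

```python
# Toy model of paging with a shared code page.

frames = {3: "data of P1", 7: "shared code", 9: "data of P2"}  # physical memory

page_table_p1 = {0: 7, 1: 3}    # process P1: logical page -> frame
page_table_p2 = {0: 7, 1: 9}    # process P2 reuses frame 7 for page 0

def read(page_table, page):
    """Translate a logical page through the page table and read the frame."""
    return frames[page_table[page]]

assert read(page_table_p1, 0) == read(page_table_p2, 0) == "shared code"
assert read(page_table_p1, 1) != read(page_table_p2, 1)   # private pages differ
```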

Temporary Files:-

Temporary files may be created by computer programs for a variety of purposes: principally when a program cannot allocate enough memory for its tasks, when the program is working on data bigger than the architecture's address space, or as a primitive form of inter-process communication. Some programs create temporary files and then leave them behind, either because the program crashed or because the developer simply forgot to add the code needed to delete them once the program is done. The temporary files left behind accumulate over time and can take up a lot of disk space. System utilities, called temporary file cleaners or disk cleaners, can be used to address this issue.
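One common way to avoid the leftover-file problem is to let the runtime delete the scratch file automatically. A short sketch using Python's standard tempfile module: the file is removed when the context manager closes it.

```python
# Scratch file that cleans itself up on close (standard-library tempfile).

import os
import tempfile

with tempfile.NamedTemporaryFile(mode="w+", delete=True) as tmp:
    tmp.write("intermediate results")       # scratch data too big for memory
    tmp.flush()
    tmp.seek(0)
    data = tmp.read()
    path = tmp.name

assert data == "intermediate results"
assert not os.path.exists(path)             # no temporary file left behind
```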

Basic Idea:-

The basic idea was suggested by Heine Kolltveit and Svein-Olaf Hvasshovd [18]. Atomic commitment protocols are needed to preserve the ACID properties of distributed transactional systems; to support their protocol concept, they used both main memory and hard disk. Hassan Chafi and Jared Casper [19] introduced the use of main memory in parallel systems for handling locking conditions. Rachid Guerraoui, Mikel Larrea and André Schiper suggest reducing the number of steps needed to commit in 3PC in their paper "Reducing the Cost for Non-Blocking in Atomic Commitment" [20]. All of these papers tried to remove drawbacks of the three phase commit protocol, but the major shortcoming remaining in all of them is the loss of memory. We have tried to develop a model with minimum loss of memory by making the second phase virtual. From a study of these and many other papers, it can be concluded that this drawback (the two steps while committing in 3PC) can be removed by introducing virtual or shared memory in the second phase of 3PC.

[Fig. - Modified non-blocking commit protocol considering a Virtual Precommit Phase: a state diagram over the states q1, w1, p1, a1 and c1, with the address space shared by either paging or segmentation.]

Modified Algorithm for three phase commit protocol:-

Our Assumption:-

In this paper, we have attempted to remove the drawbacks of the three phase commit protocol by sharing the address space, using either paging or segmentation.

Proposed Architecture:-

Three phase commit protocol- Here the concept of address sharing by either paging or segmentation is used.

Let T be a transaction initiated at site Si, and let the transaction coordinator at Si be Ci. When T completes its execution, i.e., when all the sites at which T has executed inform Ci that T is complete, Ci starts the 3PC protocol.

Phase 1- Ci adds the record <prepare T> to the log and forces the log onto stable storage. It then sends a prepare T message to all the sites at which T executed [8]. On receiving such a message, the transaction manager at that site determines whether it is willing to commit its portion of T. If the answer is no, it adds a record <no T> to the log and then responds by sending an abort T message to Ci. If the answer is yes, it adds a record <ready T> to the log and forces the log (with all the log records corresponding to T) onto stable storage. The transaction manager then replies with a ready T message to Ci [9].

No changes are made in Phase 1.

Phase 2- If Ci receives an abort T message from a participating site, or if Ci receives no response from a participating site within a prespecified interval, then Ci decides to abort T, and the abort decision is implemented. If Ci receives a ready T message from every participating site, Ci makes the preliminary decision to precommit T. Precommit differs from commit in that T may still abort eventually. The precommit decision allows the coordinator to inform each participating site that all participating sites are ready. Ci adds a record <precommit T> to the log and forces the log onto stable storage. Then Ci sends a precommit T message to all participating sites. When a site receives a message from the coordinator (either abort T or precommit T), it records that message in its log, forces this information to stable storage, and sends an acknowledge T message to the coordinator.

The main drawbacks here are the extra round of acknowledgments and the extra forced write to stable storage. Let us therefore reconsider Phase 2. When the precommit condition arises, we place it in the memory segment where the last forced-write state was stored. We share this memory so that the next time Ci adds a record <precommit T> to the log and forces the log onto stable storage, the step executes from shared memory.


The advantage of this technique is that although we still log onto stable storage, we save the extra memory needed at the time of <precommit T> [9], [20].
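The virtual precommit idea above can be sketched as follows: the <precommit T> record is kept in a shared memory segment instead of being force-written, so one synchronous disk write is saved per transaction. `stable_log` and `shared_seg` are illustrative stand-ins for stable storage and the shared address space, not the paper's implementation.

```python
# Hedged sketch of the virtual precommit phase (illustrative stand-ins).

stable_log = []     # each append models a forced (synchronous) log write
shared_seg = {}     # address space shared via paging or segmentation

def phase1_prepare(txn):
    stable_log.append(f"<prepare {txn}>")       # forced write, as in plain 3PC

def phase2_virtual_precommit(txn):
    shared_seg[txn] = "precommit"               # no extra forced write here

def phase3_commit(txn):
    assert shared_seg.get(txn) == "precommit"   # decision read from shared memory
    stable_log.append(f"<commit {txn}>")        # forced write

phase1_prepare("T"); phase2_virtual_precommit("T"); phase3_commit("T")
assert stable_log == ["<prepare T>", "<commit T>"]   # precommit write avoided
```

In plain 3PC the log would carry three forced records per participant; here the precommit record lives only in the shared segment.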

Phase 3- This phase is executed only if the decision in Phase 2 was precommit. After the precommit T messages are sent to all participating sites, the coordinator must wait until it receives at least K acknowledge T messages. Then the coordinator reaches a commit decision: it adds a <commit T> record to its log and forces the log to stable storage. Then Ci sends a commit T message to all participating sites. When a site receives that message, it records the information in its log.

No changes are required in this phase except that the commit T step executes from shared memory, and the transaction finally becomes fully committed.
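The waiting condition in Phase 3 amounts to a simple quorum test: the coordinator commits only once at least K acknowledge T messages have arrived. A minimal sketch, with K and the message list as illustrative values:

```python
# Sketch of the Phase-3 quorum condition on acknowledgments.

def phase3_decision(acks, k):
    """Commit once at least K acknowledgments are in; otherwise keep waiting."""
    return "commit" if len(acks) >= k else "wait"

assert phase3_decision(["ack-s1"], 2) == "wait"
assert phase3_decision(["ack-s1", "ack-s2"], 2) == "commit"
```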

The automaton we have designed for this is shown below.

[Fig. - Finite automata for the new non-blocking commit protocol.]

Conclusion :-

In this paper we have tried to remove the problems of the three phase commit protocol by applying the concept of shared memory. To achieve this, we have proposed a new model for the three phase commit protocol in which the precommit phase is made virtual, avoiding the overhead of a forced write to stable storage.

Future Work :-

In this paper we have only proposed a model for the non-blocking three phase commit protocol. Coding and analysis of this model remain to be done.

References

1. D. Dolev and H.R. Strong, Distributed Commit with Bounded Waiting, IBM Research Laboratory.

2. Udai Shanker, Manoj Misra and Anil K. Sarje, Some Performance Issues in Distributed Real Time Database Systems, Department of Electronics & Computer Engineering, Indian Institute of Technology Roorkee, Roorkee-247 667, India.

3. Jayant R. Haritsa, Krithi Ramamritham and Ramesh Gupta, The PROMPT Real-Time Commit Protocol.

4. Dale Skeen and Michael Stonebraker, A Formal Model of Crash Recovery in a Distributed System.

5. Hassan Chafi, Jared Casper, Brian D. Carlstrom, Austen McDonald, Chi Cao Minh, Woongki Baek, Christos Kozyrakis and Kunle Olukotun, A Scalable, Non-blocking Approach to Transactional Memory, Computer Systems Laboratory, Stanford University.

6. Va-On Tam and Meichun Hsu, Fast Recovery in Distributed Shared Virtual Memory Systems, Aiken Computation Lab., Harvard University, Cambridge, MA 02138.

7. Xiangyong Ouyang, Tomohiro Yoshihara and Haruo Yokota, An Efficient Commit Protocol Exploiting Primary-Backup Placement in a Distributed Storage System, Department of Computer Science, Tokyo Institute of Technology, Tokyo, Japan.

8. D. Skeen. Non Blocking Commit Protocols. In Proceedings of the ACM SIGMOD International Conference on Management of Data, pages 133-142. ACM Press, 1981.

9. D. Skeen, A Quorum-Based Commit Protocol, Proceedings of the Berkeley Workshop on Distributed Data Management and Computer Networks, 6, pages 69-80, February 1982.


10. C. Mohan, B. Lindsay and R. Obermarck, Transaction Management in the R* Distributed Database Management System, ACM Trans. Database Systems, vol. 11, no. 4, 1986.

11. G. Samaras et al., Two-Phase Commit Optimizations in a Commercial Distributed Environment, Distributed and Parallel Databases.

12. P. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.

13. R. Abbott and H. Garcia-Molina, Scheduling Real-Time Transactions: A Performance Evaluation, Proc. 14th Int'l Conf. Very Large Databases, Aug. 1988.

14. R. Agrawal, M. Carey, and M. Livny, Concurrency Control Performance Modeling: Alternatives and Implications, ACM Trans. Database Systems, vol. 12, no. 4, Dec. 1987.

15. C. Mohan et al., ARIES: A Transaction Recovery Method Supporting Fine-Granularity Locking and Partial Rollbacks Using Write-Ahead Logging, ACM Trans. Database Systems, vol. 17, no. 1, 1992.

16. J. Gray, Notes on Database Operating Systems, Operating Systems: An Advanced Course. Springer Verlag, 1978.

17. Heine Kolltveit and Svein-Olaf Hvasshovd, Main Memory Commit Protocols for Multiple Backups.

18. Hassan Chafi, Jared Casper, Brian D. Carlstrom, Austen McDonald, Chi Cao Minh, Woongki Baek, Christos Kozyrakis and Kunle Olukotun, A Scalable, Non-blocking Approach to Transactional Memory, Computer Systems Laboratory, Stanford University.

19. Rachid Guerraoui, Mikel Larrea and André Schiper, Reducing the Cost for Non-Blocking in Atomic Commitment, Département d'Informatique, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.

38 "Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Reducing Overheads in Non-Blocking Three Phase Commit Protocol

Page 43: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Institute of Management Studies, Dehradun

storage but we will save the extra memory which we need at the time <precommit T> [9],[20].

Phase 3- This phase is executed only if the decision in the phase 2 was precommitted. After the precommit T messages are sent to all participating sites, the coordinator must wait until it receives at least K acknowledgment T messages. Then, the coordinator reaches a commit decision. It adds a <commit T> Record to its log, and forces the log to stable storage. Then C sends a commit T message to all participating i

sites. When a site receives that message, it records the information in its log.

No changes are required in this phase except that Commit T will occur from shared memory and finally the transaction becomes fully Commit.

The automata we have designed for this is

Conclusion :-

In this paper we have tried to remove the problems related with 3 phase commit protocol by putting the concept of shared memory and to achieve this we have proposed a new model for three phase commit protocol by putting precommit phase virtually, through which overhead of forced write into stable storage is avoided.

Future Work :-

In this paper we have proposed only a model for Non-Blocking Three phase commit protocol. Coding and Analysis is required to be done on this model.

References

1. Distributed Commit with bounded waiting ,D . Dolev H. R . Strong IBM Research Laboratory.

2. Some Performance Issues in Distributed Real Time Database Systems Udai Shanker*, Manoj Misra and Anil K. Sarje Department of Electronics & Computer Engineering Indian

Automata for New Non-Blocking commit protocol

Finite

Institute of Technology Roorkee, Roorkee-247 667, INDIA.

3. The PROMPT Real-Time Commit Protocol Jayant R. Haritsa, Member, IEEE, Krithi Ramamritham, Fellow, IEEE, and Ramesh Gupta.

4. A Formal Model of Crash Recovery in a Distributed System DALE SKEEN AND MICHAEL STONEBRAKER.

5. A Scalable, Non-blocking Approach to Transactional Memory Hassan Chafi Jared Casper Brian D. Carlstrom Austen McDonald Chi Cao Minh Woongki Baek Christos Kozyrakis Kunle Olukotun Computer Systems Laboratory Stanford University.Main Memory Commit Protocols for Multiple Backups Heine Kolltveit and Svein-Olaf Hvasshovd.

6. Fast Recovery in Distributed Shared Virtual Memory Systems* Va-On Tam Meichun Hsu Aiken Computation Lab. Harvard University C a m b r i d g e , M A 0 2 1 3 8 t a m @ h a r v a r d . h a r v a r d . e d u [email protected].

7. An Efficient Commit Protocol Exploiting Primary-Backup Placement in a Distributed Storage System Xiangyong Ouyangy, Tomohiro Yoshiharay, and Haruo Yokotazy Department of Computer Science, Graduate School of Information Science and Engineering, Tokyo Institute of Technology 2121 Ookayama, Meguro-ku, Tokyo 1528552, Japan Global Scientific Information and Computing Center, Tokyo Institute of Technology 2121 Ookayama, Meguro-ku, Tokyo 1528550, Japan fouyang, [email protected], [email protected]

8. D. Skeen. Non Blocking Commit Protocols. In Proceedings of the ACM SIGMOD International Conference on Management of Data, pages 133-142. ACM Press, 1981.

9. D. Skeen, A Quorum-Based Commit Protocol, Proceedings of the Sixth Berkeley Workshop on Distributed Data Management and Computer Networks, pages 69-80, February 1982.

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

10. C. Mohan, B. Lindsay and R. Obermarck, Transaction Management in the R* Distributed Database Management System, ACM Trans. Database Systems, vol. 11, no. 4, 1986.

11. G. Samaras et al., Two-Phase Commit Optimizations in a Commercial Distributed Environment, Distributed and Parallel Databases.

12. P. Bernstein, V. Hadzilacos, and N. Goodman, Concurrency Control and Recovery in Database Systems. Addison-Wesley, 1987.

13. R. Abbott and H. Garcia-Molina, Scheduling Real-Time Transactions: A Performance Evaluation, Proc. 14th Int'l Conf. Very Large Data Bases, Aug. 1988.

14. R. Agrawal, M. Carey, and M. Livny, Concurrency Control Performance Modeling: Alternatives and Implications, ACM Trans. Database Systems, vol. 12, no. 4, Dec. 1987.

15. C. Mohan et al., ARIES: A Transaction Recovery Method Supporting Fine-Granularity Locking and Partial Rollbacks Using Write-Ahead Logging, ACM Trans. Database Systems, vol. 17, no. 1, 1992.

16. J. Gray, Notes on Database Operating Systems, Operating Systems: An Advanced Course. Springer Verlag, 1978.

17. Heine Kolltveit and Svein-Olaf Hvasshovd, Main Memory Commit Protocols for Multiple Backups.

18. Hassan Chafi, Jared Casper, Brian D. Carlstrom, Austen McDonald, Chi Cao Minh, Woongki Baek, Christos Kozyrakis and Kunle Olukotun, A Scalable, Non-blocking Approach to Transactional Memory, Computer Systems Laboratory, Stanford University.

19. Rachid Guerraoui, Mikel Larrea and André Schiper, Reducing the Cost for Non-Blocking in Atomic Commitment, Département d'Informatique, École Polytechnique Fédérale de Lausanne, 1015 Lausanne, Switzerland.


Reducing Overheads in Non-Blocking Three Phase Commit Protocol


ABSTRACT

The effect of an exponential temperature variation on the vibration of an orthotropic rectangular plate with linear thickness variation in both directions is studied in this paper. The Rayleigh-Ritz method is used to evaluate the frequency parameter. The two-dimensional thickness variation is taken as the product of linear variations along the two concurrent edges of the plate. Although the method provides only an approximate solution to the present problem, it is quite convenient and reliable. A two-term deflection function is used, and the frequency corresponding to the first mode of vibration is calculated for a clamped plate, for different values of the temperature gradient and the two taper constants. In special cases, comparisons have been made with results available in the literature.

Key words: vibration, rectangular plate, linear thickness variation, exponential temperature variation.

Introduction

Vibration, common in mechanical devices and structures, is undesirable in many cases, such as machine tools. The phenomenon is not always unwanted, however; vibration is essential, for example, to the operation of vibrating screens. Over time, engineers have become increasingly conscious of the importance of the elastic behavior of many materials, and mathematical formulations have been developed and applied to practical problems.

In the recent past a considerable amount of work has been done on the vibration of plates whose thickness varies in one direction, owing to their continually increasing use in the dynamic design of engineering structures, but little work exists on the vibration of plates whose thickness varies in two directions. Two-dimensionally tapered plates are nevertheless used quite often to model structures in a wide variety of applications.

Effect of Exponential Temperature Variation on Vibration of Orthotropic Rectangular Plate with Linearly Thickness Variation in Both Directions

Dr. A.K. Gupta* and Subodh Kumar**

*Reader, Deptt. of Mathematics, M.S. College, Saharanpur, U.P., India, E-mail: [email protected]
**Lecturer, Deptt. of Mathematics, Govt. Degree College, Ambala, Haryana, India

Leissa [1] studied the effect of taper constants in two directions for an elastic plate. Few papers are available on the vibrations of uniform isotropic beams and plates. Gupta, Johri and Vats [2] discussed the thermal effect on vibration of a non-homogeneous orthotropic rectangular plate having bi-directional parabolically varying thickness. Gupta and Khanna [3] studied the effect of linearly varying thickness in both directions on the vibration of a visco-elastic rectangular plate of variable thickness. Singh and Saxena [4] analyzed the transverse vibration of a rectangular plate with bi-directional thickness variation. Cheung and Zhou [5] gave a solution for the free vibration of tapered rectangular plates using a new set of beam functions with the Rayleigh-Ritz method. Tomar and Gupta [6] studied the effect of thermal gradient on the frequencies of an orthotropic rectangular plate whose thickness varies in two directions. Dhotarad and Ganesan [7] worked on the vibration of a rectangular plate subjected to a thermal gradient. Kumar [8] studied the effect of thermal gradient on some vibration problems of orthotropic visco-elastic plates of variable thickness. The thermal effect on vibration of a non-homogeneous visco-elastic rectangular plate of linearly varying thickness was discussed by Gupta and Kumar [9]. Laura, Grossi and Carneiro [10] worked on the transverse vibrations of rectangular plates with thickness varying


in two directions and with edges elastically restrained against rotation. Free vibrations of rectangular plates whose thickness varies parabolically have been studied by Jain and Soni [11]. Gupta and Khanna [12] have worked on the vibration of clamped visco-elastic rectangular plate with parabolic thickness variations. Leissa [13] has studied the vibration of plates. Gutierrez, Laura and Grossi [14] discussed the vibrations of rectangular plates of bi-linearly varying thickness with general boundary conditions. Khanna [15] studied some vibration problems of visco-elastic plate of variable thickness in two directions.

The objective of the present study is to assess the effect of an exponential thermal gradient on the vibration of an orthotropic rectangular plate with bi-directional linear variation in thickness. Under a one-dimensional exponentially distributed temperature field, Young's moduli and the shear modulus of the plate material are assumed to vary exponentially along the x-axis. The governing differential equation of motion of the orthotropic rectangular plate is solved using the Rayleigh-Ritz method.

The frequency parameter is calculated for the first two modes of vibration of a clamped orthotropic rectangular plate whose thickness varies linearly in both directions under an exponentially varying temperature distribution, for various values of the thermal gradient α and the taper constants β₁ and β₂. These results have been compared with those obtained for an orthotropic rectangular plate with bi-directional linear thickness variation under a linearly varying one-dimensional temperature distribution.

Equations of Motion

Let the plate under consideration be subjected to a steady one-dimensional exponential temperature distribution along the x-axis:

$T = T_0\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)$    (1)

where $T$ is the temperature excess above the reference temperature at a distance $x$ and $T_0$ is the temperature excess above the reference temperature at the end of the plate, i.e. at $x = 0$. Using equation (1), the expressions for the moduli of elasticity become

$E_x(T) = E_1\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$    (2)

$E_y(T) = E_2\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$    (3)

$G_{xy}(T) = G_0\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$    (4)

where $E_x$ and $E_y$ are Young's moduli in the x- and y-directions respectively, $G_{xy}$ is the shear modulus, $\gamma$ is the slope of variation of the moduli with temperature, and $\alpha = \gamma T_0$ $(0 \le \alpha < 1)$ is the thermal gradient parameter.

The governing differential equation of transverse motion of an orthotropic rectangular plate of variable thickness in Cartesian coordinates is [6]

$D_x \dfrac{\partial^4 w}{\partial x^4} + D_y \dfrac{\partial^4 w}{\partial y^4} + 2H \dfrac{\partial^4 w}{\partial x^2 \partial y^2} + 2\dfrac{\partial D_x}{\partial x}\dfrac{\partial^3 w}{\partial x^3} + 2\dfrac{\partial H}{\partial x}\dfrac{\partial^3 w}{\partial x\, \partial y^2} + 2\dfrac{\partial D_y}{\partial y}\dfrac{\partial^3 w}{\partial y^3} + 2\dfrac{\partial H}{\partial y}\dfrac{\partial^3 w}{\partial x^2\, \partial y} + \dfrac{\partial^2 D_x}{\partial x^2}\dfrac{\partial^2 w}{\partial x^2} + \dfrac{\partial^2 D_1}{\partial y^2}\dfrac{\partial^2 w}{\partial x^2} + \dfrac{\partial^2 D_y}{\partial y^2}\dfrac{\partial^2 w}{\partial y^2} + \dfrac{\partial^2 D_1}{\partial x^2}\dfrac{\partial^2 w}{\partial y^2} + 4\dfrac{\partial^2 D_{xy}}{\partial x\, \partial y}\dfrac{\partial^2 w}{\partial x\, \partial y} + \rho h \dfrac{\partial^2 w}{\partial t^2} = 0$

where $D_x$ and $D_y$ are the flexural rigidities in the x- and y-directions respectively and $D_{xy}$ is the torsional rigidity, with

$D_1 = \nu_x D_y = \nu_y D_x$    (5)

($\nu_x$ and $\nu_y$ being Poisson's ratios) and

$H = D_1 + 2D_{xy}$.    (6)

For free transverse vibrations of the plate, $w(x, y, t)$ can be defined as

$w(x, y, t) = W(x, y)\, e^{ipt}$

where $p$ is the radian frequency of vibration.

A two-term deflection function for the clamped rectangular plate is taken as


Institute of Management Studies, Dehradun

$W(x, y) = A_1\left(\dfrac{x}{a}\right)^2\left(\dfrac{y}{b}\right)^2\left(1 - \dfrac{x}{a}\right)^2\left(1 - \dfrac{y}{b}\right)^2 + A_2\left(\dfrac{x}{a}\right)^3\left(\dfrac{y}{b}\right)^3\left(1 - \dfrac{x}{a}\right)^3\left(1 - \dfrac{y}{b}\right)^3$    (7)

where $A_1$ and $A_2$ are constants to be evaluated.

Using equation (2) in the values of $D_x$, $D_y$ and $D_{xy}$, one has

$D_x = \dfrac{E_1 h^3}{12(1 - \nu_x \nu_y)}\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$

$D_y = \dfrac{E_2 h^3}{12(1 - \nu_x \nu_y)}\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$    (8)

$D_{xy} = \dfrac{G_0 h^3}{12}\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]$
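Before the deflection function of equation (7) is used in the energy expressions, it is worth confirming that it is admissible for a clamped plate, i.e. that it satisfies $W = \partial W/\partial x = \partial W/\partial y = 0$ on all four edges. A small symbolic sketch (the values of $A_1$, $A_2$ are left symbolic, since they are fixed later by the Ritz conditions):

```python
# Symbolic check that the two-term deflection function of equation (7)
# vanishes, together with its first derivatives, on all four clamped edges.
import sympy as sp

x, y, a, b, A1, A2 = sp.symbols('x y a b A1 A2', positive=True)
W = (A1*(x/a)**2*(y/b)**2*(1 - x/a)**2*(1 - y/b)**2
     + A2*(x/a)**3*(y/b)**3*(1 - x/a)**3*(1 - y/b)**3)

for edge in ({x: 0}, {x: a}, {y: 0}, {y: b}):
    assert sp.simplify(W.subs(edge)) == 0                  # W = 0 on the edge
    assert sp.simplify(sp.diff(W, x).subs(edge)) == 0      # dW/dx = 0
    assert sp.simplify(sp.diff(W, y).subs(edge)) == 0      # dW/dy = 0

print("clamped boundary conditions (20) satisfied identically")
```

The squared and cubed edge factors are exactly what guarantees this: each term carries at least a double zero at every edge, so both the function and its normal slope vanish there.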

When the plate is executing transverse vibration with mode shape $W(x, y)$, the strain energy $V$ and the kinetic energy $T_1$ are respectively expressed as

$V = \dfrac{1}{2}\displaystyle\int_0^a\!\!\int_0^b \left[ D_x \left(\dfrac{\partial^2 W}{\partial x^2}\right)^2 + D_y \left(\dfrac{\partial^2 W}{\partial y^2}\right)^2 + 2D_1 \dfrac{\partial^2 W}{\partial x^2}\dfrac{\partial^2 W}{\partial y^2} + 4D_{xy}\left(\dfrac{\partial^2 W}{\partial x\, \partial y}\right)^2 \right] dy\, dx$    (9)

and

$T_1 = \dfrac{1}{2}\rho p^2 \displaystyle\int_0^a\!\!\int_0^b h\, W^2\, dy\, dx$    (10)

where $\rho$ is the mass density.

Using equations (4), (7) & (8) in equation (9), one has

$V = \dfrac{E_1}{24(1 - \nu_x \nu_y)}\displaystyle\int_0^a\!\!\int_0^b h^3\left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right] \left[ \left(\dfrac{\partial^2 W}{\partial x^2}\right)^2 + \dfrac{E_2}{E_1}\left(\dfrac{\partial^2 W}{\partial y^2}\right)^2 + 2\nu_x \dfrac{E_2}{E_1}\, \dfrac{\partial^2 W}{\partial x^2}\dfrac{\partial^2 W}{\partial y^2} + \dfrac{4G_0}{E_1}(1 - \nu_x \nu_y)\left(\dfrac{\partial^2 W}{\partial x\, \partial y}\right)^2 \right] dy\, dx$    (11)

Variation in Thickness

The thickness h of the plate is assumed to vary linearly in both directions:

$h = h_0\left(1 + \beta_1 \dfrac{x}{a}\right)\left(1 + \beta_2 \dfrac{y}{b}\right)$    (12)

where $h = h_0$ at $x = y = 0$, and $\beta_1$, $\beta_2$ are the taper constants.
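The modulus factor of equations (2)-(4) and the thickness law of equation (12) are both simple closed-form profiles, so they are easy to evaluate directly. A quick numeric sketch (h₀, β₁, β₂ and α below are assumed sample values, not taken from the paper):

```python
# Evaluate the exponential modulus factor of eqs (2)-(4) and the linear
# bi-directional thickness of eq. (12). All numeric constants are assumed
# sample values for illustration only.
import math

h0, beta1, beta2, alpha = 0.01, 0.2, 0.6, 0.3

def modulus_factor(x_over_a):
    """[1 - alpha*(1 - (e^{x/a} - 1)/(e - 1))] from equations (2)-(4)."""
    return 1 - alpha*(1 - (math.exp(x_over_a) - 1)/(math.e - 1))

def thickness(x_over_a, y_over_b):
    """h = h0 (1 + beta1 x/a)(1 + beta2 y/b), equation (12)."""
    return h0*(1 + beta1*x_over_a)*(1 + beta2*y_over_b)

print(modulus_factor(0.0))     # 1 - alpha at the heated end x = 0 -> 0.7
print(modulus_factor(1.0))     # full modulus at x = a -> 1.0
print(thickness(1.0, 1.0))     # thickest corner -> 0.0192
```

The factor rises monotonically from $1 - \alpha$ at $x = 0$ to $1$ at $x = a$, which is the exponential counterpart of the familiar linear variation $1 - \alpha(1 - x/a)$.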

Using equation (12) in equations (10) & (11), one has

$T_1 = \dfrac{1}{2}\rho p^2 h_0 \displaystyle\int_0^a\!\!\int_0^b \left(1 + \beta_1\dfrac{x}{a}\right)\left(1 + \beta_2\dfrac{y}{b}\right) W^2\, dy\, dx$    (13)

and

$V = \dfrac{E_1 h_0^3}{24(1 - \nu_x\nu_y)}\displaystyle\int_0^a\!\!\int_0^b \left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]\left(1 + \beta_1\dfrac{x}{a}\right)^3\left(1 + \beta_2\dfrac{y}{b}\right)^3 \left[ \left(\dfrac{\partial^2 W}{\partial x^2}\right)^2 + \dfrac{E_2}{E_1}\left(\dfrac{\partial^2 W}{\partial y^2}\right)^2 + 2\nu_x\dfrac{E_2}{E_1}\,\dfrac{\partial^2 W}{\partial x^2}\dfrac{\partial^2 W}{\partial y^2} + \dfrac{4G_0}{E_1}(1 - \nu_x\nu_y)\left(\dfrac{\partial^2 W}{\partial x\, \partial y}\right)^2 \right] dy\, dx$    (14)


Method of Solution

An approximate solution to the present problem is obtained by applying the Rayleigh-Ritz method, which requires the maximum strain energy to equal the maximum kinetic energy. Accordingly, the following equation must be satisfied:

$\delta(V - T_1) = 0$    (15)

On substituting the values of $V$ and $T_1$ from equations (14) and (13) into equation (15), one has

$\delta(V_1 - \lambda^2 T_2) = 0$    (16)

where

$\lambda^2 = \dfrac{12\,\rho p^2 a^4 (1 - \nu_x \nu_y)}{E_1 h_0^2}$    (17)

is a frequency parameter,

$V_1 = \displaystyle\int_0^a\!\!\int_0^b \left[1 - \alpha\left(1 - \dfrac{e^{x/a} - 1}{e - 1}\right)\right]\left(1 + \beta_1\dfrac{x}{a}\right)^3\left(1 + \beta_2\dfrac{y}{b}\right)^3 \left[ \left(\dfrac{\partial^2 W}{\partial x^2}\right)^2 + \dfrac{E_2}{E_1}\left(\dfrac{\partial^2 W}{\partial y^2}\right)^2 + 2\nu_x\dfrac{E_2}{E_1}\,\dfrac{\partial^2 W}{\partial x^2}\dfrac{\partial^2 W}{\partial y^2} + \dfrac{4G_0}{E_1}(1 - \nu_x\nu_y)\left(\dfrac{\partial^2 W}{\partial x\, \partial y}\right)^2 \right] dy\, dx$    (18)

and

$T_2 = \displaystyle\int_0^a\!\!\int_0^b \left(1 + \beta_1\dfrac{x}{a}\right)\left(1 + \beta_2\dfrac{y}{b}\right) W^2\, dy\, dx.$    (19)

Boundary Conditions and Frequency Equation

For a clamped rectangular plate, the boundary conditions are

$W = \dfrac{\partial W}{\partial x} = 0$ at $x = 0, a$ and $W = \dfrac{\partial W}{\partial y} = 0$ at $y = 0, b$.    (20)

Equation (16) contains the two unknown constants $A_1$ and $A_2$, which may be evaluated by the following procedure:

$\dfrac{\partial}{\partial A_q}\left(V_1 - \lambda^2 T_2\right) = 0, \quad q = 1, 2.$    (21)

On simplifying equation (21), one gets equations of the form

$c_{q1} A_1 + c_{q2} A_2 = 0, \quad q = 1, 2$    (22)

where $c_{q1}$ and $c_{q2}$ involve the parametric constants and the frequency parameter. For a non-zero solution, the determinant of the coefficients of equation (22) must vanish. The frequency equation thus comes out to be

$\begin{vmatrix} c_{11} & c_{12} \\ c_{21} & c_{22} \end{vmatrix} = 0.$    (23)
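The procedure of equations (16)-(23) is a standard two-term Ritz eigenvalue problem and can be sketched numerically. In the sketch below, the material ratios follow the orthotropic values used later in the paper, while α, β₁, β₂ and a/b are assumed sample values; the computation works in the nondimensional variables X = x/a, Y = y/b and omits the a⁴/h₀² normalization of equation (17), so only the existence and ordering of the two roots are meaningful, not their absolute values.

```python
# Sketch of the two-term Rayleigh-Ritz computation behind eqs (16)-(23):
# build the quadratic forms V1 (stiffness) and T2 (mass) on the two-term
# basis of eq. (7) and solve det(K - lam2*M) = 0 as a 2x2 eigenproblem.
# Constants below are assumed sample values; normalization of lam2 omitted.
import numpy as np
import sympy as sp

X, Y = sp.symbols('X Y')
# Two-term clamped deflection functions of equation (7), nondimensional form
W = [(X*Y*(1 - X)*(1 - Y))**2, (X*Y*(1 - X)*(1 - Y))**3]

E2_E1, nux_E2_E1, G_term = 0.32, 0.04, 0.09  # E2/E1, nu_x*E2/E1, (G0/E1)(1-nu_x*nu_y)
alpha, beta1, beta2, r = 0.2, 0.2, 0.6, 1.5  # thermal gradient, tapers, a/b

F = 1 - alpha*(1 - (sp.exp(X) - 1)/(sp.E - 1))  # modulus factor, eqs (2)-(4)
H3 = (1 + beta1*X)**3 * (1 + beta2*Y)**3        # thickness cubed, eq. (12)
H1 = (1 + beta1*X) * (1 + beta2*Y)

def integrate(expr):
    """Trapezoid-rule integral of expr over the unit square."""
    f = sp.lambdify((X, Y), expr, 'numpy')
    g = np.linspace(0.0, 1.0, 201)
    XX, YY = np.meshgrid(g, g, indexing='ij')
    vals = np.asarray(f(XX, YY), dtype=float)
    w = np.full(g.size, 1.0); w[0] = w[-1] = 0.5
    dg = g[1] - g[0]
    return float((w[:, None] * w[None, :] * vals).sum() * dg * dg)

K = np.zeros((2, 2)); M = np.zeros((2, 2))
for q in range(2):
    for p in range(2):
        Wq, Wp = W[q], W[p]
        # Strain-energy form V1 of eq. (18); y-derivatives pick up r = a/b
        integrand = F*H3*(sp.diff(Wq, X, 2)*sp.diff(Wp, X, 2)
                          + E2_E1 * r**4 * sp.diff(Wq, Y, 2)*sp.diff(Wp, Y, 2)
                          + nux_E2_E1 * r**2 * (sp.diff(Wq, X, 2)*sp.diff(Wp, Y, 2)
                                                + sp.diff(Wq, Y, 2)*sp.diff(Wp, X, 2))
                          + 4*G_term * r**2 * sp.diff(Wq, X, Y)*sp.diff(Wp, X, Y))
        K[q, p] = integrate(integrand)
        M[q, p] = integrate(H1 * Wq * Wp)   # kinetic form T2 of eq. (19)

# Frequency equation (23): det(K - lam2*M) = 0 -> generalized eigenvalues
lam2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
print("lambda^2 roots (modes 1, 2), unnormalized:", lam2)
```

As the text states, the frequency equation is quadratic in λ², and the two positive roots obtained here correspond to the first and second modes.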

Result and Discussion

The vibration response was estimated for the first two modes of vibration. The frequency parameter is calculated for different values of the taper constants and the thermal gradient, for a clamped plate with linear thickness variation in both directions. The frequency equation (23) is quadratic in λ², so it gives two roots, say λ₁² and λ₂². These two values


correspond to the first and second modes of vibration respectively.

The values of λ₁² and λ₂² are presented graphically against the various parameters in figures (1) to (5).

The parameters for the orthotropic material have been taken as [6]:

$E_2/E_1 = 0.32$, $\nu_x E_2/E_1 = 0.04$, $(G_0/E_1)(1 - \nu_x\nu_y) = 0.09$.

The work is verified by comparison with the unheated plate, obtained by taking the thermal gradient to be zero.

Fig. (1) shows the variation of the frequency parameter λ² with the thermal gradient parameter α, for various values of the taper constants β₁ and β₂, for a clamped plate in both modes of vibration. The figure makes clear that λ² decreases as α increases, whether β₁ and β₂ are zero or non-zero, and that λ² decreases more sharply for the second mode of vibration than for the first.

Figures (2) and (3) display the variation of the frequency parameter λ² with the taper constant β₁, for both modes of vibration. For both modes, λ² increases with increasing β₁, whether the plate is heated or unheated. For non-zero β₂, λ² takes higher values, and λ² decreases in going from the unheated to the heated plate. A similar pattern is found for β₂, as figures (4) and (5) show. Comparing β₁ and β₂, λ² takes larger values in the case of β₁.

A comparative study was made between plates whose temperature varies linearly and exponentially, the thickness in both cases varying linearly in both directions. The plate undergoing linear temperature variation was found to be more stable under vibration than the plate undergoing exponential temperature variation.

References

1. A.W. Leissa, Recent studies in plate vibration 1981-1985, Part II: complicating effects, The Shock and Vibration Digest 19 (1987), 10-24.

2. A.K. Gupta, Tripti Johri and R.P. Vats, Thermal effect on vibration of non-homogeneous orthotropic rectangular plate having bi-directional parabolically varying thickness, Proceedings of the World Congress on Engineering and Computer Science 2007, San Francisco, USA, 24-26 Oct. 2007, 784-787.

3. A.K. Gupta and A. Khanna, Vibration of visco-elastic rectangular plate with linearly thickness variations in both directions, Journal of Sound and Vibration 301 (2007), 450-457.

4. B. Singh and V. Saxena, Transverse vibration of rectangular plate with bi-directional thickness variation, Journal of Sound and Vibration 198 (1996), 51-65.

5. Y.K. Cheung and D. Zhou, The free vibrations of tapered rectangular plates using a new set of beam functions with the Rayleigh-Ritz method, Journal of Sound and Vibration 223 (1999), 703-722.

6. J.S. Tomar and A.K. Gupta, Effect of thermal gradient on frequencies of an orthotropic rectangular plate whose thickness varies in two directions, Journal of Sound and Vibration 98 (1985), 257-262.

7. M.S. Dhotarad and N. Ganesan, Vibration analysis of a rectangular plate subjected to a thermal gradient, Journal of Sound and Vibration 60 (1978), 481-497.

8. Sanjay Kumar, Effect of thermal gradient on some vibration problems of orthotropic visco-elastic plates of variable thickness, Ph.D. Thesis (2003), C.C.S. University, Meerut, U.P., India.

9. A.K. Gupta and Lalit Kumar, Thermal effect on vibration of non-homogeneous visco-elastic rectangular plate of linear varying thickness, Meccanica 43 (2008), 47-54.

10. P.A.A. Laura, R.O. Grossi and G.I. Carneiro, Transverse vibrations of rectangular plates with thickness varying in two directions and with edges elastically restrained against rotation, Journal of Sound and Vibration 63 (1979), 499-505.

11. R.K. Jain and S.R. Soni, Free vibrations of rectangular plates of parabolically varying thickness, Indian Journal of Pure and Applied Mathematics 4 (1973), 267-277.

12. A.K. Gupta and Anupam Khanna, Vibration of clamped visco-elastic rectangular plate with parabolic thickness variations, Shock and Vibration 15 (2008), 713-723.

13. A.W. Leissa, Vibration of Plates, NASA SP-160, 1969.

14. R.H. Gutierrez, P.A.A. Laura and R.O. Grossi, Vibrations of rectangular plates of bi-linearly varying thickness with general boundary conditions, Journal of Sound and Vibration 75 (1981), 323-328.

15. Anupam Khanna, Some vibration problems of visco-elastic plate of variable thickness in two directions, Ph.D. Thesis (2005), C.C.S. University, Meerut, U.P., India.

[Fig. 1: Variation of frequency parameter λ² with thermal gradient α, for β₁ = β₂ = 0.0 and β₁ = 0.2, β₂ = 0.6; a/b = 1.5; first and second modes.]

Page 49: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Institute of Management Studies, Dehradun

correspond to first and second modes of vibration respectively.

These values of & have been explained with the help of figures, plotted between various parameters shown in figures (1) to (5).

The parameter for orthotropic material has been taken as [6],

Verification of work is done by comparing these with unheated plate for taking thermal gradient to be zero.

Fig (1) shows variation of frequency parameter ‘? ’ with thermal gradient parameter ‘? ’, for various values of taper constants ?1 & ?2, for a clamped plate for both modes of vibration. From fig. it is clear that with increase in ‘? ’, ‘? ’ decrease whether ?1 & ?2 is zero or non-zero. It is to be noticed that ‘? ’ decreases sharply for second mode of vibration as compared to the first mode of vibration.

Figures (2) and (3) display the variation of taper constant ‘ί 1’ with frequency parameter ‘? ’, for both modes of vibration. It is observed that for both the modes of vibration ‘? ’ increases with increase in ‘ί 1’, whether the plate is heated or unheated. For non-zero

‘ί 2’, ‘? ’ has higher value and from unheated to heated plate, value of ‘? ’ decrease. A similar pattern is found in case of ‘?2’ referring to figures (4) and (5). If a comparison is made between ‘?1’ & ‘?2’ it is found that in case of ‘?1’, ‘? ’ has larger values.

A comparative study was done for the plates in which temperature was varying linearly and exponentially respectively. The thickness of the plate was assumed to be varying linearly in both directions. It was found that plate undergoing linear variation in temperature was more stable under vibration effect as compared to those undergoing exponential variation in temperature.

References

1. A.W.Leissa, Recent studies in plate vibration 1981-1985 part II, complicating effects, The Shock and Vibration Digest 19(1987), 10-24.

2. A.K.Gupta, Tripti Johri and R.P.Vats, Thermal effect on vibration of non-homogeneous

2λ1λ

2

1

E

E=0.32 ,

2

1

x

Ev

E = 0.04,

0

1

(1 )x y

Gv v

E− = 0.09

orthotropic rectangular plate having bi-directional parabolically varying thickness, Proceeding of International Conference in World Congress on Engineering and Computer Science 2007, San-Francisco, USA, 24-26 Oct, 2007, 784-787.

3. A.K. Gupta and A. Khanna, Vibration of visco-elastic rectangular plate with linearly thickness variations in both directions, Journal of Sound and Vibration 301(2007), 450-457.

4. B. Singh and V. Saxena, Transverse vibration of rectangular plate with bi-directional thickness variation, Journal of Sound and Vibration 198(1996), 51-65.

5. Y.K. Cheung and D. Zhou, The free vibrations of tapered rectangular plates using a new set of beam functions with the Rayleigh-Ritz method, Journal of Sound and Vibration 223(1999), 703-722.

6. J.S. Tomar and A.K. Gupta, Effect of thermal gradient on frequencies of an orthotropic rectangular plate whose thickness varies in two directions, Journal of Sound and Vibration 98(1985), 257-262.

7. M.S. Dhotarad and N. Ganesan, Vibration analysis of a rectangular plate subjected to a thermal gradient, Journal of Sound and Vibration 60(1978), 481-497.

8. Sanjay Kumar, Effect of thermal gradient on some vibration problems of orthotropic visco-elastic plates of variable thickness, Ph.D. Thesis (2003), C.C.S. University, Meerut, U.P., (India).

9. A.K. Gupta and Lalit Kumar, Thermal effect on vibration of non-homogeneous visco-elastic rectangular plate of linear varying thickness, Meccanica 43(2008), 47-54.

10. P.A.A. Laura, R.O. Grossi and G.I.Carneiro, Transverse vibrations of rectangular plates with thickness varying in two directions and with edges elastically restrained against rotation, Journal of Sound and Vibration 63(1979), 499-505.

43"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

11. R.K. Jain and S.R. Soni, Free vibrations of rectangular plates of parabolically varying thickness, Indian Journal of Pure Appl. Math. 4(1973), 267-277.

12. A.K. Gupta and Anupam Khanna, Vibration of clamped visco-elastic rectangular plate with parabolic thickness variations, J. Shock and Vibration 15(2008), 713-723.

13. A.W. Leissa, Vibration of Plates, NASA SP-160, 1969.

14. R.H. Gutierrez, P.A.A. Laura and R.O. Grossi, Vibrations of rectangular plates of bi-linearly varying thickness with general boundary conditions, Journal of Sound and Vibration 75(1981), 323-328.

15. Anupam Khanna, Some vibration problems of visco-elastic plate of variable thickness in two directions, Ph.D. Thesis(2005), C.C.S. University, Meerut, U.P. (India).

Fig. 1: Variation of frequency parameter λ2 with thermal gradient α for β1 = β2 = 0.0 and for β1 = 0.2, β2 = 0.6 (a/b = 1.5; Modes 1 and 2).

Effect of Exponential Temperature Variation...



Institute of Management Studies, Dehradun

Fig. 2: Variation of frequency parameter λ2 with taper constant β1 for β2 = 0.0 and β2 = 0.6 (α = 0.0, a/b = 1.5; Modes 1 and 2).

Fig. 3: Variation of frequency parameter λ2 with taper constant β1 for β2 = 0.0 and β2 = 0.6 (α = 0.4, a/b = 1.5; Modes 1 and 2).

Fig. 4: Variation of frequency parameter λ2 with taper constant β2 for β1 = 0.0 and β1 = 0.6 (α = 0.0, a/b = 1.5; Modes 1 and 2).

Fig. 5: Variation of frequency parameter λ2 with taper constant β2 for β1 = 0.0 and β1 = 0.6 (α = 0.4, a/b = 1.5; Modes 1 and 2).


Study of Performance Issues of TCP Reno and Vegas

Dinesh C. Dobhal* Dr. D. Pant** Kumar Manoj***

*Lecturer, Deptt. of Computer Science, GEIT, Dehradun, Email: [email protected] **Reader and Head, Deptt. of Computer Science, S.S.J. Campus, Almora. ***Research Scholar, Electronics & Comp. Discipline, I.I.T. Roorkee, Saharanpur Campus, Email: [email protected]

ABSTRACT

This paper compares the performance of TCP Reno and TCP Vegas in homogeneous and heterogeneous network environments using the ns-2 network simulator, taking fairness and throughput as the performance metrics. The results show that while TCP Vegas achieves better throughput and fairness in a homogeneous network environment, it fails to outperform Reno in a heterogeneous network environment. Several simulations were run with ns-2 to improve the accuracy of the results.

Key words: TCP Reno, Vegas, Drop Tail, RED, ns-2.

Introduction

TCP (Transmission Control Protocol) [1] was designed to provide reliable end-to-end delivery of data over unreliable networks. It has been extensively tuned to give good performance at the transport layer in the traditional wired network environment, which has resulted in a number of TCP variants such as TCP Reno [2, 9] and Vegas [2, 5, 6, 9].

It is very important to ensure that each user's access to the network remains fair. Fairness can be intuitively defined as the ratio of a flow's obtained throughput to its fair share of the bandwidth. In addition to the throughput problem, TCP flows exhibit severe unfairness, which results from the joint interactions of TCP, the MAC-layer protocol and the queuing discipline (such as drop-tail or RED [7]) at the router; the stability of these mechanisms is studied in [3]. The unfairness shows up in the following aspects:

• TCP's window-based congestion control adjusts the congestion window size every RTT. The congestion window doubles every RTT in the slow start phase and increases linearly in the congestion avoidance phase. Therefore, flows with longer RTTs increase their congestion window more slowly than flows with shorter RTTs. This produces per-flow unfairness.

• At the network routers, an unfair packet-dropping scheme, such as the FIFO drop-tail scheme, causes some flows to experience more losses than others, which increases the unfairness.
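The per-flow unfairness described above is commonly quantified with Jain's fairness index, which equals 1.0 when all flows receive an equal share; the paper does not name its fairness metric, so this choice is an illustrative assumption:

```python
def jains_fairness_index(throughputs):
    """Jain's fairness index: 1.0 means perfectly equal shares,
    approaching 1/n as a single flow dominates."""
    n = len(throughputs)
    total = sum(throughputs)
    return total * total / (n * sum(x * x for x in throughputs))

# Two flows sharing a 200 Kbps bottleneck equally vs. unequally:
print(jains_fairness_index([100.0, 100.0]))  # 1.0
print(jains_fairness_index([150.0, 50.0]))   # 0.8
```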

Reno TCP

In TCP Reno, the window size changes cyclically: it continues to increase until packet loss occurs. TCP Reno has two phases in increasing its window size: the Slow Start phase and the Congestion Avoidance phase. When an ACK packet is received by TCP at the server side at time t + tA [sec], cwnd(t + tA) is updated from cwnd(t) as follows [2, 9]:

cwnd(t + tA) = cwnd(t) + 1,           if cwnd(t) < ssth
cwnd(t + tA) = cwnd(t) + 1/cwnd(t),   if cwnd(t) ≥ ssth

where ssth [packets] is the threshold value at which TCP changes its phase from Slow Start to Congestion Avoidance. When packet loss is detected by retransmission timeout expiration [2], cwnd(t) and ssth are updated as:

ssth = cwnd(t)/2
cwnd(t) = 1
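These update rules can be sketched as a simplified per-ACK model (units of packets; timers, the receiver window and fast recovery are omitted):

```python
def reno_on_ack(cwnd, ssth):
    """TCP Reno window growth on each new ACK (simplified).
    Below ssth: slow start (+1 per ACK, i.e. doubling per RTT).
    At or above ssth: congestion avoidance (~+1 packet per RTT)."""
    if cwnd < ssth:
        return cwnd + 1
    return cwnd + 1.0 / cwnd

def reno_on_timeout(cwnd):
    """On a retransmission timeout: remember half the window that
    overflowed the path, then restart from one packet."""
    ssth = cwnd / 2
    return 1, ssth  # new (cwnd, ssth)

cwnd, ssth = 1.0, 8.0
for _ in range(7):                  # slow start: 1 -> 8 in seven ACKs
    cwnd = reno_on_ack(cwnd, ssth)
cwnd = reno_on_ack(cwnd, ssth)      # congestion avoidance: 8 -> 8.125
cwnd, ssth = reno_on_timeout(cwnd)  # timeout loss: cwnd = 1
```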




On the other hand, when TCP detects packet loss via the fast retransmit algorithm [2], it changes cwnd(t) and ssth as follows:

ssth = cwnd(t)/2
cwnd(t) = ssth
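Continuing the simplified model above, the fast-retransmit reaction differs from the timeout case only in where transmission resumes; the connection stays in congestion avoidance:

```python
def reno_on_fast_retransmit(cwnd):
    """Loss inferred from duplicate ACKs: halve the window and
    continue sending at ssth instead of collapsing to one packet."""
    ssth = cwnd / 2
    return ssth, ssth  # new (cwnd, ssth)

print(reno_on_fast_retransmit(16))  # (8.0, 8.0)
```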

Vegas TCP

Unlike Reno, TCP Vegas detects congestion before packet loss occurs, from the difference between the expected and the actual throughput [4, 5]:

Expected Bandwidth = Window Size / BaseRTT

Actual Bandwidth = Window Size / RTT

where BaseRTT is the minimum of all measured round-trip times. Let Diff = Expected − Actual. TCP Vegas maintains two thresholds α and β (α < β): if Diff < α, the congestion window is increased by one during the next RTT, and if Diff > β, the congestion window is decreased by one.

Otherwise, the congestion window is unchanged. The goal of TCP Vegas is to keep a certain number of packets or bytes in the queues of the network. If the actual throughput is smaller than the expected throughput, TCP Vegas takes this as indication of network congestion, and if the actual throughput is very close to the expected throughput, it is suggested that the available bandwidth is not fully utilized, so TCP Vegas increases the window size. This mechanism used in TCP Vegas to estimate the available bandwidth does not purposely cause any packet loss. Hence, the oscillatory behavior is removed and a better throughput is achieved.
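The Vegas adjustment described above can be sketched per RTT as follows (α and β are the usual Vegas thresholds; the values here are illustrative assumptions, as the paper does not list its parameter settings):

```python
def vegas_update(cwnd, base_rtt, rtt, alpha=1.0, beta=3.0):
    """One TCP Vegas adjustment per RTT (simplified).
    diff estimates how many of the flow's packets are queued in the network."""
    expected = cwnd / base_rtt              # throughput on an empty path
    actual = cwnd / rtt                     # measured throughput
    diff = (expected - actual) * base_rtt   # estimated backlog, in packets
    if diff < alpha:
        return cwnd + 1                     # spare capacity: grow
    if diff > beta:
        return cwnd - 1                     # queue building: back off
    return cwnd                             # inside the band: hold steady

print(vegas_update(10, base_rtt=0.120, rtt=0.120))  # 11 (no queuing delay)
print(vegas_update(10, base_rtt=0.120, rtt=0.200))  # 9 (queuing detected)
```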

In this paper we have examined the fairness of TCP Reno and TCP Vegas in homogeneous and heterogeneous environments. The simulation results can provide further insight into the advantages and drawbacks of both TCP versions. The rest of the paper is organized as follows: Section 2 presents the simulation environment, including the network topology and the TCP parameters. Section 3 summarizes the simulation results and discusses their implications. Finally, the conclusion is made in section 4.

Simulation Environment

This section describes the network environment and the parameters for the simulation. The results in this paper are obtained using the ns-2 network simulator [8]. The network topology used in the simulation is depicted in Figure 1. There are two connections in the network: one from node SES1 to node DES1 (propagation delay τ1) and another from node SES2 to node DES2 (propagation delay τ2). These two connections share a bottleneck link from SW1 to SW2 with buffer size B = 20 packets and bandwidth BW = 200 Kbps. A Drop Tail router is implemented at SW1. We have considered two scenarios: in scenario 1 both connections use TCP Reno, and in scenario 2 both connections use TCP Vegas. Each scenario is simulated and studied in a homogeneous and a heterogeneous case. In the homogeneous case, both connections have equal propagation delays (τ1 = τ2 = 120 ms), and in the heterogeneous case they have different propagation delays (τ1 = 120 ms, τ2 = 140 ms). The network is simulated for 200 simulation seconds.
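As a quick check on these parameters, the bottleneck's bandwidth-delay product can be compared with the buffer size (the 1000-byte packet size is an assumption for illustration; the paper does not state it):

```python
bw_bps = 200_000            # bottleneck bandwidth BW = 200 Kbps
one_way_delay_s = 0.120     # propagation delay of connection 1
packet_bytes = 1000         # assumed packet size
buffer_packets = 20         # bottleneck buffer B

rtt_s = 2 * one_way_delay_s                    # ~240 ms round trip
bdp_packets = bw_bps * rtt_s / (8 * packet_bytes)
print(bdp_packets)                             # the pipe holds ~6 packets
# Since B = 20 exceeds the bandwidth-delay product, Reno's probing
# fills the queue (inflating the RTT) well before the link itself
# limits the window.
```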



Simulation Results and Analysis

In this section we present the results gathered through simulation in the different network environments described in Section 2. Figure 2 shows the result of simulating two competing Reno connections in the homogeneous environment, and Figure 3 shows the corresponding result for two competing Vegas connections. Figure 4 shows the result of simulating two competing Reno connections in the heterogeneous environment, and Figure 5 shows the corresponding result for two competing Vegas connections.

Figure 2: TCP Reno in Homogeneous Environment

Figure 3: TCP Vegas in Homogeneous Environment


Figure 1: Network Topology used in Simulation





Figure 4: TCP Reno in Heterogeneous Environment

Figure 5: TCP Vegas in Heterogeneous Environment

Figure 2 shows that at simulation time 50, when the second connection starts transmitting data, there is a large difference between the bandwidth occupied by the two connections, whereas Figure 3 shows that when Vegas connections are simulated in the same environment they share the bandwidth more evenly than the Reno connections, resulting in better throughput and fairness between the competing connections. On the other hand, the results in Figure 4 and Figure 5 show that, with respect to fairness, Vegas fails to outperform Reno in the heterogeneous network environment.

Conclusion

TCP Reno is an aggressive control scheme: it expands its congestion window to acquire more bandwidth until transmitted packets are lost. TCP Vegas, by contrast, is a conservative scheme that requires only a proper share of bandwidth for each connection. The results in this study show that while TCP Vegas is better with respect to throughput and fairness in a homogeneous network environment, it fails to outperform Reno in a heterogeneous network environment.

References

1. “Transmission Control Protocol,” RFC 793, Sep. 1981.

2. W. R. Stevens, TCP/IP Illustrated, Volume 1: The Protocols. Reading, Massachusetts: Addison-Wesley, 1994.

3. Go Hasegawa, Masayuki Murata and Hideo Miyahara, “Fairness and Stability of Congestion Control Mechanisms of TCP”, in Proceedings of IEEE INFOCOM'99, pp. 1329-1336, March 1999.


4. L. S. Brakmo, S. O'Malley and L. L. Peterson, “TCP Vegas: new techniques for congestion detection and avoidance”, in Proceedings of the Conference on Communication Architectures, Protocols and Applications, SIGCOMM '94, volume 24, issue 4, pages 24-35, London, United Kingdom, Oct. 1994.

5. L. S. Brakmo, S. W. O'Malley and L. L. Peterson, “TCP Vegas: New techniques for congestion detection and avoidance,” in Proceedings of ACM SIGCOMM'94, pp. 24-35, October 1994.

6. L. S. Brakmo and L. L. Peterson, “TCP Vegas: End to end congestion avoidance on a global Internet,” IEEE Journal on Selected Areas in Communications, vol. 13, pp. 1465-1480, October 1995.

7. S. Floyd and V. Jacobson, “Random early detection gateways for congestion avoidance,” IEEE/ACM Transactions on Networking, vol. 1, pp. 397-413, August 1993.

8. NS2: http://www.isi.edu/nsnam/ns/

9. J. Mo, R. J. La, V. Anantharam and J. Walrand, “Analysis and comparison of TCP Reno and Vegas,” in Proceedings of IEEE INFOCOM'99, March 1999.





Mobile Ad-hoc Network Protocols (A comparative analysis of some selected protocols)

Ms. Garima Verma

Sr. Lecturer, MCA Deptt., Dehradun Institute of Technology, Dehradun

ABSTRACT

In an infrastructure-less wireless network, communication between two nodes does not require any access point. Such networks are alternatively known as mobile ad-hoc networks (MANETs) because they can be formed as the need arises, without any existing fixed infrastructure.

In order to facilitate communication within the network, a routing protocol is used to discover routes between nodes. The primary goal of such an ad hoc network routing protocol is correct and efficient route establishment between a pair of nodes, so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption.

This article examines the operation of these protocols in order to identify their characteristics and the issues and problems associated with them. The protocols are also compared with one another to enhance understanding of their use in solving commercial and non-commercial communication problems.

Keywords: Ad-hoc networks, routing protocol, QoS, bandwidth.

Introduction - Mobile Ad-hoc Network

A mobile wireless network is a collection of autonomous mobile wireless nodes that can communicate with one another. These networks can be divided into two types based on how the nodes communicate among themselves. The first is known as the infrastructure network, in which all communication between two nodes must go through an access point (AP). The access point provides a local relay function for the network and connects it to the outside world through a wired connection. A mobile wireless node connects itself to, and communicates with, the nearest access point within its transmission range. When a mobile node moves out of range of one access point and enters the range of another, a handoff occurs from the old to the new access point, allowing the node to continue communicating seamlessly throughout the network. Requiring all communication to go through an access point consumes twice the bandwidth of direct communication between nodes, but the advantages of the access-point scheme far outweigh this cost. One such advantage is that the access point allows mobile nodes to operate at very low power by buffering their traffic. The most widely used application of this type of network is the office wireless local area network (WLAN).

The second type of mobile wireless network is the infrastructure-less mobile wireless network. As the name suggests, communication between two nodes in such networks does not require any access point, as it does in infrastructure networks. These networks are alternatively known as mobile ad-hoc networks (MANETs) because they can be formed as the need arises, without any existing fixed infrastructure. There are many possible applications of MANETs; typical applications include emergency search and rescue operations, military operations and conferences, where no fixed infrastructure exists.

In a wireless medium, only nodes within direct transmission range of each other can communicate. Hence, in MANETs, to make communication between two arbitrary nodes possible, every node has to act both as a host and as a router. The network is controlled in a distributed manner by all nodes, rather than by a centralized administration.

An ad-hoc network is a dynamic multihop wireless network established by a set of mobile nodes on a shared wireless channel. Each mobile host performs local broadcasts in order to announce its existence to the surrounding hosts, i.e. the nodes in close proximity to the transmitting host. In that way each mobile host potentially becomes a router, and routes can be established dynamically between a host and the nodes to which routes exist. Ad-hoc networks were initially proposed for military applications such as battlefield communications and disaster recovery.

Figure 1.1: Example of a MANET (hosts interconnected through routers over radio links)

The most common areas in which MANETs are expected to play an important role are:

• Instant-infrastructure scenarios such as the battlefield, desert and jungle, where there is no terrestrial communication infrastructure.

? Regions affected by natural calamities (e.g. flood, cyclone etc.) where access is unavailable because of destruction or damage to the local communication infrastructure.

? Sharing documents at conferences, in colleges etc.

? Other areas may be expansion of internet facility, multimedia applications (video conferencing).

Because of the inherent flexibilities MANets have the potential to serve as ubiquitous wireless infrastructure capability of interconnecting large number of devices, with the capabilities of supporting large numbers of networking applications.

Review of Literature

Charles E Perkins and Pravin Bhagwat (1994) have described the DSDV routing scheme for Mobile computers, they have investigated modification to the Bellmen Ford Algorithm to make it suitable for dynamic and self start networking.

Charles E Perkins and Elizabeth M Royer (1999) have developed a new algorithm for AODV which was quite suitable for a dynamic self-starting network as required by users wishing to utilize ad-hoc networks. They have shown that their algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks.

Johnson, Maltz, Hu, (2003), DSR allows the network to be completely self-organizing and self-configuring, without the need for any existing network infrastructure or administration. The protocol is composed of the two mechanisms of Route Discovery and Route Maintenance, which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network.

Valery Naumov and Thomas Gross (2005) have studies on the example of AODV and DSR protocols the influence of the network size (up to 550 nodes), nodes mobility, nodes density, suggested data traffic on protocols performance.

Ad-hoc Routing Protocols

Since the advent of Defense Advanced Research Projects Agency (DARPA), packet radio networks in the early numerous protocols have been developed for ad hoc mobile networks. Such protocols must deal with the typical limitations of these networks, which include high power consumption, low bandwidth and high error rates. As shown in Fig. 1.2, these routing protocols may generally be categorized as:

Mobile Ad-hoc Network Protocols

52 "Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Page 57: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Mobile Ad-hoc Network Protocols (A comparative analysis of some selected protocols)

Ms. Garima Verma

Sr. Lecturer, MCA Deptt., Dehradun Institute of Technology, Dehradun

ABSTRACT

In an infrastructure-less wireless network, communication between two nodes does not require any access point. Such networks are alternatively known as mobile ad-hoc networks (MANETs) because they can be formed as the need arises, without any existing fixed infrastructure.

In order to facilitate communication within the network, a routing protocol is used to discover routes between nodes. The primary goal of such an ad hoc network routing protocol is correct and efficient route establishment between a pair of nodes so that messages may be delivered in a timely manner. Route construction should be done with a minimum of overhead and bandwidth consumption.

This article examines the operations of these protocols in order to identify their characteristics and the issues and problems associated with them. A comparison between the protocols is also presented, to enhance understanding of how they may be used to solve commercial and non-commercial communication problems.

Keywords : Ad-hoc networks, routing protocol, QoS, Bandwidth.

Introduction - Mobile Ad-hoc Network

A mobile wireless network is a collection of autonomous mobile wireless nodes that can communicate with one another. These networks can be divided into two types based on how the nodes communicate among themselves. The first is known as the infrastructure network. In infrastructure networks, all communication between two nodes must go through an access point (AP). The access point provides a local relay function for the infrastructure network and connects the network to the outside world through a wired connection. A mobile wireless node in such a network connects itself to, and communicates with, the nearest access point within its transmission range. When a mobile node moves out of range of one access point and enters the range of another, a handoff occurs from the old access point to the new one, allowing the node to continue communicating seamlessly throughout the network. Demanding that all communication go through an access point results in twice the bandwidth consumption of direct communication between nodes. But the advantages achieved by using

the access-point scheme far outweigh this cost. One of the advantages provided by an access point is that it allows mobile nodes to operate at very low power, by buffering traffic on their behalf. The most widely used application of this type of network is the office wireless local area network (WLAN).

The second type of mobile wireless network is known as the infrastructure-less mobile wireless network. As the name suggests, communication between two nodes in these networks does not require any access point, as it does in infrastructure networks. These networks are alternatively known as mobile ad-hoc networks (MANETs) because networks can be formed as the need arises, without any existing fixed infrastructure. There are many possible applications of MANETs. Typical applications include emergency search and rescue operations, military operations and conferences, where no fixed infrastructure exists.

In a wireless medium, only those nodes that are within direct transmission range of each other can communicate. Hence, in MANETs, to make communication between two arbitrary nodes possible, every node has to act both as a host and as a router. The network is controlled in a distributed manner by all nodes, rather than by a centralized administration.

"Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

An ad-hoc network is a dynamic multihop wireless network established by a set of mobile nodes on a shared wireless channel. Each mobile host performs local broadcasts to announce its existence to the surrounding hosts, i.e. the nodes in close proximity to the transmitting host. In that way each mobile host becomes, potentially, a router, and it is possible to dynamically establish routes between itself and the nodes to which routes exist. Ad-hoc networks were initially proposed for military applications such as battlefield communications and disaster recovery.

[Figure 1.1: Example of a MANET - hosts linked by radios, each potentially acting as a router]

The most common areas in which MANETs are expected to play an important role are:

- Instant-infrastructure scenarios such as battlefields, deserts and jungles, where there is no terrestrial communication infrastructure.

- Regions affected by natural calamities (e.g. floods, cyclones) where access is unavailable because of destruction or damage to the local communication infrastructure.

- Sharing documents at conferences, in colleges, etc.

- Other areas, such as extension of Internet access and multimedia applications (e.g. video conferencing).

Because of this inherent flexibility, MANETs have the potential to serve as a ubiquitous wireless infrastructure capable of interconnecting large numbers of devices and supporting a wide range of networking applications.

Review of Literature

Charles E. Perkins and Pravin Bhagwat (1994) described the DSDV routing scheme for mobile computers. They investigated modifications to the Bellman-Ford algorithm to make it suitable for dynamic and self-starting networking.

Charles E. Perkins and Elizabeth M. Royer (1999) developed AODV, a new algorithm suitable for the dynamic, self-starting networks required by users wishing to utilize ad-hoc networks. They showed that their algorithm scales to large populations of mobile nodes wishing to form ad-hoc networks.

Johnson, Maltz and Hu (2003) showed that DSR allows the network to be completely self-organizing and self-configuring, without the need for any existing network infrastructure or administration. The protocol is composed of two mechanisms, Route Discovery and Route Maintenance, which work together to allow nodes to discover and maintain source routes to arbitrary destinations in the ad hoc network.

Valery Naumov and Thomas Gross (2005) studied, using the AODV and DSR protocols as examples, the influence of network size (up to 550 nodes), node mobility, node density and offered data traffic on protocol performance.

Ad-hoc Routing Protocols

Since the advent of Defense Advanced Research Projects Agency (DARPA) packet radio networks in the early 1970s, numerous protocols have been developed for ad hoc mobile networks. Such protocols must deal with the typical limitations of these networks, which include high power consumption, low bandwidth and high error rates. As shown in Fig. 1.2, these routing protocols may generally be categorized as:



Institute of Management Studies, Dehradun

[Figure 1.2: Categorization of ad-hoc mobile routing protocols: Table Driven (DSDV, WRP) and On-demand (AODV, DSR)]

Routing Table for Node A (Source), in the three-node chain A - B - C:

Destination node    Next node from source    No. of hops    Sequence no.
A                   A                        0              0012
B                   B                        1              0020
C                   B                        2              0042

1) Table-driven or Proactive protocols
2) Source-initiated (demand-driven) or Reactive protocols

1. Table Driven or Proactive Protocol

In these protocols, every node maintains routes to all possible destinations in the network at all times. To achieve this, all nodes send updates at regular intervals, or when some significant change takes place, and other nodes, upon receiving them, update their routing tables to accommodate recent changes in topology. Proactive protocols have the advantage that when a route to some destination is needed, the information is readily available, so a node experiences minimal delay in finding the route. However, these protocols continuously consume a significant fraction of network capacity by sending regular updates to keep the routing information up to date.

The most commonly known protocols in the Table-Driven category are:

a) DSDV (Destination-Sequenced Distance-Vector routing)

b) WRP (Wireless Routing Protocol)

a) Destination-Sequenced Distance-Vector Routing (DSDV)

It is a table-driven routing scheme for ad hoc mobile networks based on the Bellman-Ford algorithm. This protocol was created to solve the routing-loop problem.

In this protocol, packets are transmitted to the destination by maintaining routing tables, created at each node, with information on all available destinations and the number of hops to each. Every entry carries a sequence number that originates from the destination node. To keep the information consistent, each node sends regular updates, by broadcasting or multicasting, to every node so that routing tables can be updated promptly.
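As a rough sketch (not from the paper; all names are invented), the DSDV update rule implied above can be expressed as: an advertised route replaces the current entry when it carries a newer destination sequence number, or the same sequence number with a smaller hop count.

```python
# Sketch of the DSDV route-update rule (illustrative; names are invented).
# An entry is (next_hop, hop_count, seq_no); newer sequence numbers win,
# and ties are broken by the smaller hop count.

def dsdv_update(table, dest, next_hop, hop_count, seq_no):
    """Install the advertised route if it is fresher, or equally fresh but shorter."""
    current = table.get(dest)
    if (current is None
            or seq_no > current[2]
            or (seq_no == current[2] and hop_count < current[1])):
        table[dest] = (next_hop, hop_count, seq_no)
    return table

# Node A's table from the example chain A - B - C.
table = {"A": ("A", 0, 12), "B": ("B", 1, 20), "C": ("B", 2, 42)}
dsdv_update(table, "C", "B", 3, 44)   # fresher sequence number: accepted
dsdv_update(table, "C", "B", 1, 44)   # same sequence, fewer hops: accepted
dsdv_update(table, "C", "B", 1, 40)   # stale sequence number: ignored
print(table["C"])  # ('B', 1, 44)
```

Because stale sequence numbers are never installed, loops caused by out-of-date distance information cannot form, which is the loop-freedom property the text refers to.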

b) Wireless Routing Protocol (WRP)

It is a proactive unicast routing protocol for MANETs. WRP is an enhanced version of the distance-vector routing approach based on the Bellman-Ford algorithm. Because of the absence of infrastructure and the mobility of nodes, routing loops can be generated; this protocol provides a mechanism to reduce routing loops and to ensure reliable message exchange.

WRP uses properties of the Distributed Bellman-Ford algorithm to enable faster convergence. It maintains shortest-path information for every destination node in the network. Whereas DSDV maintains only one topology table, WRP uses a set of tables to maintain network information. These are as follows:

1. Distance Table (DT)

2. Routing Table (RT)

3. Link Cost Table (LCT)

4. Message Retransmission List (MRL)

DT contains information about the neighbors of a node. It maintains a matrix where each element contains distance and the previous node of the neighbor for a particular destination.

The RT contains an updated view of the network for all destinations. It keeps information on the predecessor node, the successor node and a flag (which indicates the status of the path, i.e. whether it is a correct path or a loop).

The LCT maintains the cost of reaching each destination, expressed as the number of hops; the cost of a broken link is infinite. It also records the time elapsed between two updates.

The MRL stores an entry for every new message that needs to be retransmitted and maintains a counter for each entry. The counter is decremented by one after every retransmission of the update message. A node also marks each entry in the RT for which a message has been transmitted but no acknowledgement received. Once the counter reaches 0, all entries are checked for acknowledgement; any message for which no acknowledgement has been received is retransmitted. Thus a node detects link breaks by the number of update periods that have elapsed since the last successful transmission. On receiving a message, a node updates the distance for the sending neighbor and also checks the distances reported for its other neighbors. Hence convergence in WRP is much faster than in DSDV.
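The MRL bookkeeping described above can be sketched as follows. This is a deliberate simplification (the real protocol carries more state, and all field names here are invented): each pending update keeps a retransmission counter and the set of neighbors that have not yet acknowledged it.

```python
# Sketch of WRP's Message Retransmission List (MRL) bookkeeping
# (simplified and illustrative; field names are invented).

class MRLEntry:
    def __init__(self, message, neighbors, retries=3):
        self.message = message
        self.counter = retries              # decremented once per update period
        self.awaiting_ack = set(neighbors)  # neighbors that have not ACKed yet

def on_ack(mrl, msg_id, neighbor):
    """A neighbor acknowledged this update message."""
    mrl[msg_id].awaiting_ack.discard(neighbor)

def on_update_period(mrl):
    """Decrement counters; return ids of messages to retransmit."""
    retransmit = []
    for msg_id, entry in mrl.items():
        entry.counter -= 1
        if entry.counter <= 0 and entry.awaiting_ack:
            retransmit.append(msg_id)
            entry.counter = 3  # re-arm for the next round
    return retransmit

mrl = {"u1": MRLEntry("update-1", ["B", "C"], retries=1)}
on_ack(mrl, "u1", "B")          # B acknowledged; C stayed silent
print(on_update_period(mrl))    # ['u1']  -> retransmit toward C
```

A neighbor that repeatedly fails to acknowledge across several update periods is, in this scheme, exactly how a link break is detected.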

2. Source-initiated (demand-driven) or Reactive Protocol

In these protocols, a route search to a destination is initiated only when the need to communicate arises; hence these protocols are often called On-Demand protocols. Nodes maintain routes only to active destinations, and only for some time interval. If there is no communication with a destination through a node within that interval, the node purges the corresponding entry lest the route become stale. A route search is needed only for those destinations that have no entry in the source node's current routing table. As there is no need to continually send and receive updates to maintain routes, as in proactive protocols, nodes can put themselves into "sleep" or "standby" mode when they are idle. Thus these protocols consume much less bandwidth and power and suit the ad hoc environment, where bandwidth and power are crucial resources. Reactive protocols suffer from the disadvantage that the delay in determining a route to the destination can be significantly high, and nodes will typically experience a long delay before the actual communication.
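The purge-on-idle behaviour described above can be sketched as follows (a hypothetical helper; the timeout value is illustrative, not taken from any specific protocol):

```python
# Sketch of a reactive protocol's idle-route purging (illustrative).
# A route entry that has been unused for longer than ACTIVE_ROUTE_TIMEOUT
# is removed, so it cannot go stale while topology changes underneath it.

ACTIVE_ROUTE_TIMEOUT = 3.0  # seconds; value chosen for illustration only

routes = {}                 # dest -> (next_hop, time_last_used)

def use_route(dest, next_hop, now):
    """Sending or forwarding traffic refreshes the entry's timestamp."""
    routes[dest] = (next_hop, now)

def purge_idle(now):
    """Drop every entry idle for longer than the timeout."""
    for dest in [d for d, (_, t) in routes.items()
                 if now - t > ACTIVE_ROUTE_TIMEOUT]:
        del routes[dest]

use_route("D", "B", now=0.0)
use_route("E", "C", now=2.0)
purge_idle(now=4.0)         # route to D has been idle for 4 s: purged
print(sorted(routes))       # ['E']
```

After a purge, the next packet for that destination simply triggers a fresh route search, which is the trade-off (saved bandwidth versus added latency) the paragraph above describes.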

The most common protocols of this type are:

a) AODV (Ad-hoc On-Demand Distance Vector routing protocol)

b) DSR (Dynamic Source Routing)

a) Ad-hoc On-Demand Distance Vector routing protocol (AODV)

This is a reactive, or On-Demand, protocol that improves on DSDV because it minimizes the number of required broadcasts by creating routes on demand. It does not maintain a complete list of routes at each node; nodes that are not on a selected path do not maintain routing information. Whenever a source node wants to send a message to a destination node for which no valid route is available, it initiates a process called Path Discovery to find a node through which a path can be established. It broadcasts a Route Request (RREQ) packet to its neighbors, which then forward this packet to their neighbors, and the process continues until either the destination, or an intermediate node with a fresh route toward the destination, is found. Fig. 1.3(a) shows the propagation of RREQs across the network.

Each node maintains its own sequence number and broadcast ID. The broadcast ID is incremented for every request the node initiates and, combined with the node's IP address, uniquely identifies a RREQ packet. An intermediate node replies to a RREQ only if it has a route to the destination whose corresponding sequence number is greater than or equal to the sequence number in the RREQ. During this process, intermediate nodes record in their route tables the address of the neighbor node from which the first copy of the broadcast packet was received. If a node receives the same broadcast packet again, it discards it.
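The duplicate-suppression and reverse-route bookkeeping just described can be sketched as follows (helper names are invented; this is not the full AODV state machine):

```python
# Sketch of AODV's RREQ duplicate suppression (illustrative).
# A RREQ is identified by (originator IP, broadcast ID); a node processes
# each id once, records a reverse route toward the originator for the
# eventual RREP, and silently drops later copies.

seen = set()

def handle_rreq(origin_ip, broadcast_id, prev_hop, reverse_routes):
    """Return True if the RREQ is new and should be rebroadcast."""
    key = (origin_ip, broadcast_id)
    if key in seen:
        return False                     # already processed: discard
    seen.add(key)
    # Reverse route: the neighbor the first copy arrived from leads
    # back toward the originator.
    reverse_routes[origin_ip] = prev_hop
    return True

routes = {}
print(handle_rreq("10.0.0.1", 7, "nodeB", routes))  # True  (first copy)
print(handle_rreq("10.0.0.1", 7, "nodeC", routes))  # False (rebroadcast copy)
print(routes)  # {'10.0.0.1': 'nodeB'}
```

Keeping only the first-arriving copy both bounds the flood and fixes the reverse path along which the RREP will later be unicast.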

When the RREQ reaches the destination, or an intermediate node with a fresh, valid route, that node replies by unicasting a Route Reply (RREP) packet to the neighbor node from




where it received the first RREQ. As the RREP travels back, every intermediate node on the path makes a forward entry in its RT pointing to the node from which the RREP came. Each such entry marks an active route, but a timer, called the route timer, is attached to every entry and causes its deletion if the route is not used within a specified time. If the source node then moves, it reinitiates the route-discovery protocol to find a new route to the destination. If nodes on the previous route are unavailable or no longer have a clear path, they send a link-failure notification.

[Figure 1.3: (a) Propagation of the RREQ; (b) Path of the RREP back to the source]

AODV uses one additional message type, called the hello message. This is a periodic local broadcast by which each node informs the nodes in its neighborhood of its presence; it is used to maintain the local connectivity of a node.
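Hello-based neighbor maintenance can be sketched as follows. The interval and loss-tolerance values below are illustrative assumptions, not figures from the paper: a neighbor is declared lost if it stays silent for more than a few hello intervals.

```python
# Sketch of hello-based neighbor liveness (illustrative values).
# A neighbor is considered lost when no hello has arrived within
# ALLOWED_HELLO_LOSS hello intervals.

HELLO_INTERVAL = 1.0        # seconds between local hello broadcasts (assumed)
ALLOWED_HELLO_LOSS = 2      # missed hellos tolerated before declaring link loss

last_heard = {}             # neighbor -> time of most recent hello

def on_hello(neighbor, now):
    """Record that a hello was just heard from this neighbor."""
    last_heard[neighbor] = now

def lost_neighbors(now):
    """Neighbors silent for longer than the allowed window."""
    deadline = ALLOWED_HELLO_LOSS * HELLO_INTERVAL
    return sorted(n for n, t in last_heard.items() if now - t > deadline)

on_hello("B", now=0.0)
on_hello("C", now=1.5)
print(lost_neighbors(now=2.5))  # ['B']  (silent for 2.5 s > 2.0 s window)
```

Detecting a lost neighbor this way is what lets AODV trigger the link-failure notification described earlier without waiting for a data packet to fail.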

b) Dynamic Source Routing (DSR)

This is also a reactive, On-Demand protocol, designed specifically for use in multi-hop wireless ad-hoc networking. A network using this protocol requires no existing infrastructure or administration; it is completely self-organizing and self-configuring.

This protocol allows a node to dynamically discover a source route, across multiple network hops, to any destination node. DSR has two main mechanisms: Route Discovery and Route Maintenance.

Route Discovery is the process that starts when a source node wants to send a packet to a destination node but does not know a route to it.

Route Maintenance is the process used to check that a route from source to destination remains valid after either node moves: is the route still available? Has a link broken? This allows the source to attempt another route.

Both processes, Route Discovery and Route Maintenance, operate entirely on demand and require no periodic broadcasts of any kind. The Route Discovery and Route Maintenance operations in DSR are designed to allow unidirectional links and asymmetric routes.

The protocol allows multiple routes to any destination and allows each sender to select and control the routes used in routing its packets, for example for use in load balancing or for increased robustness. Other advantages of the DSR protocol include easily guaranteed loop-free routing, support for use in networks containing unidirectional links, use of only "soft state" in routing, and very rapid recovery when routes in the network change. The DSR protocol is designed mainly for mobile ad hoc networks of up to about two hundred nodes, and is designed to work well with even very high rates of mobility.
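Route Discovery's central idea, accumulating the traversed path inside the request itself, can be sketched as below. This is a simplified model over a static, directed topology (node names and the helper are invented); a real DSR node floods asynchronously rather than running a centralized search.

```python
# Sketch of DSR Route Discovery over a static topology (illustrative).
# The route request accumulates the path it has traversed; the first
# copy to reach the destination yields a complete source route.

from collections import deque

def dsr_discover(links, source, dest):
    """Breadth-first flood; each queued item carries the route so far."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        route = queue.popleft()
        node = route[-1]
        if node == dest:
            return route                 # full source route, ready to use
        for neighbor in links.get(node, []):
            if neighbor not in visited:  # a node forwards each RREQ only once
                visited.add(neighbor)
                queue.append(route + [neighbor])
    return None                          # no route exists

links = {"S": ["A", "B"], "A": ["C"], "B": ["C"], "C": ["D"]}
print(dsr_discover(links, "S", "D"))  # ['S', 'A', 'C', 'D']
```

The returned list is exactly what a DSR sender would place in the packet header, which is why the table later in this article lists DSR's "route while packet send" as the full route rather than just the next hop.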

Comparative analysis of Mobile Ad-hoc Protocols

Table-Driven vs. On-Demand Routing

The table-driven ad-hoc routing approach is similar to the connectionless approach of forwarding packets, without any regard to when and how frequently routes are desired. It relies on an underlying routing-table update mechanism that involves the constant propagation of routing information. This is not the case for on-demand routing protocols: when a node using an on-demand protocol desires a route to a new destination, it must wait until such a route can be discovered. On the other hand, because routing information is constantly propagated and maintained in table-driven routing protocols, a route to every other node in the ad hoc network is always available, regardless of whether or not it is needed. This feature, although useful for datagram traffic, incurs substantial signaling traffic and power consumption. Since both bandwidth and battery power are scarce resources in mobile computers, this becomes a serious limitation. The following table lists some of the basic differences between the two approaches.

Parameter             Table-driven                  On-Demand
Routing information   Always available              Available when needed
Routing philosophy    Mostly flat                   Flat
Route updates         Periodic                      Not required
Bandwidth required    Large                         Less
QoS support           Shortest path as QoS metric   Few support QoS; most support shortest path
Route search delay    Less                          More

Comparison between all protocols

Parameter                   DSDV           WRP            AODV           DSR
Routing scheme              Table-driven   Table-driven   On-demand      On-demand
Routing path                Shortest       Shortest       Shortest       Shortest
Route information           Entire route   Entire route   Next hop       Next hop
Frequency of updates        Periodic       Periodic       As needed      As needed
Sequence numbers            Yes            Yes            Yes            No
Loop-free                   Yes            Yes            Yes            Yes
Multiple paths              Yes            No             No             Yes
Connection-setup delay      Fast           Fast           Slow           Slow
Convergence                 Slow           Fast           Quick          Fast
Route while packet sent     Full route     Full route     Next hop       Full route
Time complexity             O(d)           O(h)           O(2d)          O(2d)
Multicast capability        No             No             Yes            No
Routes maintained in        Route table    Route table    Route table    Route cache

(d = network diameter; h = height of routing tree)

Applications of MANETS

In the military, soldiers equipped with multimode mobile communicators can now communicate in an ad-hoc manner, without the need for fixed wireless base stations, i.e. over an infrastructure-less network. Nowadays ad-hoc mobile networks are preferred in almost all areas where fixed infrastructure is unlikely to be available. Commercial scenarios for ad hoc wireless networks include:

- Conferences/meetings/lectures
- Emergency services
- Law enforcement

People today attend meetings and conferences with their laptops, palmtops and notebooks. It is therefore attractive to have instant network formation, in addition to file and information sharing, without the presence of fixed base stations and systems administrators. A presenter can multicast slides and audio to the intended recipients, and attendees can ask questions and interact on a commonly shared whiteboard. Ad hoc mobile communication is particularly useful for relaying information (data, video and/or voice) from one rescue-team member to another over a small handheld or wearable wireless device.


Page 61: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Institute of Management Studies, Dehradun

where it has received first RREQ. As RREQ is sent to the node, all the intermediate nodes in the path will make forward entry in RT. Which, point to the node from where RREP has come. This entry shows active route, but with each entry there is a timer attached, called route timer, which will cause the deletion of the entry if it is not used in specified time. Now as source node moves, it again reinitiates route discovery

25

8

74

6

1

3Source

Destination

Figure 1.3 a) Propagation of the RREQb) Path of the RREP to the source

25

8

74

6

1

3Source

Destination

protocol to find a new route for destination. If the previous route nodes are not available or do not have clear path, they will send link failure notification.

AODV uses one additional message facility, called hello messages: a periodic local broadcast by which each node informs its neighbours of its presence. It is used to maintain the local connectivity of a node.
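As an illustration of the forward entry and its route timer described above, a minimal sketch in C follows. The structure and names (`rt_entry`, `ROUTE_LIFETIME`) are our own illustration, not the AODV specification:

```c
#include <stdbool.h>

#define ROUTE_LIFETIME 3000 /* assumed active-route lifetime in ms (illustrative) */

/* One forward entry in the routing table (RT): it points to the
   neighbour from which the RREP arrived. */
typedef struct {
    int  dest;     /* destination node id                   */
    int  next_hop; /* node the RREP came from               */
    long expiry;   /* route timer: time at which entry dies */
    bool valid;
} rt_entry;

/* Refresh the route timer whenever the entry is used. */
static void rt_use(rt_entry *e, long now) {
    e->expiry = now + ROUTE_LIFETIME;
}

/* The route timer invalidates the entry if it was not used
   within the specified time. */
static void rt_expire(rt_entry *e, long now) {
    if (now >= e->expiry)
        e->valid = false;
}
```

Each use of the route pushes the expiry forward; an unused entry is eventually dropped, exactly the behaviour the route timer provides.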

b) Dynamic Source Routing (DSR)

This is also a reactive, on-demand protocol, designed specially for use in multi-hop wireless ad-hoc networks. A network using this protocol requires no existing infrastructure or administration; it is completely self-organizing and self-configuring.

This protocol allows a node to dynamically discover a source route, across multiple network hops, to any destination node. DSR has two main processes: Route Discovery and Route Maintenance.

Route Discovery is the process that starts when a source node wants to send a packet to a destination node but does not know a route to that destination.

Route Maintenance is the process used to check that a route from source to destination is still valid after either node moves: is the route still available? Is there a broken link? If not, the source can attempt another route.

Both processes, Route discovery and Route maintenance, operate on demand. They do not require any kind of periodic broadcasts. The Route discovery and Route maintenance operation in DSR are designed to allow unidirectional links and asymmetric routes.

The protocol allows multiple routes to any destination and allows each sender to select and control the routes used in routing its packets, for example for use in load balancing or for increased robustness. Other advantages of the DSR protocol include easily guaranteed loop-free routing, support for use in networks containing unidirectional links, use of only "soft state" in routing, and very rapid recovery when routes in the network change. The DSR protocol is designed mainly for mobile ad hoc networks of up to about two hundred nodes, and is designed to work well with even very high rates of mobility.
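The two DSR ingredients discussed above, a full source route and a cache of discovered routes consulted before any Route Discovery is started, can be sketched minimally as follows. The structures and names are our own illustration, not the DSR specification:

```c
#define MAX_HOPS   8
#define MAX_ROUTES 16

/* A source route: the complete hop sequence from source to destination,
   carried in the packet header under DSR. */
typedef struct {
    int hops[MAX_HOPS];
    int len;             /* number of hops stored */
} dsr_route;

/* Route cache: discovered routes kept for reuse, so Route Discovery
   is started only when no cached route exists (on-demand behaviour). */
typedef struct {
    dsr_route routes[MAX_ROUTES];
    int count;
} dsr_cache;

/* Return the index of a cached route ending at dest, or -1, meaning
   the source must initiate Route Discovery. */
static int cache_lookup(const dsr_cache *c, int dest) {
    for (int i = 0; i < c->count; i++)
        if (c->routes[i].len > 0 &&
            c->routes[i].hops[c->routes[i].len - 1] == dest)
            return i;
    return -1;
}
```

Keeping multiple cached routes per destination is what lets a DSR sender select among routes for load balancing or robustness, as noted above.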

Comparative analysis of Mobile Adhoc Protocols

Table-Driven vs. On-Demand Routing

The table-driven adhoc routing approach is similar to the connectionless approach of forwarding packets, without any regard to when and how frequently such routes are desired. It relies on an underlying routing table update mechanism that involves the constant propagation of routing information. This is not the case, however, for on-demand routing protocols. When a node using an on-demand protocol desires a route to a new destination, it will have to wait until such a route can be discovered. On the other hand, because routing information is constantly propagated and maintained in table-driven



routing protocols, a route to every other node in the ad hoc network is always available, regardless of whether or not it is needed. This feature, although useful for datagram traffic, incurs substantial signaling traffic and power consumption. Since both bandwidth and

battery power are scarce resources in mobile computers, this becomes a serious limitation. The following table lists some of the basic differences between the two approaches:

Parameters            | Table-Driven                 | On-Demand
Routing information   | Always available             | Available when needed
Routing philosophy    | Mostly flat                  | Flat
Route updates         | Periodic                     | Not required
Bandwidth usage       | Large                        | Less
QoS support           | Shortest path as QoS metric  | Few support QoS; most support shortest path
Route search delay    | Less                         | More

Comparison between all protocols

Parameters                 | DSDV         | WRP          | AODV        | DSR
Routing scheme             | Table-driven | Table-driven | On-demand   | On-demand
Routing path               | Shortest     | Shortest     | Shortest    | Shortest
Route information          | Entire       | Entire       | Next hop    | Next hop
Freq. of updates           | Periodically | Periodically | As needed   | As needed
Sequence no.               | Yes          | Yes          | Yes         | No
Loop free                  | Yes          | Yes          | Yes         | Yes
Multiple paths             | Yes          | No           | No          | Yes
Connection setup delay     | Fast         | Fast         | Slow        | Slow
Convergence                | Slow         | Fast         | Quick       | Fast
Route while sending packet | Full route   | Full route   | Next hop    | Full route
Time complexity            | O(d)         | O(h)         | O(2d)       | O(2d)
Multicast capability       | No           | No           | Yes         | No
Route maintained in        | Route table  | Route table  | Route table | Route cache

(d = network diameter, h = height of routing tree)

Applications of MANETS

In the military, soldiers equipped with multimode mobile communicators can communicate in an ad-hoc manner, without fixed wireless base stations; in other words, over an infrastructure-less network. Nowadays ad-hoc mobile networks are preferred in almost all areas where fixed infrastructure is unlikely to exist. Commercial scenarios for ad hoc wireless networks include:

Conferences, meetings and lectures
Emergency services
Law enforcement

People today attend meetings and conferences with their laptops, palmtops, and notebooks. It is therefore attractive to have instant network formation, in addition to file and information sharing without the presence of fixed base stations and systems administrators. A presenter can multicast slides and audio to intended recipients. Attendees can ask questions and interact on a commonly shared whiteboard. Ad hoc mobile communication is particularly useful in relaying information via data, video, and/or voice from one rescue team member to another over a small handheld or wearable wireless device.


Conclusion

In this article we have described two classes of ad-hoc mobile network protocols (table-driven and on-demand) and the routing schemes used within each class. The analysis shows that table-driven protocols take less time to find a route, because route tables are maintained and updated regularly; but for the same reason they continuously consume a significant fraction of network capacity by sending regular updates to keep routing information up to date. The main disadvantages of such algorithms are therefore:

a considerable amount of data for table maintenance, and slow reaction to restructuring and failures.

On-demand protocols, on the other hand, initiate a route search to the destination only when the need to communicate arises. Their main disadvantages are high latency in route finding, and excessive flooding, which can lead to network clogging.

Finally, we described the applications of MANETs in today's scenario. Each MANET protocol has its own advantages and disadvantages and is well suited to certain conditions. The field of ad hoc networks is growing and changing rapidly, so there is much more to learn and many more challenges to be met.

References

1. C. E. Perkins and E. M. Royer (1999), "Ad-hoc On-Demand Distance Vector Routing," Proc. 2nd IEEE Workshop on Mobile Computing Systems and Applications, Feb. 1999, pp. 90-100.

2. Johnson, Maltz and Broch (1999), "DSR: The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks", http://www.monarch.cs.cmu.edu/.

3. Johnson, Maltz and Hu (2003), "The Dynamic Source Routing Protocol for Mobile Ad Hoc Networks (DSR) for IPv4", http://tools.ietf.org/html/rfc4728.

4. Luo, Liu and Danxia (2008), "Research on multicast routing protocols for mobile ad-hoc networks", Computer Networks: The International Journal of Computer and Telecommunications Networking, vol. 52, pp. 988-997.

5. M. Frodigh et al. (2002), "Wireless Ad Hoc Networking: The Art of Networking without a Network," Ericsson Review, No. 4.

6. Perkins, Charles E. and Bhagwat, Pravin (1994), "Highly Dynamic Destination-Sequenced Distance-Vector Routing (DSDV) for Mobile Computers", ACM SIGCOMM Computer Communication Review, vol. 24, issue 4, pp. 234-244.

7. P. Samar and S. B. Wicker (2004), "On the Behavior of Communication Links of a Node in a Multi-Hop Mobile Environment," Proc. MobiHoc.

8. S. Marwaha et al. (2002), "Mobile Agents based Routing Protocol for Mobile Ad Hoc Networks", Proc. IEEE Globecom.

9. Y.-C. Hu, A. Perrig and D. B. Johnson (2002), "Ariadne: A Secure On-Demand Routing Protocol for Ad Hoc Networks," Proc. MobiCom '02, pp. 12-23.

10. Y.-C. Hu and A. Perrig (2004), "A Survey of Secure Wireless Ad Hoc Routing," IEEE Security and Privacy, vol. 2, no. 3, pp. 28-39, May-June.

11. Valery Naumov and Thomas Gross (2005), "Scalability of routing methods in ad hoc networks", Elsevier B.V., pp. 193-209.


Design and Analysis of New Searching Algorithm

Dr. Vinod Kumar*, Dr. S.C. Agarwal** and Sanjeev Kumar Sharma***

*Prof., C.S.E. Deptt., Gurukul Kangri, Haridwar; [email protected]; Mob. 09412072096
**Director, NICE College, Meerut
***Head, MCA Deptt., DIT Dehradun; [email protected]; Mob. 09412004249

ABSTRACT

This paper presents an innovative searching scheme for locating any element in a sorted list. In most cases it yields lower complexity than binary search. The scheme borrows its central idea from binary search, though it carries one or two limitations of its own.

Introduction

"Time is money. There is no compensation of time."

Computers are no doubt very fast, but they are not infinitely fast. Memory may be cheap, but it is not free. Nowadays space complexity matters less, since even ordinary computers carry up to 1 GB of RAM, but computing time will always remain a bounding resource.

Over the last 25 years, progress in computer hardware has changed computing dramatically; compare the 8086 with a Pentium IV processor. One concern, however, will always remain: the estimation of time. In the world of algorithms we can set hardware parameters aside, but we cannot ignore the execution time of a particular algorithm. We therefore analyze candidate algorithms and select the one that takes the least time to execute. In this paper we compare our searching algorithm with two other searching algorithms, Linear Search and Binary Search.

Binary Search

Preliminary Discussion on Binary Search

Suppose DATA is an array sorted in increasing numerical order or, equivalently, alphabetically. Binary search can then be used to find the location LOC of a given ITEM in DATA efficiently. During each stage of the algorithm, the search for ITEM is reduced to a segment of elements of DATA: DATA[BEG], DATA[BEG+1], DATA[BEG+2], ..., DATA[END].

There are two limitations of this algorithm:

1. The list must be sorted.

2. One must have direct access to the middle element in any sub-list.

Formal Algorithm of Binary Search

1. BINARY (DATA, LB, UB, ITEM, LOC) // LB = lower bound and UB= upper bound//

2. { // Initialize segment variables//

3. BEG = LB, END = UB, MID = INT((BEG+END)/2)

4. while ( BEG <= END and DATA[MID] ≠ ITEM ) do

5. {

6. if ( ITEM<DATA[MID] ) then

7. {

8. END = MID-1

9. }


10. else

11. {

12. BEG = MID+1

13. } // End of if structure//

14. MID = INT((BEG+END)/2)

15. } // End of While loop //

16. if (DATA[MID] = ITEM ) then

17. {

18. LOC = MID

19. }

20. else

21. LOC = NULL // End of if structure //

22. }
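The formal algorithm above carries over almost line for line into C. The sketch below is our own translation, not code from the paper: it keeps the 1-based indexing of the pseudocode, counts the while-loop iterations (the quantity the paper reports as "interchanges"), and adds a bounds guard on the final comparison:

```c
/* Binary search over data[lb..ub] (1-based, sorted ascending).
   Returns the 1-based location of item, or 0 if absent.
   *steps receives the number of while-loop iterations performed. */
static int binary(const int *data, int lb, int ub, int item, int *steps) {
    int beg = lb, end = ub, mid = (beg + end) / 2;
    *steps = 0;
    while (beg <= end && data[mid] != item) {
        (*steps)++;
        if (item < data[mid])
            end = mid - 1;   /* search left half  */
        else
            beg = mid + 1;   /* search right half */
        mid = (beg + end) / 2;
    }
    /* beg <= end guards against reading outside the segment
       when the item is absent. */
    return (beg <= end && data[mid] == item) ? mid : 0;
}
```

On the paper's 32-element array 2, 4, ..., 64, this reproduces the reported counts: item 32 (the first midpoint) is found with 0 iterations, item 16 with 1, and item 64 with 5.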

Implementation of Binary search

Let us take the following array of size 32. The search is performed on all items of the array, one by one. The numbers of interchanges involved in searching the different items are as follows:

Search 1:

Enter the no. of items

32

Enter the no. of elements

2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64

Enter the item to be searched

2

Your search is successful and the location is = 1

Number of interchanges = 4

Search 2:

Enter the item to be searched

4

Your search is successful and the location is = 2

Number of interchanges = 3

Search 3:

Enter the item to be searched

6

Your search is successful and the location is = 3

Number of interchanges = 4

Search 4:

Enter the item to be searched

8

Your search is successful and the location is = 4

Number of interchanges = 2

Search 5:

Enter the item to be searched

10

Your search is successful and the location is = 5

Number of interchanges = 4

Search 6:

Enter the item to be searched

12

Your search is successful and the location is = 6

Number of interchanges = 3

Search 7:

Enter the item to be searched

14

Your search is successful and the location is = 7

Number of interchanges = 4

Search 8:

Enter the item to be searched

16

Your search is successful and the location is = 8

Number of interchanges = 1

Search 9:

Enter the item to be searched

18

Your search is successful and the location is = 9

Number of interchanges = 4

Search 10:

Enter the item to be searched

20

Your search is successful and the location is = 10

Number of interchanges = 3


Search 11:

Enter the item to be searched

22

Your search is successful and the location is = 11

Number of interchanges = 4

Search 12:

Enter the item to be searched

24

Your search is successful and the location is = 12

Number of interchanges = 2

Search 13:

Enter the item to be searched

26

Your search is successful and the location is = 13

Number of interchanges = 4

Search 14:

Enter the item to be searched

28

Your search is successful and the location is = 14

Number of interchanges = 3

Search 15:

Enter the item to be searched

30

Your search is successful and the location is = 15

Number of interchanges = 4

Search 16:

Enter the item to be searched

32

Your search is successful and the location is = 16

Number of interchanges = 0

Search 17:

Enter the item to be searched

34

Your search is successful and the location is = 17

Number of interchanges = 4

Search 18:

Enter the item to be searched

36

Your search is successful and the location is = 18

Number of interchanges = 3

Search 19:

Enter the item to be searched

38

Your search is successful and the location is = 19

Number of interchanges = 4

Search 20:

Enter the item to be searched

40

Your search is successful and the location is = 20

Number of interchanges = 2

Search 21:

Enter the item to be searched

42

Your search is successful and the location is = 21

Number of interchanges = 4

Search 22:

Enter the item to be searched

44

Your search is successful and the location is = 22

Number of interchanges = 3

Search 23:

Enter the item to be searched

46

Your search is successful and the location is = 23

Number of interchanges = 4

Search 24:

Enter the item to be searched

48

Your search is successful and the location is = 24

Number of interchanges = 1

Search 25:

Enter the item to be searched

50

Your search is successful and the location is = 25

Number of interchanges = 4

Search 26:

Enter the item to be searched

52


Your search is successful and the location is = 26
Number of interchanges = 3

Search 27:
Enter the item to be searched
54
Your search is successful and the location is = 27
Number of interchanges = 4

Search 28:
Enter the item to be searched
56
Your search is successful and the location is = 28
Number of interchanges = 2

Search 29:
Enter the item to be searched
58
Your search is successful and the location is = 29
Number of interchanges = 4

Search 30:
Enter the item to be searched
60
Your search is successful and the location is = 30
Number of interchanges = 3

Search 31:

Enter the item to be searched

62

Your search is successful and the location is = 31

Number of interchanges = 4

Search 32:

Enter the item to be searched

64

Your search is successful and the location is = 32

Number of interchanges = 5

Search 33:

Enter the item to be searched

9

Your search is unsuccessful

Number of interchanges = 5

Search 34:

Enter the item to be searched

65

Your search is unsuccessful

Number of interchanges = 6

Matrix Search

No doubt, Binary Search performs better than Linear Search: the time complexity of linear search is O(n), while that of binary search is O(log₂ n), where n is the number of elements. Binary Search, however, has a limitation that Linear Search does not: binary search requires the array of elements to be sorted, whereas linear search works on sorted or unsorted arrays alike.

Similar to Binary Search, Matrix Search also has some limitations over Binary Search. In this searching method, the sorted array used for Binary Search is divided into a two-dimensional array (i.e. a matrix). The question then arises: how many rows and columns are suitable for our purpose? For this we first calculate d = log₂ n. The value of d decides the number of rows and columns (say M and N): the maximum value of M that is still less than d is the best value of M for the efficient use of the Matrix Search algorithm.

n = M*N, where M < d ----------- (1)

here n = The number of elements

M = The number of rows

N = The number of columns

If we go back to binary search, we see that searching any element in a sorted array of n elements has complexity (say C) of at most log₂ n,

i.e. C ≤ log₂ n

This is the only reason to apply the condition on M in case of Matrix Search.

Now, how is d the deciding factor for choosing the number of rows and columns? To answer this, we take an example:

Let us take an array of 64 elements. We can break this array in to the following arrangements of rows and columns:

n = M*N where n is the number of total elements.

64 = 2*32 ----------------- (2)


64 = 4*16 ------------------ (3)

64 = 8*8 ------------------ (4)

As we have already mentioned, M should be less than d (where d = log₂ n), that is, M < d.

Here d = log₂ 64 = 6.

From equations (2) (3) and (4) we draw some conclusions:

The arrangement in equation (4), 64 = 8*8, is discarded, because M < d must hold, but here 8 > 6.

Since we want the maximum value of M that is still below d, the arrangement in equation (2), 64 = 2*32, is also discarded.

This leaves 64 = 4*16 as the accepted arrangement.

So we have M = 4 (the number of rows) and N = 16 (the number of columns, i.e. the number of elements in each row).
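The selection rule just illustrated (take the largest factor M of n that is still below d = log₂ n) can be sketched in C. `choose_rows` is our own helper name, and the sketch assumes n is a power of two, as in the paper's examples:

```c
/* Choose the number of rows M for Matrix Search: the largest divisor
   of n that is strictly less than d = log2(n).
   *ncols receives N = n / M. Returns 0 if no such divisor exists. */
static int choose_rows(int n, int *ncols) {
    int d = 0;
    for (int t = n; t > 1; t /= 2)  /* d = log2(n) for n a power of 2 */
        d++;
    for (int m = d - 1; m >= 1; m--)   /* largest M first */
        if (n % m == 0) {              /* M must divide n (n = M*N) */
            *ncols = n / m;
            return m;
        }
    return 0;
}
```

For n = 64 this yields M = 4, N = 16, matching the arrangement accepted above; for the 32-element array used later it yields M = 4, N = 8.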

Formal Algorithm of Matrix Search

1. Algorithm Matrix_Search ( DATA, M, N) // DATA is 2 dimensional array, M and N are number of rows and columns //

2. {

3. for i= 1 to M do

4. {

5. switch ( i)

6. {

7. case 1 :

8. BINARY ( ITEM, i, N, LOC );

9. break;

10. case 2 :

11. BINARY ( ITEM, i, N, LOC );

12. break;

13. case 3 :

14. BINARY ( ITEM, i, N, LOC );

15. break;

16. - - - (cases 4 through M-1 are identical) - - -

17. case M :

18. BINARY ( ITEM, i, N, LOC );

19. }

20. }

21. }

22. BINARY (ITEM, i, N, LOC)

23. { // Initialize segment variables//

24. BEG = 1, END = N, MID = INT((BEG+END)/2)

25. while ( BEG <= END and DATA[i][MID] ≠ ITEM ) do

26. {

27. if ( ITEM<DATA[i][MID] ) then

28. {

29. END = MID-1

30. }

31. else

32. {

33. BEG = MID+1

34. } // End of if structure//

35. MID = INT((BEG+END)/2)

36. } // End of While loop //

37. if (DATA[i][MID] = ITEM) then

38. {

39. LOC = MID

40. }

41. else

42. LOC = NULL // End of if structure //

43. }
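Every case of the switch above performs the same call, so the algorithm reduces to a binary search applied to each row in turn. A compact C sketch of this reduction follows; it is our own translation, using 0-based indexing and a row-major flattened matrix rather than the paper's notation:

```c
/* Matrix Search: binary-search each of the m rows of an m-by-n
   row-major matrix for item. On success returns 1 and sets
   *row and *col (0-based); otherwise returns 0. */
static int matrix_search(const int *data, int m, int n, int item,
                         int *row, int *col) {
    for (int i = 0; i < m; i++) {          /* the for-loop over rows */
        const int *r = data + i * n;       /* start of row i         */
        int beg = 0, end = n - 1;
        while (beg <= end) {               /* per-row binary search  */
            int mid = (beg + end) / 2;
            if (r[mid] == item) { *row = i; *col = mid; return 1; }
            if (item < r[mid]) end = mid - 1;
            else               beg = mid + 1;
        }
    }
    return 0;
}
```

A 1-D location in the paper's sense can be recovered as row * n + col + 1.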

Implementation of Matrix Search

Let us take the same array of 32 elements, but according to the condition required for Matrix Search we take M = 4 and N = 8.

Search 1:

Enter the no. of items

32

Enter the no. of elements

2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64

Design and Analysis of New Searching Algorithm

62 "Pragyaan : Information Technology" Volume 6 : Issue 2, Dec. 2008

Page 67: Pragyaan IT Dec 08 - IMS Unison University · Analysis of schemes between Channel Assignment for GSM Networks Sudan Jha, ... Dr. Hardeep Singh Prof. & Head, ... 1. Award of ISSN No

Institute of Management Studies, Dehradun

Your search is successful and the location is = 26Number of interchanges = 3Search 27:Enter the item to be searched54Your search is successful and the location is = 27Number of interchanges = 4Search 28:Enter the item to be searched56Your search is successful and the location is = 28Number of interchanges = 2Search 29:Enter the item to be searched58Your search is successful and the location is = 29Number of interchanges = 4

Search 30:Enter the item to be searched60Your search is successful and the location is = 30Number of interchanges = 3Search 31:

Enter the item to be searched

62

Your search is successful and the location is = 31

Number of interchanges = 4

Search 32:

Enter the item to be searched

64

Your search is successful and the location is = 32

Number of interchanges = 5

Search 33:

Enter the item to be searched

9

Your search is unsuccessful

Number of interchanges = 5

Search 34:

Enter the item to be searched

65

Your search is unsuccessful

Number of interchanges = 6

Matrix Search

No doubt, Binary Search performs better than Linear Search, because the time complexity of Linear Search is O(n) while that of Binary Search is O(log₂ n), where n is the number of elements. Binary Search, however, has a limitation that Linear Search does not: in Binary Search the array of elements must be in sorted order, while in Linear Search the array may be sorted or unsorted.

Similar to Binary Search, Matrix Search also imposes some conditions of its own. In this searching method, the sorted array used for Binary Search is divided into a two-dimensional array (i.e., a matrix). Now the question arises: how many rows and columns will be suitable for our purpose? For this we first calculate the value of, say, d = log₂ n. The value of d decides the number of

rows and columns (say M and N): the maximum value of M that is still less than d is the best value of M for the efficient use of the Matrix Search algorithm.

n= M*N where M < d, ----------- (1)

here n = The number of elements

M = The number of rows

N = The number of columns

If we go back to Binary Search, we see that to search any element in a sorted array of n elements, the complexity (say C) will be less than or equal to log₂ n,

i.e., C ≤ log₂ n

This is the only reason to apply the condition on M in case of Matrix Search.

Now the question arises: how is d the deciding factor for choosing the number of rows and columns? To answer this question, we take an example:

Let us take an array of 64 elements. We can break this array in to the following arrangements of rows and columns:

n = M*N where n is the number of total elements.

64 = 2*32 ----------------- (2)


64 = 4*16 ------------------ (3)

64 = 8*8 ------------------ (4)

As we have already mentioned, M should be less than d (where d = log₂ n), that is, M < d.

Here d = log₂ 64 = 6

From equations (2), (3) and (4) we draw some conclusions:

The arrangement in equation (4), 64 = 8*8, is discarded because the condition M < d fails: here M = 8 > 6 = d.

We also have to allow the maximum value of M that is less than d, so the arrangement in equation (2), 64 = 2*32, is likewise discarded.

Hence the only arrangement that is well accepted is equation (3), 64 = 4*16.

Now here we see:

M = 4 (M = number of rows)

N = 16 (N = number of columns, i.e., the number of elements in each row)

Formal Algorithm of Matrix Search

1. Algorithm Matrix_Search ( DATA, ITEM, M, N) // DATA is a two-dimensional array, ITEM is the item to be searched, M and N are the numbers of rows and columns //

2. {

3. for i= 1 to M do

4. {

5. switch ( i)

6. {

7. case 1 :

8. BINARY ( ITEM, i, N, LOC );

9. break;

10. case 2 :

11. BINARY ( ITEM, i, N, LOC );

12. break;

13. case 3 :

14. BINARY ( ITEM, i, N, LOC );

15. break; - - - - - - - - - - -

16. - - - - - - - - - - - - - - - - -

17. case M :

18. BINARY ( ITEM, i, N, LOC );

19. }

20. }

21. }

22. BINARY (ITEM, i, N, LOC)

23. { // Initialize segment variables//

24. BEG = 1, END = N, and MID = INT((BEG+END)/2)

25. while (BEG<=END and DATA[i][MID] != ITEM) do

26. {

27. if ( ITEM<DATA[i][MID] ) then

28. {

29. END = MID-1

30. }

31. else

32. {

33. BEG = MID+1

34. } // End of if structure//

35. MID = INT((BEG+END)/2)

36. } // End of While loop //

37. if (DATA[i][MID] == ITEM) then

38. {

39. LOC = MID

40. }

41. else

42. LOC = NULL // End of if structure //

43. }

Implementation of Matrix Search

Let us take the same array of 32 elements. But according to the conditions required for Matrix Search, we take M=4 and N=8.

Search 1:

Enter the no. of items

32

Enter the no. of elements

2 4 6 8 10 12 14 16 18 20 22 24 26 28 30 32 34 36 38 40 42 44 46 48 50 52 54 56 58 60 62 64


Enter the item to be searched

2

Your search is successful and the location is = 1

Number of interchanges = 2

Search 2:

Enter the item to be searched

4

Your search is successful and the location is = 2

Number of interchanges = 1

Search 3:

Enter the item to be searched

6

Your search is successful and the location is = 3

Number of interchanges = 2

Search 4:

Enter the item to be searched

8

Your search is successful and the location is = 4

Number of interchanges = 0

Search 5:

Enter the item to be searched

10

Your search is successful and the location is = 5

Number of interchanges = 2

Search 6 :

Enter the item to be searched

12

Your search is successful and the location is = 6

Number of interchanges = 1

Search 7 :

Enter the item to be searched

14

Your search is successful and the location is = 7

Number of interchanges = 2

Search 8 :

Enter the item to be searched

16

Your search is successful and the location is = 8

Number of interchanges = 3

Search 9 :

Enter the item to be searched

18

Your search is successful and the location is = 9

Number of interchanges = 3

Search 10 :

Enter the item to be searched

20

Your search is successful and the location is = 10

Number of interchanges = 2

Search 11 :

Enter the item to be searched

22

Your search is successful and the location is = 11

Number of interchanges = 3

Search 12 :

Enter the item to be searched

24

Your search is successful and the location is = 12

Number of interchanges = 1

Search 13 :

Enter the item to be searched

26

Your search is successful and the location is = 13

Number of interchanges = 3

Search 14 :

Enter the item to be searched

28

Your search is successful and the location is = 14

Number of interchanges = 2

Search 15 :

Enter the item to be searched

30

Your search is successful and the location is = 15

Number of interchanges = 3

Search 16 :


Enter the item to be searched

32

Your search is successful and the location is = 16

Number of interchanges = 4

Search 17 :
Enter the item to be searched

34

Your search is successful and the location is = 17

Number of interchanges = 4

Search 18 :

Enter the item to be searched

36

Your search is successful and the location is = 18

Number of interchanges = 3

Search 19 :

Enter the item to be searched

38

Your search is successful and the location is = 19

Number of interchanges = 4

Search 20 :

Enter the item to be searched

40

Your search is successful and the location is = 20

Number of interchanges = 2

Search 21 :

Enter the item to be searched

42

Your search is successful and the location is = 21

Number of interchanges = 4

Search 22 :

Enter the item to be searched

44

Your search is successful and the location is = 22

Number of interchanges = 3

Search 23 :

Enter the item to be searched

46

Your search is successful and the location is = 23

Number of interchanges = 4

Search 24 :

Enter the item to be searched

48

Your search is successful and the location is = 24

Number of interchanges = 5

Search 25 :

Enter the item to be searched

50

Your search is successful and the location is = 25

Number of interchanges = 5

Search 26 :

Enter the item to be searched

52

Your search is successful and the location is = 26

Number of interchanges = 4

Search 27 :

Enter the item to be searched

54

Your search is successful and the location is = 27

Number of interchanges = 5

Search 28 :

Enter the item to be searched

56

Your search is successful and the location is = 28

Number of interchanges = 3

Search 29 :

Enter the item to be searched

58

Your search is successful and the location is = 29

Number of interchanges = 5

Search 30 :

Enter the item to be searched

60

Your search is successful and the location is = 30

Number of interchanges = 4

Search 31 :

Enter the item to be searched

62


Your search is successful and the location is = 31

Number of interchanges = 5

Search 32 :

Enter the item to be searched

64

Your search is successful and the location is = 32

Number of interchanges = 6

Search 33 :

Enter the item to be searched

9

Your search is unsuccessful

Number of interchanges = 3

Search 34 :

Enter the item to be searched

23

Your search is unsuccessful

Number of interchanges = 4

Search 35 :

Enter the item to be searched

45

Your search is unsuccessful

Number of interchanges = 5

Search 36 :

Enter the item to be searched

55

Your search is unsuccessful

Number of interchanges = 6

Search 39 :

Enter the item to be searched

66

Your search is unsuccessful

Number of interchanges =7

Discussion of Results

Here we analyze the time complexities of searching for different elements in a given list of sorted elements under Binary Search and under Matrix Search. From the above results, the following interesting observations can be made. Tables 1 to 3 and Fig. 1 clearly reveal the difference between Matrix Search and Binary Search.

Table 1 Binary Search
[table body: number of interchanges for each of the 32 elements under Binary Search]

Table 2 Matrix Search
[table body: number of interchanges for each of the 32 elements under Matrix Search]

Fig. 1 Comparison Chart (x-axis: Serial Number; y-axis: Number of Interchanges; series: Binary Search, Matrix Search)

• Suppose we have a University database of, say, 50,000 students enrolled during the last five years and arranged according to their enrollment numbers, i.e., in sorted order. If we want to access a particular segment of the data, we do not need to search the entire database. Our method of searching is therefore best suited to conditions where recent data is needed in most cases.

• As the array is sorted and the data is arranged as a matrix, if most of the items to be searched lie in the last row, we can search the rows in descending order, so that the worst case is converted into the best case.

Conclusion

The method adopted by the Matrix Search algorithm is an extension of Binary Search. Although this is only a beginning in the field of searching techniques, the proper uses of the Matrix Search algorithm in Computer Science and other areas are still to be found. So the floor is also open for new suggestions and new ideas.

References

1. Aho, A.V., Hopcroft, J.E., and Ullman, J.D. (1983). Data Structures and Algorithms. Addison-Wesley.

2. Cormen, T.H., Leiserson, C.E., and Rivest, R.L. (2003). Introduction to Algorithms. MIT Press and McGraw-Hill.

3. Dromey, R.G. (1982). How to Solve it by Computer.

4. Horowitz, E., Sahni, S., and Rajasekaran, S. (1998). Computer Algorithms. Computer Science Press.

5. Knuth, D.E. (1999). The Art of Computer Programming, Volume 3: Sorting and Searching. Addison-Wesley.


Table 3 Comparison of Binary Search and Matrix Search (number of interchanges per searched item)

S.N.  Position  Item to be Searched  Binary Search  Matrix Search
  1      1              2                 4              2
  2      2              4                 3              1
  3      3              6                 4              2
  4      4              8                 2              0
  5      5             10                 4              2
  6      6             12                 3              1
  7      7             14                 4              2
  8      9             18                 4              3
  9     10             20                 3              2
 10     11             22                 4              3
 11     12             24                 2              1
 12     13             26                 4              3
 13     14             28                 3              2
 14     15             30                 4              3
 15     17             34                 4              4
 16     18             36                 3              3
 17     19             38                 4              4
 18     20             40                 2              2
 19     21             42                 4              4
 20     22             44                 3              3
 21     23             46                 4              4

From these Tables, we can easily see that t [= (M-1)*(N-1)] elements have time complexities less than or equal to those of Binary Search. It is also clear from Table 3 that at least half of the elements show time complexities less than or equal to those of Binary Search.

Features of Matrix Search

Some advantages of Matrix Search are given below:

• Each row has its own middle element, which is very easy to locate when we apply Binary Search to every row; Binary Search proper, by contrast, has only one middle element.

• At least t [= (M-1)*(N-1)] elements of a given array will show a searching time complexity no greater than that of Binary Search.



ABOUT THE JOURNAL

PRAGYAAN: Journal of Management is a biannual publication of IMS, Dehradun. Its objective is to create a platform where ideas, concepts and applications related to Management can be shared. Its focus is on pure research, applied research, and emerging issues in management.

The articles are invited from academicians, practicing managers and research scholars.

GUIDELINES FOR CONTRIBUTORS

Monika Chauhan
Editor, PRAGYAAN: Information Technology
Institute of Management Studies
Makkawala Greens, Mussoorie Diversion Road
Dehradun - 248009, Uttarakhand (India)
Phones: 0135-2738000, 2738001
E-mail: [email protected], [email protected]

1. The whole document should be in Times New Roman, single column, 1.5 line spacing. A soft copy of the document formatted in MS Word 97 or higher should be sent as submission for acceptance.

2. The title of the paper should use Times New Roman 16”, Bold.

3. Author names should be in 12”, Bold, followed by affiliations in normal 12” font size. Names of all the authors must be in the same row. First author will be considered for all communication purposes.

4. First Page of the document should contain Title and Author names followed by 4-5 lines about each author. Nothing else should be written on this page.

5. The following pages should contain the text of the paper including: Title, Abstract, Keywords, Introduction, Subject Matter, Conclusion & References. Author names should not appear on this page to enable blind review.

6. All paragraph headings should be Bold, 12”.

7. Place tables/figures/images in text as close to the reference as possible. Table caption should be above the table. Figure caption should be below the figure. These captions should follow Times New Roman 11”.

8. Provide a numbered list of references used in the text at the end of the document. The list should be ordered alphabetically by first author and referenced by numbers in brackets [1]. References given at the end of the document should follow this format:

[1] Panther, J. G., Digital Communications, 3rd ed., Addison-Wesley, San Francisco, CA (1999).

9. Use a non-proportional font (sans serif, 11”) for content (such as source code) to be set off from the main text.

10. Do not include headers, footers or page numbers in your submission. These will be added when the publications are compiled.

11. Section headings should be bold. Subsection headings should be bold + italics. Font size in both cases should remain as 12”.

12. Page size should be 18x23.5 cm (7"x9.25"), justified on the page, beginning 1.9 cm (.75") from the top of the page and ending with 2.54 cm (1") from the bottom. The right and left margins should be 1.9 cm (0.75”). Number of pages should not exceed 10.

13. Articles which are published should not be reproduced or reprinted in any other form either in full or in part without the prior permission of the editor.

14. Wherever copyrighted material is used, the author should be accurate in reproduction and obtain permission from the copyright holders, if necessary.

15. Papers presented or submitted in a seminar must be clearly indicated at the bottom of the first page.

16. All manuscripts should be addressed to:

SUBSCRIPTION/ADVERTISEMENT RATES

The Subscription rates for each of our three journals, viz., Pragyaan: Information Technology, Pragyaan: Journal of Management and Pragyaan: Mass Communication are as follows:

Subscription Rates

Category               1 Year            3 Years           5 Years
                       Rs.     US $      Rs.     US $      Rs.     US $
Academic Institutions  500     30        1200    75        2000    120
Corporate              1000    60        2500    150       4000    240
Individual Members     400     25        1000    60        1600    100
Students               300     20        700     40        1200    75

Advertisement Rates (Rs.)

Location/Period              1 Year               2 Years              3 Years
B/W (Inside Page)            10,000/- (2 Issues)  18,000/- (4 Issues)  25,000/- (6 Issues)
Colour (Inside Back Cover)   17,000/- (2 Issues)  30,000/- (4 Issues)  45,000/- (6 Issues)

Single Insertion (1 Issue) (Inside B/W Page): Rs. 5,000/-

Please cut out and mail along with your cheque/DD to: The Registrar, Institute of Management Studies, Makkawala Greens, Mussoorie Diversion Road, Dehradun 248009, Uttarakhand, India

Phone No. 0135-2738000, 2738001

Date: ____________ Signature (individual/authorized signatory)

Please send the amount by DD/Local Cheque favouring Institute of Management Studies Dehradun, for timely receipt of the journal. Outstation cheques shall not be accepted.

A bank draft/cheque bearing no ________________ dated_____________ for Rs. ________ Drawn in favour

of Institute of Management Studies, Dehradun towards the subscription is enclosed. Please register me/us for

the subscription with the following particulars:

Name ____________________________________________________________ (Individual/Organisation)

Address_______________________________________________________________________________

______________________________________________________________________________________

Phone__________________ Fax _________________ E- mail___________________________________

Pragyaan: Information Technology

Pragyaan: Journal of Management

Pragyaan: Mass Communication

SUBSCRIPTION FORM

I wish to subscribe to the following journal(s) of IMS, Dehradun:

Name of Journal No. of Years Amount

Total



IMS at a glance

The growing emphasis on knowledge capital has increased the demand for quality education, especially in professional courses such as IT, Management, and Mass Communication.

With a focus on catering to the demands of modern industry, the Institute of Management Studies, Dehradun began its venture in 1996 under the aegis of the IMS Society, a registered body under the Societies Registration Act, 1860.

The potential employers of professional students today are looking for visionaries with the skills to create the future. IMS Dehradun has accordingly taken strides to produce world-class professionals. It is fully committed to providing high-quality education, enhancing the intrinsic abilities of its students, and promoting their managerial and technological skills.

IMS continually strives to improve the effectiveness of its educational process, and is committed to:

• Provide a sound academic environment to students for complete learning.

• Provide state-of-the-art technical infrastructure.

• Facilitate students and staff to realize their potential.

• Promote the skills of the students for their all-round development.

Since its inception, the Institute has conducted professional courses in business administration, information technology and mass communication in a thoroughly professional manner. These courses are affiliated to Uttarakhand Technical University or HNB Garhwal University, Uttarakhand. Today more than 2000 students are enrolled at the Institute in courses such as PGDM, MBA, MCA, MIB, MA (Mass Comm.), BBA, BCA, B.Sc. (IT) and BA (Mass Comm.). Our PGDM, MBA and MCA courses are duly approved by the AICTE and the Ministry of HRD, Government of India.

The Institute has also taken up activities to facilitate respectable placements for our students. Our Corporate Resource Center (CRC) has been working with industry to cater to its current needs effectively, and the final placement record has been phenomenal. Many organizations have shown a strong desire to take our students on board as employees. For the all-round development of our students, many extracurricular activities are arranged, which is proving useful in translating their efforts into positive results.

The Institute brings out three Journals, one each in the three disciplines of IT, Management, and Mass Communication, in an effort to fulfill our objective of facilitating and promoting quality research work in India.