
Transcript of Arun prjct dox


1. INTRODUCTION

There are several types of such attacks. An attacker can launch a DoS attack by studying the flaws of network protocols or applications and then sending malformed packets that might cause the corresponding protocols or applications to enter a faulty state. An example is the Teardrop attack, which sends incorrect IP fragments to the target; the target machine may crash if it does not implement its TCP/IP fragmentation reassembly code properly. Such attacks can be prevented by fixing the corresponding bugs in the protocols or applications. However, the attacker does not always have to study the service in depth to make it unavailable. It can simply flood packets to keep the server busy with processing them, or cause congestion in the victim's network, so that the server cannot handle, or even receive, packets from legitimate hosts. In order to deplete the victim's key resources (such as bandwidth and CPU time), the attacker has to aggregate a large volume of malicious traffic. Most of the time, the attacker assembles many (possibly millions of) zombie machines, or bots, to flood packets simultaneously, which forms a Distributed Denial of Service (DDoS) attack. Most of the methods that protect systems from DoS and DDoS attacks focus on mitigating the malicious bandwidth consumption caused by packet flooding, as that is the simplest and most common method adopted by attackers. Those methods may mitigate DDoS attacks reactively, by identifying the malicious traffic and informing the upstream routers to filter or rate-limit it; they may also mitigate DDoS attacks by deploying secure overlays or by distinguishing legitimate traffic with valid network capabilities. These solutions are suitable for filtering bandwidth attacks.
However, the attacker may change its strategy and attack an application directly, especially when the application involves complex computations, since it may be easier to exhaust the application's computational resources with a small volume of messages. Malicious traffic against an application therefore usually has a small volume and is difficult to detect. The defence methods mentioned above may help, but they might not be efficient and accurate for a particular application, as they lack application-related information. Considering that there are numerous applications, it would be very expensive and impractical for traffic monitors to keep information for every application.

1.1 Objective

The Internet has grown rapidly since it was created. Via the Internet infrastructure, hosts can not only share their information but also complete tasks cooperatively by contributing their computing resources. Moreover, an end host can easily join the network and communicate with any other host by exchanging packets. These are encouraging features of the Internet: openness and scalability. However, attackers can also exploit these advantages to prevent legitimate users of a service from using it, by flooding messages to the corresponding server, which forms a Denial of Service (DoS) attack.



1.2 Scope

There are many network-based solutions against DDoS attacks; these solutions usually use routers or overlay networks to filter malicious traffic. One proposed approach is an ack-based port-hopping protocol focusing on the communication between only two parties, modeled as a sender and a receiver. The receiver sends back an acknowledgment for every message received from the sender, and the sender uses these acknowledgments as signals to change the destination port numbers of its messages. Since this protocol is ack-based, time synchronization is not necessary.
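The ack-driven hopping rule can be sketched as follows. This is an illustrative Python fragment (the project itself is in C#.NET); the HMAC-based port sequence and the one-hop-per-acknowledgment rule are assumptions for demonstration, not the project's actual protocol.

```python
import hmac, hashlib

N = 65536  # size of the port number space (assumed)

def next_port(key: bytes, counter: int) -> int:
    """Derive the next destination port from a shared keyed PRF (HMAC-SHA256)."""
    digest = hmac.new(key, counter.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(digest[:4], "big") % N

class Sender:
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0
        self.port = next_port(key, 0)

    def on_ack(self):
        # Each acknowledgment signals the sender to hop to the next port,
        # so no clock synchronization between the parties is needed.
        self.counter += 1
        self.port = next_port(self.key, self.counter)

class Receiver:
    def __init__(self, key: bytes):
        self.key = key
        self.counter = 0
        self.open_port = next_port(key, 0)

    def on_message(self, port: int) -> bool:
        # Accept only messages arriving on the currently open port,
        # then acknowledge and hop in lockstep with the sender.
        if port != self.open_port:
            return False
        self.counter += 1
        self.open_port = next_port(self.key, self.counter)
        return True

key = b"shared-secret"
s, r = Sender(key), Receiver(key)
for _ in range(3):
    assert r.on_message(s.port)  # message lands on the open port
    s.on_ack()                   # the ack drives the sender's hop
```

Because both sides advance the same counter through the same keyed function, they stay in lockstep without ever exchanging clock values.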


2. Software Requirement Specification

2.1 EXISTING SYSTEM

The Internet has grown rapidly since it was created. Via the Internet infrastructure, hosts can not only share their information but also complete tasks cooperatively by contributing their computing resources. Moreover, an end host can easily join the network and communicate with any other host by exchanging packets. These are encouraging features of the Internet: openness and scalability. However, attackers can also exploit these advantages to prevent legitimate users of a service from using it, by flooding messages to the corresponding server.

2.2 PROPOSED SYSTEM

We focus on the problem of an adversary that wants to subvert the communication of a client-server application by attacking its communication channels or, for brevity, ports. At each point in time, some port must be open at the server side to receive the messages sent by legitimate clients. At the server side, the size of the port number space is N, meaning that there are N ports the server can use for communication. The server and the legitimate clients share a pseudorandom function to generate the port numbers that will be used in the communication. We assume that there exists a preceding authentication procedure which enables the server to distinguish the messages from the legitimate clients. We also assume that every client is honest, meaning that any execution of the client follows the protocol and that clients will not reveal the random function to the adversary.
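The shared pseudorandom function can be realized, for example, with a keyed hash over the index of the current hopping period. The following is a minimal Python sketch (the project itself is in C#.NET); the HMAC construction and the period length L are assumptions for illustration, not the project's actual function.

```python
import hashlib
import hmac

N = 65536   # size of the server's port number space
L = 5.0     # hopping period in seconds (assumed parameter)

def port_for_time(key: bytes, t: float) -> int:
    """Both server and client map the current time slot to a port
    using the shared pseudorandom function f(key, slot)."""
    slot = int(t // L)  # index of the current hopping period
    mac = hmac.new(key, slot.to_bytes(8, "big"), hashlib.sha256).digest()
    return int.from_bytes(mac[:4], "big") % N

key = b"shared-secret"
# Within one hopping period the client and server agree on the open port;
# after L seconds both sides independently hop to a new pseudorandom port.
p0 = port_for_time(key, 0.0)
p1 = port_for_time(key, L)
```

Since only holders of the key can compute the sequence, an adversary without the key cannot predict which port will open next.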

2.3 Software Requirement Specifications

Content        Description
OS             Windows XP with SP2
Database       MS-Access
Technologies   C#.NET
IDE            Visual Studio .NET 2010
Browser        IE

2.4 Hardware Requirement Specifications


Content     Description
Processor   Pentium
HDD         20 GB Min, 40 GB Recommended
RAM         1 GB Min, 2 GB Recommended

2.5 Problem Definition

We focus on the problem of an adversary that wants to subvert the communication of a client-server application by attacking its communication channels or, for brevity, ports. At each point in time, some port must be open at the server side to receive the messages sent by legitimate clients. At the server side, the size of the port number space is N, meaning that there are N ports the server can use for communication. The server and the legitimate clients share a pseudorandom function f to generate the port numbers that will be used in the communication. We assume that there exists a preceding authentication procedure which enables the server to distinguish the messages from the legitimate clients. We also assume that every client is honest, meaning that any execution of the client follows the protocol and that clients will not reveal the random function to the adversary.

The attacker is modeled as an adaptive adversary that can eavesdrop and attack a bounded number of ports simultaneously. For the purpose of the analysis, we bound the strength of the adversary by Q, meaning that it can attack arbitrarily at most Q ports of the server simultaneously. We also assume that when the adversary attacks a certain port of the server, this port cannot receive any message from the clients. As mentioned in the related work section, Badishi et al. [18] presented an analysis of the effect of the adversary's different strategies when it launches blind attacks that disable open ports only partially; we do not elaborate on this again.
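Under this model, a blind adversary that attacks Q ports chosen uniformly at random out of N disables the single open port with probability Q/N in a given period. A quick Python check of that bound (the project itself is in C#.NET; the uniform-choice attack strategy is an assumption for illustration):

```python
import random

def blocked_fraction(N: int, Q: int, trials: int = 20_000, seed: int = 1) -> float:
    """Estimate the probability that a blind adversary attacking Q
    distinct ports out of N hits the one currently open port."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        open_port = rng.randrange(N)          # the port currently open
        attacked = rng.sample(range(N), Q)    # the Q ports under attack
        hits += open_port in attacked
    return hits / trials

# With N = 1000 ports and an adversary of strength Q = 50,
# the empirical blocking probability is close to Q/N = 0.05.
est = blocked_fraction(1000, 50)
```

This is why a large port space N relative to the adversary's strength Q keeps the per-period blocking probability low.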


3. System Analysis

3.1 Module Description

MODULES

Analysis Network

Routing

Multiple Routing

Adaptive hopping period

Analysis of network:

We initiate a fixed-length walk from the node. This walk should be long enough to ensure that the visited peers represent a close sample from the underlying stationary distribution. We then retrieve certain information from the visited peers, such as the system details and process details; this acts as a source of information about the network. The sender creates and sends the request and receives the response, and the destination receives the request and sends the response back to the source.
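The fixed-length walk described above can be sketched as follows. This is an illustrative Python fragment (the project itself is in C#.NET); the toy overlay graph and the walk length are assumptions, and a real deployment would query the visited peers for their system and process details.

```python
import random

def random_walk_sample(graph: dict, start, length: int, seed: int = 0):
    """Perform a fixed-length random walk and collect the visited peers.
    A sufficiently long walk approximates a sample from the walk's
    stationary distribution over the network."""
    rng = random.Random(seed)
    visited = [start]
    node = start
    for _ in range(length):
        neighbors = graph[node]
        if not neighbors:
            break                    # dead end: stop the walk early
        node = rng.choice(neighbors) # move to a uniformly chosen neighbor
        visited.append(node)
    return visited

# Toy overlay: each node lists its neighbors.
overlay = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B", "D"], "D": ["C"]}
walk = random_walk_sample(overlay, "A", length=10)
```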

Routing:

When packets are sent from the source, they are transferred to the destination through the routers. The routers compare the IP address given by the source with their own IP address to confirm the destination.

Multiple routing:

• It is desirable to allow packets with the same source and destination to take more than one possible path. This facility can be used to ease congestion and overcome node failures.

• To operate such a scheme consistently, nodes must maintain routing tables.

• Multipath routing allows the establishment of multiple paths between a single source and single destination node.


• It is typically proposed in order to increase the reliability of data transmission (i.e., fault tolerance) or to provide load balancing.

• Load balancing is of special importance in MANETs because of the limited bandwidth between the nodes.
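To make the load-balancing idea concrete, here is an illustrative Python sketch (the project itself is in C#.NET); the two hard-coded paths and the round-robin dispatch policy are assumptions for demonstration, and real schemes may weight paths by quality or congestion.

```python
from itertools import cycle

# Two assumed node-disjoint paths between source S and destination D.
paths = [
    ["S", "R1", "R2", "D"],
    ["S", "R3", "D"],
]

def dispatch(packets, paths):
    """Round-robin packets over the available paths for load balancing;
    if one path's nodes fail, the remaining paths still carry traffic."""
    chooser = cycle(range(len(paths)))
    return [(pkt, next(chooser)) for pkt in packets]

assignments = dispatch(["p1", "p2", "p3", "p4"], paths)
# p1 and p3 take path 0; p2 and p4 take path 1.
```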

Adaptive Hopping Period

A client C has a constant clock drift relative to the server. It may happen that, in the data transmission stage, the hopping time of C drifts apart from the server's. This might cause C to send messages to a port that is already closed or not yet opened, depending on whether C's clock is slower or faster than S's; the corresponding figures illustrate the two situations, respectively.

The HOPERAA execution interval is initialized to 0. In the contact-initiation part, every contact-initiation message and reply message is attached with the timestamp of its sending time. The reply message also includes the timestamp hc(t1) and the arrival time t2 of the first contact-initiation message received by the server.

We say a client gets a successful access to the server when at least one of its contact-initiation messages is received by the server.

A contact-initiation trial is a trial by a client to get the server's reply in the contact-initiation part. It begins when a client randomly chooses an interval of the port space and sends a contact-initiation message to each of the ports in that interval, and ends when the client receives a reply message from the server or reaches a time-out of waiting.
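The timestamps exchanged in the contact-initiation part allow the client to estimate its offset from the server's clock. The following Python sketch (the project itself is in C#.NET) shows a simplified round-trip estimate under an assumed symmetric-delay model; HOPERAA's actual adjustment additionally accounts for the drift accumulated between executions.

```python
def estimate_offset(t1: float, t2: float, t3: float) -> float:
    """Estimate the server-client clock offset from one exchange:
    t1 = client send time (client clock),
    t2 = server receive/reply time (server clock),
    t3 = client receive time (client clock).
    Assuming roughly symmetric network delay, the offset is
    ((t2 - t1) + (t2 - t3)) / 2."""
    return ((t2 - t1) + (t2 - t3)) / 2.0

# Example: the client clock runs 0.5 s behind the server,
# and the one-way network delay is 0.1 s in each direction.
t1 = 100.0              # client sends (client clock)
t2 = 100.0 + 0.5 + 0.1  # server receives (server clock, 0.5 ahead)
t3 = 100.2              # client receives the reply (client clock)
offset = estimate_offset(t1, t2, t3)
```

With the offset in hand, the client can correct its hopping times so that its messages land inside the server's open-port window.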

3.2 Feasibility Study

3.2.1 FEASIBILITY STUDY

A feasibility study is a high-level capsule version of the entire process, intended to answer questions such as: What is the problem? Is there any feasible solution to the given problem? Is the problem even worth solving? The feasibility study is conducted once the problem is clearly understood. It is necessary to determine that the proposed system is feasible by considering the technical, operational, and economical factors. By having a detailed feasibility study, the management will have a clear-cut view of the proposed system.

The following feasibilities are considered for the project in order to ensure that the project is viable and does not have any major obstructions. The feasibility study encompasses the following:

Technical Feasibility

Economical Feasibility

Operational Feasibility

In this phase, we study the feasibility of all proposed systems and pick the best feasible solution for the problem. The feasibility is studied based on the following three main factors.

3.2.2 TECHNICAL FEASIBILITY

In this step, we verify whether the proposed systems are technically feasible, i.e., whether all the technologies required to develop the system are readily available. Technical feasibility determines whether the organization has the technology and skills necessary to carry out the project and how these should be obtained. The system is feasible on the following grounds:

All necessary technology exists to develop the system.

The system is highly flexible and can be expanded further.

The system can guarantee accuracy, ease of use, reliability, and data security.

The system can give instant responses to inquiries.

Our project is technically feasible because all the technology needed for it is readily available.

Front End  : C#.NET

Back End   : MS-Access

Web Server : IIS 5.0

Host       : Windows XP


3.2.3 ECONOMICAL FEASIBILITY

In this step, we verify which proposal is more economical. We compare the financial benefits of the new system with the investment. The new system is economically feasible only when the financial benefits exceed the investments and expenditure. Economical feasibility determines whether the project goal can be achieved within the resource limits allocated to it. It must determine whether it is worthwhile to proceed with the entire project or whether the benefits obtained from the new system are not worth the costs. In this issue, we should consider:

The cost to conduct a full system investigation.

The cost of hardware and software for the class of application being considered.

The development tool.

The cost of maintenance, etc.

Our project is economically feasible because the cost of development is very minimal when compared to the financial benefits of the application.

3.2.4 OPERATIONAL FEASIBILITY

In this step, we verify the different operational factors of the proposed systems, such as manpower and time; whichever solution uses fewer operational resources is the best operationally feasible solution. The solution should also be operationally possible to implement. Operational feasibility determines whether the proposed system satisfies the user objectives and can be fitted into the current system operation. The present system can be justified as operationally feasible on the following grounds:

The methods of processing and presentation are completely accepted by the clients, since they meet all user requirements.

The clients have been involved in the planning and development of the system.

The proposed system will not cause any problem under any circumstances.

Our project is operationally feasible because the time and personnel requirements are satisfied. We are a team of four members, and we worked on this project for three working months.


Project Instructions:

Based on the solution requirements, conceptualize the solution architecture. Depict the various architectural components, show interactions and connectedness, and show internal and external elements. Discuss the suitability of typical architectural types like Pipes, Filters, Event Handlers, Layers, etc.

Identify the significant class entities and carry out class modeling.

Carry out detailed design of classes, database objects, and other solution components.

Distribute work specifications and carry out coding and testing.

3.3 Functional requirements

This section contains the specification of all the functional requirements needed to develop this module or sub-module.

ID         Requirement                                                                      Priority (A/B/C)
MDDS_R_01  System should provide provision for the user to select the destination system name
MDDS_R_02  System should provide provision for the user to search nodes
MDDS_R_03  System should provide provision for the user to send the data to the destination
MDDS_R_04  System should provide provision for the user to select the route for sending the data
MDDS_R_05  System should provide provision to receive the data through the receiver
MDDS_R_06  System should provide provision to identify the intruder
MDDS_R_07  System should provide provision for the user to receive the data
MDDS_R_08  System should provide provision to view the available nodes

3.4 Non-Functional requirements

Performance Requirements: Good bandwidth, less congestion on the network, and identifying the shortest route to reach the destination will all improve performance.

Safety Requirements: No harm is expected from the use of the product, either to the OS or to any data.


Product Security Requirements: The product is protected from unauthorized users. The system allows only authenticated users to work on the application. The users of this system are the organization and the ISP administrator.

Software Quality Attributes: The product is user friendly and is accessed from the client. The application is reliable and, by maintaining the ISP web service, ensures that it remains accessible to the various organizations. As it is developed in .NET, it is highly interoperable with operating systems that provide support for the MSIL (server side). The system requires less maintenance as it is not installed on the client but hosted on the ISP. The firewall, antivirus protection, etc. are provided by the ISP.

3.5 System architecture

N-Tier Applications:

N-Tier Applications can easily implement the concepts of Distributed Application Design and Architecture. The N-Tier Applications provide strategic benefits to Enterprise Solutions. While 2-tier client-server applications can help us create quick and easy solutions and may be used for Rapid Prototyping, they can easily become a maintenance and security nightmare.

The N-tier Applications provide specific advantages that are vital to the business continuity of the enterprise. Typical features of a real-life n-tier application may include the following:

Security

Availability and Scalability

Manageability

Easy Maintenance

Data Abstraction

The above mentioned points are some of the key design goals of a successful n-tier application that intends to provide a good Business Solution.

Definition:

Simply stated, an n-tier application helps us distribute the overall functionality into various tiers or layers:

Presentation Layer

Business Rules Layer

Data Access Layer

Each layer can be developed independently of the others, provided that it adheres to the standards and communicates with the other layers as per the specifications.


This is one of the biggest advantages of the n-tier application: each layer can potentially treat the other layers as a 'black box'.

In other words, each layer does not care how another layer processes the data, as long as it sends the right data in the correct format.

Fig 1.1-N-Tier Architecture

1. The Presentation Layer:

Also called the client layer, it comprises components that are dedicated to presenting the data to the user, for example Windows/Web Forms and buttons, edit boxes, text boxes, labels, grids, etc.

2. The Business Rules Layer:


This layer encapsulates the business rules or the business logic of the application. Having a separate layer for business logic is a great advantage, because any changes in business rules can be handled easily in this layer. As long as the interface between the layers remains the same, any changes to the functionality/processing logic in this layer can be made without impacting the others. Many client-server applications failed because changing the business logic was a painful process.

3. The Data Access Layer:

This layer comprises components that help in accessing the database. If used in the right way, this layer provides a level of abstraction over the database structures. Simply put, changes made to the database, tables, etc. do not affect the rest of the application, because of the data access layer. The different application layers send data requests to this layer and receive the response from it.


4. Implementation

4.1 Technologies

4.1.1 Microsoft .NET Framework

The .NET Framework is a new computing platform that simplifies application development in the highly distributed environment of the Internet. The .NET Framework is designed to fulfill the following objectives:

To provide a consistent object-oriented programming environment whether object code is stored and executed locally, executed locally but Internet-distributed, or executed remotely.

To provide a code-execution environment that minimizes software deployment and versioning conflicts.

To provide a code-execution environment that guarantees safe execution of code, including code created by an unknown or semi-trusted third party.

To provide a code-execution environment that eliminates the performance problems of scripted or interpreted environments.

To make the developer experience consistent across widely varying types of applications, such as Windows-based applications and Web-based applications.

To build all communication on industry standards to ensure that code based on the .NET Framework can integrate with any other code.

The .NET Framework has two main components: the common language runtime and the .NET Framework class library. The common language runtime is the foundation of the .NET Framework. You can think of the runtime as an agent that manages code at execution time, providing core services such as memory management, thread management, and remoting, while also enforcing strict type safety and other forms of code accuracy that ensure security and robustness. In fact, the concept of code management is a fundamental principle of the runtime. Code that targets the runtime is known as managed code, while code that does not target the runtime is known as unmanaged code. The class library, the other main component of the .NET Framework, is a comprehensive, object-oriented collection of reusable types that you can use to develop applications ranging from traditional command-line or graphical user interface (GUI) applications to applications based on the latest innovations provided by ASP.NET, such as Web Forms and XML Web services.

The .NET Framework can be hosted by unmanaged components that load the common language runtime into their processes and initiate the execution of managed code, thereby creating a software environment that can exploit both managed and unmanaged features. The .NET Framework not only provides several runtime hosts, but also supports the development of third-party runtime hosts.

For example, ASP.NET hosts the runtime to provide a scalable, server-side environment for managed code. ASP.NET works directly with the runtime to enable Web Forms applications and XML Web services, both of which are discussed later in this topic. Internet Explorer is an example of an unmanaged application that hosts the runtime (in the form of a MIME type extension). Using Internet Explorer to host the runtime enables you to embed managed components or Windows Forms controls in HTML documents. Hosting the runtime in this way makes managed mobile code (similar to Microsoft® ActiveX® controls) possible, but with significant improvements that only managed code can offer, such as semi-trusted execution and secure isolated file storage.


The following illustration shows the relationship of the common language runtime and the class library to your applications and to the overall system. The illustration also shows how managed code operates within a larger architecture.

.NET Framework Class Library

The .NET Framework class library is a collection of reusable types that tightly integrate with the common language runtime. The class library is object oriented, providing types from which your own managed code can derive functionality. This not only makes the .NET Framework types easy to use, but also reduces the time associated with learning new features of the .NET Framework. In addition, third-party components can integrate seamlessly with classes in the .NET Framework. For example, the .NET Framework collection classes implement a set of interfaces that you can use to develop your own collection classes; your collection classes will blend seamlessly with the classes in the .NET Framework.

As you would expect from an object-oriented class library, the .NET Framework types enable you to accomplish a range of common programming tasks, including tasks such as string management, data collection, database connectivity, and file access. In addition to these common tasks, the class library includes types that support a variety of specialized development scenarios. For example, you can use the .NET Framework to develop the following types of applications and services:

Console applications.

Scripted or hosted applications.

Windows GUI applications (Windows Forms).

ASP.NET applications.

XML Web services.

Windows services.


For example, the Windows Forms classes are a comprehensive set of reusable types that vastly simplify Windows GUI development. If you write an ASP.NET Web Form application, you can use the Web Forms classes.

Client Application Development

Client applications are the closest to a traditional style of application in Windows-based programming. These are the types of applications that display windows or forms on the desktop, enabling a user to perform a task. Client applications include applications such as word processors and spreadsheets, as well as custom business applications such as data-entry tools, reporting tools, and so on. Client applications usually employ windows, menus, buttons, and other GUI elements, and they likely access local resources such as the file system and peripherals such as printers.

Another kind of client application is the traditional ActiveX control (now replaced by the managed Windows Forms control) deployed over the Internet as a Web page. This application is much like other client applications: it is executed natively, has access to local resources, and includes graphical elements.

In the past, developers created such applications using C/C++ in conjunction with the Microsoft Foundation Classes (MFC) or with a rapid application development (RAD) environment such as Microsoft® Visual Basic®. The .NET Framework incorporates aspects of these existing products into a single, consistent development environment that drastically simplifies the development of client applications.

The Windows Forms classes contained in the .NET Framework are designed to be used for GUI development. You can easily create command windows, buttons, menus, toolbars, and other screen elements with the flexibility necessary to accommodate shifting business needs.

For example, the .NET Framework provides simple properties to adjust visual attributes associated with forms. In some cases the underlying operating system does not support changing these attributes directly, and in these cases the .NET Framework automatically recreates the forms. This is one of many ways in which the .NET Framework integrates the developer interface, making coding simpler and more consistent.

Unlike ActiveX controls, Windows Forms controls have semi-trusted access to a user's computer. This means that binary or natively executing code can access some of the resources on the user's system (such as GUI elements and limited file access) without being able to access or compromise other resources. Because of code access security, many applications that once needed to be installed on a user's system can now be safely deployed through the Web. Your applications can implement the features of a local application while being deployed like a Web page.


4.2 C#.NET

ADO.NET Overview

ADO.NET is an evolution of the ADO data access model that directly addresses user requirements for developing scalable applications. It was designed specifically for the web with scalability, statelessness, and XML in mind.

ADO.NET uses some ADO objects, such as the Connection and Command objects, and also introduces new objects. Key new ADO.NET objects include the DataSet, DataReader, and DataAdapter.

The important distinction between this evolved stage of ADO.NET and previous data architectures is that there exists an object -- the DataSet -- that is separate and distinct from any data stores. Because of that, the DataSet functions as a standalone entity. You can think of the DataSet as an always disconnected recordset that knows nothing about the source or destination of the data it contains. Inside a DataSet, much like in a database, there are tables, columns, relationships, constraints, views, and so forth.

A DataAdapter is the object that connects to the database to fill the DataSet. Then, it connects back to the database to update the data there, based on operations performed while the DataSet held the data. In the past, data processing has been primarily connection-based. Now, in an effort to make multi-tiered apps more efficient, data processing is turning to a message-based approach that revolves around chunks of information. At the center of this approach is the DataAdapter, which provides a bridge to retrieve and save data between a DataSet and its source data store. It accomplishes this by means of requests to the appropriate SQL commands made against the data store.

The XML-based DataSet object provides a consistent programming model that works with all models of data storage: flat, relational, and hierarchical. It does this by having no 'knowledge' of the source of its data, and by representing the data that it holds as collections and data types. No matter what the source of the data within the DataSet is, it is manipulated through the same set of standard APIs exposed through the DataSet and its subordinate objects.


While the DataSet has no knowledge of the source of its data, the managed provider has

detailed and specific information. The role of the managed provider is to connect, fill, and

persist the DataSet to and from data stores. The OLE DB and SQL Server .NET Data

Providers (System.Data.OleDb and System.Data.SqlClient) that are part of the .Net

Framework provide four basic objects: the Command, Connection, DataReader and

DataAdapter. In the remaining sections of this document, we'll walk through each part of

the DataSet and the OLE DB/SQL Server .NET Data Providers explaining what they are,

and how to program against them.

The following sections will introduce you to some objects that have evolved, and some

that are new. These objects are:

Connections: For connection to and managing transactions against a database.

Commands: For issuing SQL commands against a database.

Data Readers: For reading a forward-only stream of data records from a SQL Server

data source.

Datasets: For storing, remoting and programming against flat data, XML data and

relational data.

DataAdapters: For pushing data into a DataSet, and reconciling data against a

database.

When dealing with connections to a database, there are two different options: SQL Server

.NET Data Provider (System.Data.SqlClient) and OLE DB .NET Data Provider

(System.Data.OleDb). In these samples we will use the SQL Server .NET Data Provider,

which is written to talk directly to Microsoft SQL Server. The OLE DB .NET Data

Provider is used to talk to any OLE DB provider (as it uses OLE DB underneath).

Connections:

Connections are used to 'talk to' databases, and are represented by provider-

specific classes such as SQLConnection. Commands travel over connections and

resultsets are returned in the form of streams which can be read by a DataReader object,

or pushed into a DataSet object.
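As a minimal sketch of the above, opening and closing a connection with the SQL Server .NET Data Provider might look like this (the server name and the Northwind database in the connection string are assumptions for illustration):

```csharp
using System;
using System.Data.SqlClient;

class ConnectionExample
{
    static void Main()
    {
        // Hypothetical connection string; adjust server and credentials for your environment.
        string connStr = "server=(local);database=Northwind;Integrated Security=SSPI";

        using (SqlConnection conn = new SqlConnection(connStr))
        {
            conn.Open();                   // open the connection to SQL Server
            Console.WriteLine(conn.State); // Open
        } // Dispose() closes the connection and returns it to the pool
    }
}
```

The using block guarantees the connection is closed even if an exception is thrown, which matters because connections are a pooled, limited resource.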

Commands:


Commands contain the information that is submitted to a database, and are represented by

provider-specific classes such as SQLCommand. A command can be a stored procedure

call, an UPDATE statement, or a statement that returns results. You can also use input and

output parameters, and return values as part of your command syntax. The example below

shows how to issue an INSERT statement against the Northwind database.
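A minimal sketch of such an INSERT (the connection string and the Shippers table values are illustrative assumptions):

```csharp
using System.Data.SqlClient;

class CommandExample
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your environment.
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            conn.Open();

            // Parameterized INSERT against the Northwind Shippers table.
            SqlCommand cmd = new SqlCommand(
                "INSERT INTO Shippers (CompanyName, Phone) VALUES (@name, @phone)", conn);
            cmd.Parameters.AddWithValue("@name", "Speedy Express Jr.");
            cmd.Parameters.AddWithValue("@phone", "(503) 555-0100");

            int rows = cmd.ExecuteNonQuery(); // returns the number of rows affected
        }
    }
}
```

ExecuteNonQuery is the right call for statements that return no result set (INSERT, UPDATE, DELETE); parameters avoid string concatenation and SQL injection.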

Data Readers:

The Data Reader object is somewhat synonymous with a read-only/forward-only

cursor over data. The DataReader API supports flat as well as hierarchical data. A

DataReader object is returned after executing a command against a database. The format

of the returned DataReader object is different from a recordset. For example, you might

use the DataReader to show the results of a search list in a web page.
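A sketch of that forward-only traversal (the connection string and the Customers query are illustrative assumptions):

```csharp
using System;
using System.Data.SqlClient;

class DataReaderExample
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your environment.
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            conn.Open();
            SqlCommand cmd = new SqlCommand(
                "SELECT CustomerID, CompanyName FROM Customers", conn);

            using (SqlDataReader reader = cmd.ExecuteReader())
            {
                // Read() advances forward only; columns are accessed by name or ordinal.
                while (reader.Read())
                    Console.WriteLine("{0}: {1}",
                        reader["CustomerID"], reader["CompanyName"]);
            }
        }
    }
}
```

Because the DataReader streams rows without caching them, it is the cheapest way to render a result list once, but the connection stays busy until the reader is closed.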

DataSets and DataAdapters:

DataSets:

The DataSet object is similar to the ADO Recordset object, but more powerful,

and with one other important distinction: the DataSet is always disconnected. The

DataSet object represents a cache of data, with database-like structures such as

tables, columns, relationships, and constraints. However, though a DataSet can

and does behave much like a database, it is important to remember that DataSet

objects do not interact directly with databases, or other source data. This allows the

developer to work with a programming model that is always consistent, regardless

of where the source data resides. Data coming from a database, an XML file, from

code, or user input can all be placed into DataSet objects. Then, as changes are

made to the DataSet they can be tracked and verified before updating the source

data. The GetChanges method of the DataSet object actually creates a second

DataSet that contains only the changes to the data. This DataSet is then used by a

DataAdapter (or other objects) to update the original data source.

The DataSet has many XML characteristics, including the ability to produce and consume

XML data and XML schemas. XML schemas can be used to describe schemas

interchanged via Web Services. In fact, a DataSet with a schema can actually be compiled

for type safety and statement completion.
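The disconnected, source-agnostic behavior described above can be sketched with a DataSet built entirely in code (the table and column names here are made up for illustration):

```csharp
using System;
using System.Data;

class DataSetExample
{
    static void Main()
    {
        // A DataSet built entirely in code -- no database involved.
        DataSet ds = new DataSet("Shop");
        DataTable t = ds.Tables.Add("Products");
        t.Columns.Add("Name", typeof(string));
        t.Columns.Add("Price", typeof(decimal));

        t.Rows.Add("Chai", 18.00m);
        ds.AcceptChanges();               // mark the current state as "unchanged"

        t.Rows.Add("Chang", 19.00m);      // a new, tracked change

        // GetChanges creates a second DataSet holding only the changed rows.
        DataSet changes = ds.GetChanges();
        Console.WriteLine(changes.Tables["Products"].Rows.Count); // 1

        // The DataSet can also round-trip itself as XML.
        ds.WriteXml(Console.Out);
    }
}
```

AcceptChanges resets row state, so only the row added afterwards shows up in GetChanges; a DataAdapter's Update would then need to send just that one row to the source.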


DataAdapters (OLEDB/SQL):

The DataAdapter object works as a bridge between the DataSet and the source

data. Using the provider-specific SqlDataAdapter (along with its associated

SqlCommand and SqlConnection) can increase overall performance when working with

a Microsoft SQL Server database. For other OLE DB-supported databases, you would

use the OleDbDataAdapter object and its associated OleDbCommand and

OleDbConnection objects.

The DataAdapter object uses commands to update the data source after changes have

been made to the DataSet. Using the Fill method of the DataAdapter calls the SELECT

command; using the Update method calls the INSERT, UPDATE or DELETE command

for each changed row. You can explicitly set these commands in order to control the

statements used at runtime to resolve changes, including the use of stored procedures. For

ad-hoc scenarios, a CommandBuilder object can generate these at run-time based upon a

select statement. However, this run-time generation requires an extra round-trip to the

server in order to gather required metadata, so explicitly providing the INSERT,

UPDATE, and DELETE commands at design time will result in better run-time

performance.
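The Fill/Update cycle described above can be sketched as follows (the connection string and the Shippers query are illustrative assumptions; the CommandBuilder stands in for explicitly authored commands, at the run-time cost noted above):

```csharp
using System.Data;
using System.Data.SqlClient;

class DataAdapterExample
{
    static void Main()
    {
        // Hypothetical connection string; adjust for your environment.
        using (SqlConnection conn = new SqlConnection(
            "server=(local);database=Northwind;Integrated Security=SSPI"))
        {
            SqlDataAdapter da = new SqlDataAdapter(
                "SELECT ShipperID, CompanyName, Phone FROM Shippers", conn);

            // For ad-hoc scenarios, the CommandBuilder derives the
            // INSERT/UPDATE/DELETE commands at run time from the SELECT.
            SqlCommandBuilder cb = new SqlCommandBuilder(da);

            DataSet ds = new DataSet();
            da.Fill(ds, "Shippers");      // Fill runs the SELECT command

            // Edit the cached data while disconnected.
            ds.Tables["Shippers"].Rows[0]["Phone"] = "(503) 555-0199";

            da.Update(ds, "Shippers");    // Update runs UPDATE for each changed row
        }
    }
}
```

Note that Fill and Update open and close the connection automatically if it is not already open, which is what makes the in-between editing genuinely disconnected.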

1. ADO.NET is the next evolution of ADO for the .Net Framework.

2. ADO.NET was created with n-Tier, statelessness and XML in the forefront. Two new

objects, the DataSet and DataAdapter, are provided for these scenarios.

3. ADO.NET can be used to get data from a stream, or to store data in a cache for updates.

4. There is a lot more information about ADO.NET in the documentation.

5. Remember, you can execute a command directly against the database in order to do

inserts, updates, and deletes. You don't need to first put data into a DataSet in order to

insert, update, or delete it.

6. Also, you can use a DataSet to bind to the data, move through the data, and navigate

data relationships.


5. System Design

5.1 Data Flow Diagrams:

A data flow diagram is a graphical tool used to describe and analyze the movement of data through a system. These are the central tool and the basis from which the other components are developed. The transformation of data from input to output, through processes, may be described logically and independently of the physical components associated with the system. These are known as the logical data flow diagrams. The physical data flow diagrams show the actual implementation and movement of data between people, departments and workstations. A full description of a system actually consists of a set of data flow diagrams. The data flow diagrams are developed using two familiar notations: Yourdon and Gane & Sarson. Each component in a DFD is labeled with a descriptive name. A process is further identified with a number that will be used for identification purposes. The development of DFDs is done in several levels. Each process in lower-level diagrams can be broken down into a more detailed DFD in the next level. The top-level diagram is often called the context diagram. It consists of a single process, which plays a vital role in studying the current system. The process in the context-level diagram is exploded into other processes at the first-level DFD.

SALIENT FEATURES OF DFDs:

1. The DFD shows the flow of data, not of control; loops and decisions are control considerations and do not appear on a DFD.

2. The DFD does not indicate the time factor involved in any process whether the data flows

take place daily, weekly, monthly or yearly.

3. The sequence of events is not brought out on the DFD.


Sensor network model (assumptions shown in the diagram): transmission and reception are symmetric; packets can be generated by the higher layers of a node; a node within transmission range senses activity due to higher signal strength; each node transmits at a fixed power level.

Data Flow Diagram:


5.2 UML Diagrams

Use case diagram:

Class Diagram:


Sequence:


Activity Diagram:


6. CODING

DataAccess.cs:

using System;
using System.Collections.Generic;
using System.Text;
using System.Text.RegularExpressions;
using System.Net.Sockets;
using System.Net;

namespace Receiver.DAL
{
    class DataAccess
    {
        public string Connect()
        {
            string RecData = string.Empty;
            try
            {
                IPEndPoint IpEnd = new IPEndPoint(IPAddress.Any, 6000);
                Socket Recv = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.IP);
                Recv.Bind(IpEnd);
                Recv.Listen(100);

                Socket NRecv = Recv.Accept();

                byte[] RData = new byte[1024 * 5000];
                NRecv.Receive(RData);
                string Result = Encoding.ASCII.GetString(RData);
                string[] Res = Regex.Split(Result, ":");

                RecData = Res[0].ToString();

                NRecv.Close();
                Recv.Close();
            }
            catch
            {
                RecData = "Error";
            }
            return RecData;
        }
    }
}

Receiver:

Form1.cs:

using System;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Windows.Forms;
using System.Threading;
using Receiver.BLL;

namespace Receiver
{
    public partial class Form1 : Form
    {
        public Form1()
        {
            InitializeComponent();
        }

        Thread Receiver;
        Business bus = new Business();
        delegate void SetText(string text);

        private void Form1_Load(object sender, EventArgs e)
        {
            try
            {
                Receiver = new Thread(new ThreadStart(Connection));
                Receiver.IsBackground = true;
                Receiver.Start();
            }
            catch
            {
                MessageBox.Show("Unable to Start - Restart Application");
            }
        }

        public void Connection()
        {
            while (true)
            {
                try
                {
                    string Result = bus.Connect();

                    if (Result != "Error")
                    {
                        if (this.lstResult.InvokeRequired)
                        {
                            SetText st = new SetText(SysName);
                            this.Invoke(st, new object[] { Result });
                        }
                        else
                        {
                            lstResult.Items.Add(Result);
                        }
                    }
                }
                catch (Exception ex)
                {
                    MessageBox.Show(ex.ToString());
                }
            }
        }

        public void SysName(string text)
        {
            lstResult.Items.Add(text);
        }
    }
}

Optimal Jamming:

Form1.cs:

using System;
using System.Collections;
using System.Collections.Generic;
using System.ComponentModel;
using System.Data;
using System.Drawing;
using System.Text;
using System.Text.RegularExpressions;
using System.Windows.Forms;
using OptimalJamming.BLL;
using Microsoft.VisualBasic;

namespace OptimalJamming
{
    public partial class Form1 : Form
    {
        Business bus = new Business();
        PropertiesClass pc = new PropertiesClass();
        int PortNo = 5000;
        DataSet Res = new DataSet();
        Random r = new Random();

        public Form1()
        {
            InitializeComponent();
        }

        private void Form1_Load(object sender, EventArgs e)
        {
            ArrayList Dest = bus.GetDestinations();

            cmbDestination.Items.Clear();
            cmbDestination.Items.Add("--Select--");

            foreach (string Dests in Dest)
            {
                //cmbDestination.Items.Add("technova-6");
                cmbDestination.Items.Add(Dests.Trim());
            }
        }

        private void cmbDestination_SelectedIndexChanged(object sender, EventArgs e)
        {
            try
            {
                pc._Sysname = cmbDestination.Text.Trim();
                string IpAddress = bus.GetIPaddress(pc);
                lblIpAddress.Text = IpAddress;
            }
            catch
            {
            }
        }

        private void button1_Click(object sender, EventArgs e)
        {
            string Result = string.Empty;
            try
            {
                Result = bus.DeleteBus();

                if (Result != "Error")
                {
                    for (int i = 1; i < cmbDestination.Items.Count; i++)
                    {
                        pc._Sysname = cmbDestination.Items[i].ToString();
                        pc._PortNo = PortNo + i;

                        Result = bus.ConnectionBus(pc);

                        if (Result != "Error")
                        {
                            pc._Dist = Convert.ToInt32(Result);
                            string Update = bus.UpdateRoute(pc);
                        }
                    }
                }

                Res = bus.ShowRange();
                dgvAvaliableNodes.DataSource = Res.Tables["Range"].DefaultView;
            }
            catch
            {
            }
        }

        private void button2_Click(object sender, EventArgs e)
        {
            int x = 0;
            string ResultString = string.Empty;
            string BlockedNodes = string.Empty;
            string FIntruder = string.Empty;
            try
            {
                if (cmbDestination.Text != "" && cmbDestination.Text != "--Select--")
                {
                    if (txtMessage.Text.Trim() != "")
                    {
                        if (dgvAvaliableNodes.RowCount != 0)
                        {
                            string Input = Microsoft.VisualBasic.Interaction.InputBox("Enter Number Between 1 to 4", "Sending Route Selection", "", 100, 100);

                            ResultString = txtMessage.Text.Trim();

                            if (Convert.ToInt16(Input) >= 5)
                            {
                                MessageBox.Show("Enter Number Between 1 to 4 to Send");
                            }
                            else
                            {
                                for (int i = 0; i < dgvAvaliableNodes.RowCount - 1; i++)
                                {
                                    pc._Sysname = dgvAvaliableNodes.Rows[i].Cells[0].Value.ToString();
                                    pc._PortNo = 50;

                                    FIntruder = bus.Intruder(pc);

                                    if (FIntruder != "Error")
                                    {
                                        MessageBox.Show("Intruder Find Choosing Another Path");
                                        break;
                                    }
                                    else if (Input == dgvAvaliableNodes.Rows[i].Cells[1].Value.ToString())
                                    {
                                        x = x + 1;

                                        if (ResultString == "")
                                        {
                                            ResultString = dgvAvaliableNodes.Rows[i].Cells[1].Value.ToString();
                                            ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[i].Cells[0].Value.ToString();
                                            ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[i].Cells[2].Value.ToString();
                                        }
                                        else
                                        {
                                            ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[i].Cells[1].Value.ToString();
                                            ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[i].Cells[0].Value.ToString();
                                            ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[i].Cells[2].Value.ToString();
                                        }
                                    }
                                    else
                                    {
                                        if (BlockedNodes == "")
                                        {
                                            BlockedNodes = dgvAvaliableNodes.Rows[i].Cells[0].Value.ToString();
                                            BlockedNodes = BlockedNodes + ":" + dgvAvaliableNodes.Rows[i].Cells[2].Value.ToString();
                                        }
                                        else
                                        {
                                            BlockedNodes = BlockedNodes + ":" + dgvAvaliableNodes.Rows[i].Cells[0].Value.ToString();
                                            BlockedNodes = BlockedNodes + ":" + dgvAvaliableNodes.Rows[i].Cells[2].Value.ToString();
                                        }
                                    }
                                }

                                if (FIntruder != "Error")
                                {
                                    int Rand = r.Next(dgvAvaliableNodes.Rows.Count - 1);
                                    x = 1;
                                    if (ResultString == "")
                                    {
                                        ResultString = dgvAvaliableNodes.Rows[Rand].Cells[1].Value.ToString();
                                        ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[Rand].Cells[0].Value.ToString();
                                        ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[Rand].Cells[2].Value.ToString();
                                    }
                                    else
                                    {
                                        ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[Rand].Cells[1].Value.ToString();
                                        ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[Rand].Cells[0].Value.ToString();
                                        ResultString = ResultString + ":" + dgvAvaliableNodes.Rows[Rand].Cells[2].Value.ToString();
                                    }

                                    if (ResultString == "")
                                    {
                                        ResultString = cmbDestination.Text + ":" + "6000" + ":" + x + ":" + "Complete";
                                    }
                                    else
                                    {
                                        ResultString = ResultString + ":" + cmbDestination.Text + ":" + "6000" + ":" + x + ":" + "Complete";
                                    }

                                    string SendResultString = bus.SendResult(ResultString);

                                    if (SendResultString == "Error")
                                    {
                                        MessageBox.Show("Connection Error - Unable to Send");
                                    }
                                    else
                                    {
                                        string SendBlockedS = "Blocked" + ":" + BlockedNodes + ":" + "Completed";
                                        string Rs = bus.SendBlocked(SendBlockedS);
                                        if (Rs == "Error")
                                        {
                                            MessageBox.Show("Connection Error - Unable to Connect Blocked");
                                        }
                                        else
                                        {
                                            MessageBox.Show("Message Sent Successfully");
                                        }
                                    }
                                }
                                else
                                {
                                    if (ResultString == "")
                                    {
                                        ResultString = cmbDestination.Text + ":" + "6000" + ":" + x + ":" + "Complete";
                                    }
                                    else
                                    {
                                        ResultString = ResultString + ":" + cmbDestination.Text + ":" + "6000" + ":" + x + ":" + "Complete";
                                    }

                                    string SendResultString = bus.SendResult(ResultString);

                                    if (SendResultString == "Error")
                                    {
                                        MessageBox.Show("Connection Error - Unable to Send");
                                    }
                                    else
                                    {
                                        string SendBlockedS = "Blocked" + ":" + BlockedNodes + ":" + "Completed";
                                        string Rs = bus.SendBlocked(SendBlockedS);
                                        if (Rs == "Error")
                                        {
                                            MessageBox.Show("Connection Error - Unable to Connect Blocked");
                                        }
                                        else
                                        {
                                            MessageBox.Show("Message Sent Successfully");
                                        }
                                    }
                                }
                            }
                        }
                        else
                        {
                            MessageBox.Show("Search Avaliable Nodes");
                        }
                    }
                    else
                    {
                        MessageBox.Show("Enter Message to Send");
                    }
                }
                else
                {
                    MessageBox.Show("Select the Destination Before Sending Data");
                }
            }
            catch
            {
                MessageBox.Show("Unable to Send Data");
            }
        }

        private void dgvAvaliableNodes_CellContentClick(object sender, DataGridViewCellEventArgs e)
        {
        }
    }
}


7.TESTING

Software testing is a critical element of software quality assurance and represents the ultimate review of specification, design and coding. The increasing visibility of software as a system element, and the attendant costs associated with a software failure, are motivating factors for well-planned, thorough testing. Testing is the process of executing a program with the intent of finding an error. The design of tests for software and other engineered products can be as challenging as the initial design of the product itself.

There are basically two types of testing approaches.

One is Black-Box testing – knowing the specified functions that a product has been designed to perform, tests can be conducted to demonstrate that each function is fully operational.

The other is White-Box testing – knowing the internal workings of the product, tests can be conducted to ensure that the internal operation of the product performs according to specifications and that all internal components have been adequately exercised.

White box and Black box testing methods have been used to test this package. All the loop constructs have been tested for their boundary and intermediate conditions. The test data was designed with a view to checking all the conditions and logical decisions. Error handling has been taken care of by the use of exception handlers.

7.1 TESTING STRATEGIES:

Testing is a set of activities that can be planned in advance and conducted systematically. A strategy for software testing must accommodate low-level tests that are necessary to verify that a small source code segment has been correctly implemented, as well as high-level tests that validate major system functions against customer requirements.

Software testing is one element of verification and validation. Verification refers to the set of activities that ensure that software correctly implements a specific function. Validation refers to a different set of activities that ensure that the software that has been built is traceable to customer requirements.

The main objective of software testing is to uncover errors. To fulfill this objective, a series of test steps – unit, integration, validation and system tests – are planned and executed. Each test step is accomplished through a series of systematic test techniques that assist in the design of test cases. With each testing step, the level of abstraction with which the software is considered is broadened.


Testing is the only way to assure the quality of software and it is an umbrella activity

rather than a separate phase. This is an activity to be performed in parallel with the

software effort and one that consists of its own phases of analysis, design, implementation,

execution and maintenance.

UNIT TESTING:

This testing method considers a module as a single unit and checks the unit at its interfaces and its communication with other modules, rather than getting into details at the statement level. Here the module is treated as a black box, which takes some input and generates output. Outputs for a given set of input combinations are pre-calculated and compared with those generated by the module.

SYSTEM TESTING:

Here all the pre-tested individual modules are assembled to create the larger system, and tests are carried out at the system level to make sure that all modules are working in synchronization with each other. This testing methodology helps in making sure that all modules which run perfectly when checked individually also run in cohesion with the other modules. For this testing we create test cases to check all modules once, and then generate combinations of test paths throughout the system to make sure that no path leads to failure.

INTEGRATED TESTING:

Testing is a major quality control measure employed during software development. Its basic function is to detect errors. Sub-functions, when combined, may not produce the desired result, and global data structures can present problems. Integrated testing is a systematic technique for constructing the program structure while conducting tests to uncover errors associated with interfacing; the objective is to take unit-tested modules and build a program structure that has been dictated by design. In non-incremental integration, all the modules are combined in advance and the program is tested as a whole; here, errors are difficult to isolate. In incremental testing, the program is constructed and tested in small segments, where errors are more easily isolated and corrected.

Different incremental integration strategies are top-down integration, bottom-up integration, and regression testing.

7.2 TOP-DOWN INTEGRATION TEST:

Modules are integrated by moving downwards through the control hierarchy, beginning with the main program. The subordinate modules are incorporated into the structure in either a breadth-first or a depth-first manner. This process is done in five steps:

The main control module is used as a test driver, and stubs are substituted for all modules directly subordinate to the main program.


Depending on the integration approach selected, subordinate stubs are replaced one at a time with actual modules.

Tests are conducted.

On completion of each set of tests, another stub is replaced with the real module.

Regression testing may be conducted to ensure that new errors have not been introduced.

This process continues from step 2 until the entire program structure is built.

In the top-down integration strategy, decision making occurs at upper levels in the hierarchy and is therefore encountered first. If major control problems do exist, early recognition is essential.

If depth-first integration is selected, a complete function of the software may be implemented and demonstrated.

Some problems occur when processing at low levels in the hierarchy is required to adequately test upper levels: stubs replace the low-level modules at the beginning of top-down testing, so no significant data flows upward in the program structure.

7.3 BOTTOM-UP INTEGRATION TEST:

Bottom-up integration begins construction and testing with atomic modules. As modules are integrated from the bottom up, the processing required for modules subordinate to a given level is always available, and the need for stubs is eliminated. The following steps implement this strategy:

Low-level modules are combined into clusters that perform a specific software sub

function.

A driver is written to coordinate test case input and output.

Cluster is tested.

Drivers are removed and clusters are combined, moving upward in the program structure.

As integration moves upward, the need for separate test drivers lessens.

If the top levels of program structures are integrated top down, the number of drivers

can be reduced substantially and integration of clusters is greatly simplified.


REGRESSION TESTING:

Each time a new module is added as part of integration, the software changes. Regression testing is an activity that helps to ensure that these changes do not introduce unintended behavior or additional errors.

Regression testing may be conducted manually, by re-executing a subset of all test cases, or by using automated capture/playback tools that enable the software engineer to capture test cases and results for subsequent playback and comparison. The regression test suite contains different classes of test cases:

A representative sample of tests that will exercise all software functions.

Additional tests that focus on software functions that are likely to be affected by the change.

VALIDATION TESTING:

Validation testing demonstrates that the software traces to its requirements. This can be achieved through a series of black box tests.

Test Id: 1
Test: Activate optimal jamming page
Inputs: Give computer name
Actual Output: Get system IP address automatically
Obtained Output: Success
Description: Test passed. Passes the control to the other module menus.

Test Id: 2
Test: Search Nodes
Inputs: Click Search Nodes button
Actual Output: Get nearby nodes
Obtained Output: Success
Description: Test passed. Passes the control to the other module menus.

Test Id: 3
Test: Activate Intermediate1
Inputs: Set the IpEnd values
Actual Output: Success
Obtained Output: Success
Description: Test passed. Passes the control to the other module menus.

8. SCREEN SHOTS

Receiver:


Intruder:

Intermediate4:


Optimal jamming:

Intermediate2:


Intermediate1:



9.CONCLUSION

We investigate application-level protection against DoS attacks. More

specifically, supporting port hopping is investigated in the presence of timing uncertainty

and for enabling multiparty communications. We present an adaptive algorithm for

dealing with port hopping in the presence of clock-rate drifts (such a drift implies that the

peer’s clock values may differ arbitrarily with time). For enabling multiparty

communications with port-hopping, an algorithm is presented for a server to support port

hopping with many clients, without the server needing to keep state for each client

individually. A main conclusion is that it is possible to employ the port hopping method in

multiparty applications in a scalable way. The method does not induce any need for group

synchronization which would have raised scalability issues, but instead employs a simple

interface of the server with each client. The options for the adversary to launch a directed attack on the application’s ports after eavesdropping are minimal, since the port hopping period of the protocol is fixed.

work under timing uncertainty and specifically fixed clock drifts. An interesting issue to

investigate further is to address variable clock drifts and variable hopping frequencies as

well.
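The port-hopping idea summarized above can be sketched as follows: client and server derive the current contact port from a shared secret and the current time slot, so the server needs no per-client state beyond the secret. This is an illustrative sketch, not the exact algorithm of this work; the HMAC construction, the 5-second period, and the port range are assumptions.

```csharp
using System;
using System.Security.Cryptography;
using System.Text;

class PortHopping
{
    // Derive the port for a given hopping slot from a shared secret.
    // Ports are mapped into the range 1024-65535 (64512 values).
    public static int PortForSlot(byte[] secret, long slot)
    {
        using (HMACSHA256 h = new HMACSHA256(secret))
        {
            byte[] mac = h.ComputeHash(BitConverter.GetBytes(slot));
            uint v = BitConverter.ToUInt32(mac, 0);
            return 1024 + (int)(v % 64512);
        }
    }

    static void Main()
    {
        byte[] secret = Encoding.ASCII.GetBytes("shared-secret");

        // With a fixed hopping period of, say, 5 seconds, the current slot is:
        long slot = DateTimeOffset.UtcNow.ToUnixTimeSeconds() / 5;

        // Both ends compute the same port for the same slot; clock drift
        // between peers shifts which slot each side believes is current,
        // which is what the adaptive algorithm has to compensate for.
        int port = PortForSlot(secret, slot);
        Console.WriteLine(port);
    }
}
```

An eavesdropper who learns the current port gains little, because the port changes at the next slot boundary and the next port is unpredictable without the secret.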


10. FUTURE ENHANCEMENT

While our system works well in most cases, there are also some limitations.

Here we discuss some of them. Firstly, our system is based purely on the web. While it

is the key to the success of our system, it still has some shortcomings. 1) As addressed

before, Combiner is based on our web redundancy assumption, and currently only utilizes

the most reliable direct competitive information. However, our assumption is not always

true for all queries. If there is not enough redundant competitive information on the web,

our system may not be able to discover such competitors. For example, if EntityA and EntityB are competitors, and EntityB and EntityC are also discovered as competitors, then EntityA and EntityC may also be competitors but cannot be identified from the web directly. One way

to weaken the assumption is to leverage the latent relations. Previous work has shown that

both direct and indirect information are helpful in mining two entities’ associations [27].

We leave this as one of our future work. 2) Web spam is becoming more and more popular

with the fast development of WWW. Our system is sensitive to web spam. However,

compared with previous site centric web applications, our system has more immune ability

to the web spam since the

Spammer is easy to spam a specific site but much harder to the whole World Wide Web


Bibliography

FOR .NET INSTALLATION

www.support.microsoft.com

FOR DEPLOYMENT AND PACKING ON SERVER

www.developer.com

www.16seconds.com

REFERENCES

[1] Z. Fu, M. Papatriantafilou, and P. Tsigas, “Mitigating Distributed Denial of Service Attacks in Multiparty Applications in the Presence of Clock Drifts,” Proc. IEEE Int’l Symp. Reliable Distributed Systems (SRDS), Oct. 2008.

[2] CERT Advisory CA-1997-28 IP Denial-of-Service Attacks, http://www.cert.org/advisories/ca-1997-28.html, 2010.

[3] K. Argyraki and D.R. Cheriton, “Active Internet Traffic Filtering: Real-Time Response to Denial-of-Service Attacks,” Proc. Ann. Conf. USENIX Ann. Technical Conf. (ATEC ’05), p. 10, 2005.

[4] R. Mahajan, S.M. Bellovin, S. Floyd, J. Ioannidis, V. Paxson, and S. Shenker, “Controlling High Bandwidth Aggregates in the Network,” ACM SIGCOMM Computer Comm. Rev., vol. 32, no. 3, pp. 62-73, 2002.

[5] D. Dean, M. Franklin, and A. Stubblefield, “An Algebraic Approach to IP Traceback,” ACM Trans. Information and System Security, vol. 5, no. 2, pp. 119-137, 2002.
