se-file


Experiment no.-1

Aim:- Introduction to Data Flow Diagrams.

Data Flow Diagrams:

Data flow diagrams (DFDs) are part of the SSADM method (Structured Systems Analysis and Design Methodology), intended for the analysis and design of information systems. Data flow diagrams illustrate how data is processed by a system in terms of inputs and outputs.

Figure: A data flow diagram

Data Flow Diagram Notations:

Process: A process transforms incoming data flow into outgoing data flow.

Data Store: Data stores are repositories of data in the system. They are sometimes also referred to as files.

Data Flow: Data flows are pipelines through which packets of information flow. Label the arrows with the name of the data that moves through them.

External Entity: External entities are objects outside the system, with which the system communicates. External entities are sources and destinations of the system's inputs and outputs.

Levels of DFDs:-

Data flow diagrams can be expressed as a series of levels. We begin by making a list of business activities to determine the DFD elements (external entities, data flows, processes, and data stores). Next, a context diagram is constructed that shows only a single process (representing the entire system) and its associated external entities. The Diagram 0, or Level 0 diagram, is next; it reveals the general processes and data stores. Following the drawing of the Level 0 diagram, child diagrams (Level 1 diagrams) are drawn for each process illustrated in the Level 0 diagram, and so on.

Context Diagrams:

A context diagram is a top-level (also known as Level 0) data flow diagram. It only contains one process node (process 0) that generalizes the function of the entire system in relationship to external entities.

Figure: General Form of a Level 0 DFD.

Level 1 DFD:

The first-level DFD shows the main processes within the system. Each of these processes can be broken into further processes until we reach pseudocode. A Level 1 DFD refines the functions (bubbles) shown in the Context Diagram; it is a DFD one level more detailed than the Context Diagram. The steps to develop a Level 1 DFD are:

1. Given the abstract data flows in the Context Diagram, explicitly define the in-flows and out-flows of your software in detail.

2. For each type of in-flow, create a bubble (a process) to handle it.

3. For each type of out-flow, use a bubble created in the previous step or create a new bubble to handle it.

4. Define the internal processes and data flows necessary to handle all functionalities of the software, such that all in-flows (sources) and out-flows (sinks) are connected.


Figure: Example of a Level 1 DFD Showing the Data Flow and Data Store Associated with a Sub Process “Digital Sound Wizard.”

When producing a first-level DFD, the relationship of the system with its environment must be preserved. In other words, the data flow in and out of the system in the Level 1 DFD must be exactly the same as those data flows in Level 0.

Level 2 DFD:

A Level 2 DFD is a further decomposition of a Level 1 DFD. The steps to develop a Level 2 DFD are very similar to those for a Level 1 DFD, but there are some extra requirements:

Each bubble should be clearly defined and simple enough to implement and test in later stages.

The data flows should be consistent with the Level 1 DFD and the Context Diagram.

Data Flow Diagram Layers:-

Draw data flow diagrams in several nested layers. A single process node on a high-level diagram can be expanded to show a more detailed data flow diagram. Draw the context diagram first, followed by various layers of data flow diagrams.

The nesting of data flow layers

Advantages of DFDs:-

The DFD method is an element of object-oriented analysis and is widely used. Use of DFDs promotes quick and relatively easy project code development.

DFDs are easy to learn, with their few and simple-to-understand symbols.

The structure used for designing DFDs is simple, employing English nouns or noun-adjective-verb constructs.

Disadvantages of DFDs:-

DFDs for large systems can become cumbersome, difficult to translate and read, and be time consuming in their construction.

Data flow can become confusing to programmers, but DFDs are useless without the prerequisite detail.

Different DFD models employ different symbols (circles and rectangles, for example, for entities).


Experiment no.-2

Aim:- CASE-STUDY: ‘Computerization of Supermarket’

Abstract:-

In today’s world, Information Technology has become an inseparable part of every business organization. It not only helps to make business operations easier but also provides a significant rise in productivity. A supermarket needs to keep records of inventory and sales. Along with these, it needs to keep records of employees and generate their pay-slips, not to forget the generation of bills for the customers. All this, if done manually, consumes a lot of time, space and stationery. A better option is to computerize these operations using suitable software. This case study is an effort to study the way in which such automation of a supermarket can be done. It explains the whole procedure of computerization step by step using a desktop application. It looks into the effective working of the supermarket and so ensures that the desired result of profitability is achieved.

The software developed under the project ‘Computerization of Supermarket’ will be user-friendly software that will allow its owner to perform the following automated functions:

1. Records of suppliers
2. Records of supplies
3. Keep track of stock available
4. Automatic tracking of re-order levels
5. Bill generation
6. Employee database
7. Pay-slip generation
8. Answers to various queries to the database by the manager
9. Generation of details about budget and profits

The automation of all such operations would result in efficient working of the business, better record keeping and higher productivity. It would make the running of the supermarket an altogether different experience for the owner. It will also save customers' time, giving them a better shopping experience.

The software under consideration in this case study will be referred to as ‘Supermarket Automation Software’ (SAS). By utilizing the detailed reports generated by SAS, the business can be better analyzed to get prepared for the future and hence give the customers what they really want from the store, leading to bigger profits, which is the ultimate name of the game in retailing.

Disadvantages of the existing system: -

The present system uses manual methods, and the disadvantages of the existing system are as listed below:

There is no protection for the records and files.

Retrieval and storage of information takes much time.

Communication between organizational levels is limited and slow.

Deleting, adding, modifying, updating and inserting individual records takes much time.

Human resources are not efficiently utilized while maintaining documents manually, as a lot of time and energy is wasted.

Less productivity.

Long waiting time for customers.

Less efficiency.

Less profits.

Advantages of the proposed system: -

The advantages of the proposed automatic system are as listed below: -

Easy to maintain the records

No wastage of time and energy

Calculations can be performed very accurately and quickly

Modifications can be done easily

Data can be retrieved as and when required depending on the necessity

Data entry is very easy

Faster customer service

Better utilization of human resources

Better productivity

More efficiency

Better profits


Feasibility study:-

Purpose:

A feasibility study is a compressed, capsule version of the analysis phase of the system development life cycle aimed at determining quickly and at a reasonable cost if the problem can be solved and if it is worth solving. A feasibility study can also be viewed as an in-depth problem definition.

Strengths, weaknesses, and limitations:

A well-conducted feasibility study provides a sense of the likelihood of success and of the expected cost of solving the problem, and gives management a basis for making resource allocation decisions. In many organizations, the feasibility study reports for all pending projects are submitted to a steering committee where some are rejected and others accepted and prioritized.

Because the feasibility study occurs near the beginning of the system development life cycle, the discovery process often uncovers unexpected problems or parameters that can significantly change the expected system scope. It is useful to discover such issues before significant funds have been expended. However, such surprises make it difficult to plan, schedule, and budget for the feasibility study itself, and close management control is needed to ensure that the cost does not balloon out of control. The purpose of a feasibility study is to determine, at a reasonable cost, if the problem is worth solving.

It is important to remember that the feasibility study is preliminary. The point is to determine if the resources should be allocated to solve the problem, not to actually solve the problem. Conducting a feasibility study is time consuming and costly. For essential or obvious projects, it sometimes makes sense to skip the feasibility study.

Inputs and related ideas:

The feasibility study begins with the problem description prepared early in the problem definition phase of the system development life cycle. The feasibility study is, in essence, a preliminary version of the analysis phase of the system development life cycle. The information collected during the feasibility study is used during project planning to prepare schedules, budgets, and other project management documents using different tools. Prototypes and simulation models are sometimes used to demonstrate technical feasibility. Economic feasibility is typically demonstrated using cost/benefit analysis.

Cost of feasibility study:

The point of the feasibility study is to determine, at a reasonable cost, if the problem is worth solving. Thus the cost of the feasibility study should represent a small fraction, perhaps five or ten percent, of the estimated cost of developing the system.

Types of feasibility:

Four types of feasibility are considered:

•  Technical feasibility— Proof that the problem can be solved using existing technology. Typically, the analyst proves technical feasibility by citing existing solutions to comparable problems. Prototypes, physical models, and analytical techniques are also effective.

For the SAS being studied here, the requisite technology is easily available in the market and can be acquired for software development.

•  Economic feasibility— Proof that the likely benefits outweigh the cost of solving the problem; generally demonstrated by a cost/benefit analysis.

In the case of SAS, the increase in profit will certainly outweigh the investment. So, the project is clearly economically feasible.

•  Operational feasibility— Proof that the problem can be solved in the user’s environment. Perhaps a union agreement or a government regulation constrains the analyst. There might be ethical considerations. Maybe the boss suffers from computer phobia. Such intangible factors can cause a system to fail just as surely as technology or economics. Some analysts call this criterion political feasibility.

In the case of the supermarket, computers can be bought along with the required software, and employees can easily be trained in their use if they are not already technically sound up to the required level.

•  Organizational feasibility— Proof that the proposed system is consistent with the organization’s strategic objectives. If not, funds might be better spent on some other project.

SAS provides an efficient way to achieve the organizational objectives. Hence, this project is organizationally feasible.

Steps in a typical feasibility study :-

The steps in a typical feasibility study are summarized in Figure that follows.


Fig.  The steps in a typical feasibility study.

Starting with the initial problem description, the system’s scope and objectives are more precisely defined. The existing system is studied, and a high-level logical model of the proposed system is developed using one or more of the analysis tools. The problem is then redefined in the light of new knowledge, and these first four steps are repeated until an acceptable level of understanding emerges.

Given an acceptable understanding of the problem, several possible alternative solutions are identified and evaluated for technical, economic, operational, and organizational feasibility. The responsible analyst then decides if the project should be continued or dropped, roughs out a development plan (including a schedule, a cost estimate, likely resource needs, and a cost/benefit analysis), writes a feasibility study report, and presents the results to management and to the user.

Requirement Analysis:-

Functional Requirements:

The set of functionalities that are supported by the system are documented below –

register sales

Whenever any item is sold from the stock of the supermarket, this function prompts the clerk to provide the product_id of each item. The item type and quantity are then registered automatically. At the end of a sales transaction, it prints the bill containing the serial number of the sales transaction and, for each item, the name, code number, quantity, unit price, and item price. The bill should indicate the total amount payable.

register sold items

The input is the automatically registered data about each item along with its quantity. The processing consists of registering each sold item.

generate bill

The input is the automatically generated "generate bill" command. The output is the transaction bill containing the serial number of the sales transaction and, for each item, the name, code number, quantity, unit price, and item price. The bill also mentions the total amount payable.
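The bill-generation logic described above can be sketched in C as follows. This is an illustrative sketch only: the struct fields, values and the function name generate_bill are assumptions for illustration, not part of the actual SAS design.

```c
#include <stdio.h>

/* Illustrative sketch: field and function names are assumptions,
   not taken from the SAS specification itself. */
struct BillItem {
    char   name[32];     /* name of the item */
    int    code;         /* code number      */
    int    quantity;     /* quantity sold    */
    double unit_price;   /* price per unit   */
};

/* Prints one bill line per item and returns the total amount payable. */
double generate_bill(int serial_no, const struct BillItem items[], int n)
{
    double total = 0.0;
    printf("Sales transaction #%d\n", serial_no);
    for (int i = 0; i < n; i++) {
        /* item price = quantity * unit price, as the bill format requires */
        double item_price = items[i].quantity * items[i].unit_price;
        printf("%-10s code=%d qty=%d unit=%.2f item=%.2f\n",
               items[i].name, items[i].code, items[i].quantity,
               items[i].unit_price, item_price);
        total += item_price;
    }
    printf("Total amount payable: %.2f\n", total);
    return total;
}
```

For example, a bill for 2 units of an item priced 10.00 and 1 unit priced 45.50 would total 65.50.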

update inventory

In order to support inventory management, this function decreases the inventory whenever an item is sold. Again, when a new supply arrives, an employee can update the inventory level with this function. The input is the new supply (when it arrives) or the registered sold items. The processing updates the inventory whenever new supply arrives or items are sold.

check inventory

Upon invoking this function, the manager can issue a query to see the inventory details. In response, it shows the inventory details.

print sales-statistics

Upon invoking this function, it generates printed sales statistics for every item the supermarket deals in, for any particular day or any particular period.

update price

The manager can change the price of an item by invoking this function. The input is the change-price command along with the newly assigned price. The processing updates the price of the corresponding item in the inventory.

generate payslip

SAS keeps track of employees' information and performance. It also generates pay-slips for the employees. It can provide reports regarding employees based on particular requirements of the management.

Non-functional Requirements:

The set of non-functional requirements can be stated as follows:

Bill format

The bill should contain the serial number of the sales transaction, the name of the item, code number, quantity, unit price, and item price. The bill should indicate the total amount payable.

Sales-statistics Report format

The sales statistics report should indicate the quantity of an item sold, the price realized, and the profit.


Payslip format

It should contain individual employee names, ids and the pay details.

Technical Requirements

Smart Draw 6

MS SQL Server

Visual Basic 6.0

2 GB RAM

>40 GB HDD (depending on the size of the supermarket database requirements)

PCs of good quality

High-speed processor, e.g. Pentium Core 2 Duo

Laser printers

Structured Analysis:-

Data Flow Diagram: The context level diagram of the Supermarket Automation Software is shown below. It is a top level (also known as Level 0) data flow diagram. It only contains one process node (process 0) named SAS that generalizes the function of the entire automated system in relationship to external entities.

Fig: Context level diagram for SAS

(Figure labels: process SAS; external entities Salesman, Employee, Manager; data store Employee Database; data flows sold items, bill, new supply, pay slip, sales statistics, query, change price, display inventory, generate sales-statistics command.)

Experiment no.-3

Aim:- Level 1 Data Flow Diagram for case study.

Fig: First level diagram for SAS

(Figure labels: processes 0.1 register sales, 0.2 update inventory, 0.3 check inventory, 0.4 print sales statistics, 0.5 update price, 0.6 salary calculation; data flows sales information, sold items, bill, registered sold items, new supply, query, inventory details, display inventory, generate sales-statistics command, sales statistics, changed price, employee details, pay-slip.)

Experiment no.-4

Aim:- Level 2 Data Flow Diagram for case study.

Fig. Second level data flow diagram


Experiment no.-5

Aim:- Entity Relationship Diagram for case study.

In software engineering, an Entity-Relationship Model (ERM) is an abstract and conceptual representation of data. Entity-relationship modeling is a database modeling method, used to produce a type of conceptual schema or semantic data model of a system, often a relational database, and its requirements in a top-down fashion.

Diagrams created using this process are called entity-relationship diagrams, or ER diagrams or ERDs for short.

ERDs:

Entity Relationship Diagrams (ERDs) illustrate the logical structure of databases.

Figure: An ER diagram

Entity Relationship Diagram Notations:

Peter Chen developed ERDs in 1976. Since then, Charles Bachman and James Martin have added some slight refinements to the basic ERD principles.

Entity: An entity is an object or concept about which you want to store information.

Weak Entity: A weak entity is an entity that cannot be identified by its own attributes alone and depends on its relationship with another entity.

Attributes: Attributes are the properties or characteristics of an entity.

Key attribute: A key attribute is the unique, distinguishing characteristic of the entity. For example, an employee's social security number might be the employee's key attribute.

Multivalued attribute: A multivalued attribute can have more than one value. For example, an employee entity can have multiple skill values.

Derived attribute: A derived attribute is based on another attribute. For example, an employee's monthly salary is based on the employee's annual salary.

Relationships

Relationships illustrate how two entities share information in the database structure.

Cardinality

Cardinality specifies how many instances of an entity relate to one instance of another entity.


Ordinality is also closely linked to cardinality. While cardinality specifies the occurrences of a relationship, ordinality describes the relationship as either mandatory or optional. In other words, cardinality specifies the maximum number of relationships and ordinality specifies the absolute minimum number of relationships.

Recursive relationship: In some cases, entities can be self-linked. For example, employees can supervise other employees.

Entity Relationship Diagrams for case study:

(a) ER diagram for inventory :

Fig: ER diagram for inventory of supermarket


(b) ER diagram for employees’ information:

Fig: ER diagram for employees’ information in supermarket


Experiment no.-6

Aim:- Representing mapping cardinalities and relationships in ER diagram using smart draw.

Objective :- To become familiar with the concept of mapping cardinalities and relationships in an ER diagram.

S/W Requirement :- Smart Draw

H/W Requirement :- •Processor – Any suitable Processor e.g. Celeron

•Main Memory - 128 MB RAM

•Hard Disk – minimum 20 GB IDE Hard Disk

•Removable Drives

–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

Method:-

Mapping Cardinalities: These express the number of entities to which another entity can be associated via a relationship. For binary relationship sets between entity sets A and B, the mapping cardinality must be one of:

many-to-one

one-to-many

one-to-one

many-to-many

1. Many-to-One (M:1): An entity in A is associated with at most one entity in B. An entity in B is associated with any number of entities in A.

2. One-to-Many (1:M): A one-to-many relationship between two entities indicates that a single occurrence of one entity is associated with one or more occurrences of the related entity. The example indicates that there is one Project Manager associated with each Project, and that each Project Manager may be associated with more than one Project.


3. One-to-One (1:1): A one-to-one relationship between two entities indicates that each occurrence of one entity in the relationship is associated with a single occurrence in the related entity. There is a one-to-one mapping between the two, such that knowing the value of one entity gives you the value of the second. For example, in this relationship an Employee uses a maximum of one Workstation:


4. Many-to-Many (M:M): A many-to-many relationship between two entities indicates that either entity participating in the relationship may occur one or several times. The example indicates that there may be more than one Employee associated with each Project, and that each Employee may be associated with more than one Project at a time. That is, projects may share employees.

It is appropriate to identify and illustrate many-to-many relationships at the conceptual level of detail. Such relationships are broken down to one-to-many relationships at the logical level of detail. For example, at the logical level the many-to-many relationship above is better represented by introducing a new entity such as Assignment and splitting the many-to-many into two one-to-many relationships. The new entity Assignment contains the primary keys of Project and Employee.

The appropriate mapping cardinality for a particular relationship set depends on the real world being modeled.
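The decomposition of a many-to-many relationship via an Assignment entity can be sketched with simple record types. The field names here are illustrative assumptions, not taken from the text's model.

```c
/* Illustrative sketch: resolving the many-to-many relationship between
   Project and Employee by introducing an Assignment entity that holds
   the primary keys of both. Field names are assumptions. */
struct Project  { int project_id;  /* ... other Project attributes ...  */ };
struct Employee { int employee_id; /* ... other Employee attributes ... */ };

/* One Assignment record links one Employee to one Project, so the
   original M:M relationship becomes two 1:M relationships:
   Project 1:M Assignment and Employee 1:M Assignment. */
struct Assignment {
    int project_id;   /* primary key of Project  */
    int employee_id;  /* primary key of Employee */
};
```

Each Assignment record represents one employee working on one project; an employee on three projects contributes three Assignment records.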

Relationships:

A relationship is an association that exists between two entities. For example, Instructor teaches Class, or Student attends Class. Most relationships can also be stated inversely. For example, Class is taught by Instructor.

The relationships on an Entity-Relationship Diagram are represented by lines drawn between the entities involved in the association. The name of the relationship is placed either above, below, or beside the line.


Relationships Between Entities:

There can be a simple relationship between two entities. For example, Student attends a Class:

Some relationships involve only one entity. For example, Employee reports to Employee:

This type of relationship is called a recursive relationship.

There can be a number of different relationships between the same two entities. For example:


Employee is assigned to a Project,

Employee bills to a Project.

 

One entity can participate in a number of different relationships involving different entities. For example:

Project Manager manages a Project,

Project Manager reports to Project Director,

Project Manager approves Employee Time.

 

Characteristics of Relationships:

A relationship may be depicted in a variety of ways to improve the accuracy of the representation of the real world. The major aspects of a relationship are:

Naming the Relationship: Place a name for the relationship on the line representing the relationship on the E-R diagram. Use a simple but meaningful action verb (e.g., buys, places, takes) to name the relationship. Assign relationship names that are significant to the business or that are commonly understood in everyday language.

Bi-directional Relationships: Whenever possible, use the active form of the verb to name the relationship. Note that all relationships are bi-directional. In one direction, the active form of the verb applies. In the opposite direction, the passive form applies.

For example, the relationship Employee operates Machine is named using the active verb operates. However, the relationship Machine is operated by Employee also applies. This is the passive form of the verb.

By convention, the passive form of the relationship name is not included on the E-R diagram. This helps avoid clutter on the diagram.

Relationship Dependency:

Types of Relationship Dependencies: Three relationship dependencies are possible:

mandatory,

optional,

contingent.

 

Relationship dependencies may be of different degrees. Each relationship dependency is illustrated differently.

Mandatory Relationship:

A mandatory relationship indicates that for every occurrence of entity A there must exist an entity B, and vice versa.

When specifying a relationship as being mandatory one-to-one, you are imposing requirements known as integrity constraints. For example, there is one Project Manager associated with each Project, and each Project Manager is associated with one Project at a time. A Project Manager may not be removed if the removal causes a Project to be without a Project Manager. If a Project Manager must be removed, its corresponding project must also be removed. A Project may not be removed if it leaves a Project Manager without a Project. A new project may be added if it can be managed by an existing Project Manager. If there is no Project Manager to manage the Project, a Project Manager must be added with the addition of a new Project.

Optional Relationship:

An optional relationship between two entities indicates that it is not necessary for every entity occurrence to participate in the relationship. In other words, for both entities the minimum number of instances in which each participates in each instance of the relationship is zero (0).

As an example, consider the relationship Man is married to Woman. Both entities may be depicted in an Entity-Relationship Model because they are of interest to the organization. However, not every man, or woman, is necessarily married. In this relationship, if an employee is not married to another employee in the organization, the relationship could not be shown.

The optional relationship is useful for depicting changes over time where relationships may exist one day but not the next. For example, consider the relationship "Employee attends Training Seminar." There is a period of time when an Employee is not attending a Training Seminar, or a Training Seminar may not be held.

Contingent Relationship:

A contingent relationship represents an association which is mandatory for one of the involved entities, but optional for the other. In other words, for one of the entities the minimum number of instances that it participates in each instance of the relationship is one (1), the mandatory association, and for the other entity the minimum number of instances that it participates in each instance of the relationship is zero (0), the optional association.

Contingent relationships may exist due to business rules, such as Project is staffed by Consultant.

In this case, a Project may or may not be staffed by a Consultant. However, if a Consultant is registered in the system, a business rule may state that a Consultant must be associated with a Project.

Mapping cardinalities and relationships in case study: ‘Computerization of Supermarket’ :-

The mapping cardinalities, as read from the ER diagrams of the case study described in the previous experiment, are as follows:

supply (SUPPLIERS to PRODUCTS): 1:M; mandatory and unidirectional relationship

sold to (PRODUCTS to CUSTOMERS): M:N; optional and unidirectional relationship

earn (EMPLOYEES): 1:1; mandatory and unidirectional relationship

reports (EMPLOYEES to EMPLOYEES): 1:1 / 1:M; unidirectional and recursive relationship

Experiment no.-7


Aim:- Techniques used in Black Box testing.

Theory:-

Black box testing is also known as functional testing. It is a software testing technique whereby the internal structure of the item being tested is not known to the tester. For example, in a black box test of a software design, the tester only knows the inputs and what the expected outcomes should be, not how the program arrives at those outputs.

The tester never examines the programming code and doesn’t need any further knowledge of the program other than its specifications. Black box testing is not a single type of testing; it is instead a testing strategy, which doesn’t need any knowledge of internal design or code. As the name “black box” suggests, no knowledge of internal logic or code structure is required. The basis of the black box testing strategy lies in the selection of appropriate data as per functionality and testing it against the functional specifications in order to check for normal and abnormal behavior of the system. In order to implement the black box strategy, the tester needs to be thorough with the requirement specifications of the system and, as a user, should know how the system should behave in response to a particular action. The types of testing under this strategy are totally based on testing the requirements and functionality of the work product or software application. Black box testing is sometimes also known as “Opaque Testing”, “Functional or Behavioral Testing” and “Closed Box Testing”.

There are essentially the following two main approaches to designing black box test cases.

Equivalence class partitioning

Boundary value analysis

Equivalence class partitioning:

In this approach the domain of the input values to a program is partitioned into a set of equivalence classes. This partitioning is done such that the behavior of the program is similar for every input data belonging to the same equivalence class. The main idea of defining the equivalence class is that testing the code with any one value belonging to an equivalence class is as good as testing the software with any other value belonging to that equivalence class. Equivalence classes for the software can be designed by examining the input data.

Some general guidelines for designing the equivalence class are:-


1. If the input data values to a system can be specified by a range of values then one valid and two invalid equivalence classes should be defined.

2. If the input can assume values from a set of discrete members of some domain, then one equivalence class for valid input values and another equivalence class for invalid input values should be defined.

Example: For software that computes the square root of an input integer that can assume values in the range from 1 to 5000, there are three equivalence classes: the set of integers less than 1, the set of integers in the range of 1 to 5000, and the set of integers larger than 5000. Therefore, the test cases must include representatives from each of the three equivalence classes, and a possible test set is {-5, 500, 6000}.

Boundary value analysis:

Some typical programming errors occur at the boundaries of different equivalence classes of inputs. Such errors arise because programmers often fail to notice the special processing required by input values that lie at a boundary.

Example: For a function that computes the square root of integer values in the range from 1 to 5000, the test cases must include the boundary values {0, 1, 5000, 5001}.
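Boundary value analysis places test cases on either side of each class boundary: 0 and 1 at the lower edge of the 1..5000 range, 5000 and 5001 at the upper edge. A sketch (`sqrt_in_range` and its -1 contract for invalid input are illustrative assumptions):

```python
import math

# Hypothetical function under test; -1 for out-of-range input is assumed.
def sqrt_in_range(n):
    if n < 1 or n > 5000:
        return -1
    return math.isqrt(n)

# Boundary value analysis picks inputs on each side of every class boundary.
boundary_cases = {
    0:    -1,   # just below the lower boundary
    1:     1,   # the lower boundary itself
    5000: 70,   # the upper boundary itself (isqrt(5000) == 70)
    5001: -1,   # just above the upper boundary
}

for value, expected in boundary_cases.items():
    assert sqrt_in_range(value) == expected
```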

Experiment no.-8


Aim:- Techniques used in White Box testing.

Theory:- White box test cases require thorough knowledge of the internal structure of the software; white box testing is therefore also known as structural testing. The different approaches to white box testing are explained below:

Statement coverage: The statement coverage methodology aims to design test cases that force the execution of every statement in a program at least once. The principal idea behind this methodology is that unless a statement is executed, we have no way of determining whether an error exists in that statement. In other words, the criterion is based on the observation that an error in one part of a program cannot be discovered if the part containing the error and generating a failure is never executed. However, executing a statement just once, for just one input, and observing that the program behaves properly does not guarantee that it will work correctly for all input values.

For example, consider the following program:

int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

By choosing the test set {(x=3, y=3), (x=4, y=3), (x=3, y=4)}, we exercise the program such that all statements are executed at least once.
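The listing and its test set can be run directly; a sketch in Python, translated from the C-like listing above:

```python
# Euclid's subtraction algorithm, translated from the listing above.
def compute_gcd(x, y):
    while x != y:
        if x > y:
            x = x - y
        else:
            y = y - x
    return x

# The statement coverage test set: (3,3) skips the loop entirely, (4,3)
# executes the true branch of the if, and (3,4) executes the else branch,
# so every statement runs at least once across the three cases.
statement_coverage_set = [(3, 3), (4, 3), (3, 4)]
results = [compute_gcd(x, y) for x, y in statement_coverage_set]
assert results == [3, 1, 1]  # gcd values for the three input pairs
```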

Branch coverage: In the branch coverage based testing methodology, test cases are designed such that the different branch conditions are given true and false values in turn. Branch coverage guarantees statement coverage and is therefore a stronger testing criterion than statement coverage.

For example:


int compute_gcd(int x, int y)
{
    while (x != y) {
        if (x > y)
            x = x - y;
        else
            y = y - x;
    }
    return x;
}

By choosing the test cases {(x=3, y=3), (x=3, y=2), (x=4, y=3), (x=3, y=4)}, all the branches of the program can be executed at least once.
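This can be checked by instrumenting the listing to record which branch outcomes the test set exercises (the instrumentation is an illustration, not part of the original program):

```python
# The gcd listing, with a set recording each branch outcome as it is taken.
def compute_gcd(x, y, taken):
    while x != y:
        taken.add("while=true")
        if x > y:
            taken.add("if=true")
            x = x - y
        else:
            taken.add("if=false")
            y = y - x
    taken.add("while=false")
    return x

branch_coverage_set = [(3, 3), (3, 2), (4, 3), (3, 4)]
taken = set()
for x, y in branch_coverage_set:
    compute_gcd(x, y, taken)

# Both outcomes of both decisions have been exercised at least once.
assert taken == {"while=true", "while=false", "if=true", "if=false"}
```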

Condition coverage:

In this white box testing approach, test cases are designed such that each component condition of a composite conditional expression is given both true and false values.

e.g., in the conditional expression (c1 and c2 or c3), each of c1, c2, and c3 is exercised at least once, i.e., given both true and false values. Condition testing is a stronger testing method than branch testing, and branch testing is stronger than statement coverage testing. However, for a Boolean expression of n variables, exercising every combination of condition values requires 2^n test cases. Therefore, a condition coverage based testing technique is practical only if n is small.
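The 2^n growth can be seen by enumerating every combination of truth values for the three component conditions; a sketch of the worst case:

```python
from itertools import product

# For the composite condition (c1 and c2 or c3), give every component
# condition both truth values. Enumerating all combinations yields the
# worst case of 2**n test cases for n component conditions.
assignments = list(product([False, True], repeat=3))

for c1, c2, c3 in assignments:
    outcome = (c1 and c2) or c3  # evaluate the composite condition

assert len(assignments) == 2 ** 3  # 8 combinations for n = 3
```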

Path Coverage:

The path coverage based testing strategy requires us to design test cases such that all linearly independent paths in the program are executed at least once. A linearly independent path is defined in terms of a control flow graph (CFG), which describes the sequence in which the different instructions of a program get executed; in other words, a CFG describes how the flow of control passes through the program. To draw the CFG of a program, we first number all of its statements.

For example:

int compute_gcd(int x, int y)
{
1.  while (x != y) {
2.      if (x > y)
3.          x = x - y;
4.      else y = y - x;
5.  }


6.  return x;
}

The numbered statements serve as the nodes of the CFG, and an edge exists from one node to another if execution of the statement represented by the first node can result in transfer of control to the other node.

A path through a program is a node and edge sequence from the starting node to a terminal node of the program's CFG. An independent path is any path through the program that introduces at least one new node or edge that is not included in any other independent path.
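As a sketch, the CFG of the numbered listing can be written as an adjacency list, and the number of linearly independent paths counted with McCabe's cyclomatic complexity V(G) = E - N + 2. (The edge set below is one reading of the listing's control flow, taking node 5 as the closing brace of the loop.)

```python
# Nodes are the statement numbers of the gcd listing; edges are the
# possible transfers of control between them.
cfg = {
    1: [2, 6],   # while test: enter the loop body or exit to the return
    2: [3, 4],   # if test: true branch (3) or else branch (4)
    3: [5],      # x = x - y, then the end of the loop body
    4: [5],      # y = y - x, then the end of the loop body
    5: [1],      # loop back to the while test
    6: [],       # return x (terminal node)
}

N = len(cfg)                                 # 6 nodes
E = sum(len(succ) for succ in cfg.values())  # 7 edges

# Cyclomatic complexity bounds the number of linearly independent paths
# that a path coverage test suite must exercise.
complexity = E - N + 2
assert complexity == 3
```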

CFG for the example program above is as follows:

[Figure: control flow graph of the numbered program above, with nodes 1 to 6]

Experiment no.-9


Aim:- Representing sequence in a structured chart.

Objective :- To familiarize with the concept of structured charts

S/W Requirement :- Smart Draw

H/W Requirement :- •Processor – Any suitable Processor e.g. Celeron

•Main Memory - 128 MB RAM

•Hard Disk – minimum 20 GB IDE Hard Disk

•Removable Drives

–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

Method:-

Structured software design is arranged hierarchically.

Structured software follows these rules:

•Modules are arranged hierarchically.
•There is only one root (i.e., top level) module.
•Execution begins with the root module.
•Program control must enter a module at its entry point and leave at its exit point.
•Control returns to the calling module when the lower level module completes execution.

•Description of a Module: Logically, a module is one problem-related task that the program performs, such as Create Invoice or Validate Customer Request. Physically, a module is implemented as a sequence of programming instructions bounded by an entry point and an exit point.

•Common Modules: There are two categories of common modules:

•system (e.g., I/O handlers, locking)
•application (e.g., edits, calculations)

Some common modules (e.g., security, navigation, audit trails, and help) do not fall clearly into either category. These modules may be common to more than one application but are not system level modules.


System common modules should be defined as part of the preliminary design. Application common modules should be defined as early as possible, but many may not be identified until detailed design.

•Constructs of Structured Software Design: Tree-structure diagrams are used to illustrate modules that follow the rules of structured software design. A tree-structure can be drawn as a set of blocks, for example, a Structure Chart, or a set of brackets, such as an Action Diagram.

•When designing structured software, three basic constructs are represented:

•Sequence - items are executed from top to bottom.
•Repetition - a set of operations is repeated. The repetition is terminated based on the repetition test.
•Condition - a set of operations is executed only if a certain condition or CASE statement applies.


Result: This experiment introduces the concept of structured charts.


Experiment no.-10

Aim:- Creating modules in a structured chart.

Objective :- To familiarize with the modules in structured charts.

S/W Requirement :- Smart Draw

H/W Requirement :- •Processor – Any suitable Processor e.g. Celeron

•Main Memory - 128 MB RAM

•Hard Disk – minimum 20 GB IDE Hard Disk

•Removable Drives

–1.44 MB Floppy Disk Drive

–52X IDE CD-ROM Drive

•PS/2 HCL Keyboard and Mouse

Method:-

A rectangle is used to represent a module on a Structure Chart. The module name is written inside the rectangle. Other than the module name, the Structure Chart gives no information about the internals of the module.

•Module Names and Numerical Identifiers: Each module must have a module name. Module names should consist of a transitive (or action) verb and an object noun. Module names and numerical identifiers may be taken directly from corresponding process names on Data Flow Diagrams or other process charts. The name of the module and the numerical identifier are written inside the module rectangle. Other than the module name and number, no other information is provided about the internals of the module.

•Existing Module: Existing modules may be shown on a Structure Chart. An existing module is represented by double vertical lines.


Unfactoring Symbol: An unfactoring symbol is a construct on a Structure Chart indicating that a module will not be a module on its own but will instead be lines of code in its parent module. An unfactoring symbol is represented as a flat rectangle on top of the module that will not be a module when the program is developed.

An unfactoring symbol reduces factoring without having to redraw the Structure Chart. Use an unfactoring symbol when a module that is too small to exist on its own has been included on the Structure Chart. The module may exist because factoring was taken too far or it may be shown to make the chart easier to understand. (Factoring is the separation of a process contained as code in one module into a new module of its own).

Result: This experiment introduces the concept of creating modules in structured charts.
