read.pudn.comread.pudn.com/downloads208/doc/978592/04 Testing.doc  · Web viewSTUDY MATERIAL. Date...


IQ – TESTING

STUDY MATERIAL


Date Version Comments


1 Testing Fundamentals

1.1 Introduction to Testing

Testing concepts are organized as below:

Software Testing
- Testing Fundamentals: Introduction to Testing; Testing Principles & Terminologies; Software Testing Life Cycle; V-Model for Testing; SDLC Models & Testing; SDLC vs. STLC; Testing Methodologies
- Levels of Testing: Unit Testing; Integration Testing; System Testing; User Acceptance Testing
- Test Automation: Introduction; Scope of Test Automation; Test Automation Process; Selecting a Test Tool; Key Success Factors for Evaluating Test Automation; Test Automation Management
- Test Management: Introduction; Best Practices in Test Team Management; Tools in Test Management; Test Metrics; Test Management Functions & Activities
- Other Testing Techniques: Introduction; Xtreme Testing; Model Based Testing

1.2 Why testing?

Imagine the confusion caused when a major airline started issuing tickets from New York to London at unbelievably low rates! Little did people realize that a bug in the reservation system caused the tickets to be issued in this manner. The outcome: the airline had to bear a heavy loss. Here is another example: Ariane 5 was remotely destroyed within 40 seconds of launch, causing a loss of US$ 500 million, because of incorrect exception handling.

These real-life situations raise the million-dollar question: why do we do testing? Testing is "the process of questioning a product in order to evaluate it", where the "questions" are operations the tester attempts to execute with the product, and the product answers with its behavior in reaction to the probing of the tester.

In a nutshell, testing addresses the following areas:

- To verify that all requirements are implemented correctly (both for positive and negative conditions)
- To identify defects before software deployment
- To help improve quality and reliability
- To make software predictable in behavior
- To reduce incompatibility and interoperability issues
- To help marketability and retention of customers

1.3 What is testing?

According to IEEE, software testing is "the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects), and to evaluate the features of the software item" (Ref. IEEE Std 829).

Testing is a process used to identify the correctness, completeness and quality of developed computer software. Quality is not an absolute; it is relative to the system requirements specification. With that in mind, testing can never completely establish the correctness of arbitrary computer software; testing furnishes a criticism or comparison, comparing the state and behavior of the product against a specification.

Although most of the intellectual processes of testing are nearly identical to those of review or inspection, the word testing connotes the dynamic analysis of the product: putting the product through its paces. Quality attributes vary from system to system, but some common ones include capability, reliability, efficiency, portability, maintainability, compatibility and usability. A good test is sometimes described as one which reveals an error; however, more recent thinking suggests that a good test is one which reveals information of interest to someone who matters within the project community.

1.4 What is debugging?

Debugging is the activity of diagnosing the precise nature of a known error. The purpose of debugging is to locate and fix the code responsible for a symptom violating a known specification. Many of us use the terms testing and debugging interchangeably, but they are distinct activities. The table below brings out some of the main differences between testing and debugging.

Testing vs. Debugging

- Testing starts with known conditions, uses predefined procedures, and has predictable outcomes. Debugging starts from possibly unknown initial conditions, and the end cannot be predicted except statistically.
- Testing should be designed and scheduled beforehand. The procedures for, and duration of, debugging cannot be constrained.
- Testing is a demonstration of error or apparent correctness. Debugging is a deductive process.
- Testing proves the programmer's failure. Debugging is the programmer's vindication.
- Testing should strive to be predictable, dull, constrained, rigid, and inhuman. Debugging demands intuitive leaps, conjectures, experimentation, intelligence, and freedom.
- Testing, to a large extent, can be designed and accomplished in ignorance of the design. Debugging is impossible without detailed design knowledge.

1.5 Testing Principles & Terminologies

1.5.1 Test Strategy

A test strategy describes which types of testing seem best to do, the proposed sequence in which to perform them, and the optimum amount of effort to put into each test objective to make testing most effective. A test strategy is based on the prioritized requirements and any other available information about what is important to the customers. Because there are always time and resource constraints, a test strategy faces up to this reality and outlines how to make the best use of whatever resources are available to locate most of the worst defects. The test strategy should be created at about the middle of the design phase, as soon as the requirements have settled down.

1.5.2 Test Plan

A test plan is simply that part of a project plan that deals with the testing tasks. A test plan details the scheduling, resource allocation and dependencies between the various testing activities. It provides a complete list of all the things that need to be done for testing, including all the preparation work during the phases before testing. It shows the dependencies among the tasks so as to clearly establish a critical path without surprises. The details of a test plan can be filled in starting as soon as the test strategy is completed. Both the test strategy and the test plan are subject to change as the project evolves; any changes to the testing should follow the change management process, starting with the test strategy.


1.5.3 Test Cases

Test cases (and automated test scripts, if called for by the strategy) are prepared based on the strategy, which outlines how much of each type of testing to do. Test cases are developed based on prioritized requirements and acceptance criteria for the software, keeping in mind the customer's emphasis on quality dimensions and the project's latest risk assessment of what could go wrong. Except for a small amount of ad hoc testing, all test cases should be prepared in advance of the start of testing. There are many different approaches to developing test cases. Test case development is an activity performed in parallel with software development. Without expected results to compare to actual results, it is impossible to say whether a test passes or fails. A good test case checks that requirements are being met and has a good chance of uncovering defects.
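The expected-versus-actual idea can be sketched in code. In this minimal example the unit under test, its name, and its values are all invented for illustration; the point is that each test case fixes an expected result in advance, for both a positive and a negative condition:

```python
import unittest

# Hypothetical unit under test (function and values are illustrative,
# not from the study material): a fare calculation with a discount rule.
def discounted_fare(base_fare, discount_pct):
    if not 0 <= discount_pct <= 100:
        raise ValueError("discount_pct must be between 0 and 100")
    return round(base_fare * (1 - discount_pct / 100), 2)

class TestDiscountedFare(unittest.TestCase):
    def test_valid_discount(self):
        # Expected result is fixed in advance, so pass/fail is decidable.
        self.assertEqual(discounted_fare(200.0, 10), 180.0)

    def test_invalid_discount_rejected(self):
        # Negative condition: the requirement says invalid input must fail.
        with self.assertRaises(ValueError):
            discounted_fare(200.0, 150)

if __name__ == "__main__":
    unittest.main(argv=["prog"], exit=False)
```

A test case without the expected value (180.0 here) would be unjudgeable, which is exactly the point made above.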

1.5.4 Test Data

In addition to the steps to perform to execute test cases, there is also a need to systematically come up with test data to use. This often means sets of names, addresses, product orders, or whatever other information the system uses. Since query, change and delete functions are probably going to be tested, a starting database of data will be needed in addition to the examples to input. Consider how many times those doing the testing might need to go back to the starting point of the database to restart the testing, and how many new customer names will be needed for all the testing in the plan. Test data development is usually done simultaneously with test case development. Test data should cover all cases of success and failure, and the different permutations and combinations of inputs.
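Enumerating the permutations and combinations systematically is easy to sketch. The input dimensions below (customer type, payment mode, quantity) are invented for illustration; a Cartesian product generates one test-data row per combination, covering success values and boundary/failure values alike:

```python
from itertools import product

# Illustrative input dimensions for a hypothetical order-entry function;
# the field names and values are assumptions, not from the text.
customer_types = ["new", "existing"]
payment_modes = ["card", "cash"]
quantities = [0, 1, 999]  # boundary values: failure case, typical, large

# The Cartesian product enumerates every combination of the dimensions,
# covering both success and failure inputs systematically.
test_data = [
    {"customer": c, "payment": p, "quantity": q}
    for c, p, q in product(customer_types, payment_modes, quantities)
]

print(len(test_data))  # 2 * 2 * 3 = 12 rows of test data
```

For large systems the full product explodes combinatorially, which is why test data is usually pruned to prioritized combinations.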

1.5.5 Test Environment

Test environments may be scaled-down versions of the real thing, but all the parts need to be there for the system to actually run. Building a test environment usually involves setting aside separate regions on mainframe computers and/or servers, networks and PCs that can be dedicated to the test effort and that can be reset to restart testing as often as needed. Sometimes lab rooms of equipment are set aside, especially for performance or usability testing. A list of components that will be needed is part of the test strategy, which then needs to be checked as part of the test planning process. Steps to set up the environment are part of the testing plan and need to be completed before testing begins.

1.5.6 Common sources of software defects, reasons and impact

Testing early in the life cycle reduces the errors. Test deliverables are associated with every phase of development. The goal of the software tester is to find defects, find them as early as possible, and track them to closure. Fixing the defects typically takes one or more further rounds of testing until the software is acceptably defect-free.

The number one cause of software defects is the specification. There are several reasons specifications are the largest bug producer. In many instances a specification simply isn't written. Other reasons may be that the specification isn't thorough enough, it's constantly changing, or it's not communicated well to the entire team. Planning software is vitally important; if it's not done correctly, defects will be injected.


The next largest source of defects is the design. That's where the programmers lay the plan for their software; compare it to an architect creating the blueprint for a building. Defects occur here for the same reasons they occur in the specification: it's rushed, changed, or not well communicated.

The figure below depicts the probability of defects being introduced at various stages of the life cycle. Coding errors may be more familiar to you if you are a programmer. Typically these can be traced to software complexity, poor documentation, schedule pressure or just plain dumb mistakes. It's important to note that many defects that appear on the surface to be programming errors can really be traced to the specification.

The other category is the catch-all for what is left. Some reported defects turn out to be false positives, conditions that were thought to be defects but really weren't. There may be duplicate defects, multiple ones that resulted from the same root cause. Some defects can be traced to testing errors.

Costs: the costs are logarithmic; that is, they increase tenfold as time increases. A bug found and fixed during the early stages, when the specification is being written, might cost next to nothing. The same bug, if not found until the software is coded and tested, might cost 10 times more. If a customer finds it, the cost could easily top 100 times the original.

1.5.7 When to Stop Testing

This can be difficult to determine. Many modern software applications are so complex, and run in such an interdependent environment, that complete testing can never be done. "When to stop testing" is one of the most difficult questions for a test engineer. Common factors in deciding when to stop are:

- Test cases completed with certain percentages passed
- Coverage of code/functionality/requirements reaches a specified point
- The rate at which defects are found is too small
- The risk in the project is under an acceptable limit

Practically, the decision to stop testing is based on the level of risk acceptable to management. As testing is a never-ending process, we can never assume that 100% testing has been done; we can only minimize the risk of shipping the product to the client with X testing done. The risk can be measured by risk analysis, but for a small-duration / low-budget / low-resource project, risk can be gauged simply by:

- Measuring test coverage
- Number of test cycles

Figure: distribution of defect sources – Specifications 59%, Design 27%, Other 10%, Code 7%.


- Number of high priority defects
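The factors above can be combined into a simple stop/continue check. This is a hedged sketch only: every threshold below is an arbitrary illustration, not a value prescribed by the text or by any standard.

```python
# Sketch of a stop-testing decision combining the factors listed above;
# all thresholds are arbitrary illustrations, not prescribed values.
def ok_to_stop(pass_rate, coverage, defects_per_cycle, open_critical):
    return (pass_rate >= 0.95            # % of test cases passed
            and coverage >= 0.85         # code/functionality/requirements coverage
            and defects_per_cycle < 1.0  # defect find rate has tailed off
            and open_critical == 0)      # residual risk within acceptable limit

print(ok_to_stop(0.97, 0.90, 0.5, 0))  # True: all criteria met
print(ok_to_stop(0.97, 0.90, 0.5, 2))  # False: critical defects still open
```

In practice these numbers would come from the project's risk analysis rather than being fixed constants.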

1.5.8 Prevention vs. Cure of Software

There are two approaches to tackling quality problems with software products.

The first is a curative approach (Figure 1), where the focus is on testing, and then on use by users, to find defects. Defects are identified both by developers and users and subsequently fixed.

Figure 1. Action-Product Model for a curative approach to producing quality software.

The other option is a preventative approach (Figure 2).

There are three aspects to the preventative approach:

- First, to engineer-in tangible quality-carrying properties that anticipate and prevent the occurrence of quality defects in the first place
- Second, to use processes (including formal inspections and prototyping) to discover and remove deficiencies and defects as early as possible in the development process
- Third, to use tools (e.g., static analyzers, automated testing and other tools) where possible, to assure that the quality-carrying properties have been properly engineered-in and to detect any defects that remain


Figure 2 Action-Product Model for a preventative approach to producing quality software.

1.6 Software Testing Life Cycle

About STLC

Software testing is a comprehensive activity, as there are numerous types of software requiring specific testing techniques. To manage testing successfully, apart from technological skill, one needs a management system to handle a large number of entities and their interrelationships, the testing process, the testing personnel, and the other stakeholders in software development. The cost of building and correcting defects may far exceed the cost of detecting them; the way to reduce the cost of defects is to locate them early. This involves beginning testing during the requirements phase and continuing testing throughout the life cycle.

Figure: the Software Testing Life Cycle – Test Requirements Capture and Analysis, Test Planning & Scenario Design, Test Case Development, Test Execution, Test Result Analysis and Test Cycle Closure, with a Defect Fixing Cycle (defects found in execution lead to a new version, which is retested) leading up to UAT.

The Software Testing Life Cycle contains the following components/steps:

- Requirement Capture
  o Requirement Analysis
  o Collect Software Functional Specifications / Software Requirement Specifications
- Test Requirements Identification
  o Validate for testability
- Test Planning and Scenario Design
  o Develop test plan and objectives
  o Building strategies – how to test?
  o Identify test items
  o Resources and schedules
- Test Case Development
  o Design test cases
  o Test case specification
  o Prepare test bed (test data preparation, test environment setup)
- Test Execution
  o Run unit tests and integrated tests, and validate results (execute tests, evaluate test results)
  o Bug reporting
  o Bug fixes and retesting
- Test Result Analysis
  o Defect analysis
  o Determining test set coverage and effectiveness
  o Report test results
- User Acceptance Test
  o Alpha and beta tests
- Collect Metrics
  o Evaluate test effectiveness

1.7 V-Model for Testing

The V concept of testing details the sequence in which testing should be performed. Life cycle testing is performed against the deliverables at pre-determined, specified points; the SDLC has to be pre-determined for this to happen. The V concept recommends that both the system development process and the system test process start at the same point, referring to the same information. Development has the responsibility of documenting the requirements for development purposes, which the test team can use for testing purposes as well.


1.7.1 What is V-Model?

The figure gives a brief description of V-Model testing. Every phase of the STLC in this model corresponds to some activity in the SDLC: Requirement Analysis correspondingly has an acceptance testing activity at the end, the design has Integration Testing (IT) and System Integration Testing (SIT), and so on.

Though all these activities are present, the testing activity formally starts at the tail end of the coding phase, with the unit testing activity by the developers. Note that the V-model doesn't say that testing should formally start only after coding; it is just a mapping of development phases to the various testing levels. The key point is to carry out testing in conjunction with the development phases of the V diagram, so that the testing effort in later stages can be minimized.

1.7.2 How the activity works

Once the project is initiated, it goes through the Requirement Analysis phase, followed by the design and then the coding phase. To cut the overall cost of the project, the test focal is usually identified in the middle or at the end of the coding phase. The independent tester then analyses the documents and starts the test plan and test design. By this time the coding and the initial part of unit testing would be complete; the tester gets a feel of the product, designs the test clients if needed, and continues testing.


By implementing the V-model for testing, we achieve three important gains:

- More parallelism is achieved and there is a significant reduction in the cycle time for testing.
- Since the test case design activity is carried out upfront, it ensures upfront validation.
- Tests are designed by people with the requisite skill sets.

In a nutshell, the V model presents excellent advantages for verification and validation.

1.8 SDLC Models and Testing

Some of the common life cycle models used in software projects are listed below.

- The Waterfall Model
- Iterative Model (RAD)
- Prototyping Model
- Ad Hoc Model

1.8.1 SDLC vs. STLC

This section brings out each model's applicability and its relevance to verification and validation (V&V), as illustrated in the table below.

Waterfall
- Where applicable: clear demarcated phases are present; deliverables of the previous phase can be broken before proceeding to the next phase
- Relevant V&V issues: V&V is postponed by a phase; testing is a downstream activity; time consumed for error detection and correction can be high

Iterative (RAD)
- Where applicable: feedback mechanism is in place; modeling tools are available
- Relevant V&V issues: built-in feedback is available beyond the requirements stage too; tools enhance V&V further

Prototyping
- Where applicable: feedback mechanism is in place; feedback for requirements
- Relevant V&V issues: prototype reuse can produce undesirable effects

Ad hoc (Agile)
- Where applicable: when the development activity is ad hoc
- Relevant V&V issues: minimizing variability during test case design helps to save time in future maintenance of the test suite


The diagram below brings out the mapping between the SDLC and the STLC, mapping each stage in the SDLC to the corresponding stages in the STLC.

1.9 Testing Methodologies

1.9.1 Static versus Dynamic Testing

Static testing is performed without executing the code and is used in the "requirement and design" phases.

Dynamic testing is performed by executing the code, to ascertain that the system is functioning properly; it is used in the "test" phase.

1.9.2 White Box versus Black Box Testing

White box testing is a way of testing the external functionality of the code by examining and testing the program code that realizes that functionality; it falls under structural testing. Black box testing, on the other hand, is testing using specifications: the testing is done from the customer's viewpoint. A few more differences between white box and black box testing are listed below.

White Box testing vs. Black Box testing

- White box: tests that validate the system architecture – how the system was implemented. Black box: tests that validate business requirements – what the system is supposed to do.
- White box: based on knowledge of internal structure and logic. Black box: based on external specifications, without knowledge of how the system is constructed.

Figure: SDLC vs. STLC mapping. SDLC stages – Business Analysis (user interviews), Detailed Requirements, High Level Design, Detailed Design & Development, Unit and Integration Testing, Testing (System, SI & UAT), Transition/Rollout. Corresponding STLC activities by testers – requirements testability review, testing strategy, test analysis & design, unit/IT/functional/non-functional test planning and scripting, test bed setup, test data generation, unit testing, integration testing, functional and non-functional testing, results reviews, and defect tracking. Note: requirements can be defined along many dimensions, e.g. functional, usability, system, performance, quality and technical requirements.


- White box: e.g. unit testing, integration testing. Black box: e.g. system testing, user acceptance testing.
- White box: structure- and design-based, program-logic driven testing. Black box: specification-based, business-transaction driven testing.

Advantages of white box testing: high code coverage – exhaustive (thorough) path testing; program logic is tested; internal boundaries are tested. Advantages of black box testing: user's perspective; focus on features, not implementation; big-picture approach.

1.9.3 Stages of Testing

Unit Testing

Unit testing verifies the functioning in isolation of software pieces which are separately testable. Depending on the context, these could be the individual subprograms or a larger component made of tightly related units. Typically, unit testing occurs with access to the code being tested and with the support of debugging tools, and might involve the programmers who wrote the code.

Integration Testing

Integration testing is the process of verifying the interaction between software components. Classical integration testing strategies, such as top-down or bottom-up, are used with traditional, hierarchically structured software. Except for small, simple software, systematic, incremental integration testing strategies are usually preferred to putting all the components together at once, which is pictorially called “big bang” testing.
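One incremental (top-down) integration step can be sketched in a few lines. The component names below are invented for illustration: a high-level component is integrated and tested first, with a stub standing in for a lower-level component that is not ready yet.

```python
# Sketch of a top-down incremental integration step: the high-level
# component is exercised with a stub standing in for a lower-level
# component. Component names are invented for illustration.
def price_lookup_stub(item_id):
    # Stub for the real pricing component: returns canned answers.
    return {"A": 10.0, "B": 2.5}.get(item_id, 0.0)

def order_total(item_ids, lookup):
    # High-level component under integration test; `lookup` is its
    # interface to the lower-level component, satisfied by the stub.
    return sum(lookup(i) for i in item_ids)

# The integration test verifies the interaction across the interface.
print(order_total(["A", "B", "A"], price_lookup_stub))  # 22.5
```

When the real pricing component is ready it replaces the stub, and the same test is re-run; "big bang" integration would skip the stub and wire everything together at once.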

System Testing

System testing is usually considered appropriate for comparing the system to the non-functional system requirements, such as security, speed, accuracy, and reliability. External interfaces to other applications, utilities, hardware devices, or the operating environment are also evaluated at this level.

User Acceptance Testing

Acceptance testing checks the system behavior against the customer’s requirements, however these may have been expressed; the customers undertake, or specify, typical tasks to check that their requirements have been met or that the organization has identified these for the target market for the software. This testing activity may or may not involve the developers of the system.

Regression Testing

Regression testing is the "selective retesting of a system or component to verify that modifications have not caused unintended effects..." In practice, the idea is to show that software which previously passed the tests still does. Regression testing can be conducted at each of the test levels.
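A minimal sketch of this idea: a saved suite of checks is re-run after a modification to confirm that previously passing behaviour still passes. The function and its checks are invented for illustration.

```python
# Minimal regression-testing sketch: re-run a saved suite of checks after
# a change to detect unintended effects. Function and checks are invented.
def area(width, height):
    return width * height  # imagine this line was just modified

regression_suite = [
    ("typical case", lambda: area(2, 3) == 6),
    ("zero width", lambda: area(0, 5) == 0),
    ("unit height", lambda: area(7, 1) == 7),
]

failures = [name for name, check in regression_suite if not check()]
print("regression failures:", failures)  # an empty list means no regressions
```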

The table below shows the validation techniques used in each test stage (X = technique applies):

Test Stage                | White box | Black box | Structural | Functional | Regression
Unit Test                 |     X     |           |     X      |            |     X
String / Integration Test |     X     |     X     |     X      |     X      |     X
System Test               |           |     X     |            |     X      |     X
User Acceptance Test      |           |     X     |            |     X      |     X

1.9.4 Test Tree

Individual testing levels are explained in detail in the subsequent chapter.


Testing
- Static Testing
  o Code Review
  o Document Review
- Dynamic Testing
  o Structural: Unit Testing; Integration Testing (internal interfaces)
  o Functional: System Testing; Integration Testing (external interfaces); User Acceptance Testing
  o Non-Functional: Performance Testing; Security Testing; Stress Testing


2 Levels of Testing

2.1 Unit Testing

What is Unit Testing?

Unit testing is a type of structural testing. Structural testing (also known as white box testing) compares test program behavior against the apparent intention of the source code. This contrasts with functional testing (also known as black-box testing), which compares test program behavior against a requirements specification. Structural testing examines how the program works, taking into account possible pitfalls in the structure and logic.

Unit testing focuses verification effort on the smallest unit of software design – the module or component. Unit testing is just one of the levels of testing which go together to make the "big picture" of testing a system. It complements integration and system level testing, and it should also complement (rather than compete with) code reviews and walkthroughs. Using the procedural design description as a guide, important control paths are tested to uncover errors within the boundary of the module. Unit testing is "white box" testing, which consists of testing paths branch by branch to produce predictable results.

When is unit testing done, and who does it?

A unit test plan should be prepared during the detailed design phase of development. Test execution takes place when the individual units/modules are ready, and is generally carried out by the developers.

Objectives of Unit Testing

The objective of unit testing is to ensure that a unit performs its specific tasks accurately. In order to achieve a defect free / perfect unit of code, the Unit test plan should focus on the implementation logic, so the idea is to write test cases for every method in the module.

The goal of unit testing is to isolate each part of the program and show that the individual parts are correct. This kind of isolated testing provides benefits such as flexibility when changes are required, facilitates integration, ensures documentation of the code etc.

A unit test plan:

- Is written to check the unit against the standards defined
- Validates fields against the data types, formats and data they hold
- Validates business rules with valid and invalid inputs, recording the behavior of the unit

Any source code should also be tested for optimal usage of memory, in order to achieve the desired performance of the system.
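The field-validation and business-rule items above can be sketched for a single unit. Everything here is invented for illustration: the "age must be 18–65" rule is a hypothetical business rule, exercised with both valid and invalid inputs as the plan prescribes.

```python
# Sketch of unit-testing one unit per the plan above: a data-type (field)
# check plus a business-rule check, driven with valid and invalid inputs.
# The "age must be 18-65" rule is invented for illustration.
def validate_age(value):
    if not isinstance(value, int):
        raise TypeError("age must be an integer")  # field/data-type check
    if not 18 <= value <= 65:
        raise ValueError("age must be 18-65")      # business-rule check
    return True

assert validate_age(30) is True          # valid input accepted
for bad in ("30", 17, 66):               # wrong type, below and above bounds
    try:
        validate_age(bad)
        raise AssertionError("invalid input was accepted")
    except (TypeError, ValueError):
        pass                             # recorded behavior: input rejected
print("unit behaves as specified")
```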

Sample Entry Criteria for Unit Testing

Business Requirements are complete and have been approved to-date

Technical Design has been finalized and approved

Development environment has been established and is stable


Code development for the module is complete

Unit Test Activities

- Identification of testable units/modules
- Preparation of the unit test plan and test cases based on code structure and logic
- Identification of the tools to be used for unit testing, and getting familiarized with them
- Preparation of the test data
- Actual execution of the tests, and verification of output through path traversal

Sample Exit Criteria for Unit Testing

Code has version control in place

All planned test cases are executed and actual results are recorded

Units are working as per the expected results

Defects recorded are tracked to closure

No known major or critical defects prevents any modules from moving to Integration Testing

A testing transition meeting has been held with the developers.

Project Manager approval has been received

2.1.1 Unit Test Techniques

Coverage refers to the extent to which a given verification activity has satisfied its objectives. Appropriate coverage measures give the people doing, managing, and auditing verification activities a sense of the adequacy of the verification accomplished; in essence, they provide an exit criterion for when to stop. That is, what is "enough" is defined in terms of coverage. A large variety of coverage measures exist; here is a description of some fundamental measures and their strengths and weaknesses.

2.1.1.1 Statement Coverage

Executes each statement at least once.

Statement coverage refers to writing test cases that execute each of the program statements. For example, in a section of code that consists of statements that are sequentially executed (that is, with no conditional branches), test cases can be designed to run through from top to bottom.

In the case of a two-way construct like the if statement, covering all the statements requires covering both the then part and the else part of the if statement.

The chief disadvantage of statement coverage is that it is insensitive to some control structures. Let's consider an if-else statement containing one statement in the then-clause and 99 statements in the else-clause. After exercising one of the two possible paths, statement coverage gives extreme results: either 1% or 99% coverage. Decision coverage helps overcome this problem.
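As a sketch of why at least two tests are needed, consider this hypothetical function (the names and values are illustrative, not from the material); a single test would leave one branch's statements unexecuted:

```python
def apply_discount(amount, is_member):
    # "then" branch: a single statement
    if is_member:
        amount = amount - 10
    else:
        # "else" branch: imagine 99 statements here
        amount = amount + 5
    return amount

# Two tests together execute every statement at least once:
result_then = apply_discount(100, True)    # covers the then-branch
result_else = apply_discount(100, False)   # covers the else-branch
```

Running only one of the two calls would report either 1% or 99% statement coverage in the 1-vs-99 scenario described above.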

2.1.1.2 Decision Coverage

Executes each decision direction at least once.

Decision coverage refers to writing test cases for a program that is split into a number of distinct paths. A program (or a part of the program) can start from the beginning and take any of the paths to its completion. Regardless of the numbers of statements in each of the paths, if every path is considered, then we would have covered most of the scenarios.

Decision coverage provides a stronger condition of coverage than statement coverage as it relates to the various logical paths in the code rather than just the statements. Let us take this example-

If ((a = 0) && (b = 0)) then
    Execute abc
Else
    Execute def
End if

Sample test conditions:

a = 0, b = 0 – the "then" part is executed

a = 1, b = 1 – the "else" part is executed

Here decision coverage is taken care of, but not all possible outcomes of the individual conditions (variables) are exercised.
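Rendering the example above as runnable code (a hypothetical sketch that reads the pseudocode's `=` as an equality test), the two sample tests take the decision both ways:

```python
def decide(a, b):
    # the decision from the example: both sub-conditions must hold
    if a == 0 and b == 0:
        return "abc"   # "then" part
    return "def"       # "else" part

# Decision coverage: each decision direction is taken at least once
then_case = decide(0, 0)   # decision true  -> "then" part
else_case = decide(1, 1)   # decision false -> "else" part
```

Note that the combinations (a = 0, b = 1) and (a = 1, b = 0) are never tried, which is exactly the gap condition coverage addresses.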

2.1.1.3 Condition Coverage

Execute each condition with all possible outcomes at least once.

Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other.

It is necessary to have test cases that exercise each Boolean expression and have test cases produce the TRUE as well as the FALSE paths. Let us take this example-

If ((a > 0) && (b = 0)) then
    Execute abc
Else
    Execute def
End if

To cover all possibilities of the condition a > 0, tests are needed where a > 0 and where a = 0.

To cover all possibilities of the condition b = 0, tests are needed where b = 0 and where b <> 0.

Hence the minimal test cases will be-

a = 0, b = 0 – this will execute the "else" part

a = 1, b = 1 – this also will execute the "else" part

By these two test cases, both conditions have taken all their possible outcomes, but only the "else" part is tested.

2.1.1.4 Decision/Condition Coverage

Execute each decision taking all possible outcomes at least once, and each condition taking all possible outcomes at least once.

Decision/Condition Coverage testing is a hybrid measure composed by the union of decision coverage and condition coverage.

Using the same example used for condition coverage, the test cases will be-

a = 0, b = 0 – "else" part is covered

a = 1, b = 0 – "then" part is covered

a = 0, b = 1 – "else" part is covered

a = 1, b = 1 – "else" part is covered

Here both decision outcomes are covered at least once, and both variables are tested for all their possible outcomes.

2.1.1.5 Multiple Condition Coverage

Execute all possible combinations of condition outcomes in each decision at least once.

Multiple condition coverage reports whether every possible combination of Boolean sub-expressions occurs. As with condition coverage, the sub-expressions are separated by logical-and and logical-or, when present. The test cases required for full multiple condition coverage of a condition are given by the logical operator truth table for the condition.
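Continuing the `(a > 0) && (b = 0)` example from condition coverage, a hypothetical sketch of full multiple condition coverage simply enumerates the truth table of the two sub-expressions:

```python
from itertools import product

def decide(a, b):
    # the decision from the condition coverage example
    if a > 0 and b == 0:
        return "abc"   # "then" part
    return "def"       # "else" part

# Multiple condition coverage: every row of the truth table for the
# sub-expressions (a > 0) and (b == 0) is exercised.
outcomes = {(a, b): decide(a, b) for a, b in product([0, 1], [0, 1])}
# only the combination a = 1, b = 0 reaches the "then" part
```

With n independent Boolean sub-expressions, the full truth table needs 2^n test cases, which is why this is the strongest (and most expensive) of the measures described here.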

In addition to the above, the test plan should

Check for standards

Validate fields against data types, formats and data they hold

Validate business rules

Test for resource utilization (CPU, memory etc)

Having done the test planning and execution the actual test results should be recorded. The defects should be recorded and tracked to closure.

2.1.1.6 Negative Testing

Negative testing attempts to show that the module does not do anything that it is not supposed to do. Negative testing applies to Integration and System testing as well.

Negative testing looks for two kinds of failure: the system showing an error when it is not supposed to, and not showing an error when it is supposed to.

Example test case – entering special characters (@, $, % etc.) in a phone number field.
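The phone number example might be automated as below; the validation rule itself (digits only, 7 to 15 of them) is an assumption made for the sketch:

```python
import re

def is_valid_phone(value):
    # assumed rule for this sketch: 7-15 digits, nothing else
    return bool(re.fullmatch(r"\d{7,15}", value))

# Negative tests: invalid input must be rejected...
reject_special = not is_valid_phone("555@12%4$")   # special characters
reject_empty = not is_valid_phone("")              # empty field
# ...while valid input must not trigger an error
accept_valid = is_valid_phone("5551234567")
```

A defect exists if any of the three checks fails: either an error appears when it should not, or no error appears when it should.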

A Good Unit Test Case


Should focus on code and logic

Check whether the code implements what the designer intended

Check if the if condition for each conditional statement is correct

Include test cases for loops/conditions

Check that all the special cases work correctly (for example, if a date field accepts only a particular format, say 31-Dec-06, test cases to check this should be present)

Check that all the error cases are correctly detected

Example Unit Test Case

Test Case: Type 10 characters in the name field with max length 10, click on submit
Expected Result: Successful submit

Test Case: Type numbers in the name field, click on submit
Expected Result: Error displayed
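The two example cases could be automated roughly as follows; the validation logic inside `submit_name` is assumed for the sketch, since the material only gives the expected results:

```python
def submit_name(name, max_length=10):
    # assumed validation mirroring the example test cases above
    if len(name) == 0 or len(name) > max_length:
        return "Error displayed"
    if not name.isalpha():
        return "Error displayed"
    return "Successful submit"

# exactly 10 letters fits the max length of 10
ten_chars = submit_name("ABCDEFGHIJ")
# numbers in a name field should be rejected
numeric = submit_name("12345")
```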

2.1.2 Tools in Unit Testing

2.1.2.1 Need for Unit Testing Tools

Assume you want to perform unit testing for modules from the code tree of a commercial product. It's not unheard of to have millions of lines of code. Is it feasible to manually test this for statement coverage? No.

Also, special conditions such as memory leaks are seldom detected predictably without code analyzers or profiling tools, which let you profile an application and examine it for performance and memory usage problems through different profiling views. Hence, using tools makes unit testing inexpensive and efficient.

2.1.2.2 Identification of the Right Tool

A few key factors which would help select the right tool are:

A good unit testing tool should be able to generate and execute realistic unit test cases that will verify the code's functionality, construction and robustness

Should look at the program code and try to find out if there are any possible uncaught runtime exceptions

Should Verify that the program produces the correct output for each input specified

Should be able to test programs individually irrespective of the program’s dependency on any other external function calls

Help in improving code coverage through any additional features that the tool can provide

Perform regression testing to check that the program behavior doesn’t break when modified


A few unit testing tools will also perform static analysis along with unit testing, checking for coding standards.

Some commonly used Unit Test Tools

Tool Name Vendor Details For Further Reference

Cactus Jakarta

Cactus is a simple test framework for unit testing server-side java code. The intent of Cactus is to lower the cost of writing tests for server-side code.

http://jakarta.apache.org/cactus

NUnit NUnit

NUnit is a lightweight unit testing framework that can be used to write and run unit tests in all .NET languages. http://www.nunit.org/

GJTester Treborsoft

GJTester is a Java testing tool from Treborsoft. This tool especially addresses two dimensions of software testing: unit testing and regression testing of Java programs or modules. http://www.gjtester.com

TestMentor Silvermark

Test Mentor – Java Edition is a unit, integration, and functional test and modeling tool for Java developers http://www.silvermark.com

Jtest Parasoft

JTest is an automated JAVA Unit Testing tool from Parasoft. It supports web technologies like HTML, ActiveX, XML, DHTML, JavaBeans. It also supports EJB, JSP, Servlets. http://www.Parasoft.com

C++Test Parasoft

A C/C++ unit testing tool that automatically tests any C/C++ class, function, or component. http://www.parasoft.com/

JUnit JUnit

A regression testing framework used by developers who implement unit tests in Java. http://www.junit.org/

AQtest

AutomatedQA corp

AQtest automates and manages functional tests, unit tests and regression tests, for applications written with VC++, VB, Delphi, C++Builder, Java or VS.NET http://www.automatedqa.com/

JsUnit Edward Hieatt

JsUnit is a Unit Testing framework for client-side (in-browser) JavaScript. http://www.jsunit.net/

MinUnit Jera Design

A minimal unit testing framework for C. http://www.jera.com/techinfo/jtns/jtn002.html

csUnit CsUnit

csUnit is a unit testing framework for the Microsoft .NET Framework. It targets test driven development using .NET languages such as C#, Visual Basic .NET, Visual J# and managed C++. http://www.csunit.org/

2.1.3 Measures & Metrics

2.1.3.1 Coverage Measure

To evaluate the thoroughness/coverage of the executed tests, the elements covered should be monitored so that the ratio between the number of elements covered and their total number can be measured.

2.1.3.2 Fault Density Measure

A program under test can be assessed by counting and classifying the discovered defects. Fault density is measured as the ratio between the number of faults found and the size of the program.
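Both measures are simple ratios. A minimal sketch, assuming KLOC (thousand lines of code) as the size unit, since the material does not fix one:

```python
def coverage_ratio(elements_covered, total_elements):
    # thoroughness of the executed tests, in [0, 1]
    return elements_covered / total_elements

def fault_density(faults_found, size_kloc):
    # faults per KLOC; the size unit is an assumption for this sketch
    return faults_found / size_kloc

ratio = coverage_ratio(180, 200)   # e.g. 180 of 200 statements exercised
density = fault_density(12, 4)     # e.g. 12 faults found in a 4 KLOC unit
```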

2.1.4 Summary

Software unit testing is an integral part of an efficient and effective strategy for testing systems. It is best performed by the designer of the code under test. Usage of unit test tools helps significantly reduce debugging time and cost, and provides improved quality of the application being developed.

A unit test can be evaluated for adequacy by checking that

All the statements have been exercised by at least one test,

Each conditional statement has been exercised at least once each way by the tests, and

The tests exercise the unit over the full range of operational conditions it is expected to address.

2.2 Integration Testing

What is Integration Testing?

Two or more units of a product or a module (part of a product) interact with each other by means of an interface. These interacting units should be compatible with each other and with the rest of the units, and therefore utmost care should be taken to detect errors in the interfaces. Integration testing helps in identifying the issues with interfaces.

Example of Interface error - Timing Errors

The called and the calling component operate at different speeds and out-of-date information is accessed

Depending upon the level, there can be intra-module, inter-module interface or external interfaces. Integration testing should cover all these interfaces.

Intra-module Interface: When two or more units within a module are interacting through an interface it is known as intra-module interface. The unit may be a sub-module or an individual program.


Inter-Module Interface: When two or more modules within a product are interacting with each other, then the common interface between them is known as inter-module interface.

External Interface: When a module in a product interacts with a unit/module outside the product then the interface between them is known as external interface.

The various interfaces are depicted in the following figure.

When to do Integration Testing, Who does it?

Once individual units have been sufficiently tested, one tests the various interfaces and their aggregations, which leads to a successful system build. Integration testing is carried out after the completion of Unit Testing and as a prelude to System Testing.

Integration testing is generally carried out by testers or developers depending on the situation.

The effectiveness of Integration testing depends also on how well the units are tested individually, because any investment of Integration test time to find issues other than in interfaces (For example, Unit1 is faulty) is a distraction.

Objectives of Integration Testing

Integration testing is carried out to validate and ensure that multiple parts of the system interact with each other as per the system design. The data transfer between the intra, inter and external interfaces also should be thoroughly tested.

Sample Entry Criteria for Integration Testing


Unit tested modules ready for integration testing

Outstanding issues and defects have been identified and documented

Test scripts and schedule are ready

The integration testing environment is established

Integration Test Activities

The activities carried out during the Integration testing are-

Identify various interfaces (inter, intra & external) to be tested,

Identify various transactions happening between interfaces which needs to be tested,

Prepare the integration test plan and test cases,

Set up an isolated development environment for testing,

Identification and set up of test data,

Execute test cases and record result, and

Record the defects and track them to closure.

Sample Exit Criteria for Integration Testing

All interfaces involved passed integration testing and meet agreed upon functionality and performance requirements

Outstanding defects have been identified, documented, and presented to the business sponsor

The implementation plan is in final draft stage

A testing transition meeting has been held and everyone has signed off

2.2.1 Integration Testing Approaches

2.2.1.1 Methods of Integration

Incremental

The unit tested modules are added one by one and each resultant combination is checked and tested. This process repeats till all modules are integrated and tested. This requires that various aspects of an application’s functionality be independent enough to work separately before all parts of the program are completed.

Here correction is easy, as the source and the cause of an error can be easily detected.

Non-incremental (Big Bang)

The modules, unit tested in isolation, are integrated in one go and the integration is tested. Correction is difficult because isolation of causes is complicated. For this reason, this method is the least effective, even though it is often used for its simplicity.


2.2.1.2 Strategies of Incremental Integration

Integration testing can proceed in a number of different ways using strategies like bottom-up, top-down and sandwich.

Bottom-Up Strategy

The process starts with the low-level modules of the program hierarchy in the application architecture. Test drivers, which are simple programs designed specifically for testing the calls to lower layers, are used. A driver provides the emerging low-level modules with simulated inputs and the necessary resources to function.

Top-Down Strategy

The process starts at the top of the program hierarchy in the application architecture and travels down its branches. Stubs, which are dummy software components used to simulate the behavior of a real component, are used until the actual program is ready. They do not perform any real computation or data manipulation. A stub can be defined as a small program routine that substitutes for a longer program, possibly to be loaded later or located remotely.

Sandwich Strategy

The sandwich strategy is a hybrid of the top-down and bottom-up methods. In this case, instead of going completely top-down or bottom-up, a layer is identified in between. Dummy interface modules, viz. stubs and drivers, are used in integration testing.
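A minimal sketch of these ideas (the module names are hypothetical): a stub stands in for an unfinished lower-level pricing module so that a driver can exercise the higher-level module under test:

```python
# Top-down integration sketch: the high-level order module is tested
# before the real pricing module exists, so a stub takes its place.

def pricing_stub(item):
    # stub: no real computation or data manipulation, just a canned value
    return 10.0

def order_total(items, price_of=pricing_stub):
    # module under test, wired to the stub by default
    return sum(price_of(item) for item in items)

# Driver: a small program written only to exercise the unit under test
total = order_total(["book", "pen"])   # 2 items priced by the stub
```

In a bottom-up strategy the roles reverse: the real pricing module would exist first, and `order_total` would be replaced by a driver calling into it.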

Guidelines for choosing Integration Method

Method: Top-down
Advantages: Key interface defects are trapped earlier; finds architectural errors
Disadvantages: May be difficult to develop program stubs; core functionality is tested late in the cycle

Method: Bottom-up
Advantages: A boon if major flaws occur towards the bottom of the program; allows testing tricky low-level modules early
Disadvantages: Need to complete the entire system design before testing starts; key interface defects are trapped late in the cycle

Factor: Clear requirements and design
Suggested Integration Method: Top-down

Factor: Dynamically changing requirements, design, architecture
Suggested Integration Method: Bottom-up

Factor: Limited changes to existing architecture with less impact
Suggested Integration Method: Big bang

Factor: Combination of the above
Suggested Integration Method: Select one of the above after careful analysis


2.2.2 Measures & Metrics

Effort Measure

The effort spent is measured for activities like test plan preparation, test case creation and their review & rework, test execution, and defect detection and logging.

Defects Measure

A few of the defect measures are the number of defects discovered in the integrated modules and the number of test case/script defects.

Productivity Measure

The productivity measure is described in terms of the number of test cases or scripts prepared per person-hour of preparation effort, and the number of test cases or scripts executed per person-hour of execution effort.

Defect Detection Rate

The defect detection rate is the number of valid defects detected divided by the test execution effort.

2.2.3 Summary

If integration testing is started without first having thoroughly tested the individual units, then when an error occurs it is difficult to know what caused it. There are many possibilities when testing units together, and the number of possibilities increases with the number of units being tested; even with a small number of units it is very difficult to determine where an error lies. Test your units sufficiently well and then start integrating them slowly, testing in groups of two, then three, and so on, until you are testing the entire suite. At each point you will be able to narrow down the likely causes of an error far more quickly than if you had thrown all the pieces together with no individual unit testing at all.

2.3 System Testing

What is System Testing?

System Testing is a Black-box testing technique wherein the entire application or the system as a whole is tested against its Functional & Non-Functional specifications irrespective of the program structure and logic.

Who does system testing?

It is typically performed by an independent team in a near-production environment, after the Unit testing & Integration testing have been successfully accomplished.

Objectives

The objective of system testing is to uncover functional issues related to the end-to-end integration of the systems involved in a solution, and to validate that the system developed adheres to the system requirement specifications.

Entry Criteria

Unit testing for each module has been completed and approved; each module is under version control.


An incident tracking plan has been approved.

A system testing environment has been established

The system testing schedule is approved and in place

System Testing Activities

Analyze the requirements and identify the types of testing to be performed,

Gather details about testing priorities and focus,

Prepare the test strategy for different types of testing,

Prepare the requirement traceability matrix (RTM),

Carry out automation feasibility analysis and finalize the tool to be used,

Create, review & baseline test cases & automation scripts,

Set up the test environment which is close to production and set up the test data,

Execute tests as per plan, record results,

Report defects and track them to closure, and

Retest the defect fixes, regression testing of application.

Exit Criteria

All appropriate parties have approved the completed tests

A testing transition meeting has been held and the developers have signed off

Application meets all documented business and functional requirements

No known critical defects prevent moving to User Acceptance Testing

Testing Requirements

In order to perform system testing, certain prerequisites are essential. These are mainly as follows:

Test environment with integrated software modules which successfully passed the unit and integration testing should be available in order to do the system testing.

System test suites (Test suite is the collection of test cases along with the test data) ready to be executed.

Live/Simulated environment should be ready.

2.3.1 System Integration Testing (SIT) – a sub-type of System testing

System Integration Testing (SIT) involves testing the end-to-end integration of the various systems or system components to validate whether they work cohesively to provide an end-to-end solution for customer’s business. Systems involved could be distributed across various platforms like legacy, client-server, mainframe, AS/400 etc.


System Integration Testing

2.3.2 Types of System Testing

Functional Testing: This type validates the functionality of the system as to whether the system does what it is supposed to. For example, a banking solution should allow the customer to open a new account or close it.

Some forms of Functional Testing are:

Build Acceptance Testing (Should the Test team accept the Build from Dev Team, for a Test effort?)

Sanity or Smoke Testing (Do the main functionalities work?)

Regression Testing (To make sure that application changes do not have undesired impact on unchanged parts of the application.)

Ad-hoc, Random, and Exploratory Testing (unplanned but creative testing)

Regression testing is done at almost all levels of testing, from unit testing through system testing.

Non Functional Testing: Validates the non-functional requirements which specify system’s quality characteristics/attributes like performance/security/availability etc. While Functional testing demonstrates WHAT the product does, Non-Functional testing demonstrates HOW WELL the product behaves.

Some forms of Non-Functional Testing are:

Performance Testing – Testing that is performed to determine how fast some aspect of a system performs under a particular workload.

Stress Testing

Volume Testing

Load Testing

Endurance Testing


Compatibility Testing – Compatibility testing is a formal process which evaluates whether the system is compatible with other systems with which it should communicate.

Scalability Testing – The purpose of scalability testing is to determine whether your application scales for the workload growth.

Security / Penetration Testing – Testing whether a system preserves the confidentiality, integrity and availability of information.

Usability Testing – Usability testing is a means for measuring how well people can use some human-made object (such as a web page, a computer interface, a document, or a device) for its intended purpose.

2.3.3 Testing and Test Case Generation Techniques

The most widely used and well known functional test techniques are explained in this section.

2.3.3.1 Boundary Value Analysis

Boundary value analysis uses the same analysis of partitions as equivalence partitioning. However, boundary value analysis assumes that errors are most likely to exist at the boundaries between partitions. Boundary value analysis consequently incorporates a degree of negative testing into the test design, by anticipating that errors will occur at or near the partition boundaries. Test cases are designed to exercise the software on and at either side of boundary values.

In a program which edits credit limits within a given range ($10,000 - $15,000), boundary analysis would test:

Low boundary +/- one ($9,999 and $10,001)

On the boundary ($10,000 and $15,000)

Upper boundary +/- one ($14,999 and $15,001)

There are some techniques like Robustness testing, Worst case testing, Special value testing, Robust worst case testing which are the derivatives of the boundary value analysis
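The credit-limit example can be sketched as follows, assuming (for the sketch) that both boundaries are inclusive:

```python
def within_credit_limit(amount):
    # valid range from the example: $10,000 - $15,000 inclusive
    return 10_000 <= amount <= 15_000

# boundary values: each boundary plus one value on either side of it
expected = {
    9_999: False, 10_000: True, 10_001: True,
    14_999: True, 15_000: True, 15_001: False,
}
actual = {amount: within_credit_limit(amount) for amount in expected}
```

An off-by-one defect (e.g. writing `<` instead of `<=`) would flip exactly one of the boundary results, which is why these six values are the ones worth testing.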

2.3.4 Equivalence Partitioning

Equivalence partitioning is a much more formalized method of test case design. It is based upon splitting the inputs and outputs of the software under test into a number of partitions, where the behavior of the software is equivalent for any value within a particular partition. Data which forms partitions is not just routine parameters. Partitions can also be present in data accessed by the software, in time, in input and output sequence, and in state.

For example, a program which edits credit limits within a given range ($10,000 - $15,000) would have three equivalence classes:

< $10,000 (invalid)

Between $10,000 and $15,000 (valid)

> $15,000 (invalid)


As software becomes more complex, the identification of partitions and the inter-dependencies between partitions becomes much more difficult, making it less convenient to use this technique to design test cases. Equivalence partitioning is still basically a positive test case design technique and needs to be supplemented by negative tests.
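A sketch of the three partitions from the credit-limit example (the function name is hypothetical); one representative value per partition exercises each equivalence class:

```python
def credit_limit_class(amount):
    # the three equivalence classes from the credit-limit example
    if amount < 10_000:
        return "invalid"      # below the valid range
    if amount <= 15_000:
        return "valid"        # within $10,000 - $15,000
    return "invalid"          # above the valid range

# one representative value per partition is enough for this technique
low = credit_limit_class(5_000)
mid = credit_limit_class(12_500)
high = credit_limit_class(20_000)
```

Because behavior is assumed equivalent within a partition, testing $12,500 stands in for every other valid amount; boundary value analysis then supplements this with the edge values.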

2.3.5 Error Guessing

Error guessing is based mostly upon experience, with some assistance from other techniques such as boundary value analysis. Based on experience, the test designer guesses the types of errors that could occur in a particular type of software and designs test cases to uncover them. This is probably the single most effective method of designing tests which uncover bugs.

For example, in an example where one of the inputs is the date, a tester may try February 29, 2000.

A checklist can be maintained with the benefit of experience gained in earlier unit tests, helping to improve the overall effectiveness of error guessing. Decision tables and cause-effect graphing are some of the other techniques, which are rarely used.
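A sketch of that particular guess (the buggy rule shown is hypothetical, chosen because naive leap-year logic often treats all century years as non-leap, while 2000 is in fact a leap year):

```python
from datetime import date

def naive_is_leap(year):
    # plausible defect a tester might guess: century years never leap
    return year % 4 == 0 and year % 100 != 0

def correct_is_leap(year):
    # full Gregorian rule: divisible by 400 overrides the century rule
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

probe = date(2000, 2, 29)   # constructing it shows the date is real
```

The probe input February 29, 2000 passes through correct code but is rejected by the guessed defect, which is exactly what makes it a good error-guessing test case.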

2.3.6 Usage of Tools in System Testing

The need for using tools arises when the test data for the system is very large, the scheduled time for testing is not enough, regression has to be carried out more than once, or the test cases are huge in number.

The following provides the details of commonly used tools for functional testing and their features – WinRunner 7.6, QTP 8.2, Rational Robot (2003.06.00.436.000), SilkTest 7.5, QARun V4.8 and Rational Functional Tester 6.1.

Platform
WinRunner: Windows, Citrix and Microsoft Terminal Server environments
QTP: Windows 98, Windows 2000, Windows NT 4.0, Windows Me, or Windows XP
Rational Robot: Windows
SilkTest: Windows, Linux & Solaris
QARun: Windows 98, Windows 2000, Windows NT 4.0, or Windows XP
Rational Functional Tester: Windows, Linux Red Hat

Browser
WinRunner: Internet Explorer all versions, Netscape 4.05 to 4.79 and 6.1, 7.1
QTP: Netscape, Internet Explorer, AOL (America Online), applications with embedded web browser control
Rational Robot: Internet Explorer and Netscape, except that some specific objects and actions cannot be replayed back in Navigator
SilkTest: Netscape 4.5+ or IE 4.0+
QARun: IE 4.01+, Netscape Navigator 4.0.8, Netscape Communicator 4.5, 4.6, and 4.7
Rational Functional Tester: IE 4.x, 4.7x, 5.x, 6.x and Netscape 6.2.x, 7.01, 7.02, 7.1 and Mozilla 1.4, 1.5, & 1.6

Technology
WinRunner: HTML, DHTML, JavaScript, VBScript, ActiveX, Java apps, MFC (C, C++), legacy applications
QTP: Windows MFC (Microsoft Foundation Classes), Visual C++, Visual Basic, Web, ActiveX, mainframe (3270/5250) and Microsoft .NET-based applications
Rational Robot: HTML, DHTML, JavaScript, VBScript, ActiveX, Java apps, MFC (C, C++)
SilkTest: Web, Java, ERP/CRM, character-based, wireless and PDA applications, client/server applications and emulator-based applications: UNIFACE, Siebel, SAP, Oracle, PeopleSoft and PowerBuilder
Rational Functional Tester: Web, Java (SWT, AWT, JFC), any VS.NET application running under the .NET Framework

Script Type
WinRunner: Yes, visual based; Rational Robot: text based; SilkTest: Yes, text based; QARun: text; Rational Functional Tester: No

Script Language
WinRunner: TSL (Test Script Language); QTP: VBScript; Rational Robot: SQABasic (VB-like language); SilkTest: 4Test; QARun: QARun Script; Rational Functional Tester: Java or Visual Basic .NET

Automated Test Parameterization Facility
WinRunner: Yes; QTP: Yes; Rational Robot: No; SilkTest: Yes; QARun: No; Rational Functional Tester: Yes

Exception Handling
WinRunner: Yes; QTP: Yes; Rational Robot: Yes; SilkTest: Yes; QARun: Yes

Test Result Analysis
WinRunner: a result file is generated per test case; QTP: a result file is generated for each test run; Rational Robot: test results are displayed in Rational TestManager; SilkTest: a result file is generated per test run; QARun: Yes

Version Control (support to update and revise automated test scripts while maintaining old versions of each test)
WinRunner: Yes; SilkTest: No, but easy to maintain with any VCS; QARun: Yes; Rational Functional Tester: Yes, with Rational ClearCase LT

Integration to Test Management Tool
WinRunner: Yes – TestDirector; QTP: Yes – TestDirector; Rational Robot: Yes – TestManager; SilkTest: built-in facility – SilkOrganizer; QARun: Yes; Rational Functional Tester: Yes

Editor/Debugger
All six tools: Yes

Features of test management tools (TestDirector, Quality Center) are as below.

Platform
TestDirector: Windows, Citrix and Microsoft Terminal Server environments
Quality Center: Windows 98, Windows 2000, Windows NT 4.0, Windows Me, or Windows XP

Browser
Both: any browser that supports ADO and runs on an MS Windows OS

Repository Database (for a scalable solution)
TestDirector: MS Access, MS SQL, Sybase, Oracle
Quality Center: Microsoft SQL, Oracle

Both tools also provide:
Integration between the test management product and the regression and load testing products
Requirements management and traceability
Customizable user interface and data storage
Coverage analysis (graphs) in the Requirements module
Automatic traceability notifications (alerts and follow-up alerts)
VBScript for controlling workflow, which enables fields and values to be restricted and changed dynamically
SiteScope, a web environment monitoring solution which provides tools to monitor key aspects

Features of performance and load testing tools (LoadRunner, WebLoad) are as below.

Platform
LoadRunner: Windows, Linux and Unix
WebLoad: Central Console – Microsoft Windows XP, 2000, 2003; Load Machine – Microsoft Windows XP, 2000, 2003, Sun Solaris 2.6 and above, Red Hat V7.3 and above; Probing Client – Microsoft Windows XP, 2000, 2003, Sun Solaris 2.6 and above, Red Hat V7.3 and above

Browser
LoadRunner: IE 5.x or higher, Netscape Navigator 4.x, 6.x
WebLoad: IE, Netscape & Mozilla

Memory Requirement
LoadRunner: Load Generator – at least 1 MB RAM per non-multithreaded Vuser or at least 512 KB per multithreaded Vuser; Controller, VuGen, Analysis – 128 MB or more
WebLoad: minimum 512 MB, 1 GB recommended

Scripting Type
LoadRunner: text and GUI (icon) based
WebLoad: text and GUI based

Scripting Language
LoadRunner: ANSI C
WebLoad: JavaScript

Integrated network monitoring: LoadRunner Yes; WebLoad No

Ability to drill down on application performance across the network: LoadRunner No; WebLoad Yes (possible with WebLoad Analyzer)

Integrated server monitoring with results correlated to load: both Yes

At runtime, the ability to monitor the number of users which have succeeded or failed for a given transaction: both Yes

MS SQL Server DB-Lib: LoadRunner Yes; WebLoad No

Oracle 7.x and 8.x: LoadRunner Yes; WebLoad No

Multi-threaded virtual user execution: both Yes

Checks where the performance delays occur – network or client delays, CPU performance, I/O delays, database locking, or other issues at the database server: LoadRunner No; WebLoad Yes (with the help of WebLoad Analyzer)

2.3.7 Best Practices

Use automation tools if the testing is repetitive in nature.

A good understanding of the business requirements is very important before test cases are prepared.

Mapping the requirements to the specification ensures the completeness and coverage of the test cases prepared.

2.3.8 Summary

System testing is the final destructive testing phase before acceptance testing. In a nutshell, the following points ensure the effectiveness of system testing:

The system test planning needs to be driven by the requirements.

Clear functional and non-functional specifications should be available.

Good unit and integration testing should precede system testing.

Tools can further enhance the effectiveness.

2.4 User Acceptance Testing (UAT)

What is UAT?

Acceptance testing enables the customer to determine whether or not to accept a system, and is carried out when the test team has completed the System testing.

These tests are based on the Users’ needs or the Requirements Specification, and the idea is that if the software works as intended during the simulation of normal use, it will work just the same in production.

When and Who does UAT?

Acceptance test planning should start during the requirement analysis phase, to identify gaps in requirements and to verify their testability early, which avoids heartburn later. Typically, the users or their technical representatives perform this final verification of the intended business function in a simulated environment that is very close to the real environment.

Entry Criteria

Integration testing signoff was obtained

Business requirements have been met or renegotiated with the Business Sponsor or representative

UAT test scripts are ready for execution

The testing environment is established

Security requirements have been documented and necessary user access obtained

Activities

Prepare the acceptance test plan and test cases. Checks to be included in acceptance test cases are

Check whether every function identified in the functional specification for this software is present and functioning appropriately.

Check whether the software can process the maximum volume of data

Check whether all interfaces with other systems are functioning correctly.

Check whether the software provides the required access authorization.

Check whether the software provides a proper mechanism for handling errors and whether exception handling follows a logical flow.


Check whether the software does not leave corrupted data when an error such as a hardware failure occurs.

Check whether the software has restart/recovery procedures and whether they function correctly.

Check whether the software is user friendly: error messages, help guidelines and popup messages are meaningful, navigation between screens is smooth, and the GUI is user friendly.

Check whether the User Manual is understandable and its information is presented in a user-friendly manner.

Check whether the software can support multiple hardware devices and software.

Check whether the software can be installed following the installation guidelines.

Execute tests as per plan, record results

Report defects and track them to closure

Retest the defect fixes, regression testing of application

Exit Criteria

UAT has been completed and approved by the user community in a transition meeting

Change control is managing requested modifications and enhancements

Business sponsor agrees that known defects do not impact a production release, i.e. no remaining defects are rated severity 1, 2 or 3

2.4.1 Types of UAT

There are two types of acceptance testing, as given below.

Alpha Testing: Simulated or actual operational testing performed by end users within a company but outside development group.

Beta Testing: Simulated or actual operational testing performed by a sub-set of actual customers outside the company.

Testing Requirements

User requirement document is available

Acceptance criteria, i.e. pre-established standards or requirements the product or project must meet, are available

Environment with system tested application is available at the client’s place.

2.4.2 Usage of Tools

System testing tools are used in UAT too.

Best Practices


System should be validated based on the real world business scenarios.

Testers should work with users for clearly defining the criteria required for the product.

2.4.3 Summary

Acceptance testing is critical to the success of a software project. An automated acceptance testing framework can provide significant value to projects by involving the customer.

Although acceptance testing is a customer and user responsibility, if testers help develop an acceptance test plan, include that plan in the system test plan, and perform or assist in performing the acceptance test, test duplication can be avoided and the effectiveness of testing improved.

2.5 Test Automation

Introduction – Business Case for Test Automation

In the previous chapters, we have seen various types of testing and how test cases are developed for them. When these test cases are run and checked, the quality of the product and the underlying testing process will certainly improve. However, this raises a challenge: considerable additional time is required to run those test cases.

Also, the number and complexity of features increases with time as a by-product of changes. Unfortunately, the number of test engineers and the time invested in testing each new release either remains flat or may even decline. As a result, the test coverage steadily decreases which increases the risk of failure.

One way to overcome these challenges is to automate most of the test cases that are repetitive in nature.

What is Test Automation?

Test Automation is the act of converting test cases to machine executable code to control the test execution & comparison of actual results to expected results. In other words, developing software to test software is called Test Automation.

Characteristics of a Test Automation suite

The automation suite developed for testing is expected to be reused over a number of builds. Thus products or large applications where many releases are expected are the ideal candidates for regression automation.

Automated test suite should be Robust, Maintainable and Data-driven.

Robust, because the suite can handle unexpected conditions during execution that arise from defects in the application being tested.

Maintainable, because in case of any functional or requirement changes, the scripts developed can be modified, or new scripts added to the suite, with minimum effort.


Data driven, because the automation suite can execute from different machines for a number of builds; data is part of the suite but kept separate from the scripts.

Such a suite provides consistency, reliability and repeatability in test execution. The same tests can be repeated exactly in the same manner (extremely important if the test is complicated, such as involving a large data setup) every time the automated scripts are executed, thus eliminating manual testing errors.

Test automation allows quicker and more frequent execution of existing tests on newer version(s) of a program/application, thereby enabling a huge saving of costs and time. It also allows execution of tests that would be difficult or impossible to do manually (say, performing login from 5000 clients simultaneously), or detection of bugs that can be uncovered only with long run (say, memory leaks after data traffic over 24 hours).

The time saved in automation can also be leveraged to perform some extra or more creative manual testing, thus improving the test coverage. Moreover, the automated tests can be run overnight, saving the elapsed time for testing, thereby enabling the product to be released more frequently.

What to Automate, Scope of Test Automation

It’s a perennial myth that automation testing can be applied to 100% of the test requirements. Trying to automate everything possible is the surest way of accomplishing poor ROI from test automation.

2.5.1 Good Candidates for Test Automation

Repetitive tests that need to be run for every build of the application, such as sanity checks, regression tests, data-driven tests, or multiple OS/browser combinations.

Time consuming tests, where automation can speed up certain tasks such as configurations, and save time.

Tests requiring great deal of precision, where manual testing can induce errors.

2.5.2 Bad Candidates for Test Automation

One-time or short-term tests, such as testing using hot fixes.

Product’s life cycle is small, or the application is going to be obsolete.

Tests requiring specialized skills such as Ad hoc/random tests, and interface tests involving hardware & peripheral devices, or batch program tests.

Subjective tests such as usability test involving look and feel analysis.

Furthermore, the following factors should be considered to ascertain the readiness for test automation:

Stability of the application (so that we aren’t always debugging the application issues while creating test scripts )

Proneness of the application to change because of new functionalities or outstanding defects


Proneness of the test cases to change (say, test cases that test compliance to standards)

Testing stages and their amenity to automation:

Test Stage | Type of Testing | Amenity to Automation | Influencing Factors
Regression Testing | Black box | High | Focus is to test the impact on the stability of the original application; tests are repetitive (the same functionality needs to be tested for every release)
Model Office Testing | Black box | High | Application will be highly stable; final level of acceptance tests, and therefore fewer outstanding defects
Performance Testing | Black box | High | Application will be fairly stable; tests are repetitive in nature (the same tests need to be repeated for various parameters)
User Acceptance Testing | Black box | Medium to High | Application will be fairly stable; depends on the number of outstanding defects from system testing; number of test cycles planned (at least 2 cycles should be planned)
System Testing | Black box | Medium | No repeatability / no stability in terms of functionality; high cost of automation
Integration Testing | White box | Low | System-level integration testing; white-box rather than functional testing; no repeatability and no stability in terms of both technical and functional aspects
Unit Testing | White box | Low | Component-level white-box testing; no repeatability and no stability, as the system is still in the construction phase

Technical feasibility analysis (POC) and economic feasibility analysis (ROI) need to be considered before any major test automation effort.

2.5.3 Test Automation Process

The test automation process involves the following life cycle stages.


Figure 1 - Test Automation Lifecycle Stages

Requirements and expectations have to be defined clearly and explicitly before starting any test automation activity. Decisions such as what to automate, how to automate it, and which modules can be automated should be made at the outset, to prevent confusion after the automation process has been flagged off.

Once the requirements and expectations have been clearly highlighted and the feasibility studies yield a positive result, methodical planning should follow. This includes defining the approach, deciding which framework is to be used and, subsequently, estimating the time and cost involved.

Environment set-up is one of the critical aspects of the test automation process. The test team may not be involved in this activity if the customer or development team provides the test environment, in which case the test team is required to do a readiness check of the given environment. The test environment determines the software and hardware conditions under which a work product is tested.

In the test script generation phase, test scripts are created, debugged, and verified by the test automation team. The test bed is set up and a dry run is performed. After baselining the test scripts, testing and defect logging are carried out based on the test requirements. The development team members analyze the defects logged and fix them; they also prepare a document on the defects and the associated solutions.

Documentation is an important aspect of the test automation process, wherein the automation team members document the test procedures for end-user usage.

2.5.4 Case Study - Automation of Testing Processes

Infosys designed, developed and executed the automated test framework using Mercury tools for a leading global insurance company.

Context

The system was being tested manually and found to be stable. Regular change requests and the unavailability of a specialized in-house testing team increased the client's cost of testing. Coverage and reusability were identified as the needs.

Infosys Services Offered

Study of the existing system.

Automation Feasibility Analysis

Schedule, Estimation, Test strategy

Automation script generation

Documentation

Defect Logging and Tracking

Automated Test Script Execution

Test Environment Set up


Preparation of efficient test strategy to complete automation in phases to de-risk the whole project.

Identification of suitable test case candidates for Automation.

Conducting POC for checking the automation feasibility.

Setting up the testing Environment/bed.

Preparation of test data.

Creation of a robust automation framework to facilitate multi-user concurrent usage.

Integration of automated test scripts with Test management tool to allow batch execution and effective management of scripts and defects.

Impact

Major cut-down in effort and cost due to automation.

Reduced turnaround time for testing, facilitating more frequent releases.

The processes followed reduced rework time due to minor changes.

2.5.4.1 Selecting a Test Tool

Selection of the right tool is a very important aspect of test automation. The tool evaluation criteria, parameters for tool selection and cost calculations are used to select the right tool for a specific test automation requirement.

Tool Evaluation Criteria

Area | Description
Functional Evaluation | Test requirement management; test script planning; advanced planning; test script execution; test script defect management; test execution reporting
Technical Evaluation | Application architecture; technical architecture; deployment; security; development and maintainability
Vendor Evaluation | Business direction; technical support; customer service; financial viability

Table 1 - Test Tool Evaluation Criteria

Parameters for test tool selection include, but are not limited to:

Application requirements

Supporting Operating System

Supporting Language Platforms

Supported Technologies

GUI Standard Requirement

Tool Features

The following test tool evaluation was carried out to identify the right tool for functional test automation. Two tools were evaluated in the following areas. Tool 1 was selected for the automation as the application required the tool to support the Citrix environment.

Area | Description | Weight % | Average, Tool 1 (scale 1 to 3) | Average, Tool 2 (scale 1 to 3)
Functional Evaluation | Test requirement management | 10 | 2.833 | 2.833
 | Test script planning | 30 | 2.675 | 2.3
 | Advanced planning | 5 | 2 | 0.667
 | Test script execution | 25 | 2.818 | 2.272
 | Test script defect management | 15 | 3 | 2.5
 | Test execution reporting | 15 | 2.9167 | 2.667
 | Average score | | 2.778 | 2.35
Technical Evaluation | Application architecture | 25 | 2.875 | 1.9375
 | Technical architecture | 20 | 3 | 2.267
 | Deployment | 15 | 2.6 | 2.4
 | Security | 20 | 2.833 | 2.333
 | Development and maintainability | 20 | 2.53 | 2.235
 | Average score | | 2.78 | 2.21
Vendor Evaluation | Business direction | 20 | 2.667 | 2.667
 | Technical support | 25 | 3 | 2
 | Customer service | 30 | 3 | 2.667
 | Financial viability | 25 | 3 | 3
 | Average score | | 2.93 | 2.58
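The "Average score" rows above are weighted averages of the individual criteria. As a minimal sketch (Python is used only for illustration; the weights and scores are the Functional Evaluation rows for Tool 1 from the table), the computation looks like this:

```python
# Weighted scoring of a candidate test tool, using the Functional
# Evaluation weights (percent) and Tool 1 scores from the table above.

def weighted_score(rows):
    """rows: list of (weight_percent, score) pairs -> weighted average."""
    total_weight = sum(w for w, _ in rows)
    return sum(w * s for w, s in rows) / total_weight

tool1_functional = [
    (10, 2.833),   # Test requirement management
    (30, 2.675),   # Test script planning
    (5,  2.0),     # Advanced planning
    (25, 2.818),   # Test script execution
    (15, 3.0),     # Test script defect management
    (15, 2.9167),  # Test execution reporting
]

print(round(weighted_score(tool1_functional), 3))  # 2.778, matching the table
```

The same routine applied to the Technical and Vendor rows reproduces the other average scores.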

Parameter | Tool 1 | Tool 2
Platform supported | Windows, Citrix and Microsoft Terminal Server environments | Windows 98, Windows 2000, Windows NT 4.0, Windows Me, or Windows XP
Browser supported | Internet Explorer (all versions), Netscape 4.05 to 4.79, 6.1 and 7.1 | Netscape, Internet Explorer, AOL (America Online), applications with an embedded web browser control
Technology supported | HTML, DHTML, JavaScript, VBScript, ActiveX, Java apps, MFC (C, C++), legacy applications | Windows MFC (Microsoft Foundation Classes), Visual C++, Visual Basic, Web, ActiveX, mainframe (3270/5250) and Microsoft .NET-based applications
Object mode recording | Yes | Yes
GUI repository | GUI-based object repository
Data-driven testing | Yes | Yes
Object verification checkpoints | Yes | Yes
Test result analysis | Yes | Yes
On-line help | Yes | Yes
Regular expression support | Yes | Yes
Editor/Debugger | Yes | Yes

2.5.4.2 Key Success Factors for Evaluating Test Automation

Key success factors for evaluating a test automation process should be defined upfront, during the initial life cycle stages. During the project life cycle these factors should be captured to identify gaps in the process and take corrective action. A test automation process can be considered successful and beneficial when the key success factors are met. Some of the key success factors that could be considered for test automation are:


Percentage reduction in testing effort: Gives the reduction in test execution effort due to automation, as a percentage. A higher value indicates that automation was a profitable option for the test requirement. It is calculated as:

((Effort to execute manually - Effort to execute the automated test scripts) / Effort to execute manually) * 100

Percentage reduction in testing time: Gives the reduction in testing time due to automation, as a percentage. A higher value indicates that automation was a profitable option for the test requirement, and vice versa. It is calculated as:

((Time to execute manually - Time to execute automated test scripts)/ Time to execute manually) *100

Return on Investment: This is revenue gained by automating the existing manual testing.

Percentage of automation achieved: Gives the percentage of automation achieved in a project or application. This value should be optimal against the set ROI. It is calculated as:

(No. of automated test cases/ Total no of manual test cases)*100

Percentage reusability of test scripts: A test automation process metric that indicates the quality of the test automation process. The higher the reusability of test scripts across modules, the better the quality of the test scripts. It is calculated as:

(No. of reusable test scripts/ Total no of test scripts generated)*100
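The formulas above are straightforward to compute. The sketch below restates them in Python; the project figures passed in are purely illustrative, not taken from any real project:

```python
# Key-success-factor metrics for test automation, as defined above.

def pct_effort_reduction(manual_effort, automated_effort):
    """Percentage reduction in testing effort (effort in any consistent unit)."""
    return (manual_effort - automated_effort) / manual_effort * 100

def pct_automation(automated_cases, total_manual_cases):
    """Percentage of automation achieved."""
    return automated_cases / total_manual_cases * 100

def pct_reusability(reusable_scripts, total_scripts):
    """Percentage reusability of test scripts."""
    return reusable_scripts / total_scripts * 100

print(pct_effort_reduction(40, 10))  # 40 person-hours manual vs 10 automated -> 75.0
print(pct_automation(150, 200))      # 150 of 200 test cases automated -> 75.0
print(pct_reusability(30, 120))      # 30 of 120 scripts reusable -> 25.0
```

The percentage reduction in testing time follows the same shape as the effort metric, with elapsed time in place of effort.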

2.5.5 Re-Usable Test Automation Frameworks

Test automation frameworks enable the re-use of test scripts. Different types of automation framework are used in the industry; some of the commonly used ones are discussed in this section.

Data-Driven Framework: A data-driven framework is based on the concept of feeding data into the scripts from an input file, called the data sheet. It is used for testing a single test script with varying input and response data that come from a pre-defined data set. In case of changes, the modifications are made in the data sheet and not in the scripts themselves. However, it involves higher maintenance of the test cases across releases as the number of data sheets increases.
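As a minimal, hypothetical sketch of the data-driven idea: one test routine is fed rows from a data sheet. The login() routine below stands in for the application under test, and the CSV data sheet is invented for illustration:

```python
# Data-driven sketch: one test script, many input/expected-result rows.
import csv
import io

# The "data sheet" that would normally live in a separate file.
DATA_SHEET = """username,password,expected
alice,secret1,ok
bob,wrong,denied
,secret1,denied
"""

def login(username, password):
    # Stand-in for the application under test (hypothetical behaviour).
    return "ok" if (username == "alice" and password == "secret1") else "denied"

def run_data_driven_tests(sheet):
    """Run the single test routine once per data row; return pass/fail flags."""
    results = []
    for row in csv.DictReader(io.StringIO(sheet)):
        actual = login(row["username"], row["password"])
        results.append(actual == row["expected"])
    return results

print(run_data_driven_tests(DATA_SHEET))  # [True, True, True]
```

Note that adding a new scenario means adding a row to the data sheet, not touching the script, which is exactly the maintenance property described above.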

Keyword-Driven Automation Framework: Keyword-driven testing and table-driven testing are interchangeable terms that refer to an application-independent automation framework. This framework requires the development of data tables and keywords (re-usable functions), independent of both the test automation tool and the application under test. Keywords specify logical, repeatable steps that can be used across applications and combined to fulfill all the objectives a test case demands. A test script "drives" the application under test and the data. In this approach, the functionality of the application under test is documented through step-by-step instructions for each test. Though minimal scripting skills are needed for framework customization and maintenance, there is a high initial effort in creating the framework, a stage that requires good scripting knowledge. In addition, adding new functionality requires effort from a skilled test scripter to add the new keyword and create the related script.
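The keyword-driven structure can be sketched as below. The keywords, their implementations, and the test table are all hypothetical stand-ins for the tool-specific artifacts a real framework would provide:

```python
# Keyword-driven sketch: a test is a table of (keyword, arguments) rows,
# and each keyword maps to a reusable function.

state = {}  # stand-in for the application under test

def open_app(name):
    state["app"] = name

def enter_text(field, value):
    state[field] = value

def verify(field, expected):
    assert state.get(field) == expected, f"{field}: {state.get(field)!r} != {expected!r}"

# The keyword library: names a non-programmer can use in the table.
KEYWORDS = {"OpenApp": open_app, "EnterText": enter_text, "Verify": verify}

# The data table documenting the test step by step.
test_table = [
    ("OpenApp", ["Orders"]),
    ("EnterText", ["quantity", "3"]),
    ("Verify", ["quantity", "3"]),
]

def run(table):
    """The driver script that 'drives' the application and the data."""
    for keyword, args in table:
        KEYWORDS[keyword](*args)
    return True

print(run(test_table))  # True
```

Adding new functionality here means writing a new keyword function and registering it in the library, which is the scripting effort noted above.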

Business-Driven Test Automation Framework: Business-driven test automation is based on business abstraction and is a reusable framework. It is independent of the application platform. It allows the flexibility to define test cases in an English-like language and then generate automated test scripts for any platform at the click of a button, based on a predefined keyword library. It simplifies and speeds up the test process, and is particularly effective in dynamic, rapidly changing business environments.

2.5.5.1 Test Automation Management

Some feel that test automation is an easy, one-time job. While test automation cuts down testing expenses, it may actually take much longer to develop, verify, and document an automated test case than to create and execute a manual test case if the automation is not planned well.

Test automation requires a test environment and test database that accurately replicate the production environment. They can be of smaller scale, but must have similar types of hardware, software and database that can be restored to a known baseline.

Successful test automation also requires detailed and accurate manual test cases that can be converted to automated format, dedicated computers to run automated scripts and, most importantly, cost-effective and appropriate test tools (80% of selected test tools are estimated to end up as shelfware), along with tool-specific training and skills. Test automation is not an alternative to having a streamlined test process.

It is said that an automated test can cost 3 to 30 times as much as a manual test the first time, but should cost near-zero every subsequent time. Clearly, the value of test automation lies in its reusability, maintenance and management.

2.6 Test Management

What is test management?

An important part of software quality is the process of testing and validating the software. Test management is the practice of organizing and controlling the process and artifacts required for the testing effort.

Test management allows teams to plan, develop, execute, and assess all testing activities within the overall software development effort. This includes coordinating efforts of all those involved in the testing effort, tracking dependencies and relationships among test assets and, most importantly, defining, measuring, and tracking quality goals.

2.6.1 Test Management Functions and Activities

Test management activities apply to each and every stage of the software test life cycle (explained in the previous chapters).

Following is a detailed explanation of the various functions and activities in Test management:

Test Requirement Analysis

Test Requirement Analysis includes understanding the contract, proposal, application and the objective of all stakeholders. It also involves the preparation of Requirement Traceability Matrix (RTM) in the initial stages to keep track of every requirement.


2.6.2 Requirement Traceability Matrix (RTM)

Maps business requirements to the corresponding test scenarios and test cases.

Helps estimate effort as it provides a sense of the testing effort required in terms of test scenarios / cases to be executed.

Plays a critical role during test execution as the amount of testing required and the scenarios to be tested can be determined based on the available time and the relative criticality and priority of the remaining test cases.

Formal Change Management procedures help in managing requirement changes (Requirements traceability) and to assess the impact of changes on testing.
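As a toy illustration of the traceability idea (the requirement and test case IDs are invented), an RTM can be held as a mapping from requirements to test cases, which makes uncovered requirements easy to spot:

```python
# Toy Requirement Traceability Matrix: requirements mapped to test cases,
# with a simple coverage check. All IDs are illustrative.

rtm = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # requirement not yet covered by any test case
}

def uncovered(matrix):
    """Return the requirements that have no test case mapped to them."""
    return sorted(req for req, cases in matrix.items() if not cases)

print(uncovered(rtm))  # ['REQ-003']
```

In practice the same matrix also supports impact analysis: when a requirement changes, its mapped test cases are the ones to revisit.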

2.6.3 Test Strategy

Test strategy is a well-optimized approach to achieving a business objective within limited resources. The test strategy must address and reduce risks; risks or concerns form the basis, or objectives, of testing.

The test strategy is a formal description of how a software product will be tested. A test strategy is developed for all levels of testing, as required. The test team analyzes the requirements, writes the test strategy and reviews the plan with the project team.

The test plan may include test cases, conditions, the test environment, a list of related tasks, pass/fail criteria and risk assessment. The objective of the testing is to reduce the risks inherent in the system; the strategy must address the risk and present a process that can reduce those risks.

2.6.3.1 Components of the Testing Strategy

Test Factor: An attribute of the software that, if not present, poses a risk to the success of the software; the risk or issue that needs to be addressed as part of the test strategy. The strategy selects those factors that need to be addressed in the testing of a specific application.

Test Phase: The phase of the system development life cycle in which the testing will occur.

Test Tactics: The Test plans, test criteria, techniques and tools used to assess the software system.

To develop a test strategy, the first and foremost task is to select and rank test factors, followed by identifying the affected phases, identifying the concerns associated with each phase and factor, and finally defining the test strategy.

2.6.4 Test Planning

Test planning is the overall set of tasks that address the questions of why, what, where, and when to test. The reason why a given test is created is called a test motivator (for example, a specific requirement must be validated). What should be tested is broken down into many test cases for a project. Where to test is answered by determining and documenting the needed software and hardware configurations. When to test is resolved by tracking iterations (or cycles, or time periods) of the testing.


Test planning aims to simulate the operation of the entire system and validate that it runs correctly. Preparation starts once requirements are baselined. The testing scope covers areas such as functionality, performance, usability, security, safety, configuration, recovery, reliability, and portability.

An approach to test planning would comprise:

Clear identification of all the requirements

Clear identification of all the dependencies, constraints and deliverables

Develop the test outline

Apply the test categories

No data provided

Do it twice

Valid data

Invalid data

Abort

Power loss

Stressing

Elaborate the outline to various test cases

2.6.5 Test Estimation

Test estimation is a process through which we estimate the time and resources required for testing.

Effective software test estimation helps track and control cost/effort overrun. Estimation is a continuous process that should be completed at the beginning of a project but reviewed throughout the life cycle. Test estimation should be based on technology, architecture, domain, and complexity.

This consists of estimation of the following for carrying out the project-

Project size, Cost and Effort

Number of hardware, software and human resources

There are various Test Estimation Models which are widely used in industry. Some of them are explained below:

2.6.5.1 SMC Method (Based on Complexity)

Business requirements for each function are identified and classified as Simple (S), Medium (M), or Complex (C).

For Example

Test Case with less than 5 steps – Simple

Test case with 5 -10 steps – Medium

Test Case with more than 10 steps – Complex
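The classification and the effort roll-up can be sketched as below. The per-category effort figures are assumed values; a real project would take them from its baseline or organizational data sources (PCB/PDB), as described next:

```python
# SMC classification by step count, per the example above, with
# illustrative per-category effort baselines in person-hours.

EFFORT_BASELINE = {"Simple": 0.5, "Medium": 1.0, "Complex": 2.0}  # assumed values

def classify(steps):
    """Classify a test case by its number of steps."""
    if steps < 5:
        return "Simple"
    if steps <= 10:
        return "Medium"
    return "Complex"

def estimate(test_case_steps):
    """Total estimated effort for a list of test cases (step counts)."""
    return sum(EFFORT_BASELINE[classify(s)] for s in test_case_steps)

print(classify(3), classify(7), classify(12))  # Simple Medium Complex
print(estimate([3, 7, 12]))                    # 0.5 + 1.0 + 2.0 = 3.5
```

Other activities such as test environment and data setup would be added on top of this per-case estimate.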


The average implementation effort for S/M/C can be obtained from the baseline, if it exists. If a project-specific baseline does not exist, the test effort can be deduced from organizational data sources (PCB/PDB) for similar projects (project type and technology). If the technology/domain is unknown, the average testing effort for S/M/C is derived from past experience.

Other typical Test activities like Test Environment & Data Setup form the input parameters to calculate the total effort estimates.
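The SMC calculation above can be sketched as follows. The step-count thresholds come from the example earlier; the per-category baseline hours are invented placeholders standing in for real PCB/PDB baseline figures.

```python
# SMC-based estimation sketch. The step-count thresholds follow the
# example in the text; the per-category baseline hours are invented
# placeholders for real PCB/PDB baseline averages.

BASELINE_HOURS = {"S": 0.5, "M": 1.5, "C": 3.0}  # assumed averages

def classify(steps):
    """Classify a test case by step count: <5 Simple, 5-10 Medium,
    >10 Complex."""
    if steps < 5:
        return "S"
    if steps <= 10:
        return "M"
    return "C"

def estimate_effort(step_counts, setup_hours=0.0):
    """Execution effort plus one-off activities such as test
    environment and data setup."""
    return sum(BASELINE_HOURS[classify(s)] for s in step_counts) + setup_hours

# three simple, one medium and one complex case, plus 8 h of setup
print(estimate_effort([3, 4, 2, 7, 12], setup_hours=8.0))  # 14.0
```

The same structure extends naturally to the other input parameters mentioned above, such as test data setup effort.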

Limitations of this method-

Level of complexity may differ from project to project, based on the combination of domain and technology used. Hence SMC values cannot be generalised.

Does not consider the expertise level of test team members.

Is person-dependent.

2.6.5.2 Test Unit Model

The test unit model makes use of system-specific parameters such as the number of steps, input parameters, validation checks, links, numeric validations, UI checks, and database validations.

Each of these parameters contributes to the complexity of the test case, which is measured in test units (an abstract unit of size, used as the unit of measurement for the size of a testing project).

2.6.6 Test Execution

Test execution entails running the tests by assembling sequences of test scripts into a suite of tests. The activities carried out in test execution management are

Verification of Entry & Exit criteria to understand whether the necessary criteria are met to start and finish the Test activities.

Verification of suspension & resumption criteria to understand when testing should stop or resume.

Execution of test cases, test scripts and updates to test artifacts.

Tracking testing progress through milestone and metrics analysis.

Regular reviews & audits to keep the project in good health.

Issue resolution

Regular status tracking & reporting and communication with all stakeholders to keep them updated about the project.

Defect management

At any point during execution, this should answer: “How well is the testing being performed?”
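The entry/exit and suspension/resumption checks above amount to gating on a set of boolean criteria. A minimal sketch, with illustrative criterion names:

```python
# Entry-criteria gate sketch: testing starts only when every criterion
# is met, and each unmet item is reported. Criterion names are
# illustrative, not prescribed.

def check_criteria(criteria):
    """Return (ok, unmet) for a dict of criterion -> bool."""
    unmet = [name for name, met in criteria.items() if not met]
    return (not unmet, unmet)

entry = {
    "build deployed to test environment": True,
    "test cases reviewed and signed off": True,
    "test data loaded": False,
}
ok, unmet = check_criteria(entry)
print(ok, unmet)  # False ['test data loaded']
```

The same function serves for exit, suspension, and resumption criteria; only the criterion lists differ.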

2.6.7 Test Status Reporting

Test Status Reporting is the analysis & communication of the various results of the testing effort to determine the current status as well as the overall level of quality of the application or system.

Testing produces a great deal of information from which metrics can be extracted to define, measure, track and communicate quality goals for the project.

Test Status Report updates all the stakeholders regarding the-

Current status of the project

Forthcoming releases and milestones

Challenges faced

Challenges ahead

Best Practices followed

Customer appreciation

Customer complaints

Depending on the role of the stakeholder (recipient of the Report) in the project, complexity of the project and need, the content, frequency (Weekly, Monthly) and type of status will vary.

Status Report contains details regarding-

Planned versus Actual data about the Test deliverables, Schedule, Cost, Quality

What were the Risks encountered?

How much testing is complete?

What are the results of the completed tests?

Who were involved in testing this?

When did the testing complete?

Number of defects raised in each iteration per build?

What is the requirements coverage percentage?

What are the dependencies?
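Two of the figures above, the requirements coverage percentage and planned-versus-actual execution progress, can be computed with a short sketch (the requirement and test-case IDs are hypothetical):

```python
# Status-report sketch: requirements coverage percentage and planned
# versus actual execution figures. IDs and counts are hypothetical.

def requirements_coverage(requirements, covered_by_tests):
    """Percentage of requirements traced to at least one test case."""
    covered = [r for r in requirements if covered_by_tests.get(r)]
    return 100.0 * len(covered) / len(requirements)

reqs = ["REQ-1", "REQ-2", "REQ-3", "REQ-4"]
trace = {"REQ-1": ["TC-1"], "REQ-2": ["TC-2", "TC-3"], "REQ-4": ["TC-5"]}
print(requirements_coverage(reqs, trace))  # 75.0

planned, executed = 40, 31
print(f"execution progress: {100 * executed / planned:.1f}%")  # 77.5%
```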

2.6.8 Defect Management

A defect is a flaw in a software system which, when executed, results in failure. Software defects are expensive. Moreover, the cost of finding and correcting defects represents one of the most expensive software development activities. For the foreseeable future, it will not be possible to eliminate defects. While defects may be inevitable, we can minimize their number and impact on our projects. To do this, development teams need to implement a defect management process that focuses on preventing defects, catching defects as early in the process as possible, and minimizing the impact of defects. A little investment in this process can yield significant returns.

A Typical Defect Life-cycle
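The life-cycle diagram referred to above can be sketched as a state machine. The state names and transitions below follow one common convention; real defect-tracking tools let you customize both.

```python
# Defect life-cycle sketch as a state machine. States and allowed
# transitions follow one common convention and are assumptions, not
# a fixed standard.

TRANSITIONS = {
    "New":      {"Assigned", "Rejected", "Deferred"},
    "Assigned": {"Fixed"},
    "Fixed":    {"Verified", "Reopened"},
    "Reopened": {"Assigned"},
    "Verified": {"Closed"},
    "Deferred": {"Assigned"},
    "Rejected": set(),
    "Closed":   set(),
}

def move(defect, new_state):
    """Advance a defect, rejecting illegal transitions."""
    if new_state not in TRANSITIONS[defect["state"]]:
        raise ValueError(f"illegal transition {defect['state']} -> {new_state}")
    defect["state"] = new_state
    return defect

bug = {"id": "BUG-1", "state": "New"}
for step in ["Assigned", "Fixed", "Reopened", "Assigned", "Fixed",
             "Verified", "Closed"]:
    move(bug, step)
print(bug["state"])  # Closed
```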

Defects can be managed as follows:

Classification based on priority (the urgency with which the defect fix is needed) and on severity (the impact the defect has on the application).

Identifying the defect life cycle: the movement of a defect from one stage to another and who is involved at each stage. This depends on the tool used, but may need customization.

Metrics collection, e.g. defect density, defect detection rate, defect fix rate.

Check that a mapping exists between defects and failed test cases.
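The metrics named in the list above can be computed as simple ratios. Definitions vary between organisations; these are common formulations, with invented sample numbers.

```python
# Defect metrics sketch. Definitions vary between organisations;
# these are common formulations. Sample figures are invented.

def defect_density(defects, size_kloc):
    """Defects per KLOC (could equally be per function point or
    test unit)."""
    return defects / size_kloc

def defect_detection_rate(found, period_days):
    """Defects found per day over a testing period."""
    return found / period_days

def defect_fix_rate(fixed, period_days):
    """Defects fixed per day over the same period."""
    return fixed / period_days

print(defect_density(42, 12.0))              # 3.5 defects/KLOC
print(defect_detection_rate(42, 21))         # 2.0 per day
print(round(defect_fix_rate(35, 21), 2))     # 1.67 per day
```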

2.6.9 Test Cycle Closure Process

The test cycle can be closed when

Test cases for the cycle are executed and all outstanding defects are resolved

All deliverables are accepted by customer

Quality goal of the product is achieved

The various closure activities are

Conduct closure meeting – To discuss closure report data

Perform closure analysis - Prepare Closure report and obtain a sign off

Collect feedback from all stakeholders – To understand the satisfaction level

Archive project artifacts – For future reference and use

Handover the client assets (access cards, H/W, S/W resources) and release test team members

Submit knowledge assets – For reference in future

A closure report contains the following:

Process Details

Risks

Size and Schedule

Effort

Defects

Test Metrics

Best Practices

Customer Appreciations / Complaints

2.6.10 Test Process Analysis/Improvement

Improving the test process is essential for ensuring the quality of the information system and the overall business process. However, in practice it is often challenging to define the steps required to improve and control the process, and in what order. The following diagram depicts the areas of test process analysis/improvement.

Test Metrics

One of the primary testing goals is to assess and determine quality, but how do you measure quality? There are many means of doing this depending upon the type of system or application as well as the specifics of the project. Any such quality metrics need to be clear and unambiguous to avoid being misinterpreted. More importantly, metrics must be feasible to capture and store, otherwise they might not be worth the cost or could be incomplete or inaccurate.

Test metrics are a mechanism to quantitatively measure the effectiveness of testing. They are a feedback mechanism to improve the testing process that is currently followed.

The objectives of metric collection are to-

Determine the quality and productivity at which each project is operating

Determine if project is meeting the Service Level Agreements within the defined bounds and look for areas of improvement

Check the health of the project

Analyze strengths and weaknesses

Set goals for future projects based on past data

Provide inputs for estimation of future projects

The following table lists the metrics collected to achieve a stated measurement objective for each business goal.

Business Goal: Improvement in productivity
Measurement Objective: Ensure that the project operates at the desired productivity levels; measure process performance
Metrics: Size, effort, review effectiveness, rework effort percentage

Business Goal: Improvement in delivered quality
Measurement Objective: Ensure that the software produced is at a desired level of quality; meet SLAs; ensure user acceptance level satisfaction
Metrics: Delivered defects, defects detected at various stages, turnaround time, service levels (as applicable)

Business Goal: Adherence to schedule
Measurement Objective: On-time delivery; meet SLAs
Metrics: Elapsed days, schedule adherence percentage

Business Goal: Return on Investment (ROI)
Measurement Objective: Validate that the total cost of ownership has been lowered
Metrics: Cost of quality, cost of defects detected/undetected, cost savings over multiple releases, cost benefits of offshoring

Tools in Test Management

One tool that implements these practices is IBM Rational ClearQuest, whose test management capabilities directly address many specific technical needs, for example working with offshore teams through ClearQuest MultiSite. It also provides a flexible framework for creating the right test management solution for any project's or organization's needs.

Best practices in Test Team Management

Starting test activities early and iteratively, focusing on goals and results, and coordinating and integrating with the rest of development will keep the testing effort from becoming an afterthought. Maximizing the reuse of test artefacts, leveraging remote testing resources, defining and enforcing a flexible testing process, and automating will all help overcome resource constraints. A number of best practices can help prevail over these challenges:

Match strengths and work assignment

Empower the team with responsibility

Make work more challenging

Equitable allocation of work

Personal attention to each team member

Regular communication with stakeholders

Recognize achievements and celebrate every success

Have fun always - even under stress

Knowledge Management – for smooth running of project even when people move in/out of project

Team motivation to get the best from them

Prioritization of tasks - With best combination of priority and severity

Timely escalation to avoid emergency situations

Timely audits to check process compliance

Use of RTM (Requirements Traceability Matrix)

Summary

An important step to improving software quality is advancing test management practices beyond old-fashioned, document-based methods. Test management encompasses various functions, including planning, authoring, executing, and reporting on tests, as well as addressing how testing fits in and integrates with the rest of the software development effort. There are a number of daunting and inevitable challenges to test management, such as scarcity in time and resources, testing teams located in remote geographic locations, problems linking testing with requirements and development, and reporting the right information.

2.6.11 Other Testing Techniques

2.6.11.1 Xtreme Testing

Those who are familiar with Xtreme Programming would agree that it offers one of the brightest sets of guidelines in software development. The concept of Xtreme Testing must be driven by the fact that there is no single approach that will work for all companies. Nor is it true that all the pieces from one methodology are the best of breed. The right mix of good practices will differ for each individual project or company.

A good test must be designed like good software. Xtreme Testing must eliminate, to a large extent, the unnecessary elements of building software. A relatively small increase in methodology size or density adds a relatively large amount to the project cost. Xtreme Programming is less complicated to implement than other methodologies and shows better results in terms of return on investment (ROI). Any increase in methodology has to show some proven ROI before a company even begins to consider it.

Implementing Xtreme Testing

Quality for an Xtreme Testing project is perceived as achieving the target, increasing the testability, and minimizing the variability of the system under test. Minimizing variability during test case design helps save time in future maintenance of the test suite.

Some of the guidelines that need to be adhered to in Xtreme Testing are:

Define the target or what needs to be tested.

Come up with the minimum necessary documentation for testing that must be created.

Redundancy of test cases between unit and system testing should be eliminated.

Design test cases covering the system architecture and environment

Testing processes should be made more robust.

Encourage seamless interworking between testers and developers (testers participating in all architecture-related meetings and walkthroughs, developers reviewing test cases run by testers, and vice versa).

Use techniques like the orthogonal array technique to reduce the number of test cases.

Create built-in diagnostics for any of:

o External system

o Third party libraries

o Database

o Configuration parameters

Built-in verification and calculation of a control

Concentrate on business cycle testing (system behavior in certain time frames), concurrency testing, integration testing of any third party components, specification-based testing, scenario testing, database testing using a CRUD matrix (a gray box testing method), and other testing types like load, performance, installation, volume, user documentation, and disaster recovery testing.
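The orthogonal array idea mentioned in the guidelines can be illustrated with a greedy all-pairs sketch: instead of running every combination of parameter values, pick test cases until every pair of values across parameters is covered. This is a simplification of true orthogonal arrays, and the parameter names and values are invented.

```python
# Greedy all-pairs (pairwise) reduction sketch, a simplification of
# the orthogonal array technique. Parameters and values are invented.
from itertools import combinations, product

def all_pairs(case, idx_pairs):
    """All (param-index pair, value pair) combinations one case covers."""
    return {(i, j, case[i], case[j]) for i, j in idx_pairs}

def pairwise_suite(parameters):
    """Repeatedly pick the candidate covering the most still-uncovered
    value pairs until every pair is covered."""
    candidates = list(product(*parameters.values()))
    idx_pairs = list(combinations(range(len(parameters)), 2))
    uncovered = set()
    for case in candidates:
        uncovered |= all_pairs(case, idx_pairs)
    suite = []
    while uncovered:
        best = max(candidates,
                   key=lambda c: len(all_pairs(c, idx_pairs) & uncovered))
        suite.append(best)
        uncovered -= all_pairs(best, idx_pairs)
    return suite

params = {"browser": ["IE", "Firefox", "Opera"],
          "os": ["Windows", "Linux", "Mac"],
          "db": ["Oracle", "MySQL", "DB2"]}
# exhaustive count versus the (much smaller) pairwise suite size
print(len(list(product(*params.values()))), len(pairwise_suite(params)))
```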

Conclusion

Xtreme Testing offers ample scope for testers to be more innovative in their approach to testing, the key drivers being the competence and attitude of individual testers. Xtreme Testing can fit into any level of testing. To conclude, the "design test first" practice not only helps create better software but also allows changes to be managed quickly.

2.6.12 Model-based Testing

Model-based testing is a new and evolving technique for generating a suite of test cases from requirements. This approach concentrates on a data model and generation infrastructure instead of hand-crafting individual tests. It helps reduce the cost of test generation, increase the effectiveness of the tests, and shorten the testing cycle.

Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing. In some aspects, this is not completely accurate. Model-based testing can be combined with source-code level test coverage measurement, and functional models can be based on existing source code in the first place.

Model-based testing is software testing in which test cases are derived in whole or in part from a model that describes some (usually functional) aspects of the system under test (SUT).

General model-based testing setting

2.6.12.1 Implementing Model-based Testing

The model is usually an abstract, partial presentation of the system under test's desired behavior. The test cases derived from this model are functional tests on the same level of abstraction as the model. These test cases are collectively known as the abstract test suite. The abstract test suite cannot be directly executed against the system under test because it is on the wrong level of abstraction. Therefore an executable test suite that can communicate with the system under test must be derived from the abstract test suite. This is done by mapping the abstract test cases to concrete test cases suitable for execution.

Especially in Model Driven Engineering or in OMG's model-driven architecture, the model is built before or in parallel with the development process of the system under test. The model can also be constructed from the completed system. Currently the model is mostly created manually, but there are also attempts to create the model automatically, for instance from the source code. One important way to create new models is by model transformation, using languages like ATL, a QVT-like domain-specific language.

To find appropriate test cases, i.e. paths that exercise a certain requirement to be proven, the search for paths has to be guided. Multiple techniques are applied for test case selection.
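The whole pipeline, from a behavioral model to abstract tests to concrete steps, can be sketched with a small finite-state model: derive one abstract test per transition, then map abstract actions to concrete steps. The login model and the action-to-step mapping below are invented for illustration.

```python
# Model-based generation sketch: a finite-state model of a login
# dialogue (invented), an abstract test suite covering every
# transition, and a mapping to concrete executable steps.

MODEL = {  # (state, action) -> next state
    ("LoggedOut", "login_ok"):  "LoggedIn",
    ("LoggedOut", "login_bad"): "LoggedOut",
    ("LoggedIn",  "logout"):    "LoggedOut",
}

def abstract_suite(model, start="LoggedOut"):
    """One abstract test per transition: the shortest action path from
    the start state to the transition's source, plus the action."""
    paths, frontier = {start: []}, [start]  # BFS over reachable states
    while frontier:
        state = frontier.pop(0)
        for (src, action), dst in model.items():
            if src == state and dst not in paths:
                paths[dst] = paths[state] + [action]
                frontier.append(dst)
    return [paths[src] + [action] for (src, action) in model]

# mapping from abstract actions to concrete test steps (hypothetical)
CONCRETE = {"login_ok":  "enter valid credentials; press OK",
            "login_bad": "enter wrong password; press OK",
            "logout":    "press Logout"}

for test in abstract_suite(MODEL):
    print(" -> ".join(CONCRETE[a] for a in test))
```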

2.6.12.2 Test case generation by theorem proving

Theorem proving was originally used for the automated proving of logical formulas. In model-based testing approaches, the system is modeled by a set of logical expressions (predicates) specifying the system's behavior. To select test cases, the model is partitioned into equivalence classes over the valid interpretations of the set of logical expressions describing the system under test. Each class represents a certain system behavior and can therefore serve as a test case.

2.6.12.3 Test case generation by constraint logic programming

Constraint programming can be used to select test cases satisfying specific constraints by solving a set of constraints over a set of variables. The system is described by means of constraints. Solving the set of constraints can be done by boolean solvers (e.g. SAT solvers based on the boolean satisfiability problem) or by numerical analysis, such as Gaussian elimination. A solution found by solving the set of constraint formulas can serve as a test case for the corresponding system.
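As a sketch of the idea, a brute-force search over small finite domains can stand in for a real SAT/CLP solver; every satisfying assignment is a candidate test case. The fare-rule constraints below are invented.

```python
# Constraint-based selection sketch. A real approach hands the
# constraints to a SAT/CLP solver; here brute-force search over small
# finite domains stands in for the solver. Constraints are invented.
from itertools import product

def solve(domains, constraints):
    """Return all assignments satisfying every constraint."""
    names = list(domains)
    solutions = []
    for values in product(*domains.values()):
        assignment = dict(zip(names, values))
        if all(c(assignment) for c in constraints):
            solutions.append(assignment)
    return solutions

# "the system is described by means of constraints":
domains = {"age": range(0, 130), "fare": ("child", "adult", "senior")}
constraints = [
    lambda a: a["fare"] != "child" or a["age"] < 12,
    lambda a: a["fare"] != "senior" or a["age"] >= 65,
    lambda a: a["fare"] != "adult" or 12 <= a["age"] < 65,
]
tests = solve(domains, constraints)
print(len(tests), tests[0])  # 130 {'age': 0, 'fare': 'child'}
```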

2.6.12.4 Test case generation by model checking

Model checking was originally developed as a technique to check whether a property of a specification is valid in a model. We provide the model checker with a model of the system under test and a property we want to test. While proving whether this property is valid in the model, the model checker detects witnesses and counterexamples. A witness is a path on which the property is satisfied; a counterexample is a path in the execution of the model on which the property is violated. These paths can again be used as test cases.
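A minimal sketch of counterexample search: breadth-first exploration of a toy state space until a state violating a safety property is found; the resulting path can then serve as a test case. The model (two counters stepped modulo 3) is invented for illustration.

```python
# Model-checking sketch: BFS over a small state space returns the
# shortest path to a target state. With the target set to a property
# violation this yields a counterexample; with the target set to
# property satisfaction, a witness. The model is invented.
from collections import deque

def find_path(initial, successors, target):
    """Return the shortest path of states from initial to one where
    target(state) holds, or None if unreachable."""
    queue, seen = deque([(initial, (initial,))]), {initial}
    while queue:
        state, path = queue.popleft()
        if target(state):
            return list(path)
        for nxt in successors(state):
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, path + (nxt,)))
    return None

# toy model: a step increments either counter modulo 3
def successors(state):
    a, b = state
    return [((a + 1) % 3, b), (a, (b + 1) % 3)]

# safety property "a and b are never both 2"; a path to (2, 2) is a
# counterexample usable as a test case
counterexample = find_path((0, 0), successors, lambda s: s == (2, 2))
print(counterexample)
```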

2.6.12.5 Test case generation by symbolic execution

Symbolic execution is often used in frameworks for model-based testing. It can be a means of searching for execution traces in an abstract model. In principle, the program execution is simulated using symbols for variables rather than actual values; the program can then be executed symbolically. Each execution path represents one possible program execution and can be used as a test case. For that, the symbols have to be instantiated by assigning values to them.

Conclusion

Model-based testing offers advantages like automating test generation and providing a basis for statistically estimating product quality. These benefits can be enjoyed provided the right models are used, the proper resources are acquired, and adequate training is undergone.

The effectiveness of model-based testing is primarily due to the potential for automation it offers. If the model is machine-readable and formal to the extent that it has a well-defined behavioral interpretation, test cases can in principle be derived mechanically.