Introduction to Software Testing and Quality Assurance


This course provides a highly practical bottom-up introduction to software testing and quality assurance. Each organization performs testing and quality assurance activities in different ways. This course provides a broad view of both testing and quality assurance so that participants will become aware of the various activities that contribute to managing the quality of a software product.


Contents

CHAPTER 1: Software Testing and Software Development Life Cycle
1.1 Introduction
1.2 Software Development Lifecycle (SDLC)
1.3 Various SDLC Models

CHAPTER 2: Software Quality Testing
2.1 Introduction
2.2 What is Software Quality?
2.3 Standards and Guidelines

CHAPTER 3: Software Test Life Cycle and Verification & Validation
3.1 Software Testing Life Cycle (STLC)
3.2 Verification and Validation Model

CHAPTER 4A: Validation Activity – Low-Level Testing

CHAPTER 4B: Validation Activity – High-Level Testing
4B.1 Objectives
4B.2 Steps of Function Testing
4B.3 Summary

CHAPTER 5: Types of System Testing
5.1 Introduction
5.2 Usability Testing
5.3 Performance Testing
5.4 Load Testing
5.5 Stress Testing
5.6 Security Testing
5.7 Configuration Testing
5.8 Compatibility Testing
5.9 Installation Testing
5.10 Recovery Testing
5.11 Availability Testing
5.12 Volume Testing
5.13 Accessibility Testing

CHAPTER 6: Acceptance Testing
6.1 Introduction
6.2 Objective
6.3 Acceptance Testing

CHAPTER 7: Black Box Testing
7.1 Introduction
7.2 Objectives
7.3 Advantages of Black Box Testing
7.4 Disadvantages of Black Box Testing
7.5 Black Box Testing Methods

CHAPTER 8: Testing Types
8.1 Introduction
8.2 Mutation Testing
8.3 Progressive Testing
8.4 Regression Testing
8.5 Retesting
8.6 Localization Testing
8.7 Internationalization Testing

CHAPTER 9: White Box Testing
9.1 Introduction
9.2 Objective
9.3 Advantages of WBT
9.4 Disadvantages of WBT
9.5 Techniques for White Box Testing
9.6 Cyclomatic Complexity
9.7 How to Calculate Statement, Branch/Decision, and Path Coverage for ISTQB Exam Purposes

CHAPTER 10: Test Cases
10.1 Introduction
10.2 Objective
10.3 Structure of Test Cases
10.4 Test Case Template

CHAPTER 11: Test Planning
11.1 Introduction
11.2 Objectives
11.3 IEEE Standard for Software Test Documentation

CHAPTER 12: Configuration Management
12.1 Introduction
12.2 Objective
12.3 Configuration Management Tools

CHAPTER 13: Defect Tracking and Defect Life Cycle
13.1 Introduction
13.2 Objectives
13.3 Why Do Faults Occur?
13.4 What Is a Bug Life Cycle?
13.5 Bug Status Description
13.6 Severity: How Serious Is the Defect?
13.7 Priority: How to Decide Priority?
13.8 Defect Tracking
13.9 Defect Prevention
13.10 Defect Report

CHAPTER 14: Risk Analysis
14.1 Introduction
14.2 Objectives
14.3 Risk Identification
14.4 Risk Strategy
14.5 Risk Assessment
14.6 Risk Mitigation
14.7 Risk Reporting
14.8 What Is Schedule Risk?

DEFINITIONS


CHAPTER 1: Software Testing and Software Development Life Cycle

1.1 Introduction

Software testing is a crucial phase of the product development lifecycle. It is the process of finding flaws in a given product or application. The purpose of testing is not to prove that a product functions properly under all conditions, but to uncover the conditions under which it fails to function properly. The objectives of software testing are to:

• validate and verify, automatically or manually, that a software program/product meets the technical and business requirements.

• evaluate the product for its correctness, completeness, reusability, and reliability.

• ensure that the behavior of the product is as per the end-user’s expectations.

• identify defects in a product as early as possible in the development lifecycle, thereby helping to reduce the cost of fixing defects later.

• deliver defect-free and high-quality products.

1.2 Software Development Lifecycle (SDLC)

Software development lifecycle (SDLC) is a conceptual model that describes the sequence of activities followed by designers and developers during product development. SDLC consists of multiple stages or phases in which the input for each phase is the output of the previous one. In the IT industry, different SDLC models are followed, involving various stages from the creation through the testing of a software product. The commonly followed SDLC model is categorized into five stages: analysis, design, implementation, verification, and maintenance.

Figure 1: Software Development Life Cycle (SDLC), showing the Analysis, Design, Implementation, Verification, and Maintenance phases.


1.3 Various SDLC Models

Various types of SDLC models exist to streamline the development process. Each has its pros and cons, and it is up to the development team to choose the appropriate model for its project. In this section, we will learn about four SDLC models:

1. Waterfall model
2. Incremental model
3. Spiral model
4. Agile methodology

Let's learn about each of these models in brief.

1.3.1 Waterfall Model

The Waterfall model is a classic software lifecycle model that has long been widely followed in software engineering. It exhibits a linear and sequential approach to software development. In this model, the phases of software engineering are cascaded so that you can move to a phase only when its preceding phase is finished, and once a phase is finished, you cannot move back to it. The different phases in the waterfall model are as follows:

• Project Planning This phase defines the objectives, strategies, and supporting methods required to achieve the project goal.

• Requirement Analysis and Definition The main objective of this phase is to prepare a document, called Software Requirement Specification (SRS), that clearly specifies all the requirements of the customer. SRS is the primary output of this phase.

• Systems Design This phase includes designing of screen layouts, business rules, process diagrams, pseudo code, and other documentation to describe the features and operations of a software product in detail.

• Implementation In this phase, the actual coding starts. After the preparation of the system design documents, programmers develop the software program/application based on the specifications. In this phase, the source code, executables, and databases are created.

• Integration and Testing In this phase, all code modules of a product are integrated into a complete system and tested to check that all modules/units coordinate with each other and that the system as a whole behaves as per the specifications.

• Acceptance, Installation, Deployment This phase includes:

o Getting the software accepted
o Installing the software at the customer site

Acceptance consists of formal testing conducted by the customer according to the acceptance test plan prepared earlier and analysis of the test results to determine whether the system satisfies its acceptance criteria. When the test results satisfy the acceptance criteria, the user accepts the software.

• Maintenance This phase is for all types of modifications and corrections of the product after it is installed and operational. This is the least glamorous and perhaps the most important step of all in the SDLC, and it goes on seemingly forever.

Figure: Waterfall Model (Requirement, Design, Implementation/Coding, Testing, and Maintenance phases in sequence)

Let’s quickly go through the advantages and disadvantages of the Waterfall model.

Advantages

• It is simple and easy to use.
• Because of the rigidity of the model, each phase has specific deliverables and a review process; it is easy to manage.
• Phases are processed and completed one at a time.
• More suitable for smaller projects where requirements are very well understood.


Disadvantages

• Adjusting scope during the lifecycle can kill a project.
• No working software is produced until late in the lifecycle.
• Risk and uncertainty are high.
• Poor model for complex and object-oriented projects.
• Poor model for long and ongoing projects.

1.3.2 Incremental Model

The Incremental model is an advanced approach to the Waterfall model; it is essentially a series of waterfall cycles. In this model, a core set of functions is identified in the first cycle and is built and deployed as the first release. The software development cycle is then repeated, with each release adding more functionality until all the requirements are met. Each development cycle acts as the maintenance phase for the previous software release. New requirements discovered during the development of a given cycle are implemented in subsequent cycles. In this model, a subsequent cycle may begin before the previous cycle is complete.

Figure: Incremental Life Cycle Model (each increment passes through Requirements, Design, Implementation & Unit Testing, Integration & System Testing, and Operation)

Let’s go through the advantages and disadvantages of the Incremental model.

Advantages

• Allows requirement modification and addition of new requirements
• Easier to test and debug in smaller cycles
• Easier to manage risks since risks are identified and handled during each iteration
• Every iteration in the incremental model is an easily managed milestone


Disadvantages

• Majority of requirements must be known in the beginning
• Cost and schedule overrun may result in an unfinished system

1.3.3 Spiral Model

This model is similar to the incremental model, but with an additional phase of risk analysis. The spiral model has four phases: Planning, Risk Analysis, Engineering, and Evaluation. Let's see each of these phases in brief.

1. Planning: determines the objectives, alternatives, and constraints on the new iteration

2. Risk analysis: evaluates alternatives and identifies and resolves risk issues

3. Engineering: develops and verifies the product for this iteration

4. Evaluation: evaluates the output of the project to date before the project continues to the next spiral; plans the next iteration

Figure: Spiral Model, with four quadrants: Planning (requirement gathering, design), Risk Analysis (prototyping), Engineering (coding, testing), and Evaluation (customer evaluation); the spiral tracks project cost and progress.


Let’s go through the advantages and disadvantages of the spiral model.

Advantages

• Useful for complex and large projects
• High amount of risk analysis
• Software is produced early in the software lifecycle because of the prototype

Disadvantages

• Expensive model
• Time spent on planning, risk analysis, and prototyping can be excessive
• Risk analysis requires highly skilled expertise
• The project's success is highly dependent on the risk analysis phase
• Doesn't work well for smaller projects

1.3.4 Agile Methodology

Agile methodology breaks development tasks into small iterations with minimal planning. Working software is delivered frequently, say, on a weekly, fortnightly, or monthly basis. Iterations are short time frames that typically last from one to four weeks. In each iteration, a team works through a full software development cycle. This minimizes the overall risk and allows the project to adapt to changes quickly.

The team involved in agile methodology is usually cross-functional and self-organizing regardless of any existing corporate hierarchy or the corporate roles of team members. Team members take responsibility for tasks that deliver the functionality and decide individually on how to meet an iteration's requirements.

In most agile implementations, a formal, daily, face-to-face meeting is conducted among team members. In this brief session, team members report to each other what they did the previous day, what they intend to do today, and what their roadblocks are. This face-to-face meeting helps expose problem areas.

Let’s go through the advantages and disadvantages of the agile methodology.

Advantages

• Involves an adaptive team that is able to respond to changing requirements

• Face-to-face communication and continuous input from customer representatives leave no room for guesswork

• The end result is high-quality software delivered in the least possible time, and a satisfied customer


Disadvantages

• It becomes difficult to assess the effort required at the beginning of the SDLC in the case of large, complex software deliverables.

• The project can easily get off track if the customer is not clear about the final outcome they want.


CHAPTER 2: Software Quality Testing

2.1 Introduction

“Quality” is defined as the degree to which a component, system, or process meets the specified requirements and/or user/customer needs and expectations. Quality could also mean:

• a product or service free of defects
• fitness for use
• conformance to requirements

In this chapter, you will learn about software quality testing and terminologies.

2.2 What is Software Quality?

In the software engineering industry, software quality refers to:

• Software functional quality: reflects how well a product conforms to a given design, based on the functional requirements or specifications

• Software structural quality: refers to how well a product meets non-functional requirements such as robustness or maintainability

Software quality is broadly classified as Quality Assurance and Quality Control.

Figure: Categories of Software Quality (Quality is divided into Quality Assurance and Quality Control)


2.2.1 Quality Assurance (QA)

Quality Assurance aims at defect prevention in processes. It monitors and evaluates various aspects of projects and ensures that the engineering processes and standards are strictly adhered to throughout the software lifecycle in order to ensure quality. Audits are a key technique used to perform product evaluation and process monitoring.

Key Points

• Identifies weaknesses in processes and improves them

• QA is the responsibility of the entire team

• Helps defect prevention

• Helps establish processes for defect prevention

• Sets up measurement programs to evaluate processes

2.2.2 Quality Control (QC)

Quality Control focuses on the testing of products in order to remove defects and ensure that the product meets performance requirements.

Key Points

• Involves comparison of product quality with applicable standards, and actions taken when non-conformance is detected

• Implements processes for defect removal

• QC is the responsibility of the tester

• Detects and reports defects found in testing


2.3 Standards and Guidelines

Standards are rules or processes set to be followed in an organization for developing a product, whereas guidelines act as suggestions for carrying out a particular activity or task. The Software Engineering Institute (SEI), established in 1984 at Carnegie Mellon University, aims at rapid improvement of the quality of operational software in the mission-critical computer systems of the United States Department of Defense.

Based on the type of industry, various industry standards exist. The standards used for software industries are as follows:

1. Capability Maturity Model (CMM)

2. International Organization for Standardization (ISO)

3. IEEE

4. ANSI

Let’s learn about each of these standards in detail.

2.3.1 Capability Maturity Model (CMM)

The Capability Maturity Model (CMM) is a process improvement approach. It helps organizations improve their performance and can be used to guide process improvement across a project, a division, or an entire organization. CMM describes five evolutionary stages through which an organization matures the way it manages its processes. These stages are:

1. Level 1: Initial In level 1 organizations, processes are disorganized and chaotic. Success usually depends on individual effort and the heroics of people. These organizations often exceed the budget and schedule of their projects.

Key Points

• Tendency to overcommit
• Processes are skipped in times of crisis
• Past successes cannot be repeated
• Success depends on having quality people

2. Level 2: Repeatable In level 2 organizations, project tracking, requirements management, realistic planning, and configuration management processes are established and put in place.


Key Points

• Software development successes are repeatable
• Process discipline helps ensure that existing practices are followed even during tight delivery timelines
• Basic project management processes are established to track cost, schedule, and functionality

3. Level 3: Defined Standard software development and maintenance processes are established and improved over time. These standard processes bring consistency across the organization.

4. Level 4: Managed Using metrics and measurements, management can effectively track productivity, development efforts, processes, and products. In level 4 organizations, quality is consistently high.

5. Level 5: Optimizing In level 5 organizations, processes are constantly improved and new, innovative processes are introduced to better serve the organization's particular needs.

2.3.2 International Organization for Standardization (ISO) The ISO 9001:2000 standard specifies requirements for a quality management system. This ISO standard covers documentation, design, development, production, testing, installation, servicing and other processes.

2.3.3 Institute of Electrical and Electronics Engineers (IEEE) IEEE has created standards related to software quality and testing. These include the IEEE Standard for Software Test Documentation (IEEE/ANSI Standard 829), the IEEE Standard for Software Unit Testing (IEEE/ANSI Standard 1008), the IEEE Standard for Software Quality Assurance Plans (IEEE/ANSI Standard 730), and others.

2.3.4 American National Standards Institute (ANSI) ANSI is the primary industrial standards body in the U.S. It publishes some software-related standards in conjunction with the IEEE and ASQ (American Society for Quality).


CHAPTER 3: Software Test Life Cycle and Verification & Validation

3.1 Software Testing Life Cycle (STLC)

Every company follows its own software testing lifecycle (STLC) to suit its requirements, culture, and available resources. The STLC includes the various stages of testing through which a software product goes.

The STLC comprises the following sequential phases:

1. Planning
2. Analysis
3. Design
4. Construction and verification
5. Testing cycles
6. Final testing and implementation
7. Post implementation

Let's learn about each of these stages.

1. Planning
In the Planning stage, the Project Manager decides what needs to be tested, what the appropriate budget is, and so on. Proper planning at this stage helps to reduce the risk of low-quality software. Major tasks involved in the planning stage are:

• Defining scope of testing
• Identifying approaches
• Defining risks
• Identifying resources
• Defining schedule

2. Analysis
Once the test plan is created, the next phase is the Analysis phase. This phase involves:

• Identifying the types of testing to be carried out at various SDLC stages
• Determining whether testing should be performed manually or automatically
• Creating test case formats, test cases, and a functional validation matrix based on the business requirements
• Identifying which test cases to automate
• Reviewing documentation

In the analysis phase, frequent meetings are held between testing teams, project managers, and development teams to check the progress of the project and ensure the completeness of the test plan created in the planning phase.

3. Design
In the design phase, the following activities are carried out:

• Test plans and test cases are revised.

• Functional validation matrix is revised and finalized.


• Risk assessment criteria are developed.

• Test cases for automation are identified and scripts are written for them.

• Test data is prepared.

• Standards for unit testing and pass/fail criteria are defined.

• Testing schedule is revised and finalized.

• Test environment is prepared.

4. Construction and Verification
This phase aims at the completion of all test plans and test cases and the scripting of the automated test cases. In this phase, test cases are run and defects are reported as and when they are found.

5. Testing cycles In this phase, test cycles need to be completed until test cases are executed without errors or a predefined condition is reached. Activities involved in this phase are:

• Running test cases
• Reporting defects
• Revising test cases
• Adding new test cases
• Fixing defects
• Retesting

6. Final Testing and Implementation In this phase, the following activities are carried out:

• Executing stress and performance test cases
• Completing or updating documentation for testing
• Providing and completing different matrices for testing

In this phase, acceptance, load, and recovery testing is also conducted and the application is verified under production conditions.

7. Post implementation In this phase, the following activities are carried out:

• Evaluating the testing process and documenting lessons learnt from the testing process

• Creating plans to improve the process. Recording of new errors and enhancements is an ongoing process.

• Cleaning up the test environment
• Restoring test machines to baselines

3.2 Verification and Validation Model

Verification and Validation are the two main processes involved in software testing. Let us learn about these processes in detail.

Software quality, correctness, and completeness can be identified by performing adequate testing. In order to make sure that the product is developed as per requirements, we have to initiate testing right from the beginning. The picture below depicts the Verification and Validation model, which shows that the software testing process is carried out in parallel with the development process. The left part of the "V" is called verification; the right part is called validation, which is carried out after a part of the product is developed. The V-V model can also be called the Software Testing Life Cycle (STLC). In the STLC, each development activity is followed by a testing activity.

Instead of moving down in a linear way, the process steps are bent upwards after the coding phase, to form the typical V shape. The V-Model demonstrates the relationships between each phase of the development life cycle and its associated phase of testing.

Figure: V-V Model

3.2.1 Different Stages of SDLC with STLC

Stage 1: Requirement Gathering

Development Activity

In this phase, the requirements of the proposed system are collected by analyzing the needs of the users. However, in many situations, not enough care is taken in establishing correct requirements up front. It is necessary that requirements are established in a systematic way to ensure their accuracy and completeness, but this is not always an easy task.


Testing Activity

To make requirements more accurate and complete, testing needs to be performed right from the requirements phase, in which testers review the requirements. For example, the requirements should not contain ambiguous words like "may" or "may not"; they should be clear and concise.

Stage 2: Functional Specifications

Development Activity

The Functional Specification document describes the features of the software product. It describes the product’s behavior as seen by an external observer, and contains the technical information and data needed for the design. The Functional Specification defines what the functionality will be.

Testing Activity

Testing is performed in order to ensure that the functional specifications are accurate.

Stage 3: Design

Development Activity

During the design process, the software specifications are transformed into design models that describe the details of the data structures, system architecture, interface, and components. At the end of the design process, a design specifications document is produced. This document is composed of the design models that describe the data, architecture, interfaces, and components.

Testing Activity

Each design product is reviewed for quality before moving to the next phase of software development. In order to evaluate the quality of a design (representation), the criteria for a good design should be established. Such a design should exhibit a good architectural structure, be modular, and contain distinct representations of data, architecture, interfaces, and components.

The software design process encourages good design through the application of fundamental design principles, systematic methodology, and reviews.

Stage 4: Code

Development Activity

Using the design document, code is constructed. Programs are written using a conventional programming language or an application generator. Different high-level programming languages such as C, C++, VB, Java, etc. are used for coding. With respect to the type of application, the right programming language is chosen. Programming tools such as compilers, interpreters, and debuggers are used to generate the code.


Testing Activity

Code review is done to find and fix defects that were overlooked in the initial development phase and to improve the overall quality of the code. Online software repositories, like anonymous CVS, allow groups of individuals to collaboratively review code to improve software quality and security. Code review is a process of verifying the source code. Code reviews can often find and remove common security vulnerabilities such as format string attacks, race conditions, and buffer overflows, thereby improving software security.
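To illustrate the kind of defect a code review typically catches, here is a small sketch in Python (an illustration, not part of the original guide). The first function contains a check-then-act race condition on a file, one of the vulnerability classes mentioned above; the second shows the fix a reviewer would suggest.

import os
import tempfile

# Before review: a time-of-check/time-of-use (TOCTOU) race condition.
# Another process may create `path` between the existence check and the open call.
def save_report_unsafe(path, data):
    if not os.path.exists(path):          # check
        with open(path, "w") as handle:   # use: race window between check and use
            handle.write(data)

# After review: create the file atomically and fail if it already exists,
# which removes the race window the reviewer spotted.
def save_report_safe(path, data):
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_EXCL)
    with os.fdopen(fd, "w") as handle:
        handle.write(data)

if __name__ == "__main__":
    target = os.path.join(tempfile.mkdtemp(), "report.txt")
    save_report_safe(target, "quarterly results")
    print("wrote", target)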

Stage 5: Building Software

Development Activity

This phase involves building the different software units (components) and integrating them one by one to build a single software system.

Testing Activity

a. Unit Testing A unit test is a validation procedure that checks the working of the smallest module of source code. Once the modules are ready, individual components should be tested to verify that the units function as per the specifications. Test cases are written for all functions and methods to identify and fix problems faster. For testing units, dummy objects such as stubs and drivers are written; this helps in testing each unit separately even when not all of the code is written. Usually a developer uses this method to review his or her own code.
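As a minimal sketch (an illustration, not part of the original guide), the unit test below uses Python's unittest module. The unit under test, a hypothetical calculate_discount function, depends on a pricing service; a stub stands in for that dependency so the unit can be tested in isolation before the real service exists.

import unittest

# Unit under test (hypothetical example).
def calculate_discount(customer_type, amount, pricing_service):
    # Return the discounted amount; the pricing service supplies the rate.
    if amount < 0:
        raise ValueError("amount must be non-negative")
    rate = pricing_service.discount_rate(customer_type)
    return round(amount * (1 - rate), 2)

# Stub: a dummy object that replaces the real pricing service.
class PricingServiceStub:
    def discount_rate(self, customer_type):
        # Hard-coded answers so the unit can be tested in isolation.
        return 0.10 if customer_type == "gold" else 0.0

# Unit tests: one test method per behaviour of interest.
class CalculateDiscountTest(unittest.TestCase):
    def setUp(self):
        self.stub = PricingServiceStub()

    def test_gold_customer_gets_ten_percent_off(self):
        self.assertEqual(calculate_discount("gold", 100.0, self.stub), 90.0)

    def test_regular_customer_pays_full_price(self):
        self.assertEqual(calculate_discount("regular", 100.0, self.stub), 100.0)

    def test_negative_amount_is_rejected(self):
        with self.assertRaises(ValueError):
            calculate_discount("gold", -5.0, self.stub)

if __name__ == "__main__":
    unittest.main()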

b. Integration Testing Integration testing follows unit testing and is done before system testing. Individual software modules are combined and tested as a group under integration testing. The purpose is to validate functionality, performance, and reliability requirements. Test cases are constructed to test all components and their interfaces and to confirm that they are working correctly. Integration testing also covers inter-process communication and shared data areas.

Stage 6: Building System

Development Activity

After the software has been built, we have the whole system, considering all the non-functional requirements such as installation procedures, configuration, etc.

Testing Activity

a. System Testing Testing the complete integrated system to confirm that it complies with the requirement specifications is called System Testing. Under System Testing, the entire system is tested against its Functional Requirement Specifications (FRS) and/or System Requirement Specification (SRS), as well as the non-functional requirements. System Testing is crucial. Testers need to test from the users' perspective and need to be more creative.

b. Acceptance Testing Also called User Acceptance Testing (UAT), this is one of the final stages of a project and often occurs before a customer accepts a new system. It is a process of obtaining confirmation from the owner of the object under test, through trial or review, that the modification or addition meets the mutually agreed-upon requirements. Users of the system perform these tests according to their User Requirements Specification, to which the system should conform. There are two stages of acceptance testing, Alpha and Beta.

Now the whole product has been developed, the required level of quality has been achieved, and the software is ready to be released for customers.

3.2.2 Verification

Verification ensures that the product is built or developed in accordance with the requirements and design specifications given by the end user.

Verification also ensures that the data gathered is used in the right place and in the right way. Verification happens at the beginning of the software testing lifecycle. This process is used to exhibit consistency, correctness, completeness of the software at every stage as well as in between the different stages of the lifecycle. In the verification phase, documents related to software, plans, code, specifications, etc. are reviewed.

Verification Methods There are mainly three methods of verification. They are as follows:

1. Peer Reviews
2. Walkthroughs
3. Inspections

1. Peer Reviews

Peer review is the review of products, performed by peers during product development, to identify defects for removal and to recommend other changes that are needed. It is an informal way of verification. Peer reviews are also called "buddy checks".

2. Walkthroughs Walkthroughs are semi-formal meetings led by a presenter who presents the documents. The purpose of a walkthrough is to find potential bugs; walkthroughs are also used for knowledge sharing and communication.


3. Inspections Inspections are formal meetings attended by authors and participants who come prepared with their own task. The goals of these meetings are to communicate important product information and detect defects by verifying the software product.

3.2.3 Validation Validation checks the product design to ensure that the product is right for its intended use.

Unlike verification, the validation process happens in the later part of the software testing cycle. It is in this process that the actual testing of the software takes place. Validation determines the correctness of the product in accordance with the user requirements.

Validation Techniques
The two main techniques of the validation process are:

1. White box testing

2. Black box testing

1. White box testing

White box testing is a software testing approach that uses the inner structural and logical properties of the program for verification and for deriving test data. White box testing is also called glass box, structural, open box, transparent box, or clear box testing. For white box testing, the tester needs to have knowledge of the code or the internal program design. White box testing also requires the tester to look into the code and find out which unit/statement of the code is malfunctioning.

2. Black box testing

Black box testing is a validation strategy that does not need any knowledge of the internal design or the code. Black box testing is also called opaque box, functional/behavioral, or closed box testing.

The main focus of this testing is on testing for requirements and functionality of the software product or application. In this approach, black-box tests are derived from functional design specifications against which testers check the actual behavior of the software.
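To make the distinction concrete, here is a small illustrative sketch (not from the original guide) for a hypothetical classify_triangle function. The black-box cases are derived purely from the functional specification (inputs and expected outputs), while the extra white-box case is chosen by reading the code so that every branch is exercised.

def classify_triangle(a, b, c):
    # Unit under test: classify a triangle by its side lengths.
    if a <= 0 or b <= 0 or c <= 0:
        return "invalid"
    if a + b <= c or b + c <= a or a + c <= b:
        return "invalid"
    if a == b == c:
        return "equilateral"
    if a == b or b == c or a == c:
        return "isosceles"
    return "scalene"

# Black-box tests: derived only from the specification, with no knowledge
# of how classify_triangle is implemented.
black_box_cases = [
    ((3, 3, 3), "equilateral"),
    ((3, 3, 5), "isosceles"),
    ((3, 4, 5), "scalene"),
    ((0, 4, 5), "invalid"),   # boundary: non-positive side
]

# White-box addition: chosen by inspecting the code so that the
# triangle-inequality branch is also executed.
white_box_cases = black_box_cases + [((1, 2, 3), "invalid")]

for args, expected in white_box_cases:
    actual = classify_triangle(*args)
    assert actual == expected, f"{args}: expected {expected}, got {actual}"
print("All black-box and white-box cases passed")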


The verification and validation processes are summarized below.

Verification:
• Focus is on the "process": "Am I building the product right?"
• It is a low-level activity.
• It is performed during development on key artifacts, through walkthroughs, reviews and inspections, mentor feedback, training, checklists, and standards.
• It asks: "Am I accessing the data right (in the right place, in the right way)?"
• It verifies the consistency, completeness, and correctness of the software at each stage, and between the stages, of the development life cycle.

Validation:
• Focus is on the "product": "Am I building the right product?"
• It is a high-level activity.
• It is performed after a product is produced, against established criteria, ensuring that the product integrates correctly into its environment.
• It asks: "Am I accessing the right data (in terms of the data required to satisfy the requirement)?"
• It validates the correctness of the final software product, produced by a development project, with respect to the user needs and requirements.

Advantages of V-V Model

• Simple and easy to use

• Each phase has specific deliverables

• Chances of success are high since the test plans are developed in the initial stage of development lifecycle

• Works well for small projects where requirements are easily understood

Disadvantages of V-V Model:

• Less flexible and adjusting scope is difficult and expensive

• Software product is developed during the implementation phase, so no early prototypes of the software are produced

• Very rigid like the waterfall model


CHAPTER 4A: Validation Activity – Low-Level Testing

The validation process in the software development stage is carried out at two levels: low level and high level.

In this section, we will learn about low-level testing methods. Low-level testing is broadly classified into:

• Unit Testing
• Integration Testing

Unit Testing

Unit testing involves validation of individual units of source code to ensure that they are working properly. A unit is the smallest testable part of an application.

The main purpose of unit testing is to take the smallest piece of testable software in the application, isolate it from the remainder of the code, and determine whether it behaves as per the requirements. Each unit is tested separately before being integrated into modules, where the interfaces between modules are tested. Unit testing identifies a large number of defects. Unit testing requires knowledge of the internal design of the code, and it is generally done by developers.

Integration Testing

Integration testing is the process of combining and testing multiple components in a group. This testing is performed after unit testing and before system testing.

Integration testing detects interface errors and ensures that the modules or components operate properly when combined together. Integration testing is done by developers or by the QA team.

Integration testing is of two types:

• Non-incremental
• Incremental


Figure: Types of Integration Testing (Incremental: Top-Down with DFS or BFS, Bottom-Up, and Sandwich; Non-Incremental)

Non-incremental Testing

In this approach, all the developed modules are coupled together to form a complete software system, which is then used for integration testing. This is also called Big Bang integration. In this method, debugging is difficult since an error can be associated with any component.

Incremental Testing

In this approach, modules are integrated in small increments. It therefore becomes easier to isolate errors, and interfaces are more likely to be tested completely. Incremental testing is further classified into top-down integration, bottom-up integration, and sandwich integration.

a) Top-Down Integration
In this method, modules are integrated in small increments in a downward direction, starting from the top, i.e. with the main module, and proceeding sequentially to the related modules at the bottom. Top-down integration is further classified into the depth-first and breadth-first approaches.

o Depth-first search

The depth-first approach integrates the components vertically downwards, i.e. depth-wise along a control path of the program. For example, if we select the left-hand path of the structure shown below, components U1, U2, and U4 are integrated first:

DFS = {[(U1 + U2) + U4] + U5} + U3


Figure: Top-down Integration (a module hierarchy with U1 at the top, U2 and U3 below U1, and U4 and U5 below U2)

o Breadth-first search
Breadth-first integration incorporates all components directly subordinate at each level, moving across the structure horizontally. For example, considering components U1, U2, and U3:

BFS = {[(U1) + (U2 + U3)] + U4 + U5}

Advantages of top-down integration
The functionality of the main module is tested first. This helps in verifying major control or decision points early in the testing process.
Disadvantages of top-down integration
Stubs are required when performing top-down integration testing, and developing stubs is generally very difficult.

b) Bottom-Up Integration
In this approach, the lowest-level components are tested first, and testing then moves upwards to the higher-level components. Bottom-up integration testing begins with the components at the lowest levels of the program structure. All the bottom or low-level modules, procedures, or functions are integrated and then tested. After the integration testing of the lower-level integrated modules, the next level of modules is formed and used for integration testing. This approach is best used only when all or most of the modules at the same development level are ready. It helps to determine the level of software developed and makes it easier to report testing progress as a percentage.
Advantages of bottom-up integration
The drivers required are much easier to develop (a brief sketch of stubs and drivers follows the sandwich approach below).


Disadvantages of bottom-up integration
The main module's functionality is tested at the end, so major control and decision problems are identified later in the testing process.

c) Sandwich Integration
In this approach, top-down testing and bottom-up testing are combined. Both top-down and bottom-up integration are started simultaneously, and the testing is built up from both sides. It needs a big team.
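The sketch below (an illustration, not from the original guide) shows the two kinds of dummy objects these strategies rely on: in top-down integration a stub stands in for a lower-level module that is not ready yet, while in bottom-up integration a driver is a small harness that exercises a lower-level module directly.

# Lower-level module (imagine it is still being developed).
def tax_module(amount):
    return amount * 0.18

# Higher-level module that depends on the tax component.
def billing_module(amount, tax_fn):
    return amount + tax_fn(amount)

# Top-down integration: test billing_module first, replacing the unfinished
# tax module with a STUB that returns a canned value.
def tax_stub(amount):
    return 10.0

assert billing_module(100.0, tax_stub) == 110.0
print("Top-down step passed: billing_module works against the tax stub")

# Bottom-up integration: test tax_module first using a DRIVER, i.e. a small
# harness that calls the lower-level module and checks its output.
def tax_driver():
    assert abs(tax_module(100.0) - 18.0) < 1e-9
    print("Bottom-up step passed: tax_module verified by the driver")

tax_driver()

# Once both sides are trusted, the real modules are integrated and retested.
assert abs(billing_module(100.0, tax_module) - 118.0) < 1e-9
print("Integrated system passed")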


CHAPTER 4B: Validation Activity – High-Level Testing

High-level testing is broadly classified into:

1. Function Testing
2. System Testing
3. Acceptance Testing

Function testing is a type of high-level testing based on black box testing; it creates its test cases from the specifications of the software component under test. Functions are tested by feeding them input and then examining the output. Function testing is used to detect discrepancies between a program's functional specification and its actual behavior. It is carried out after completing unit testing and integration testing, and can be conducted in parallel with system testing. However, it is advisable to begin system testing only when function testing has demonstrated some predefined level of reliability, usually after about 40% of function testing is complete.

Functional testing differs from system testing in that functional testing validates a program by checking it against the functional design specifications, while system testing validates a program by checking it against the user or system requirements.

4B.1 Objectives

The goal of function testing is to verify the actual behavior of the software or application against the functional design specifications provided by customers.

Function testing is performed before the product is made available to customers. It can begin whenever the product has sufficient functionality to execute some of the tests, or after unit and integration testing have been completed.

Function testing is the process of attempting to detect discrepancies between a program’s functional specification and its actual behavior. When a discrepancy is detected, either the program or the specification is incorrect. All black-box methods are applicable to function based testing.

4B.2 Steps of Function Testing

1. Decompose and analyze the functional design specification.
2. Identify the functions that the software is expected to perform.
3. Create input data based on the function's specifications.
4. Determine output based on the function's specifications.
5. Develop functional test cases.
6. Execute the test cases.
7. Compare expected and actual results.
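As a hedged illustration of steps 3 through 7 (not part of the original guide), the sketch below derives functional test cases for a hypothetical login function purely from its specification, namely that it returns True only for a registered username with the matching password, and then executes them and compares expected and actual results.

# Hypothetical function under test, assumed to implement the specification above.
REGISTERED_USERS = {"alice": "s3cret", "bob": "hunter2"}

def login(username, password):
    return REGISTERED_USERS.get(username) == password

# Functional test cases derived from the specification (steps 3 to 5):
# each case pairs input data with the output the specification predicts.
test_cases = [
    {"input": ("alice", "s3cret"), "expected": True},    # valid credentials
    {"input": ("alice", "wrong"),  "expected": False},   # wrong password
    {"input": ("carol", "s3cret"), "expected": False},   # unregistered user
    {"input": ("", ""),            "expected": False},   # empty input
]

# Execute the test cases and compare expected vs. actual results (steps 6 and 7).
for case in test_cases:
    actual = login(*case["input"])
    status = "PASS" if actual == case["expected"] else "FAIL"
    print(status, case["input"], "->", actual, "(expected", case["expected"], ")")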

4B.3 Summary

Function Testing:

1. Attempts to detect discrepancies between a program’s functional specification and its actual behavior.

2. Includes positive and negative scenarios, i.e. valid inputs and invalid inputs.
3. Ignores the internal mechanism or structure of a system or component and focuses on the output generated in response to selected inputs and execution conditions.
4. Evaluates the compliance of a system or component with the specified functional specification and the corresponding predicted results.


CHAPTER 5: Types of System Testing

5.1 Introduction

‘System Testing’ is the next level of testing and one of the most difficult testing activities. It focuses on testing the system as a whole. Once the components are integrated, the system needs to be rigorously tested to ensure that it meets the quality standards. System testing verifies software operation from the perspective of the end user, with different configurations/setups. It builds on the previous levels of testing, namely unit testing and integration testing, and can be conducted in parallel with function testing.

5.1.1 Prerequisites for System Testing

The prerequisites for System Testing are:

• All the components should have been successfully unit tested.
• All the components should have been successfully integrated and integration testing must have been performed.
• An environment closely resembling the production environment should be created.

5.1.2 Steps of System Testing

The major steps of system testing are as follows:

1. Create a System Test Plan by decomposing and analyzing the SRS.
2. Develop the requirements test cases.
3. Carefully build the data used as input for system testing.
4. If applicable, create scripts to:
   a) build the environment, and
   b) automate execution of test cases.
5. Execute the test cases.
6. Fix bugs, if any, and re-test the code.
7. Repeat the test cycle as necessary.

5.1.3 Types of System Testing

1. Usability Testing
2. Performance Testing
3. Load Testing
4. Stress Testing
5. Security Testing
6. Configuration Testing
7. Compatibility Testing
8. Installability Testing
9. Recovery Testing
10. Availability Testing
11. Volume Testing
12. Accessibility Testing

5.2 Usability Testing

Usability testing is a technique for ensuring that the intended users of a system can carry out the intended tasks efficiently, effectively and satisfactorily. It is carried out pre-release so that any significant issues identified can be addressed. Usability testing can be carried out at various stages of the design process. In the early stages, however, techniques such as walkthroughs are often more appropriate. System usability testing is the system testing of an integrated, black box application against its usability requirements. The system usability test is conducted to observe people using the product to discover errors and areas of improvement. Usability testing is a black-box testing technique.

Usability testing is performed to:

• Identify usability defects involving the application's human interface, such as:
  o Difficulty of orientation and navigation (e.g., navigation defects such as broken links and anchors within a website)
  o Efficiency of interaction (based on user task analysis)
  o Information consistency and presentation
  o Appropriate use of language and metaphors
  o Conformance to the digital brand description document and website design guidelines
  o Programming defects (e.g., incorrectly functioning tab key, accelerator keys, and mouse actions)
• Validate the application by determining if it fulfills its quantitative and qualitative usability requirements concerning ease of:
  o Installation by the environments team
  o Usage by the user organization
  o Operations by the operations organization
• Determine if the application's human interfaces should be iterated to make them more usable.
• Place more emphasis on the presentation of the product rather than its functionality.
• Report these failures to the development teams so that the associated defects can be fixed.
• Help determine the extent to which the application is ready for launch.
• Provide input to the defect trend analysis effort.

5.2.1 What Is Usability?

Usability is how easily users can navigate from one page to another or from one menu to another. Usability is a combination of factors that influence the user's experience with a product or system. Usability testing is a methodical evaluation of the graphical user interface (GUI) according to usability criteria.

Usability criteria include:

• Efficiency of use – Once a user is experienced with the system, how much time will it require to accomplish key tasks?

• Ease of learning – How fast can a user learn to use a system that he has never seen before, in order to accomplish basic tasks?

• Memorability – When the user approaches the system the next time, will he/she remember enough to use it effectively?

• Subjective satisfaction – How does the user react to the system? How does he/she feel about using it?

• Error frequency and severity – How frequent are errors in the system? How severe are they? How do users recover from errors?

5.2.2 Purpose of Usability Testing

A usability test establishes the ease of use and effectiveness of a product using standard usability test practices. It also identifies usability problems with the product and helps establish solutions for those problems. Once those solutions are implemented, the product is easier to use, requires less support and should be better received in the marketplace. When clients want to determine how well target users can understand and use their software or hardware product, we recommend usability testing of the product with target market users.


5.2.3 Methods of Usability Testing

• By Onsite Observation: Conducted on-site. On-site observation enables the study of users working on the system in their typical work environment. This is usually done when the system or environment is too complicated to be replicated in a laboratory. On-site observations might also be used to study users in their real environment.

The advantage of this type of testing is that it gives users a less formal feeling regarding the test and enables a relatively long observation period. The informal setting helps collect information from a real environment and not only from preset scenarios.

• By Laboratory Experiments: The usability test may be performed on a real system, on a paper prototype, or on a demo (e.g., Power Point) that incorporates only the elements of the system that are to be tested. Testing is performed in a controlled atmosphere. Users are introduced to the system and are required to perform several key tasks according to pre-set scenarios. User activities are recorded using two cameras – one that records on-screen activities and the second that records the user response and expressions. In addition, usability experts monitoring the usability test take notes of any item of interest.

5.2.4 Summary

• The goal of usability testing is to adapt software to meet users' actual work styles, rather than forcing users to adapt to a new work style.
• Usability testing involves having users work with the product and observing their responses to it.
• Unlike beta testing, which also involves users, it should be done as early as possible in the development cycle.
• Usability testing is the process of attempting to identify discrepancies between the user interface of a product and the human engineering requirements of its potential users.
• The real customer is involved as early as possible, even at the stage where only screens drawn on paper are available.
• Usability testing ensures that the application is easy to work with, limits keystrokes, and is easy to understand. The best way to perform this testing is to bring in experienced, intermediate and novice users and solicit their input on the usability of the application.
• Usability testing can be done numerous times during the life cycle.


5.3 Performance Testing

Performance testing is done to verify all the performance-related aspects of the application. The aim of performance testing is to find inefficiencies and bottlenecks with regard to application performance and enable them to be identified, analyzed, fixed and prevented in the future. Performance testing is the system testing of an integrated, black-box application (or partial application) against its performance requirements under normal operating circumstances.

Software performance testing is used to determine the speed or effectiveness of a computer, network, software program or device. This process can involve quantitative tests done in a lab, such as measuring the response time or the number of MIPS (millions of instructions per second) at which a system functions. Qualitative attributes such as reliability, scalability and interoperability may also be evaluated. Performance testing is often done in conjunction with stress testing.

Performance testing is conducted to:

• Validate the system.
• Cause failures relating to performance requirements:
  o Response time (the average and the maximum application response times).
  o Throughput (the maximum transaction rates that the application can handle).
  o Latency (the average and maximum time to complete a system operation).
  o Capacity (the maximum number of objects the application/databases can handle).
• Track and report these failures to development teams so that the associated defects can be fixed.
• Reduce hardware costs by providing information allowing systems engineers to:
  o Identify the minimum hardware necessary to meet performance requirements.
  o Tune the application for maximum performance by identifying the optimal system configuration (e.g., by repeating the test using different configurations).
• Provide information that will assist in performance tuning under various workload conditions, hardware configurations, and database sizes (e.g., by helping identify performance bottlenecks).

5.3.1 What Is Performance Testing?

Performance testing ensures that the application responds within the time limit set by the user.


5.3.2 Purpose of Performance Testing

Performance testing is performed to determine how fast some aspect of a system performs under a particular workload. The purpose of performance testing is to measure and evaluate response times, transaction rates, and other time-sensitive requirements of an application in order to verify that the performance requirements have been achieved.

Examples include response times for on-line processing, processing times for batch work, transaction throughput rates (number of transactions in a predetermined period), etc.
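As a minimal sketch (not from the guide) of such a measurement, the fragment below times a hypothetical process_transaction() routine and derives an average response time and a throughput rate that can be compared against the stated performance requirements; clock() measures processor time, which is adequate for this illustration.

#include <stdio.h>
#include <time.h>

/* Hypothetical operation under test; the busy loop stands in for real work. */
static void process_transaction(void)
{
    volatile long sum = 0;
    for (long i = 0; i < 100000; i++)
        sum += i;
}

int main(void)
{
    const int n = 1000;                       /* number of timed transactions */
    clock_t start = clock();
    for (int i = 0; i < n; i++)
        process_transaction();
    double elapsed = (double)(clock() - start) / CLOCKS_PER_SEC;

    /* Compare the measured figures against the performance requirements. */
    printf("total: %.3f s  avg response: %.6f s  throughput: %.1f tx/s\n",
           elapsed, elapsed / n, elapsed > 0 ? n / elapsed : 0.0);
    return 0;
}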

5.3.3 Benefits of Performance Testing

• Helps improve customer satisfaction by providing customers with a faster, more reliable product.
• Helps identify and fix bottlenecks in an application before rolling it out to customers.

5.3.4 Summary

Performance testing determines whether the program meets its performance requirements. Efficiencies in performance testing are realized through extensive experience, optimization of processes, and optimal selection of tools.

5.4 Load Testing

Load Tests are end-to-end performance tests under anticipated production load. Load testing is the process of exercising the system under test by feeding it the largest tasks it can operate with. It is the process of putting demand on a system or device and measuring its response. Load testing is sometimes called volume testing, or longevity/endurance testing. Load testing is done to expose bugs that do not surface in cursory testing, such as memory management bugs, memory leaks, buffer overflows, etc. and to ensure that the application meets the performance baseline established during performance testing. This is done by running regression tests against the application at a specified maximum load.

Load testing is done to:

• Cause failures concerning the load requirements, which helps identify defects that are not efficiently found during unit and integration testing.
• Partially validate the application (i.e., determine whether it fulfills its scalability requirements, for example when the number of users increases, and whether its distribution and load-balancing mechanisms work).

• Determine if the application will support typical production load conditions.


• Identify the point at which the load becomes so great that the application fails to meet performance requirements.

• Report these failures to the development teams so that the associated defects can be fixed.

• Locate performance bottlenecks including those in I/O, CPU, network, and database.

5.4.1 What Is Load Testing?

Load testing is subjecting your system to a statistically representative load. Load testing is a non-functional form of system testing. Load Runner and Rational Robot are the front runners for this type of testing. The application is tested against heavy loads, such as testing a Web site under a range of loads to determine at what point the system's response time degrades or fails.

5.4.2 Why Is Load Testing Important?

• It measures and monitors the performance of an e-business infrastructure. Watch how the system handles (or fails to handle) the load of thousands of concurrent users hitting your site before deploying and launching it for the entire world to visit.

• It increases uptime and availability of mission-critical Internet systems, by spotting bottlenecks in the systems under large user stress scenarios before they happen in a production environment.

• It protects IT investments by predicting scalability and performance. IT projects are expensive. The hardware, the staffing, the consultants, the bandwidth, and more add up quickly. Avoid wasting money on expensive IT resources and ensure that it will all scale with load testing.

• It avoids project failures by predicting site behavior under large user loads. Before uploading the site, one has to visualize the site behavior with a large number of users and test high-load scenarios. Take precautions to avoid such scenarios.

5.5 Stress Testing

Stress testing tries to break the system under test by overwhelming its resources or by taking resources away from it (in which case it is sometimes called negative testing). The main purpose behind this is to make sure that the system fails and recovers gracefully – this quality is known as recoverability.


Where performance testing demands a controlled environment and repeatable measurements, stress testing joyfully induces chaos and unpredictability.

Stress testing is performed to:

• Partially validate the application (i.e., to determine if it fulfills its scalability requirements).

• Determine how an application degrades and eventually fails, as conditions become extreme. For example, stress testing could involve an extreme number of simultaneous users, extreme numbers of transactions, queries that return the entire contents of a database, queries with an extreme number of restrictions, or an entry at the maximum amount of data in a field.

• Report these failures to the development teams so that the associated defects can be fixed.

• Determine if the application will support “worst case” production load conditions.
• Provide data that will assist systems engineers in making intelligent decisions regarding future scaling needs.
• Help determine the extent to which the application is ready for launch.
• Provide input to the defect trend analysis effort.

5.5.1 What Is Stress Testing?

Stress testing is done by applying load to the application under test beyond the specified limits. Subjecting the system to extreme pressures in a short time span is stress testing.

5.5.2 What Is the Purpose of Stress Testing?

Stress testing helps determine, for example, the maximum number of requests a Web application can handle in a specific period of time, and at what point the application will overload and break down. The test is designed to determine how heavy a load the Web application can handle. A huge load is generated as quickly as possible in order to stress the application to its limit. The time between transactions is minimized in order to intensify the load on the application, and the time the users would need for interacting with their Web browsers is ignored.

For example, the simultaneous log-on of 1,000 users to a particular website.
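A minimal sketch (not from the guide) of such a scenario: a hypothetical log_on() stub is hit by 1,000 POSIX threads at once and the failure count is reported. A real stress test would drive the deployed application with a load tool, but the structure is the same.

#include <pthread.h>
#include <stdio.h>

/* Hypothetical stub standing in for the application's log-on operation. */
static int log_on(int user_id) { (void)user_id; return 0; /* 0 = success */ }

#define USERS 1000
static int failures = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg)
{
    int id = *(int *)arg;
    if (log_on(id) != 0) {               /* count every failed log-on */
        pthread_mutex_lock(&lock);
        failures++;
        pthread_mutex_unlock(&lock);
    }
    return NULL;
}

int main(void)
{
    pthread_t threads[USERS];
    int ids[USERS];

    /* Fire all log-ons as quickly as possible to stress the system. */
    for (int i = 0; i < USERS; i++) {
        ids[i] = i;
        pthread_create(&threads[i], NULL, worker, &ids[i]);
    }
    for (int i = 0; i < USERS; i++)
        pthread_join(threads[i], NULL);

    printf("%d of %d simultaneous log-ons failed\n", failures, USERS);
    return 0;
}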


5.5.3 Summary

• The tester's objective is to force the system to “break down” under the stress of extreme conditions.
• When we perform stress testing on a particular application, the system will fail, but it should fail in a rational manner without corrupting or losing the customer's data. Test your application to the point that it experiences diminished response or breaks down, to determine the application's limitations.

• This testing is conducted to evaluate a system or component at or beyond the limits of its specified requirements to determine the load under which it fails and how.

• Start Stress Testing early to catch subtle bugs that need the original developers to fix basic design flaws that may affect many parts of the system.

• It is to ensure that the application will respond appropriately with many users and activities happening simultaneously.

5.6 Security Testing

Security testing is performed to guarantee that only users with the appropriate authority are able to use the applicable features of the system. Security is a primary concern to avoid any unwanted penetration into the application. Security testing is checking a system, application or its component against its security requirements and the implementation of its security mechanisms. It also verifies the application's failure to meet security-related requirements (black-box testing) and failure to properly implement security mechanisms (white-box/gray-box testing), thereby enabling the underlying defects to be identified, analyzed, fixed, and prevented in the future.

The security testing covers:

• Requirements: Verify the application (i.e., determine if it fulfills its security requirements) with respect to identification, authentication, authorization, content protection, integrity, intrusion detection, privacy, and system maintenance.
• Mechanisms: Determine if the system causes any failures concerning the implementation of its security mechanisms:
  o Encryption and decryption
  o Firewalls
  o Personnel security: passwords, digital signatures, personal background checks
  o Physical security: locked doors, badges, and cameras for identification, authentication, and authorization
• Cause Failures: Cause failures concerning the security requirements that help identify defects that are not efficiently found during other types of testing:
  o The application fails to identify and authenticate a user.
  o The application allows a user to perform an unauthorized function.
  o The application fails to protect its content against unauthorized usage.
  o The application allows the integrity of data or messages to be violated.
  o The application allows undetected intrusion.
  o The application fails to ensure privacy by using an inadequate encryption technique.

• Report Failures: It is necessary to report failures to the development teams so that the associated defects can be fixed.

• Determine Launch Readiness: It helps determine the extent to which the system is ready for launch.

• Project Metrics: It helps provide project status metrics.

• Trend Analysis: It provides input to the defect trend analysis effort.

5.6.1 Purpose of Security Testing

It helps in determining how well a system protects against unauthorized internal or external access and willful damage.

5.6.2 Summary

• It shows whether the system meets its specified security objectives.
• The tester's aim is to demonstrate the system's failure to fulfill the stated security requirements.
  o Beware: it is impossible to prove that a system is impenetrable.
  o The objective is to establish sufficient confidence in security.


5.7 Configuration Testing

Configuration testing checks the operation of the software under test with different types of hardware configurations. It is done to check whether the system can work on machines with different configurations (software with hardware). Computers are built using different peripherals, components and drivers, which are designed by various companies.

5.7.1 Purpose of Configuration Testing

To determine whether the program operates properly when the hardware or software is configured in a required manner.

5.7.2 Summary

• It is the process of checking the operation of the software with various types of hardware. For example, for applications that run on Windows-based PCs used in homes and businesses:
  o PCs: different manufacturers such as Compaq, Dell, Hewlett Packard, IBM and others
  o Components: disk drives, video, sound, modem, and network cards
  o Options and memory
  o Device drivers

5.8 Compatibility Testing

Compatibility testing checks the operation of the software under test with different types of software. Software compatibility testing means checking that your software interacts with and shares information correctly with other software. For example, it checks how well Web pages display on different browser versions. Compatibility testing is used to determine if your software application has issues related to how it functions in conjunction with the operating system and different types of system hardware and software.

5.8.1 Purpose of Compatibility Testing

To evaluate how well software performs on a particular hardware, software, operating system, browser, or network environment.


5.8.2 Summary

• Testing whether the system is compatible with other systems with which it should communicate.
• It is the process of determining the ability of two or more systems to exchange information. In a situation where the developed software replaces an already working program, an investigation should be conducted to assess possible compatibility problems between the new software and other programs or systems.

• It means checking that your software interacts with and shares information correctly with other software. For example: with what other software (operating systems, web browsers, etc.) is your software designed to be compatible?

5.9 Installation Testing

Installability testing is to ensure that all the installation options in the software work properly. Installation testing (in software engineering) can simply be defined as any testing that occurs outside of the development environment. Such testing will frequently occur on the computer system on which the software product will eventually be installed.

5.9.1 Purpose of Installation Testing

It is done to identify the ways in which installation procedures lead to incorrect results. It is also done to ensure that the application or component is easy to install, ensure that time and money are not wasted during the installation process, improve the morale of the engineers who will install the application or component, minimize installation defects, determine whether the installation procedure is documented, and determine whether the methodology for migration from the old system to the new system is documented.

5.9.2 Summary

• Testing installation procedures is a good way to avoid making a bad impression, since installation makes the first impression on the end-user.
• To identify ways in which the installation procedures lead to incorrect results.
• Installation options are:
  o New
  o Upgrade
  o Customized/Complete
  o Under normal and abnormal conditions

• It is the testing concerned with the installation procedures for the system


5.10 Recovery Testing

Recovery testing is to check a system’s ability to recover from failure. It is done to determine whether operations can be continued after a disaster or after integrity of the system has been lost. This involves reverting to a point where the integrity of the system was known and then reprocessing transactions up to the point of failure. It is used where continuity of operations is essential.

5.10.1 Purpose of Recovery Testing

To verify the system's ability to recover from varying degrees of failure.

5.10.2 Summary

To determine whether the system or program meets its requirements for recovery after a failure.

5.11 Availability Testing

Availability testing is done to verify that functionality is available for use by the user whenever the system undergoes any failure. The application is tested for its reliability so that failures, if any, are discovered and removed before deploying the system. Availability tests are conducted to check both the reliability and the availability of an application. Reliability is the degree to which something operates without failure under given conditions during a given time period. The most likely scenarios are tested under normal usage conditions to validate that the application provides the expected service.

It compares the availability percentage to the original service level agreement. Using availability testing, the application is run for a planned period, and failure events collected with repair times. Where reliability testing is about finding defects and reducing the number of failures, availability testing is primarily concerned with measuring and minimizing the actual repair time.

Formula for calculating percentage availability: Availability (%) = (MTBF / (MTBF + MTTR)) × 100, where MTBF is the mean time between failures and MTTR is the mean time to repair.

Notice that as MTTR trends towards zero, the percentage availability approaches 100%; availability testing therefore works to reduce and eliminate downtime.
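As an illustrative calculation using the formula above (the numbers are assumed, not from the guide): if a system runs for an average of 495 hours between failures (MTBF = 495) and takes an average of 5 hours to repair (MTTR = 5), then availability = (495 / (495 + 5)) × 100 = 99%. Halving the repair time to 2.5 hours raises availability to (495 / 497.5) × 100 ≈ 99.5%, which is why reducing MTTR is the primary lever.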


5.12 Volume Testing

Volume testing is done to check the performance of the application when the volume of data being processed in the database is increased. Volume testing, as its name implies, is testing that purposely subjects a system (both hardware and software) to a series of tests where the volume of data being processed is the subject of the test. Such systems could be capturing real-time sales, or performing database updates and/or data retrieval.

Volume testing will seek to verify the physical and logical limits to a system’s capacity and ascertain whether such limits are acceptable to meet the projected capacity of the organization’s business processing.

5.12.1 Summary

• Testing where the system is subjected to large volumes of data.
• Testing designed to challenge a system's ability to manage the maximum amount of data over a period of time. This type of testing also evaluates a system's ability to handle overload situations in an orderly fashion.

5.13 Accessibility Testing

Accessibility Testing is an approach to measuring a product’s ability to be easily customized or modified for the benefit of users with disabilities. Users should be able to change input and output. Accessibility testing is the process of ensuring that a Web application is accessible to people with disabilities. If your Web application is produced for or by a US government agency, accessibility verification is required in order to prevent violation of the federal law, the potential loss of government contracts, and the potential for costly lawsuits. It can help you prevent functionality problems that could occur when people with disabilities try to access your application with adaptive devices such as screen readers, refreshable Braille displays, and alternative input devices.

5.13.1 Purpose of Accessibility Testing

The goal of accessibility testing is to ensure that people with disabilities can access and use the software product as effectively as people without disabilities. It aims to pinpoint problems within Web sites and products that may otherwise prevent users with disabilities from accessing the information they are searching for.

It can help determine the compliance of the product, i.e., how the product complies with legal requirements regarding accessibility, and its user-friendliness and effectiveness for physically challenged users.


5.13.2 Summary

• Enables users with common disabilities to use the application or component.
• Determines the degree to which the user interface of an application enables users with common or specified (e.g., auditory, visual, physical, or cognitive) disabilities to perform their specified tasks. Examples of accessibility requirements include enabling people with auditory disabilities, colorblindness, physical disabilities, or mild cognitive disabilities to interact with and use the application (for instance, verbally).


CHAPTER 6: Acceptance Testing

6.1 Introduction

Acceptance testing is the process of evaluating the product against the current needs of its end users. It is usually done by the end users or customers after the testing group has successfully completed its testing. Acceptance tests really are requirement artifacts, because they describe the criteria by which the customer will determine whether the system meets their needs. It is a type of high-level testing that describes black-box requirements, identified by your project customers, to which your system must conform. It involves operating the software in production mode for a pre-specified period of time.

6.2 Objective

The objectives of acceptance testing are to:

• Determine whether the application satisfies its acceptance criteria.
• Enable the customer organization to determine whether to accept the application.
• Determine if the application is ready for deployment to the full user community.
• Report any failures to the development teams so that the associated defects can be fixed.

6.3 Acceptance Testing

Acceptance testing is further divided into:

• Contractual acceptance testing
• Non-contractual acceptance testing


If the software is developed under contract, the contracting customer performs the acceptance testing. For example, proper messages should be provided for navigation from one part to another for an end-user. If the software is not developed under contract, then acceptance testing will be done in the following two different ways:

• Alpha Testing
• Beta Testing

6.3.1 Alpha Testing

• Alpha testing is usually performed by end users inside the development organization.
• The testing is done in a controlled environment.
• Developers are present.
• Defects found by end users are noted down by the development team and fixed before release.

6.3.2 Beta Testing

• Beta testing is usually performed by end users at the customer's site, i.e. outside the development organization and inside the end users' organization.
• Not a controlled environment.
• Developers will not be present.
• Defects found by end users are reported to the development organization.

Once the acceptance testing is done and the user/client gives clearance, the next step is to release the software. At the time of release, final candidate testing is usually done, which is a last-minute testing. It is also called Golden Candidate testing.


CHAPTER 7: Black Box Testing

7.1 Introduction

• Black Box Testing is a Validation strategy, and not a type of testing.

• The types of testing under this strategy are totally based/focused on the testing for requirements and functionality of the work product/software application.

• It is a testing technique that does not require knowledge of the internal functionality/program structure of the system.

• Black box testing is sometimes also called “Opaque Testing”, “Functional/Behavioral Testing” or “Closed Box Testing”.

• It will not test hidden functions (i.e. functions implemented but not described in the functional design specification) and errors associated with them will not be found in black-box testing.

7.2 Objectives

The objectives of black box testing are to:

• Validate the system to determine if it fulfills its operational requirements.
• Identify the defects that are not efficiently found during unit and integration testing.
• Report these failures to the development teams so that the associated defects can be fixed.
• Help determine the extent to which the system is ready for launch.

Black box testing verifies the actual behavior of the software against its functional requirements, not against the internal program structure or code. That is the reason black box testing is also considered functional testing. This testing technique is also called behavioral testing, opaque box testing, or simply closed box testing. So, black box testing is not normally carried out by the programmer. This testing technique treats the system as a black box or closed box. The tester only knows the formal inputs and projected (expected) results. The tester does not know how the program actually arrives at those results. Hence the tester tests the system based on the functional specifications given to him.


7.3 Advantages of Black Box Testing

• Tests will be done from an end user’s point of view because the end user should finally accept the system.

• Test cases can be designed as soon as the functional specifications are complete.
• Testing helps to identify vagueness and contradictions in the functional specifications.
• Efficient when used on larger systems.
• The tester and the developer are independent of each other.
• The tester can be non-technical.

7.4 Disadvantages of Black Box Testing

• It is difficult to identify all possible valid and invalid inputs in limited testing time. So writing test cases is slow and difficult.

• It is difficult to identify tricky inputs, if the test cases are not developed based on specifications.

• There are chances of repeating tests that have already been done by the programmer.

7.5 Black Box Testing Methods

There are three Black Box Testing methods:

1. Equivalence Partitioning
2. Boundary Value Analysis
3. Error Guessing

7.5.1 Equivalence Partitioning

Equivalence partitioning is a black box testing technique. All the inputs with which we get the same output can be categorized into the same equivalence class. Therefore, the tests are written using test data which represents each equivalence class. It is designed to minimize the number of test cases.



7.5.2 How to Identify Equivalence Classes

Take each input condition described in the specification and derive at least two equivalence classes for it. One class represents the set of cases which satisfy the condition (the valid class) and the other represents cases which do not (the invalid class).

Following are some general guidelines for identifying equivalence classes:

A. Consider a numeric value that is input to the system and must be within a range of values. Identify one valid equivalence class (inputs within the valid range) and two invalid equivalence classes (inputs that are too low and inputs that are too high). For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes (see the sketch after these guidelines):
   • One valid class (-9999 <= QTY <= 9999)
   • One invalid class (QTY < -9999)
   • One invalid class (QTY > 9999)

B. Consider an input that must have a specific value or length; identify one valid class and two invalid classes (inputs that are too short/low and inputs that are too long/high). For example, a 6-digit PIN code would have the following equivalence classes:
   • Valid class (PIN code length = 6 digits)
   • Invalid class (PIN code length > 6 digits)
   • Invalid class (PIN code length < 6 digits)

C. If the requirements state that a particular input item must match one of a set of values and each case will be dealt with in the same way, identify a valid class for values in the set and one invalid class representing values outside the set. For example, if the requirements state that a valid province code is ON, QU or NB, then identify:
   • Valid class: code is one of ON, QU, NB
   • Invalid class: code is not one of ON, QU, NB
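A minimal sketch (not from the guide) of how the QTY classes from guideline A translate into concrete test data: one representative value is chosen from each equivalence class and run against a hypothetical validate_quantity() routine.

#include <stdio.h>

/* Hypothetical routine under test: accepts quantities in -9999..+9999. */
int validate_quantity(int qty)
{
    return (qty >= -9999 && qty <= 9999);   /* 1 = accepted, 0 = rejected */
}

int main(void)
{
    /* One representative value per equivalence class. */
    struct { int qty; int expected; const char *class_name; } tests[] = {
        {    500, 1, "valid class   (-9999 <= QTY <= 9999)" },
        { -12000, 0, "invalid class (QTY < -9999)"          },
        {  15000, 0, "invalid class (QTY > 9999)"           },
    };

    for (int i = 0; i < 3; i++) {
        int actual = validate_quantity(tests[i].qty);
        printf("%-40s qty=%6d  %s\n", tests[i].class_name, tests[i].qty,
               actual == tests[i].expected ? "PASS" : "FAIL");
    }
    return 0;
}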


7.5.3 Disadvantages

• No guidelines for choosing inputs
• Very limited focus
• Doesn't test every input
• Heuristic based
• It is not guaranteed that the system under test treats all members of an equivalence class in the same way

7.5.4 Boundary Value Analysis

Boundary value analysis is a black box testing technique. Using this technique, the boundaries of the input domain are tested. More emphasis is on input-output boundaries, as more errors tend to occur at the boundaries of a given domain. It has been widely recognized that input values at the extreme ends of, and just outside of, input domains tend to cause errors in system functionality. In boundary value analysis, values at and just beyond the boundaries of the input domain are used to generate test cases to ensure proper functionality of the system.

Boundary value analysis complements the technique of equivalence partitioning. Instead of checking any value in the equivalence class, take the values that are at the edge of the domain.

For example, for a system that accepts as input a number between one and ten, boundary value analysis would indicate that test cases should be created for the lower and upper bounds of the input domain (1,10), and values just outside these bounds (0,11) to ensure proper functionality. It is an excellent way to catch common user input errors which can disrupt proper program functionality.
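As a minimal sketch (not from the guide), the boundary values for the one-to-ten example above can be exercised against a hypothetical accept_value() routine; the test data covers both bounds and the values just outside them.

#include <stdio.h>

/* Hypothetical routine under test: accepts numbers between 1 and 10. */
int accept_value(int n)
{
    return (n >= 1 && n <= 10);   /* 1 = accepted, 0 = rejected */
}

int main(void)
{
    /* Boundary value analysis: the bounds themselves and the values just outside. */
    struct { int input; int expected; } tests[] = {
        {  0, 0 },   /* just below the lower bound */
        {  1, 1 },   /* lower bound                */
        { 10, 1 },   /* upper bound                */
        { 11, 0 },   /* just above the upper bound */
    };

    for (int i = 0; i < 4; i++) {
        int actual = accept_value(tests[i].input);
        printf("input %2d: %s\n", tests[i].input,
               actual == tests[i].expected ? "PASS" : "FAIL");
    }
    return 0;
}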

7.5.5 Advantages of Boundary Value Analysis

• Very clear guidelines on determining test cases
• Very small set of test cases generated
• Very good at exposing potential user interface/user input problems

7.5.6 Disadvantages of Boundary Value Analysis

• Does not test dependencies between combinations of inputs
• Does not test all possible inputs

Boundary value analysis and equivalence partitioning are used during the test design phase, and their influence is hard to see in the tests once they’re implemented. Note that any level of testing (unit testing, system testing, etc.) can use any test design methods.


7.5.7 Error Guessing Error guessing is an ad hoc approach and totally depends on the intuition, experience and knowledge of the tester. Error Guessing is more a testing art than a testing science but can be very effective given a tester’s familiarity with the history of the system. Error Guessing involves making an itemized list of the errors expected to occur in a particular area of the system and then designing a set of test cases to check for these expected errors.


CHAPTER 8: Testing Types

8.1 Introduction

There are several other types of testing used in the software industry. Other than validation activities like unit, integration, system and acceptance testing, we have the following types:

• Mutation Testing
• Progressive Testing
• Regression Testing
• Retesting
• Localization Testing
• Internationalization Testing

8.2 Mutation Testing

It is also called Fault Injection Testing or bebugging. In this testing, we intentionally inject faults into the code. Mutation Testing is a fault-based testing technique based on the assumption that a program is well tested if all simple faults are predicted and removed; complex faults are coupled with simple faults and are thus detected by tests that detect simple faults.

Mutation Testing is a process of intentionally adding known faults to a computer program to monitor the rate of detection and removal, and to estimate the number of faults remaining in the program. The formula used is:

FU = FG × (FE / FEG)

where:

FU = number of undetected errors
FG = number of non-seeded errors detected
FE = number of seeded errors
FEG = number of seeded errors detected
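As an illustrative calculation using the formula above (the numbers are assumed, not from the guide): if 20 errors are seeded (FE = 20), 16 of the seeded errors are found by testing (FEG = 16), and 40 non-seeded errors are found (FG = 40), then FU = 40 × (20 / 16) = 50. Estimates of this kind rest on the assumption that seeded and indigenous errors are equally easy to detect.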

8.3 Progressive Testing

Whenever we start any testing activity (unit testing/integration testing/function testing/system testing) for the first time, it is termed as Progressive testing. Most test cases, unless they are truly thrown away, begin as progressive test cases and eventually become regression test cases for the life of the product.


8.4 Regression Testing

Due to the code changes for fixing any bug, it may happen that some other functionality may get affected. To verify the impact on other functionalities, Regression testing is done. Testing a program that has been modified to verify that modifications have not caused unintended effects and still complies with its specified requirements.

For example, a login window is to be tested. The window has OK and Cancel buttons in addition to User Id and Password fields. Build 1 is written for the login window to check the functionality of the OK button. A tester finds a defect in the OK button. It is reported, the developer fixes the defect, and a new build (build 2) is given back to the testing team. Now we perform regression testing to find out whether fixing the defect in the OK button has led to any changes in the Cancel button.

8.5 Retesting

When a defect is detected and fixed, then the software should be retested to confirm that the original defect has been successfully removed. This is called retesting.

8.6 Localization Testing

The process of adapting software to a specific locale, taking into account its language, dialect, local conventions and culture, is called localization. Testing the localized software is called localization testing. Localization is abbreviated as L10N, as there are 10 letters between ‘L’ and ‘N’.

If you decide to localize, you should be familiar with the scope and purpose of localization testing. Localizers translate the product UI and sometimes change some initial settings to adapt the product to a particular local market.

This definitely reduces the "world-readiness" of the application. That is, a globalized application whose UI and documentation are translated into a language spoken in one country will retain its functionality. However, the application will become less usable in countries where that language is not spoken.

Localization testing checks how well the build has been translated into a particular target language. This test is based on the results of globalized testing where the functional support for that particular locale has already been verified. If the product is not globalized enough to support a given language, you probably will not try to localize it into that language in the first place.

You should be aware that pseudo-localization, which was discussed earlier, does not completely eliminate the need for functionality testing of a localized application. When you test for localizability before you localize, the chances of having serious functional problems due to localization are slim. However, you still have to check that the application you're shipping to a particular market really works. Now you can do it in less time and with fewer resources.

8.7 Internationalization Testing

Internationalization is the process of designing and coding a product so it can perform properly when it is modified for use in different languages and locales. Internationalization is the process of designing an application so that it can be adapted to various languages and regions without engineering changes. The term internationalization is often abbreviated as I18N, because there are 18 letters between the first “I” and the last “n”.

Localization refers to the process, on a properly internationalized base product, of translating messages and documentation as well as modifying other locale specific files.

An internationalized program has the following characteristics:

• With the addition of localized data, the same executable can run worldwide.
• Textual elements, such as status messages and GUI component labels, are not hard-coded in the program. Instead, they are stored outside the source code and retrieved dynamically.
• Support for new languages does not require recompilation.
• Culturally dependent data, such as dates and currencies, appear in formats that conform to the end user's region and language.
• It can be localized quickly.
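As a minimal sketch (not from the guide) of the second characteristic above, externalized and dynamically retrieved text, the fragment below assumes GNU gettext is available; the "myapp" domain name and catalog path are hypothetical.

#include <libintl.h>
#include <locale.h>
#include <stdio.h>

#define _(STRING) gettext(STRING)   /* conventional shorthand for lookups */

int main(void)
{
    setlocale(LC_ALL, "");                           /* pick up the user's locale     */
    bindtextdomain("myapp", "/usr/share/locale");    /* hypothetical catalog location */
    textdomain("myapp");

    /* The string is not hard-coded for one language: the translated text is
     * looked up at run time in the message catalog for the current locale,
     * so adding a new language needs no recompilation. */
    printf(_("Welcome\n"));
    return 0;
}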


CHAPTER 9: White Box Testing

9.1 Introduction

White Box Testing (WBT) is a testing strategy that uses the control structure described as part of component level design to derive test cases. White box testing deals with the internal logic and internal structure of the code.

WBT is also called Structural Testing, Glass-Box Testing, Transparent-Box Testing and Clear-Box Testing.

9.2 Objective

The tests written based on the WBT strategy incorporate coverage of the code written: branches, paths, statements, internal logic of the code, etc. WBT needs the tester to look into the code and find out which unit/statement/chunk of the code is malfunctioning. It does not account for errors caused by omission, and all possible code must also be readable.

White Box Testing is a test case design method that uses the control structure of the procedural design to derive test cases. Test cases can be derived that:

• Guarantee that all independent paths within a module have been exercised at least once.

• Exercise all logical decisions on their true and false sides.
• Execute all loops at their boundaries and within their operational bounds.
• Exercise internal data structures to ensure their validity.

9.3 Advantages of WBT

• As knowledge of internal coding structure is a prerequisite, it becomes very easy to find which type of input/data can help in testing the application effectively.

• It helps in optimizing the code.
• It helps in removing the extra lines of code, which can bring in hidden defects.


9.4 Disadvantages of WBT

• As the knowledge of internal coding structure is a prerequisite, a skilled tester is needed to perform this type of testing, which increases the cost.

• It is nearly impossible to look into every bit of code to find out hidden errors, which may create problems, resulting in failure of the application.

• It fails to detect missing functions.

9.5 Techniques for White Box Testing

White Box Testing can be done by:

1. Data coverage
2. Code coverage

9.5.1 Data Coverage

Data flow is monitored or examined throughout the program. We can also keep track of data changes during its flow between the modules of the application. For example, a watch window is used to monitor the values of variables and expressions.

9.5.2 Code Coverage

Code coverage analysis (test coverage analysis) is a white-box testing technique. Code coverage analysis is the process of:

• Finding areas of a program not exercised by a set of test cases.
• Creating additional test cases to increase coverage.
• Determining a quantitative measure of code coverage, which is an indirect measure of quality.

An optional aspect of code coverage analysis is identifying redundant test cases that do not increase coverage.

Code coverage can be implemented using basic measures like:

• Statement coverage
• Branch/Decision coverage
• Condition coverage
• Path coverage


1. Statement Coverage

This measure reports whether each executable statement is encountered. It is also known as line coverage, segment coverage and basic block coverage. Faults are evenly distributed through code; therefore, the percentage of executable statements covered reflects the percentage of faults discovered.

• Statement coverage does not report whether loops reach their termination condition – only whether the loop body was executed. With C, C++, and Java, this limitation affects loops that contain break statements.

• Since do-while loops always execute at least once, statement coverage considers them the same rank as non-branching statements.

• Statement coverage is completely insensitive to the logical operators (|| and &&).
• Statement coverage cannot distinguish consecutive switch labels.
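A minimal sketch (not from the guide) of why statement coverage is weaker than decision coverage: with the single test input x = -3, every statement below is executed, yet the decision's false outcome (x >= 0) is never exercised.

/* One test case, x = -3, yields 100% statement coverage of this function,
 * but only 50% decision coverage: the if-condition never evaluates to false. */
int abs_value(int x)
{
    int result = x;
    if (x < 0)
        result = -x;
    return result;
}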

2. Branch/Decision Coverage

This measures whether Boolean expressions tested in control structures (such as the if-statement and while-statement) evaluate to both true and false. The entire Boolean expression is considered one true-or-false predicate regardless of whether it contains logical-and or logical-or operators. Additionally, this measure includes coverage of switch-statement cases, exception handlers, and interrupt handlers. It is also known as branch coverage, all-edges coverage, or basis path coverage.

A disadvantage is that the decision coverage measure ignores branches within Boolean expressions. For example, consider the following C/C++/Java code fragment:

if (a > b && c != 5)
    a = a + b;
else
    a = a - b;

For the above example, the condition combinations involved are:

• (a > b) and (c == 5)
• (a > b) and (c != 5)
• (a <= b) and (c == 5)
• (a <= b) and (c != 5)


Branch coverage fails for such conditions, so there is a need for a coverage measure which covers all the conditions.

For example, consider: if (a < b) then S1 else S2. Branch coverage subsumes statement coverage: it states that test data should be created so that both S1 and S2 are executed and tested.

3. Condition Coverage

Condition testing is a test case design method that exercises the logical conditions contained in a program module. Condition coverage reports the true or false outcome of each Boolean sub-expression, separated by logical-and and logical-or if they occur. Condition coverage measures the sub-expressions independently of each other. This measure is similar to decision coverage but has better sensitivity to the control flow.

For example, consider: if (a < b) then S1 else S2. Condition coverage subsumes branch coverage. In condition coverage, all possible values should be tested for each clause (here 'a' and 'b') to make sure each condition (here, the true and false outcomes of a < b) is exercised.
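As an illustrative test selection for the earlier fragment if (a > b && c != 5) (the values are assumed, not from the guide): test 1 with a = 3, b = 1, c = 7 makes both a > b and c != 5 true; test 2 with a = 3, b = 1, c = 5 makes a > b true and c != 5 false; test 3 with a = 1, b = 3 makes a > b false (with short-circuit evaluation of &&, the second sub-condition is not evaluated in this test). Together these tests give every evaluated sub-condition both a true and a false outcome, which is what condition coverage requires.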

4. Path Coverage

Basis Path Testing is a white box testing technique that enables the test case designer to derive a logical complexity measure of a procedural design and use this measure as a guide for defining a basis set of execution paths. Test cases derived to exercise the basis set are guaranteed to execute every statement in the program at least one time during testing. Path coverage can be calculated by McCabe’s Cyclomatic complexity.

So, we can conclude that:

• 100% statement coverage is not 100% decision coverage.
• Decision coverage includes statement coverage, since exercising every branch must lead to exercising every statement.
• Path coverage includes decision coverage.
• 100% condition coverage is 100% decision coverage and 100% statement coverage.

9.6 Cyclomatic Complexity

Cyclomatic complexity is the most widely used member of a class of static software metrics. Cyclomatic complexity may be considered a broad measure of soundness and confidence for a program. Introduced by Thomas McCabe in 1976, it measures the number of linearly-independent paths through a program module. This measure provides a single ordinal number


that can be compared to the complexity of other programs. Cyclomatic complexity is often referred to simply as program complexity, or as McCabe’s complexity. It is often used in concert with other software metrics. As one of the more widely-accepted software metrics, it is intended to be independent of language and language format.

Cyclomatic complexity has also been extended to encompass the design and structural complexity of a system.

Cyclomatic complexity is used to measure the amount of decision logic in a single software module. It is used for two related purposes in the structured testing methodology. First, it gives the number of recommended tests for software. Second, it is used during all phases of the software lifecycle, beginning with design, to keep software reliable, testable, and manageable. Cyclomatic complexity is based entirely on the structure of software’s control flow graph.

Cyclomatic complexity is used for white box testing. It enables the test case designer to derive the logical complexity of software program, and can be used for defining the basis set of execution paths. Basis set guarantees the execution of every statement in the program, at least once, during testing. Thomas McCabe designed this method in the year 1976.

Cyclomatic complexity gives you the minimum number of test cases you have to design in order to confirm that each and every statement in the program has been executed at least once. One simple notation, the flow graph, is used. The flow graph shows the logical control flow using different notations: each structured construct (e.g., loops, decisions, switch cases) has a corresponding flow graph symbol. Each circle is called a flow graph node; it represents one or more procedural statements. These nodes are connected with each other by edges.


The structured constructs in flow graph form are as given below:

[Figure: flow graph symbols for a Sequence, an If-Else Condition, a While Condition, and a Case Condition]

Some rules to be followed while calculating Cyclomatic complexity:

1. A sequence of process boxes and a decision diamond can map into a single node.
2. An edge must terminate at a node, even if the node does not represent any procedural statements.

• Predicate node: a node that has two or more outgoing edges.

• Bounded region: a region that is totally surrounded by nodes and edges.

Let us take some examples.

EXAMPLE 1

1. main()
2. {
3.     int a, b, c;
4.     printf("Enter First Number:");
5.     scanf("%d", &a);
6.     printf("Enter Second Number:");
7.     scanf("%d", &b);
8.     if (a > b)
9.     {
10.        c = a - b;
11.        printf("The subtraction is %d", c);
12.    }
13.    else
14.    {
15.        c = a + b;
16.        printf("The addition is %d", c);
17.    }
18.    printf("Thank You");
19. }

The flow graph can be drawn as shown below:

In this flow graph, the number of nodes, N = 5, while the number of Edges, E = 5.

By using the formula

i. C.C. = No. of Edges – No. of Nodes + 2

= 5-5 + 2

= 2

*-- (In the above flow graph 8 is the predicate node)

[Flow graph for Example 1, drawn from the code above: nodes 8, 9 to 12, 13 to 17, 18 and 19]


ii. C.C.=No. of predicate Nodes+1

= 1 + 1

= 2

iii. C.C. = No. of Bounded region + 1 = 1 + 1 = 2

Therefore, the Cyclomatic complexity of the program code in the example 1 above is 2.

EXAMPLE 2

1. main()
2. {
3.     int a, b, i;
4.     printf("Enter the Number:");
5.     scanf("%d", &a);
6.     b = 0;
7.     i = 1;
8.     while (i < 6)
9.     {
10.        b = b + i;
11.        i++;
12.    }
13.    printf("The addition is: %d", b);
14.    printf("Thank You");
15. }

The flow graph of this program is:

[Flow graph for Example 2: nodes 8, 9 to 12, and 13]


In this flow graph, the number of Nodes, N = 3, while the number of Edges, E = 3.

By using the formula

i. C.C. = No. of Edges – No. of Nodes + 2 = 3 – 3 + 2 = 2

*-- (In the above flow graph, 8 is the predicate node)

ii. C.C. = No. of predicate Nodes + 1 = 1 + 1 = 2

iii. C.C. = No. of Bounded regions + 1 = 1 + 1 = 2

Therefore, the Cyclomatic complexity of the program code in the example 2 above is 2.

Let us now understand why we should calculate Cyclomatic complexity.

The Cyclomatic complexity number helps us in understanding the complexity of the program. If the Cyclomatic complexity number is large, it means the program is highly complex and there is high risk associated with the program. If the Cyclomatic number is small, it means the program is less complex and there is low risk associated with the program. The table given below will make this concept clearer.

9.6.1 Usage of Cyclomatic Complexity

1. Risk Evaluation: Classification of Cyclomatic complexity and the relative risk of the program.

Cyclomatic Complexity | Risk Evaluation
1 – 10                | A simple program, without much risk
11 – 20               | More complex, moderate risk
21 – 50               | Complex, high-risk program
Greater than 50       | Untestable program (very high risk)

2. Code Development Risk Analysis: While code is under development, it can be measured for complexity to assess its inherent risk.

3. Test Planning: Mathematical analysis of cyclomatic complexity gives the exact number of test cases needed to test every decision point in a program, which can be used in test planning. For example, a large, complex module would require a prohibitive number of test cases; this number can be reduced to a practical size by breaking the module into smaller, less complex sub-modules.
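As a small illustration only (the function name cc_risk is an assumption, not part of the text), the risk classification bands from the table in point 1 above can be encoded directly:

#include <stdio.h>

/* Map a cyclomatic complexity value to the risk band from the table above. */
const char *cc_risk(int cc)
{
    if (cc <= 10) return "simple program, without much risk";
    if (cc <= 20) return "more complex, moderate risk";
    if (cc <= 50) return "complex, high risk program";
    return "untestable program (very high risk)";
}

int main(void)
{
    printf("CC = 2  -> %s\n", cc_risk(2));
    printf("CC = 35 -> %s\n", cc_risk(35));
    return 0;
}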


9.6.2 Advantages of McCabe Cyclomatic Complexity

• Can be used as an ease-of-maintenance metric.
• Used as a quality metric; gives the relative complexity of various designs.
• Can be computed earlier in the life cycle than Halstead's metrics.
• Measures the minimum effort and the best areas of concentration for testing.
• Guides the testing process by limiting program logic during development.
• Easy to apply.

9.6.3 Drawbacks of McCabe Cyclomatic Complexity

• Cyclomatic complexity is a measure of the program's control complexity, not its data complexity.
• The same weight is placed on nested and non-nested loops, although deeply nested conditional structures are harder to understand than non-nested structures.
• It may give a misleading figure for code with many simple comparisons and decision structures; the fan-in/fan-out method would probably be more applicable there, as it can track the data flow.

9.6.4 Limiting Cyclomatic Complexity to 10

There are many good reasons to limit Cyclomatic complexity. Overly complex modules are more prone to error, harder to understand, harder to test, and harder to modify. Deliberately limiting complexity at all stages of software development, for example as a departmental standard, helps avoid the pitfalls associated with high-complexity software. Many organizations have successfully implemented complexity limits as part of their software programs. The precise number to use as a limit, however, remains somewhat controversial. The original limit of 10 as proposed by McCabe has significant supporting evidence, but limits as high as 15 have been used successfully as well. Limits over 10 should be reserved for projects that have several operational advantages over typical projects, for example experienced staff, formal design, a modern programming language, code walkthroughs, and a comprehensive test plan. In other words, an organization can pick a complexity limit greater than 10, but only if it is sure it knows what it is doing and is willing to devote the additional testing effort required by more complex modules.

Somewhat more interesting than the exact complexity limit are the exceptions to that limit. For example, McCabe originally recommended exempting modules consisting of single multi-way decision ("switch" or "case") statements from the complexity limit. The multi-way decision issue has been interpreted in many ways over the years, sometimes with disastrous results.


9.6.5 Measurement of Cyclomatic Complexity

Cyclomatic complexity measurement tools are typically bundled inside commercially-available CASE toolsets. It is usually one of the several metrics offered. Application of complexity measurements requires a small amount of training. The fact that a code module has high cyclomatic complexity does not, by itself, mean that it represents excess risk, or that it can or should be redesigned to make it simpler; more must be known about the specific application.

9.7 How to calculate Statement, Branch/Decision, and Path Coverage for ISTQB Exam purposes

Statement Coverage: In this, the test cases are executed in such a way that every statement of the code is executed at least once.

Branch/Decision Coverage: The test coverage criteria require enough test cases such that each condition in a decision takes on all possible outcomes at least once, and each point of entry to a program or subroutine is invoked at least once. That is, every branch (decision) is taken each way, true and false. It helps in validating all the branches in the code, making sure that no branch leads to abnormal behavior of the application.

Path Coverage: In this, the test cases are executed in such a way that every path is executed at least once. All possible control paths are taken, including all loop paths taken zero, once, and multiple (ideally, the maximum number of) times. In the path coverage technique, the test cases are prepared based on the logical complexity measure of a procedural design. In this type of testing, every statement in the program is guaranteed to be executed at least once. Flow graphs, Cyclomatic complexity and graph metrics are used to arrive at the basis path set.

How to calculate Statement Coverage, Branch Coverage and Path Coverage: draw the flow graph in the following way -

• Nodes represent entries, exits, decisions and each statement of code.
• Edges represent non-branching and branching links between nodes.

For Example:

Read P
Read Q
IF P+Q > 100 THEN
    Print "Large"
ENDIF
IF P > 50 THEN
    Print "P Large"
ENDIF
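For readers who prefer C, a rough equivalent of the pseudocode above might look like the sketch below (the variable names p and q and the printed messages simply mirror the example):

#include <stdio.h>

int main(void)
{
    int p, q;
    scanf("%d", &p);        /* Read P */
    scanf("%d", &q);        /* Read Q */
    if (p + q > 100)        /* first decision */
        printf("Large\n");
    if (p > 50)             /* second decision */
        printf("P Large\n");
    return 0;
}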


Calculate statement coverage, branch coverage and path coverage.

Solution: The flow chart is shown below. [Flow chart with nodes 1 to 5 and edges A to H]

Statement Coverage (SC): To calculate statement coverage, find the smallest number of paths that cover all the nodes. Here, by traversing the path 1A-2C-3D-E-4G-5H, all the nodes are covered. Since a single path covers all the nodes 1, 2, 3, 4 and 5, the statement coverage in this case is 1.

Branch Coverage (BC): To calculate branch coverage, find the minimum number of paths that ensure all the edges are covered. In this case, there is no single path that covers all the edges at one go. By following the path 1A-2C-3D-E-4G-5H, the maximum number of edges (A, C, D, E, G and H) are covered, but edges B and F are left. To cover these edges, we can follow 1A-2B-E-4F. By combining the above two paths, we can ensure traveling through all the edges. Hence branch coverage is 2. The aim is to cover all possible true/false decisions.

Path Coverage (PC): Path coverage ensures covering all the paths from start to end. All possible paths are:

• 1A-2B-E-4F
• 1A-2B-E-4G-5H
• 1A-2C-3D-E-4G-5H
• 1A-2C-3D-E-4F

So path coverage is 4. Thus, for the above example, SC = 1, BC = 2 and PC = 4.

REMEMBER:

• 100% path coverage implies 100% statement coverage.
• 100% branch/decision coverage implies 100% statement coverage.
• 100% path coverage implies 100% branch/decision coverage.
• Branch coverage and decision coverage are the same.


CHAPTER 10: Test Cases

10.1 Introduction

Test cases are test conditions written to detect a bug. The term test case describes a case that tests the validity of a particular condition. Test cases are useful because they establish principles and thereby serve as a precedent for future similar cases.

10.2 Objective

The main objective of writing test cases is to determine whether a requirement is fully satisfied, and to put down the conditions, the steps involved, and the expected result of following those steps in a structured format. In software engineering, "a test case is a set of conditions under which a tester will determine if a requirement upon an application is partially or fully satisfied." It may take many test cases to determine that a requirement is fully satisfied. In order to fully test that all the requirements of an application are met, there must be at least one test case for each requirement, unless a requirement has sub-requirements; in that situation, each sub-requirement must have at least one test case. Some methodologies, like the Rational Unified Process (RUP, an iterative software development process created by the Rational Software Corporation), recommend creating at least two test cases for each requirement: one should perform positive testing of the requirement and the other should perform negative testing.
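As a minimal, hypothetical sketch of one positive and one negative test case for a single requirement (the function divide and its behaviour are assumptions made purely for illustration):

#include <assert.h>
#include <stdio.h>

/* Hypothetical function under test: returns 0 on success, -1 on invalid input. */
int divide(int a, int b, int *result)
{
    if (b == 0)
        return -1;          /* invalid input: division by zero */
    *result = a / b;
    return 0;
}

int main(void)
{
    int r;

    /* Positive test case: valid input satisfies the requirement. */
    assert(divide(10, 2, &r) == 0 && r == 5);

    /* Negative test case: invalid input is rejected gracefully. */
    assert(divide(10, 0, &r) == -1);

    printf("Positive and negative test cases passed.\n");
    return 0;
}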

If the applications are created without formal requirements, then the test cases are written based on the accepted normal operation of programs of a similar class.

What characterizes a formal, written test case is that there is a known input and an expected output, which are worked out before the test is executed. The known input should test a precondition and the expected output should test a postcondition.

Under special circumstances, there could be a need to run the test, produce results, and then have a team of experts evaluate whether the results can be considered a pass. This often happens when determining performance numbers for a new product; the first test is taken as the baseline for subsequent test/product release cycles.

Written test cases include a description of the functionality to be tested, taken from either the requirements or the use cases, and the preparation required to ensure that the test can be conducted.


10.3 Structure of Test Cases

Test case definition consists of three main parts with subsections:

• Introduction/overview contains general information about the test case:
    o Identifier: a unique identifier of the test case for further references, for example while describing a found defect.
    o Test case Author/Creator: the name of the tester or test designer who created the test or is responsible for its development.
    o Version: of the current test case definition.
    o Name: the test case name should be a human-oriented title that allows the purpose and scope of the test case to be understood quickly.
    o Objective: the purpose or a short description of the test, i.e. what functionality it checks.
    o Pre-requisites: the state the software must be in to execute the test.

• Test Case Activity

o Testing environment/configuration contains information about configuration of hardware or software which must be met while executing the test case.

o Initialization describes actions that must be performed before test case execution. For example, we should open some file.

o Finalization describes actions to be done after the test case is performed. For example, if the test case crashes database, the tester should restore it before other test cases will be performed.

o Actions: steps to be done to complete the test.
    o Input data description.

• EXPECTED RESULTS

o Contains description of what the tester should see after all test steps have been completed.

Usually test cases do not contain actual results. They should be described in defect reports or in testing reports.
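Purely as an illustration of the sections listed above, a test case record could be sketched as a simple data structure (the field names are an assumption, not a prescribed format):

/* Illustrative test case record covering the parts described above. */
typedef struct {
    const char *identifier;      /* unique ID, e.g. "TC-001" */
    const char *author;          /* tester or test designer */
    const char *version;         /* version of this test case definition */
    const char *name;            /* short, human-oriented title */
    const char *objective;       /* purpose / functionality checked */
    const char *prerequisites;   /* state the software must be in */
    const char *environment;     /* hardware/software configuration */
    const char *initialization;  /* actions before execution */
    const char *finalization;    /* clean-up actions after execution */
    const char *steps;           /* actions, step by step */
    const char *input_data;      /* input data description */
    const char *expected_result; /* what the tester should see */
} TestCase;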

10.4 Test Case Template

Test Case ID:                       Test Case Name:

Description/Objective: (If necessary, write description text)

Pre-requisites for this test case: (If necessary, write pre-condition text)

Author/Creator:                     Reviewer:
Date of Draft:                      Date of Review:

Step No. | Step Description | Input/Test Data | Expected Result | Actual Result | Status (Pass/Fail) | Defect ID | Remarks

Test Case Status: Pass/Fail

Test Case Format


CHAPTER 11: Test Planning

11.1 Introduction

The ultimate goal of the test planning process is communicating the software test team’s intent, its expectations, and its understanding of the testing that’s to be performed. The Planning process includes scope, approach, resources and schedule of the testing activities. Test planning is a process in which every aspect of testing is considered.

The Test Plan is a by-product of the detailed test planning process; it is the document that records the results of that planning.

11.2 Objectives

• To identify the items that are subject to testing
• To communicate, at a high level, the extent of testing
• To define the roles and responsibilities for test activities
• To provide an accurate estimate of the effort required to complete testing as per the plan
• To define the infrastructure and support required

11.3 IEEE Standard for Software Test Documentation

(ANSI/IEEE Standard 829-1983)

This is a summary of the ANSI/IEEE Standard 829-1983. It describes a test plan as "A document describing the scope, approach, resources, and schedule of intended testing activities. It identifies test items, the features to be tested, the testing tasks, who will do each task, and any risks requiring contingency planning."

This standard specifies the following test plan outline:

1. Test Plan Identifier
• A unique identifier

2. Introduction

• Summary of the items and features to be tested
• Need for and history of each item (optional)
• References to related documents such as project authorization, project plan, QA plan, configuration management plan, relevant policies, relevant standards


• References to lower level test plans

3. Test Items
• Test items and their version
• Characteristics of their transmittal media
• References to related documents such as requirements specification, design specification, user guide, operations guide, installation guide
• References to bug reports related to test items
• Items which are specifically not going to be tested (optional)

4. Features To Be Tested
• All software features and combinations of features to be tested
• References to test-design specifications associated with each feature and combination of features

5. Features Not To Be Tested
• All features and significant combinations of features which will not be tested
• Reasons why these features won't be tested

6. Approach
• Overall approach to testing
• For each major group of features or combinations of features, specify the approach
• Specify major activities, techniques, and tools which are to be used to test the groups
• Specify a minimum degree of comprehensiveness required
• Identify which techniques will be used to judge comprehensiveness
• Specify any additional completion criteria
• Specify techniques which are to be used to trace requirements
• Identify significant constraints on testing, such as test-item availability, testing-resource availability, and deadlines

7. Item Pass/Fail Criteria
• Specify the criteria to be used to determine whether each test item has passed or failed testing

8. Suspension Criteria And Resumption Requirements
• Specify criteria to be used to suspend the testing activity


• Specify testing activities which must be redone when testing is resumed

9. Test Deliverables
• Identify the deliverable documents: test plan, test design specifications, test case specifications, test procedure specifications, test item transmittal reports, test logs, test incident reports, test summary reports
• Identify test input and output data
• Identify test tools (optional)

10. Testing Tasks

• Identify tasks necessary to prepare for and perform testing
• Identify all task interdependencies
• Identify any special skills required

11. Environmental Needs

• Specify necessary and desired properties of the test environment: physical characteristics of the facilities including hardware, communications and system software, the mode of usage (i.e., stand-alone), and any other software or supplies needed
• Specify the level of security required
• Identify special test tools needed
• Identify any other testing needs
• Identify the source for all needs which are not currently available

12. Responsibilities

• Identify groups responsible for managing, designing, preparing, executing, witnessing, checking and resolving

• Identify groups responsible for providing the test items identified in the Test Items section

• Identify groups responsible for providing the environmental needs identified in the Environmental Needs section

13. Staffing And Training Needs
• Specify staffing needs by skill level
• Identify training options for providing necessary skills


14. Schedule
• Specify test milestones
• Specify all item transmittal events
• Estimate time required to do each testing task
• Schedule all testing tasks and test milestones
• For each testing resource, specify its periods of use

15. Risks And Contingencies

• Identify the high-risk assumptions of the test plan
• Specify contingency plans for each

16. Approvals

• Specify names and titles of all persons who must approve the plan
• Provide space for signatures and dates


CHAPTER 12: Configuration Management

12.1 Introduction

Software undergoes changes while it is being built, and those changes need to be controlled effectively. Configuration Management (CM) is a group activity that keeps details of all the changes that take place throughout the process and maintains all versions of builds. Configuration Management can be defined as:

• The process of identifying and defining the configuration items in a system.
• Controlling the release and change of these items throughout the system life cycle.
• Recording and reporting the status of configuration items and change requests.
• Verifying the completeness and correctness of configuration items.

12.2 Objective

Configuration Management keeps track of the versions of the application currently being tested; it reports problems and manages the list of issues and problems found by the testers.

Change control is used to keep track of the problems that need to be corrected in the present release and is also used to keep a list of those problems that will not be fixed in the immediate future.

Problems resulting from poor Configuration Management:

• Can't reproduce a fault reported by a customer.
• Can't roll back to a previous subsystem.
• One change overwrites another.
• An emergency fault fix needs testing, but the tests have been updated to a new software version.
• Which code changes belong to which version?
• Faults which were fixed in an old version reappear.
• "Shouldn't that feature be in this version?"

Configuration Management is an engineering management procedure that includes:

• Configuration identification
• Configuration control
• Configuration status accounting
• Configuration audit


Products for Configuration Management in testing:

• Test plans
• Test designs
• Test cases
    o Test input
    o Test data
    o Test scripts
    o Expected results
    o Actual results
    o Test tools

12.3 Configuration Management Tools

Various CM tools are used to track versions of all components.

1. ClearCase
IBM Rational® ClearCase® provides life cycle management and control of software development assets. With integrated version control, automated workspace management, parallel development support, baseline management, and build and release management, Rational ClearCase provides the capabilities needed to create, update, build, deliver, reuse and maintain business-critical assets.

2. Visual SourceSafe (VSS)
Visual SourceSafe provides true project-level configuration control. SourceSafe also runs on many platforms, so it can be used for a client/server project where coding is being done on a Windows PC using Visual Basic and on a UNIX workstation using C.


CHAPTER 13: Defect Tracking and Defect Life Cycle

13.1 Introduction

The term software bug describes an error, flaw, mistake, failure, or fault in a program or system that produces an incorrect result. A bug can be defined as an error or malfunction in a program's code, or as abnormal behavior of the software. No software exists without bugs; the elimination of bugs from software depends upon the efficiency of the testing done on it. A bug is a specific concern about the quality of the application under test.

Bugs arise from mistakes and errors made by people in either a program's source code or its design, and a few are caused by compilers producing incorrect code.

13.2 Objectives

The main objective of finding a defect is to fix it. A defect goes through various stages, and the objectives of finding a defect are to understand its cause, how to correct it, how frequently it occurs, and the impact and risk associated with it. Other objectives include fixing defects in the product, avoiding the same defects in the future, and correcting defects to improve the quality of the work products.

A defect can be defined as a deviation from the expected result, or the difference between the expected result and the actual result.

Types of computer Bugs are:

• Logic bugs
• Syntax bugs
• Arithmetic bugs
• Resource bugs
• Multi-threading programming bugs
• Performance bugs

13.3 Why Do Faults Occur?

There are various reasons for the occurrence of the faults; it may be due to

• Ambiguous or unclear requirements
• Poor documentation
• Lack of programming skills


• Increased complexity as we are moving from era of 1-tier architecture to 2-tier architecture, multi-tier architecture and now to satellite communication

• Due to increase in work pressure and assigned deadlines

13.4 What Is a Bug Life Cycle?

The duration or time span between the first time a bug is found (status 'New') and the time it is closed successfully (status 'Closed'), rejected, postponed or deferred is called the 'Bug/Error Life Cycle'.

Defect life cycle includes the different stages after a defect is identified.

• New – when a defect is identified
• Open – when the development team validates that it is a bug, the defect is opened
• Assigned – when the development lead assigns a developer to fix the bug
• Fixed – when the developer fixes the detected bug by appropriate code changes
• Retest – when the test lead assigns a tester to verify the fix
• Closed/Reopened – the tester retests the bug and updates its status accordingly

There are seven different life cycles that a bug can pass through:

[Figure: Bug/Defect Life Cycle – states New, Open, Pending Reject (valid/invalid, with reason), Rejected, Assigned, Fixed, Pending Retest, Retest, Closed, Re-Open (problem persists), Postponed (not available / after availability) and Deferred (not important)]


13.4.1 Bug Life Cycle I
1. A tester finds a bug and reports it to the Test Lead.
2. The Test Lead verifies if the bug is valid or not.
3. The Test Lead finds that the bug is not valid and the bug is 'Rejected'.

13.4.2 Bug Life Cycle II
1. A tester finds a bug and reports it to the Test Lead.
2. The Test Lead verifies if the bug is valid or not.
3. The bug is verified and reported to the development team with status as 'New'.
4. The development leader and team verify if it is a valid bug. The bug is invalid and is marked with a status of 'Pending Reject' before passing it back to the testing team.
5. After getting a satisfactory reply from the development side, the test leader marks the bug as 'Rejected'.

13.4.3 Bug Life Cycle III
1. A tester finds a bug and reports it to the Test Lead.
2. The Test Lead verifies if the bug is valid or not.
3. The bug is verified and reported to the development team with status as 'New'.
4. The development leader and team verify if it is a valid bug. The bug is valid, and the development leader opens the bug ('Open') and assigns a developer to it, marking the status as 'Assigned'.
5. The developer solves the problem, marks the bug as 'Fixed' and passes it back to the development leader.
6. The development leader changes the status of the bug to 'Pending Retest' and passes it on to the testing team for retest.
7. The test leader changes the status of the bug to 'Retest' and passes it to a tester for retest.
8. The tester retests the bug and it is working fine, so the tester closes the bug and marks it as 'Closed'.

13.4.4 Bug Life Cycle IV
1. A tester finds a bug and reports it to the Test Lead.
2. The Test Lead verifies if the bug is valid or not.
3. The bug is verified and reported to the development team with status as 'New'.
4. The development leader and team verify if it is a valid bug. The bug is valid, and the development leader opens the bug ('Open') and assigns a developer to it, marking the status as 'Assigned'.


5. The developer solves the problem and marks the bug as ‘Fixed’ and passes it back to the development leader.

6. The development leader changes the status of the bug to ‘Pending Retest’ and passes on to the testing team for retest.

7. The test leader changes the status of the bug to ‘retest’ and passes it to a tester for retest.

8. The tester retests the bug and the same problem persists, so the tester after confirmation from test leader reopens the bug and marks it with ‘Reopen’ status. And the bug is passed back to the development team for fixing.

13.4.5 Bug Life Cycle V
1. A tester finds a bug and reports it to the Test Lead.
2. The Test Lead verifies if the bug is valid or not.
3. The bug is verified and reported to the development team with status as 'New'.
4. The developer tries to verify the bug but fails to replicate the scenario that existed at the time of testing, and asks the testing team for help.
5. The tester also fails to regenerate the scenario in which the bug was found, and the developer rejects the bug, marking it 'Rejected'.

13.4.6 Bug Life Cycle VI
• After confirmation that the data or certain functionality is unavailable, the solution and retest of the bug are postponed for an indefinite time and the bug is marked as 'Postponed'.

13.4.7 Bug Life Cycle VII
• If the bug is not important and can be, or needs to be, postponed, then it is given the status 'Deferred'.

In this way, any bug that is found ends up with a status of Closed, Rejected, Deferred, or Postponed.

13.5 Bug Status Description

There are various stages in the bug life cycle. The status caption may vary depending on the bug tracking system you are using.

1. New: When QA files new bug/ bug is revealed for the first time, the software tester communicates it to his/her team leader (Test Lead) in order to confirm if that is a valid bug. After getting confirmation from the Test Lead, the software tester logs the bug and the status of ‘New’ is assigned to the bug.


2. Open: Once the developer starts working on the bug, he/she changes the status of the bug to ‘Open’ to indicate that he/she is working on it to find a solution.

3. Deferred: If the bug is not related to current build or cannot be fixed in this release or if the bug is not important to fix immediately, then the project manager can set the bug status as deferred.

4. Assigned: ‘Assigned to’ field is set by the project lead or the project manager and the bug is assigned to developer.

5. Resolved/Fixed: When the developer makes necessary code changes and verifies the changes, then he/she can change the bug status as ‘Fixed’ and the bug is passed to testing team.

6. Pending Retest: After the bug is fixed, it is passed back to the testing team to get retested, and the status of 'Pending Retest' is assigned to it.

7. Retest: The testing team leader changes the status of the bug, which is previously marked with ‘Pending Retest’ to ‘Retest’ and assigns it to a tester for retesting.

8. Could not reproduce: If the developer is not able to reproduce the bug by following the steps given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug still reproduces and, if so, assign it back to the developer with detailed reproduction steps.

9. Need more information: If the developer is not clear about the reproduction steps provided by QA, he/she can mark the bug as 'Need more information'. In this case, QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.

10. Reopen: If QA is not satisfied with the fix and the bug is still reproducible even after the fix, QA can mark it as 'Reopen' so that the developer can take appropriate action.

11. Closed: If the bug is verified by the QA team, the fix is OK and the problem is solved, then QA can mark the bug as 'Closed'.

12. Rejected/Invalid: Sometimes the developer or team lead can mark the bug as 'Rejected' or 'Invalid' if the system is working according to specifications and the bug is just due to some misinterpretation.
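As an illustrative sketch only (the enumeration and the single path shown are simplified from the life cycles above, not a prescribed implementation), the statuses could be modelled in code like this:

#include <stdio.h>

/* Bug statuses from the life cycle described above (simplified). */
typedef enum {
    BUG_NEW, BUG_OPEN, BUG_DEFERRED, BUG_ASSIGNED, BUG_FIXED,
    BUG_PENDING_RETEST, BUG_RETEST, BUG_COULD_NOT_REPRODUCE,
    BUG_NEED_MORE_INFO, BUG_REOPEN, BUG_CLOSED, BUG_REJECTED
} BugStatus;

int main(void)
{
    /* Typical path from Bug Life Cycle III:
       New -> Open -> Assigned -> Fixed -> Pending Retest -> Retest -> Closed. */
    BugStatus path[] = { BUG_NEW, BUG_OPEN, BUG_ASSIGNED, BUG_FIXED,
                         BUG_PENDING_RETEST, BUG_RETEST, BUG_CLOSED };
    printf("Statuses visited on this path: %zu\n",
           sizeof(path) / sizeof(path[0]));
    return 0;
}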


13.6 Severity: How Serious Is The Defect?

Severity | Description  | Criteria
1        | Show Stopper | Inability to install/uninstall the product, product doesn't start, product hangs or the Operating System freezes, no workaround is available, data corruption, product abnormally terminates
2        | High         | Workaround is available, function is not working according to specifications, severe performance degradation, critical to customer
3        | Medium       | Incorrect error messages, incorrect data, noticeable performance inefficiencies
4        | Low          | Enhancements, cosmetic flaws

13.7 Priority: How to Decide Priority?

Priority | Description    | Criteria
1        | Critical       | Needs immediate fix, blocks further testing
2        | High/Major     | Must be fixed before the product is released
3        | Medium/Average | Should be fixed if time permits
4        | Low/Minor      | Would like to fix, but the product can be released as is


13.8 Defect Tracking

Defect tracking is important in software engineering, as complex software systems typically have hundreds or thousands of defects. Managing, evaluating and prioritizing these defects is a difficult task. The process of monitoring defects from the time they are recorded until a satisfactory resolution has been determined is called Defect Tracking.

Defect tracking systems are computer database systems that store defects and help people to manage them.

13.9 Defect Prevention

Discovering and removing defects is an expensive and inefficient process. It is much more efficient for an organization to conduct activities that prevent defects. The objective of defect prevention is to identify defects and take corrective action to ensure they are not repeated in subsequent iterative cycles. While defect prevention is much more effective and efficient in reducing the number of defects, most organizations still concentrate on defect discovery and removal.

Defect prevention can be implemented by preparing an action plan to minimize or eliminate defects, generating defect metrics, defining corrective action and producing an analysis of the root causes of the defects.


13.10 Defect Report

A sample defect report is shown in the below figure. Summary and description are the most important parts of a defect report.

Sample Defect Report

To track defects, a defect workflow process has been implemented. Defect workflow training will be conducted for all test engineers. The steps in the defect workflow process are as follows:

1. When a defect is generated initially, the status is set to "New".

Note: How to document the defect, what fields need to be filled in and so on, also need to be specified.


The tester then selects the priority of the defect:

• Critical - fatal error
• High - requires immediate attention

2. A designated person (in some companies, the software manager; in other companies, a special board) evaluates the defect, assigns a status, and makes modifications to the type of defect and/or priority if applicable.

• The status "Open" is assigned if it is a valid defect.
• The status "Close" is assigned if it is a duplicate defect or user error. The reason for closing the defect needs to be documented.
• The status "Deferred" is assigned if the defect will be addressed in a later release.
• The status "Enhancement" is assigned if the defect is an enhancement requirement.

3. If the status is determined to be "Open", the software Manager (or other designated person) assigns the defect to the responsible person (developer) and sets the status to "Assigned".

4. Once the developer is working on the defect, the status can be set to "Work in Progress".

5. After the defect has been fixed, the developer documents the fix in the defect tracking tool and sets the status to “fixed”, if it was fixed; or "Duplicate", if the defect is a duplication (specifying the duplicated defect). The status can also be set to "As Designed", if the function executes correctly. At the same time, the developer reassigns the defect to the originator.

6. Once a new build is received with the implemented fix, the test engineer retests the fix and other possibly affected code. If the defect has been corrected with the fix, the test engineer sets the status to "Close". If the defect has not been corrected with the fix, the test engineer sets the status to "Reopen".

Defect correction is the responsibility of system developers; defect detection is the responsibility of the AMSI test team. The test leads will manage the testing process, but the defects will fall under the purview of the configuration management group. When a software defect is identified during testing of the application, the tester will notify system developers by entering the defect into the PVCS Tracker tool and filling out the applicable information.


CHAPTER 14: Risk Analysis

14.1 Introduction

Risk Analysis attempts to identify all the risks and then quantify the severity of the risks. A risk is a potential for loss or damage to an organization from materialized threats. A threat is a possible damaging event. If it occurs, it exploits vulnerability in the security of a computer based system.

A tester uses the results of risk analysis to select the most crucial tests. All software projects benefit from risk analysis. Using risk analysis at the beginning of a project, one can highlight the potential problem areas, whose failures have more serious, adverse consequences. This allows developers and product managers to pay special attention when designing the application and consequently to mitigate the risks.

14.2 Objectives

Risk can be defined as the combination of the likelihood of an adverse event occurring and the impact it would have on the user. Risk analysis can help prioritize verification and validation activities by ranking potential problems according to the probability and consequence of their occurring.

We define these concepts as follows:

• Risk: The probability of an adverse event occurring. Example: the system crashes, causing an airplane crash.

• Impact: The consequence (usually expressed as cost) of an adverse event occurring. Example: the cost of the lost airplane, plus the compensation to the families of each of the passengers, plus lost future customers.

• Exposure: A measure of the “importance” of the risk, expressed as the risk impact multiplied by the probability:

EXPOSURE = RISK*IMPACT

By calculating the exposure associated with each risk, we gain a number by which risks can be ranked, and thus our activities prioritized. The problem with this approach is that it is frequently difficult to accurately estimate risk probability and impact.
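A minimal sketch of this ranking idea follows; the risk names and numbers are invented purely for illustration:

#include <stdio.h>

/* Exposure = probability of the adverse event x impact (cost) if it occurs. */
typedef struct {
    const char *name;
    double probability;   /* 0.0 .. 1.0 */
    double impact;        /* e.g. cost */
} Risk;

int main(void)
{
    Risk risks[] = {
        { "System crash causing airplane crash", 0.0001, 500000000.0 },
        { "Report formatting error",             0.30,        5000.0 },
    };
    int n = sizeof(risks) / sizeof(risks[0]);

    for (int i = 0; i < n; i++) {
        double exposure = risks[i].probability * risks[i].impact;
        printf("%-40s exposure = %.2f\n", risks[i].name, exposure);
    }
    return 0;
}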


14.3 Risk Identification

Risk identification involves collecting information about the project and classifying it to determine the amount of potential risk in the test phase and in production (in the future).

Risk is the possibility of suffering harm or loss. In software testing, we think of risk in terms of the following:

• A way the program could fail
• How likely it is that the program could fail, and what the consequences of that failure could be

14.4 Risk Strategy

Risk-based strategizing and planning involves the identification and assessment of risks and the development of contingency plans for possible alternative project activity or the mitigation of all risks. These plans are then used to direct the management of risks during the software testing activities. It is, therefore, possible to define an appropriate level of testing per function based on the risk assessment of the function. This approach also allows for additional testing to be defined for functions that are critical or are identified as high risk as a result of testing (due to poor design, quality, documentation, etc).

14.5 Risk Assessment

Once risks have been identified and assessed, the steps to deal with them properly can be taken much more systematically. Risk assessment may be the most important step in the risk management process, and may also be the most difficult and prone to error. Part of the difficulty of risk management is that measuring both of the quantities with which risk assessment is concerned can itself be very difficult. Uncertainty in the measurement is often large in both cases.

Also, risk management would be simpler if a single metric could embody all of the information in the measurement. However, since two quantities are being measured, this is not possible. A risk with a large potential loss and a low probability of occurring must be treated differently than one with a low potential loss but a high likelihood of occurring.


14.6 Risk Mitigation

Risk mitigation/avoidance activities avoid risks or minimize their impact.

It is the activity of mitigating and avoiding risks based on the information gained from the previous activities of identifying, planning, and assessing risks. The idea is to use inspection and/or focus testing on the critical functions to minimize the impact a failure in this function will have in production.

14.7 Risk Reporting

Risk reporting is based on information obtained from the previous activities (those of identifying, planning, assessing, and mitigating risks).

Risk reporting is often done in a standard graph like the following:

Risk Analysis Graph

As shown in the figure above,risk is assigned to pieces of the system with respect to the risk in each quadrant, i.e.

• Quadrant No. 1 ----- assign the pieces which have Very High Risk
• Quadrant No. 2 ----- assign the pieces which have High Risk
• Quadrant No. 3 ----- assign the pieces which have Moderate Risk
• Quadrant No. 4 ----- assign the pieces which have Low Risk


S – Graph for Risk Analysis

Referring to the above two graphs, risk impact or consequence and probability have a distinct influence on management concern. A risk factor that has a high impact but a very low probability of occurrence should not absorb a significant amount of management time. However, high-impact risks with moderate to high probability and low-impact risks with high probability should be carried forward into the risk analysis.

14.8 What Is Schedule Risk?

In your project, you have to estimate how long it takes to complete a certain task. You estimate that it usually takes 15 days to complete. If things go well, it may take 12 days but if things go bad, it may take 20 days. In your project plan, you enter 15 days against the task. The other information, the best case estimate of 12 days and the worst case estimate of 20 days, is not entered into the project at all.

If this seems familiar, then you already go through the process of identifying uncertainty or risk. By entering only the most likely duration a great deal of additional information is lost. But with Schedule Risk this extra information is used to help produce a much more realistic project. And you are not just limited to durations. Uncertainty in resources and costs can also be modeled in your project to produce an even greater depth and accuracy to the information available to you.
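As one simple, hedged illustration of using all three estimates rather than only the most likely one (a plain three-point average; real schedule-risk tools use fuller statistical models):

#include <stdio.h>

int main(void)
{
    double best = 12.0, likely = 15.0, worst = 20.0;

    /* Using only the most likely value loses the spread entirely. */
    printf("Single-point estimate: %.1f days\n", likely);

    /* One simple way to use all three values: a plain average. */
    printf("Three-point average:   %.1f days\n", (best + likely + worst) / 3.0);
    return 0;
}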


DEFINITIONS

A

ACCEPTANCE CRITERIA: The exit criteria that a component or system must satisfy in order to be accepted by a user, customer, or other authorized entity.

ACCEPTANCE TESTING: Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the user, customers or other authorized entity to determine whether or not to accept the system.

ACCESSIBILITY TESTING: Testing to determine the ease by which users with disabilities can use a component or system.

ACCURACY: The capability of the software product to provide the right or agreed results or effects with the needed degree of precision. Also see Functionality Testing.

ACTUAL OUTCOME: See Actual Result.

ACTUAL RESULT: The behavior produced/observed when a component or system is tested.

AD HOC REVIEW: See Informal Review.

AD HOC TESTING: Testing carried out informally; no formal test preparation takes place, no recognized test design technique is used, there are no expectations for results and arbitrariness guides the test execution activity.

ALPHA TESTING: Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization.

ANOMALY: Any condition that deviates from expectation based on requirements specifications, design documents, user documents, standards, etc. or from someone’s perception or experience. Anomalies may be found during, but not limited to, reviewing, testing, analysis, compilation, or use of software products or applicable documentation. Also see Defect, Deviation, Error, Fault, Failure, Incident, and Problem.


AUDIT: An independent evaluation of software products or processes to ascertain compliance to standards, guidelines, specifications, and/or procedures based on objective criteria, including documents that specify:

1. The form or content of the products to be produced
2. The process by which the products shall be produced
3. How compliance to standards or guidelines shall be measured

AUDIT TRAIL: A path by which the original input to a process (e.g. data) can be traced back through the process, taking the process output as a starting point. This facilitates defect analysis and allows a process audit to be carried out.

AUTOMATED TESTWARE: Testware used in automated testing, such as tool scripts.

AVAILABILITY: The degree to which a component or system is operational and accessible when required for use. Often expressed as a percentage.

B

BACK-TO-BACK TESTING: Testing in which two or more variants of a component or system are executed with the same inputs, the outputs compared, and analyzed in cases of discrepancies.

BASIC BLOCK: A sequence of one or more consecutive executable statements containing no branches.

BASIC TEST SET: A set of test cases derived from the internal structure of a component or specification to ensure that 100% of a specified coverage criterion will be achieved.

BE-BUGGING: See Error Seeding.

BETA TESTING: Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.

BIG-BANG TESTING: A type of integration testing in which software elements, hardware elements, or both are combined all at once into a component or an overall system, rather than in stages. Also see Integration Testing.

BLACK-BOX TECHNIQUE: See Black-Box Test Design Technique.


BLACK-BOX TESTING: Testing, either functional or non-functional, without reference to the internal structure of the component or system.

BLACK-BOX TEST DESIGN TECHNIQUE: Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.

BLOCKED TEST CASE: A test case that cannot be executed because the preconditions for its execution are not fulfilled.

BOTTOM-UP TESTING: An incremental approach to integration testing where the lowest level components are tested first, and then used to facilitate the testing of higher level components. This process is repeated until the component at the top of the hierarchy is tested. Also see Integration Testing.

BOUNDARY VALUE: An input value or output value that is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.

BOUNDARY VALUE COVERAGE: The percentage of boundary values that have been exercised by a test suite.

BOUNDARY VALUE ANALYSIS: A black box test design technique in which test cases are designed based on boundary values.

BOUNDARY VALUE TESTING: See Boundary Value Analysis.

BRANCH: A basic block that can be selected for execution based on a program construct in which one of two or more alternative program paths are available, e.g. case, jump, go to, if then-else.

BRANCH CONDITION: See Condition.

BRANCH CONDITION COMBINATION COVERAGE: See Multiple Condition Coverage.

BRANCH CONDITION COMBINATION TESTING: See Multiple Condition Testing.

BRANCH CONDITION COVERAGE: See Condition Coverage.

BRANCH COVERAGE: The percentage of branches that have been exercised by a test suite. 100% branch coverage implies both 100% decision coverage and 100% statement coverage.

BRANCH TESTING: A white box test design technique in which test cases are designed to execute branches.


BUG: See Defect.

BUSINESS PROCESS-BASED TESTING: An approach to testing in which test cases are designed based on descriptions and/or knowledge of business processes.

C

CAPABILITY MATURITY MODEL (CMM): A five-level staged framework that describes the key elements of an effective software process. The Capability Maturity Model covers best practices for planning, engineering and managing software development and maintenance.

CAPABILITY MATURITY MODEL INTEGRATION (CMMI): A framework that describes the key elements of an effective product development and maintenance process. The Capability Maturity Model Integration covers best practices for planning, engineering and managing product development and maintenance. CMMI is the designated successor of the CMM.

CAPTURE/PLAYBACK TOOL: A type of test execution tool where inputs are recorded during manual testing in order to generate automated test scripts that can be executed later (i.e. replayed). These tools are often used to support automated regression testing.

CAPTURE/REPLAY TOOL: See Capture/Playback Tool.

CASE: Acronym for Computer Aided Software Engineering.

CAST: Acronym for Computer Aided Software Testing. Also see Test Automation.

CAUSE-EFFECT GRAPH: A graphical representation of inputs and/or stimuli (causes) with their associated outputs (effects), which can be used to design test cases.

CAUSE-EFFECT ANALYSIS: See Cause-Effect Graphing.

CHANGE CONTROL BOARD: See Configuration Control Board.

CHECKER: See Reviewer.

CODE ANALYZER: See Static Code Analyzer.

CODE COVERAGE: An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.

CODE-BASED TESTING: See White Box Testing.


COMMERCIAL OFF-THE-SHELF SOFTWARE: See Off-the-Shelf Software.

COMPATIBILITY TESTING: See Interoperability Testing.

COMPLETE TESTING: See Exhaustive Testing.

COMPLETION CRITERIA: See Exit Criteria.

COMPLEXITY: The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify. Also see Cyclomatic Complexity.

COMPLIANCE: The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.

COMPLIANCE TESTING: The process of testing to determine the compliance of a component or system.

COMPONENT: A minimal software item that can be tested in isolation.

COMPONENT INTEGRATION TESTING: Testing performed to expose defects in the interfaces and interaction between integrated components.

COMPONENT TESTING: The testing of individual software components.

COMPOUND CONDITION: Two or more single conditions joined by means of a logical operator (AND, OR or XOR), e.g. 'A>B AND C>1000'.

CONCRETE TEST CASE: See Low-Level Test Case.

CONDITION: A logical expression that can be evaluated as True or False, e.g. A>B. Also see Test Condition.

CONDITION COMBINATION COVERAGE: See Multiple Condition Coverage.

CONDITION COMBINATION TESTING: See Multiple Condition Testing.

CONDITION COVERAGE: The percentage of condition outcomes that have been exercised by a test suite. 100% condition coverage requires each single condition in every decision statement to be tested as True and False.

CONDITION TESTING: A white box test design technique in which test cases are designed to execute condition outcomes.

CONDITION OUTCOME: The evaluation of a condition to True or False.


CONFIDENCE TEST: See Smoke Test.

CONFIGURATION: The composition of a component or system as defined by the number, nature, and interconnections of its constituent parts.

CONFIGURATION CONTROL: An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.

CONFIGURATION CONTROL BOARD (CCB): A group of people responsible for evaluating and approving / disapproving proposed changes to configuration items, and for ensuring implementation of approved changes.

CONFIGURATION IDENTIFICATION: An element of configuration management, consisting of selecting the configuration items for a system and recording their functional and physical characteristics in technical documentation.

CONFIGURATION ITEM: An aggregation of hardware, software or both, that is designated for configuration management and treated as a single entity in the configuration management process.

CONFIGURATION MANAGEMENT: A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, record and report change processing and implementation status, and verify compliance with specified requirements.

CONFIGURATION TESTING: See Portability Testing.

CONFIRMATION TESTING: See Re-Testing.

CONFORMANCE TESTING: See Compliance Testing.

CONTROL FLOW: A sequence of events (paths) in the execution through a component or system.

CONTROL FLOW GRAPH: An abstract representation of all possible sequences of events (paths) in the execution through a component or system.

CONTROL FLOW PATH: See Path.

COVERAGE: The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.

COVERAGE ANALYSIS: Measurement of achieved coverage to a specified coverage item during test execution referring to predetermined criteria to determine whether additional testing is required and if so, which test cases are needed.

COVERAGE TOOL: A tool that provides objective measures of what structural elements, e.g. statements, branches have been exercised by a test suite.

CYCLOMATIC COMPLEXITY: The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where:

- L = the number of edges/links in the graph
- N = the number of nodes in the graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
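
As an illustration only (not from the original text), the sketch below applies the L - N + 2P formula to a small, made-up control flow graph for a single if/else decision:

edges = [                      # L = 5 edges for one if/else decision
    ("start", "decision"),
    ("decision", "then"),
    ("decision", "else"),
    ("then", "end"),
    ("else", "end"),
]
nodes = {n for edge in edges for n in edge}   # N = 5 nodes
parts = 1                                     # P = 1 connected graph (no called subroutine)

complexity = len(edges) - len(nodes) + 2 * parts
print(complexity)                             # 5 - 5 + 2 = 2 independent paths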

CYCLOMATIC NUMBER: See Cyclomatic Complexity.

D

DATA DEFINITION: An executable statement where a variable is assigned a value.

DATA DRIVEN TESTING: A scripting technique that stores test input and expected results in a table or spreadsheet, so that a single control script can execute all of the tests in the table. Data driven testing is often used to support the application of test execution tools such as capture/playback tools. Also see Keyword Driven Testing.
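
As a minimal sketch of the idea (the function 'add' and the table values are made up for illustration), a single control loop in Python can run every row of a test data table against the code under test:

def add(a, b):
    return a + b

test_table = [
    # (input a, input b, expected result)
    (1, 2, 3),
    (0, 0, 0),
    (-4, 4, 0),
]

for a, b, expected in test_table:
    actual = add(a, b)
    status = "PASS" if actual == expected else "FAIL"
    print(f"add({a}, {b}) = {actual}, expected {expected}: {status}")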

DATA FLOW ANALYSIS: A form of static analysis based on the definition and usage of variables.

DATA FLOW COVERAGE: The percentage of definition-use pairs that have been exercised by a test suite.

DATA FLOW TEST: A white box test design technique in which test cases are designed to execute definition and use pairs of variables.

DEAD CODE: See Unreachable Code.

DEBUGGER: See Debugging Tool.

DEBUGGING: The process of finding, analyzing and removing the causes of failures in software.

DEBUGGING TOOL: A tool used by programmers to reproduce failures, investigate the state of programs, and find the corresponding defect. Debuggers enable programmers to execute programs step by step, to halt a program at any program statement and to set and examine program variables.

DECISION: A program point at which the control flow has two or more alternative routes. A node, with two or more links used to separate branches.

DECISION CONDITION COVERAGE: The percentage of all condition outcomes and decision outcomes that have been exercised by a test suite. 100% decision condition coverage implies both 100% condition coverage and 100% decision coverage.

DECISION CONDITION TESTING: A white box test design technique in which test cases are designed to execute condition outcomes and decision outcomes.

DECISION COVERAGE: The percentage of decision outcomes that have been exercised by a test suite. 100% decision coverage implies both 100% branch coverage and 100% statement coverage.
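
The following Python sketch (illustrative only, not part of the glossary) shows why decision coverage is stronger than statement coverage: the single test absolute(-5) executes every statement but only the True outcome of the decision, so decision coverage stays at 50% until a second test exercises the False outcome:

def absolute(x):
    if x < 0:        # decision with outcomes True and False
        x = -x       # only executed when the decision is True
    return x

print(absolute(-5))  # 100% statement coverage, but only the True outcome covered
print(absolute(3))   # adds the False outcome -> 100% decision coverage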

DECISION OUTCOME: The result of a decision (which therefore determines the branches to be taken).

DEFECT: A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.

DEFECT MANAGEMENT: The process of recognizing, investigating, taking action and disposing of defects. It involves recording defects, classifying them and identifying the impact.

DEFECT MANAGEMENT TOOL: A tool that facilitates the recording and status tracking of defects. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of defects and provide reporting facilities. Also see Incident Management Tool.

DEFECT MASKING: An occurrence in which one defect prevents the detection of another.

DEFECT REPORT: A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.

DEFECT TRACKING TOOL: See Defect Management Tool.

DELIVERABLE: Any (work) product that must be delivered to someone other than the (work) product’s author.

DESIGN-BASED TESTING: An approach to testing in which test cases are designed based on the architecture and/or detailed design of a component or system (e.g. tests of interfaces between components or systems).

DESK CHECKING: Testing of software or specification by manual simulation of its execution. Also see Static Analysis.

DEVIATION: See Incident.

DEVIATION REPORT: See Incident Report.

DIRTY TESTING: See Negative Testing.

DOCUMENTATION TESTING: Testing the quality of the documentation, e.g. user guide or installation guide.

DRIVER: A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.

DYNAMIC ANALYSIS: The process of evaluating behavior, e.g. memory performance, CPU usage, of a system or component during execution.

DYNAMIC ANALYSIS TOOL: A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.

DYNAMIC COMPARISON: Comparison of actual and expected results, performed while the software is being executed, for example by a test execution tool.

DYNAMIC TESTING: Testing that involves the execution of the software of a component or system.

E

ENTRY CRITERIA: The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.

ENTRY POINT: The first executable statement within a component.

EQUIVALENCE CLASS: See Equivalence Partition.

EQUIVALENCE PARTITION: A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

EQUIVALENCE PARTITION COVERAGE: The percentage of equivalence partitions that have been exercised by a test suite.

EQUIVALENCE PARTITIONING: A black box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle test cases are designed to cover each partition at least once.
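
As an illustration (the age rule and values are assumptions, not from the course text), three partitions of an input field that accepts ages 18 to 65 could each be covered by one representative value:

def is_valid_age(age):
    return 18 <= age <= 65

partitions = {
    "below range (invalid)": 10,   # representative of ages < 18
    "within range (valid)": 40,    # representative of 18..65
    "above range (invalid)": 70,   # representative of ages > 65
}

for name, representative in partitions.items():
    print(name, representative, is_valid_age(representative))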

ERROR: A human action that produces an incorrect result.

ERROR GUESSING: A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

ERROR SEEDING: The process of intentionally adding known defects to those already in the component or system for the purpose of monitoring the rate of detection and removal, and estimating the number of remaining defects.
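
One common way the estimate is made (illustrative numbers, and an assumption that seeded defects are found at the same rate as real ones; this calculation is not stated in the glossary itself):

seeded_total = 20     # defects deliberately inserted
seeded_found = 15     # seeded defects detected so far
native_found = 60     # real (non-seeded) defects detected so far

# assuming real defects are found at the same rate as seeded ones:
estimated_native_total = native_found * seeded_total / seeded_found   # 60 * 20 / 15 = 80
estimated_remaining = estimated_native_total - native_found           # 80 - 60 = 20
print(estimated_native_total, estimated_remaining)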

EXHAUSTIVE TESTING: A test approach in which the test suite comprises all combinations of input values and preconditions.

EXIT CRITERIA: The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing.

EXIT POINT: The last executable statement within a component.

EXPECTED RESULT: The behavior predicted by the specification, or another source, of the component or system under specified conditions.

F

FAIL: A test is deemed to fail if its actual result does not match its expected result.

FAILURE: Deviation of the component or system from its expected delivery, service or result.

FAILURE RATE: The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.

FAULT: See Defect.

FAULT MASKING: See Defect Masking.

FAULT TOLERANCE: The capability of the software product to maintain a specified level of performance in cases of software faults (defects) or of infringement of its specified interface.

FEASIBLE PATH: A path for which a set of input values and preconditions exists which causes it to be executed.

FIELD TESTING: See Beta Testing.

FORMAL REVIEW: A review characterized by documented procedures and requirements, e.g. inspection.

FUNCTIONAL INTEGRATION: An integration approach that combines the components or systems for the purpose of getting a basic functionality working early. Also see Integration Testing.

FUNCTIONAL REQUIREMENT: A requirement that specifies a function that a component or system must perform.

FUNCTIONAL TEST DESIGN TECHNIQUE: Procedure to derive and/or select test cases based on an analysis of the specification of the functionality of a component or system without reference to its internal structure. Also see Black Box Test Design Technique.

FUNCTIONAL TESTING: Testing based on an analysis of the specification of the functionality of a component or system. Also see Black Box Testing.

FUNCTIONALITY: The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.

FUNCTIONALITY TESTING: The process of testing to determine the functionality of a software product.

G

GLASS BOX TESTING: See White Box Testing.

H

HIGH LEVEL TEST CASE: A test case without concrete (implementation level) values for input data and expected results. Logical operators are used, instances of the actual values are not yet defined and/or available. Also see Low Level Test Case.

HORIZONTAL TRACEABILITY: The tracing of requirements for a test level through the layers of test documentation (e.g. test plan, test design specification, test case specification and test procedure specification or test script).

I

IMPACT ANALYSIS: The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.

INCREMENTAL TESTING: Testing where components or systems are integrated and tested one or some at a time, until all the components or systems are integrated and tested.

INCIDENT: Any event occurring that requires investigation.

INCIDENT MANAGEMENT: The process of recognizing, investigating, taking action and disposing of incidents. It involves recording incidents, classifying them and identifying the impact.

INCIDENT MANAGEMENT TOOL: A tool that facilitates the recording and status tracking of incidents. They often have workflow-oriented facilities to track and control the allocation, correction and re-testing of incidents and provide reporting facilities. Also see Defect Management Tool.

INCIDENT REPORT: A document reporting on any event that occurred, e.g. during the testing, which requires investigation.

INFORMAL REVIEW: A review not based on a formal (documented) procedure.

INPUT: A variable (whether stored within a component or outside) that is read by a component.

INPUT DOMAIN: The set from which valid input values can be selected. Also see Domain.

INPUT VALUE: An instance of an input. Also see Input.

INSPECTION: A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure. Also see Peer Review.

INSPECTION LEADER: See Moderator.

INSPECTOR: See Reviewer.

INSTALLABILITY: The capability of the software product to be installed in a specified environment. Also see Portability.

INSTALLABILITY TESTING: The process of testing the installability of a software product. Also see Portability Testing.

INTEGRATION: The process of combining components or systems into larger assemblies.

INTEGRATION TESTING: Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems. Also see Component Integration Testing, System Integration Testing.

INTERFACE TESTING: An integration test type that is concerned with testing the interfaces between components or systems.

INTEROPERABILITY: The capability of the software product to interact with one or more specified components or systems. [after ISO 9126] Also see Functionality.

INTEROPERABILITY TESTING: The process of testing to determine the interoperability of a software product. Also see Functionality Testing.

ISOLATION TESTING: Testing of individual components in isolation from surrounding components, with surrounding components being simulated by stubs and drivers, if needed.
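
A minimal Python sketch of isolation testing (all names are made up for illustration): the component under test is exercised by a small driver, and the component it depends on is replaced by a stub:

def calculate_total(net, tax_service):
    # component under test: depends on a collaborating tax component
    return net + tax_service.tax_for(net)

class TaxServiceStub:
    # stub: replaces the real called component with a fixed, predictable answer
    def tax_for(self, net):
        return 1.0

def driver():
    # driver: takes care of calling the component under test and checking the result
    result = calculate_total(10.0, TaxServiceStub())
    assert result == 11.0, f"expected 11.0, got {result}"
    print("isolation test passed")

driver()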

K

KEYWORD DRIVEN TESTING: A scripting technique that uses data files to contain not only test data and expected results, but also keywords related to the application being tested. The keywords are interpreted by special supporting scripts that are called by the control script for the test. Also see Data Driven Testing.
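
A minimal sketch of the technique (the keywords, functions and test rows below are invented for illustration): rows from a data file hold a keyword plus its arguments, supporting functions interpret each keyword, and a control loop walks the rows:

def open_app(name):
    print(f"opening {name}")

def enter_text(field, value):
    print(f"typing '{value}' into {field}")

def verify_title(expected):
    print(f"checking that the window title is '{expected}'")

KEYWORDS = {
    "open_app": open_app,
    "enter_text": enter_text,
    "verify_title": verify_title,
}

test_rows = [                         # rows as they might appear in a keyword data file
    ("open_app", "calculator"),
    ("enter_text", "display", "2+2"),
    ("verify_title", "calculator"),
]

for keyword, *args in test_rows:      # control script
    KEYWORDS[keyword](*args)          # supporting script interprets the keyword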

L

LEARNABILITY: The capability of the software product to enable the user to learn its application. Also see Usability.

LOAD TESTING: A test type concerned with measuring the behavior of a component or system with increasing load, e.g. number of parallel users and/or number of transactions, to determine what load can be handled by the component or system. Also see Stress Testing.

LOW LEVEL TEST CASE: A test case with concrete (implementation level) values for input data and expected results. Logical operators from high level test cases are replaced by actual values that correspond to the objectives of the logical operators. Also see High Level Test Case.
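
For illustration only (the withdrawal scenario and values are assumptions), the same test expressed first as a high-level (logical) test case and then as the low-level (concrete) test case derived from it:

high_level_test_case = {
    "input": "withdrawal amount greater than the account balance",
    "expected_result": "withdrawal is rejected with an insufficient-funds message",
}

low_level_test_case = {
    "input": {"balance": 100.00, "withdrawal": 500.00},   # concrete values
    "expected_result": "error: insufficient funds",
}

print(high_level_test_case)
print(low_level_test_case)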

M

MAINTENANCE TESTING: Testing the changes to an operational system or the impact of a changed environment to an operational system.

MAINTAINABILITY: The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.

MAINTAINABILITY TESTING: The process of testing to determine the maintainability of a software product.

MEASURE: The number or category assigned to an attribute of an entity by making a measurement.

MEASUREMENT: The process of assigning a number or category to an entity to describe an attribute of that entity.

MEMORY LEAK: A defect in a program’s dynamic store allocation logic that causes it to fail to reclaim memory after it has finished using it, eventually causing the program to fail due to lack of memory.
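
A minimal illustration of the effect in Python (an invented example, not from the text): every call keeps a reference to its result in a module-level list that is never cleared, so the memory can never be reclaimed and usage grows for the life of the process:

_history = []                          # module-level list that is never emptied

def process(record):
    result = record.upper() * 1000     # stand-in for work that produces a large object
    _history.append(result)            # reference retained forever -> memory never reclaimed
    return result

for i in range(10_000):
    process(f"record-{i}")             # memory use grows steadily with each call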

MODERATOR: The leader and main person responsible for an inspection or other review process.

MUTATION TESTING: See Back-To-Back Testing.

N

NEGATIVE TESTING: Tests aimed at showing that a component or system does not work. Negative testing is related to the testers’ attitude rather than a specific test approach or test design technique, e.g. testing with invalid input values or exceptions.

NON-FUNCTIONAL REQUIREMENT: A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.

NON-FUNCTIONAL TESTING: Testing the attributes of a component or system that do not relate to functionality, e.g. reliability, efficiency, usability, maintainability and portability.

O

OPERATIONAL TESTING: Testing conducted to evaluate a component or system in its operational environment.

ORACLE: See Test Oracle.

OUTCOME: See Result.

OUTPUT: A variable (whether stored within a component or outside) that is written by a component.

OUTPUT DOMAIN: The set from which valid output values can be selected. Also see Domain.

OUTPUT VALUE: An instance of an output. Also see Output.

P

PARTITION TESTING: See Equivalence Partitioning.

PASS: A test is deemed to pass if its actual result matches its expected result.

PASS/FAIL CRITERIA: Decision rules used to determine whether a test item (function) or feature has passed or failed a test.

PATH: A sequence of events, e.g. executable statements, of a component or system from an entry point to an exit point.

PATH COVERAGE: The percentage of paths that have been exercised by a test suite. 100% path coverage implies 100% LCSAJ coverage.

PATH TESTING: A white box test design technique in which test cases are designed to execute paths.

PEER REVIEW: A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples include inspection, technical review and walkthrough.

PERFORMANCE: The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate. Also see Efficiency.

PERFORMANCE TESTING: The process of testing to determine the performance of a software product. Also see Efficiency Testing.

PERFORMANCE TESTING TOOL: A tool to support performance testing that usually has two main facilities: load generation and test transaction measurement. Load generation can simulate either multiple users or high volumes of input data. During execution, response time measurements are taken from selected transactions and these are logged. Performance testing tools normally provide reports based on test logs and graphs of load against response times.

PHASE TEST PLAN: A test plan that typically addresses one test phase. Also see Test Plan.

PORTABILITY: The ease with which software can be transferred from one hardware or software environment to another.

PORTABILITY TESTING: The process of testing to determine the portability of a software product.

POST CONDITION: Environmental and state conditions that must be fulfilled after the execution of a test or test procedure.

POST-EXECUTION COMPARISON: Comparison of actual and expected results, performed after the software has finished running.

PRE-CONDITION: Environmental and state conditions that must be fulfilled before the component or system can be executed with a particular test or test procedure.

PREDICTED OUTCOME: See Expected Result.

PRE TEST: See Intake Test.

PRIORITY: The level of (business) importance assigned to an item, e.g. defect.

PROBLEM: See Defect.

PROBLEM MANAGEMENT: See Defect Management.

PROBLEM REPORT: See Defect Report.

PROCESS: A set of interrelated activities, which transform inputs into outputs.

PRODUCT RISK: A risk directly related to the test object. Also see Risk.

PROJECT: A project is a unique set of coordinated and controlled activities with start and finish dates undertaken to achieve an objective conforming to specific requirements, including the constraints of time, cost and resources.

PROJECT RISK: A risk related to management and control of the (test) project. Also see Risk.

Q

QUALITY: The degree to which a component, system or process meets the specified requirements and/or user/customer needs and expectations.

QUALITY ASSURANCE: Part of quality management focused on providing confidence that quality requirements will be fulfilled.

QUALITY ATTRIBUTE: A feature or characteristic that affects an item’s quality.

R

RANDOM TESTING: A black box test design technique where test cases are selected, possibly using a pseudo-random generation algorithm, to match an operational profile. This technique can be used for testing non-functional attributes such as reliability and performance.

RECORDER: See Scribe.

RECORD/PLAYBACK TOOL: See Capture/Playback Tool.

RECOVERABILITY: The capability of the software product to re-establish a specified level of performance and recover the data directly affected in case of failure. Also see Reliability.

RECOVERABILITY TESTING: The process of testing to determine the recoverability of a software product. Also see Reliability Testing.

RECOVERY TESTING: See Recoverability Testing.

REGRESSION TESTING: Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed.

RELIABILITY TESTING: The process of testing to determine the reliability of a software product.

REPLACEABILITY: The capability of the software product to be used in place of another specified software product for the same purpose in the same environment.

REQUIREMENTS PHASE: The period of time in the software life cycle during which the requirements for a software product are defined and documented.

RESULT: The consequence/outcome of the execution of a test. It includes outputs to screens, changes to data, reports, and communication messages sent out. Also see Actual Result, Expected Result.

RESUMPTION CRITERIA: The testing activities that must be repeated when testing is re-started after a suspension.

RE-TESTING: Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.

REVIEW: An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.

REVIEWER: The person involved in the review who shall identify and describe anomalies in the product under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.

REVIEW TOOL: A tool that provides support to the review process. Typical features include review planning and tracking support, communication support, collaborative reviews and a repository for collecting and reporting of metrics.

RISK: A factor that could result in future negative consequences; usually expressed as impact and likelihood.

RISK ANALYSIS: The process of assessing identified risks to estimate their impact and probability of occurrence (likelihood).

RISK-BASED TESTING: Testing oriented towards exploring and providing information about product risk.

RISK CONTROL: The process through which decisions are reached and protective measures are implemented for reducing risks to, or maintaining risks within, specified levels.

RISK IDENTIFICATION: The process of identifying risks using techniques such as brainstorming, checklists and failure history.

RISK MANAGEMENT: Systematic application of procedures and practices to the tasks of identifying, analyzing, prioritizing, and controlling risk.

S

SCALABILITY TESTING: Testing to determine the scalability of the software product.

SCRIBE: The person who has to record each defect mentioned and any suggestions for improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.

SCRIPTING LANGUAGE: A programming language in which executable test scripts are written, used by a test execution tool (e.g. a capture/playback tool).

SECURITY TESTING: Testing to determine the security of the software product. Also see Functionality Testing.

SEVERITY: The degree of impact that a defect has on the development or operation of a component or system.

SMOKE TEST: A subset of all defined/planned test cases that cover the main functionality of a component or system, to ascertain that the most crucial functions of a program work, without bothering with finer details. A daily build and smoke test is among industry best practices.

SPECIFICATION: A document that specifies, ideally in a complete, precise and verifiable manner, the requirements, design, behavior, or other characteristics of a component or system, and, often, the procedures for determining whether these provisions have been satisfied.

SPECIFICATION-BASED TESTING: See Black Box Testing.

SPECIFICATION-BASED TEST DESIGN TECHNIQUE: See Black Box Test Design Technique.

SPECIFIED INPUT: An input for which the specification predicts a result.

STATEMENT COVERAGE: The percentage of executable statements that have been exercised by a test suite.

STRESS TESTING: Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements. Also see Load Testing.

STUB: A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.

SUSPENSION CRITERIA: The criteria used to (temporarily) stop all or a portion of the testing activities on the test items [After IEEE 829].

SYSTEM TESTING: The process of testing an integrated system to verify that it meets specified requirements.

T

TECHNICAL REVIEW: A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.

TEST: A set of one or more test cases.

TEST APPROACH: The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.

TEST AUTOMATION: The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.

TEST BASIS: All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.

TEST BED: See Test Environment.

TEST CASE: A set of input values, execution preconditions, expected results and execution post-conditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.

TEST CASE SPECIFICATION: A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.

TEST CASE SUITE: See Test Suite.

TEST ENVIRONMENT: An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.

TEST EVALUATION REPORT: A document produced at the end of the test process summarizing all testing activities and results. It also contains an evaluation of the test process and lessons learned.

TEST EXECUTION: The process of running a test on the component or system under test, producing actual result(s).

TEST EXECUTION PHASE: The period of time in a software development life cycle during which the components of a software product are executed, and the software product is evaluated to determine whether or not requirements have been satisfied.

TEST EXECUTION TOOL: A type of test tool that is able to execute other software using an automated test script, e.g. capture/playback.

TEST LOG: A chronological record of relevant details about the execution of tests.

TEST LOGGING: The process of recording information about tests executed into a test log.

TEST MANAGER: The person responsible for testing and evaluating a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.

TEST MANAGEMENT: The planning, estimating, monitoring and control of test activities, typically carried out by a test manager.

TEST MANAGEMENT TOOL: A tool that provides support to the test management and control part of a test process. It often has several capabilities, such as testware management, scheduling of tests, logging of results, progress tracking, incident management, and test reporting.

TEST MONITORING: A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actual to that which was planned. Also see Test Management.

TEST ORACLE: A source to determine expected results to compare with the actual result of the software under test. An oracle may be the existing system (for a benchmark), a user manual, or an individual’s specialized knowledge, but should not be code.

TEST PLAN: A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.

TEST PLANNING: The activity of establishing or updating a test plan.

TEST SCRIPT: Commonly used to refer to a test procedure specification, especially an automated one.

TEST SPECIFICATION: A document that consists of a test design specification, test case specification and/or test procedure specification.

TESTWARE: Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.

TOP-DOWN TESTING: An incremental approach to integration testing where the component at the top of the component hierarchy is tested first, with lower level components being simulated by stubs. Tested components are then used to test lower level components. The process is repeated until the lowest level components have been tested. Also see Integration Testing.

U

UNIT TESTING: See Component Testing.

UNREACHABLE CODE: Code that cannot be reached and therefore is impossible to execute.

USABILITY: The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.

USABILITY TESTING: Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.

V

V-MODEL: A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.

VALIDATION: Confirmation by examination and through provision of objective evidence that the requirements for specific intended use or application have been fulfilled.

VERIFICATION: Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.

VOLUME TESTING: Testing where the system is subjected to large volumes of data.

W

WALKTHROUGH: A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content. Also see Peer Review.

WHITE-BOX TEST DESIGN TECHNIQUE: Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.

WHITE BOX TESTING: Testing based on an analysis of the internal structure of the component or system.