
Software testing Concepts

A Course on Basics of Software Testing

By

Bharathi Simha

Technology Consultant


Table of Contents

Module 1: Overview of System Development Life Cycle
o What is SDLC
o Overview of Life Cycle
o SDLC Phases
o Roles associated with SDLC Phases
o SDLC with simple entry and exit gates
o Waterfall Model
o Spiral and Incremental Models
o Incremental Model
o Spiral Model
o RAD
o Evolutionary Model
o V-Model
o Agile SDLC
o Extreme Programming
o FDD
o RUP
o Scrum

Module 2: Introduction to Software Testing
o Why Software Testing?
o Definition of Software Testing
o Examples: real-time scenarios of products released with too little emphasis on testing will be discussed
o Insight into Development and Testing
o Testing Terminology

Module 3: Software Test Life Cycle
o Test Life Cycle


o Understanding the Software Test Life Cycle
o When to start testing in SDLC, and why?
o Test Process Definition
o Test Process in collaboration with SDLC
o Risk-Based Testing
o Agile Testing
o Test Driven Development

Module 4: Types of Testing and Testing in SDLC
o Definition of White Box testing and Black Box testing
o Review of V-Model: Verification and Validation definitions
o Test Coverage
o Input Artifacts and Deliverables
o Understanding testing in the Design Phase and the Development Phase

Module 5: Test Levels and Test Types
o Test Levels
o Unit Test [definitions & approach]
o Smoke Test
o Integration Test
o System Test
o User Acceptance Test – UAT
o Test Types
o Static vs. Dynamic Test
o Regression Test
o Performance Test
o Security Test
o Load Test
o Stress Test
o Volume Test
o Usability Test


o Cross-Browser Test
o Exploratory Test
o Alpha and Beta Test
o Operational Readiness Test

Module 6: Overview of Unit Testing
o White Box Testing Techniques
o Cyclomatic Complexity: definition with examples
o Exploratory Testing
o Testing in the Dark
o Anatomy of Exploratory Testing
o Differences Between Exploratory and Ad-Hoc Testing
o Knowing When Exploratory Testing is the Best Approach
o Reporting

Module 7: Overview of Integration Testing
o What is Integration/Assembly Testing
o Approaches for Integration Testing
o Input Artifacts and Deliverables
o Stubs & Drivers
o Incremental and Big Bang Integration Testing

Module 8: Requirement Analysis and Reviews
o Types of Requirements: Business, System, and Technical Requirements
o Customer Early Involvement
o Requirements Modeling
o Requirements Traceability
o Requirements Documentation
o Requirements Validation

o Reviews
o Formal Review: Inspection (process, team, activities, checklist)
o Informal Reviews: Peer Review (process, team, activities, checklist)
o Exercise
o Static Test
o Static vs. Dynamic Test
o Coverage
o Requirements Ambiguity Analysis (Requirements Features)
o Risk Analysis
o Exercise

Module 9: Dynamic Testing
o Dynamic Testing
o Test Coverage
o Test Design Techniques
o White Box vs. Black Box Test Techniques
o Boundary Value Analysis
o Equivalence Partitioning
o Decision Tables
o Cause-Effect Graphing
o Network Graphing
o Guess Testing/Error Guessing
o Structure Testing
o Scenario-Based Testing
o Other White Box Techniques
o Test Scenarios
o Different Types of Scenarios
o Systematic vs. Non-Systematic Testing
o Functional vs. Non-Functional Testing


Module 10: Manual Test Scripts
o What are Manual Scripts
o Techniques of Writing Manual Scripts
o Manual Test Script Templates
o Execution of Manual Test Scripts
o Results Evaluation
o Tools Used for Writing Manual Test Scripts

Module 11: Defect Tracking
o What is a Defect
o What is a Variance
o Differences Between Defects and Variances
o Types of Defects
o Causes of Defects
o Defect Life Cycle
o Defect Tracking
o Standards
o Process
o Defect Types
o Defect Severity
o Defect Priority
o Defect Reports

Module 12: Test Planning and Strategy
o Test Strategy and Planning
o Test Strategy
o Test Plan
o Team Building
o Test Environment
o Test Metrics
o Requirements Traceability Matrix
o Test Coverage
o Defect Coverage
o Test Plan Coverage


Module 1:

Overview of System Development Life Cycle

1.1 What is SDLC?

Systems Development Life Cycle (SDLC) is a conceptual model used in project management that describes the stages involved in an information system development project, from an initial feasibility study through maintenance of the completed application.

1.2 Overview of SDLC Life Cycle

• The life-cycle models have changed over time to reflect changes in customer problems, perceptions of quality, and the technologies available.

• Generally they have been described as development life cycles, but some of them include, or may be adapted to, maintenance activities.

• In some of the models the steps are performed once; in others, some or all of the steps are repeated or broken down into sub-steps.

1.3 SDLC Phases

• The System Development Life Cycle is divided into five phases:

▫ Planning

▫ Requirements Analysis

▫ Design

▫ Build

▫ Test


1.3.1 PLANNING PHASE

• In this phase, all planning activities are executed, namely:

▫ Project plan

▫ Test plan

▫ Configuration plan

▫ Design Plan

▫ Define the milestones for the deliverables...and so on

This is the entry point for all projects, whether under construction or in maintenance.

Deliverables:

The deliverables of this phase include:

Project Charter

All types of Plans

1.3.2 REQUIREMENTS PHASE

• This is the next phase in the SDLC; the activities executed in this phase are:

▫ Requirements feasibility

▫ Test the requirements

▫ Configuration and Change management

▫ Partial High Level Design activity

• The entry criterion for this phase is the customer requirements

• Any flaws in the customer requirements are identified in this stage

Deliverables

The deliverable of this phase is the Detailed System Analysis


1.3.3 Design Phase

• This is the third phase in the SDLC; it is usually executed after we have gathered correct requirements from the customer. The activities executed in this phase are:

▫ Design the system architecture

▫ Database design

▫ Component design

▫ Design unit test cases

▫ Design the Units

▫ Design interface…

▫ Design functional test cases

▫ And so on……

Input Artifacts:

Detailed System Analysis

Deliverables

Detailed System Design

1.3.4 Build Phase

In this phase, every program written by the developer is tested; i.e., unit testing is done for every unit that a developer creates or changes.

The Build phase output is an entry criterion for the Test phase

Input Artifacts: Detailed system Design

Deliverables: System Build, Developer Guidelines and Application forms and reports.


1.3.5 Test Phase

This is one of the most important phases; it is where we define the types of testing required for a given application.

This is where we execute all regression test cases.

Input Artifacts:

System Build

Deliverables:

User Procedures

Test plan

1.4 Water Fall Model

The waterfall model is the oldest process model. The flow of activities is sequential, and it is a non-iterative model. Since the activities are sequential, it is a unidirectional model: any changes in the requirements cannot be incorporated, as we cannot go back to a previous phase.


1.4.1 Advantages of this model

1. Easy to use
2. Easy to understand
3. Milestones are clear
4. Importance is given to quality rather than to the cost and schedule of the project

1.4.2 Disadvantages of this model:

1. Very expensive
2. Requirements are frozen
3. No change management is implemented in this model
4. The customer is given little opportunity to preview the requirements

Other Models:

To overcome the problems faced in the waterfall model, a series of incremental and iterative models were introduced from the 1980s onward. Let us now take a quick look at some of these models: Incremental, Spiral, RAD, and Evolutionary.

1.5 Incremental Model:

In this model, the end of each cycle is an operational product. Here too the requirements are frozen and the product is developed against them; no change management system is implemented. The cost of the product increases because the number of releases increases, but the initial releases are delivered faster.


1.6 Spiral Model:

Spiral model is basically divided into 4 quadrants. They are

1. Planning
2. Goal determination
3. Risk analysis
4. Develop and test

This model places the most emphasis on risk analysis and is explicitly iterative, but comparatively less time is spent on design, implementation, build, and test.

The initial iterations look very simple, but as the number of cycles increases the model becomes complex. It is not a cost-effective model. High-risk functionality is implemented first, and users are closely tied to all of the life cycle steps.

1.7 RAD [Rapid Application Development]

Rapid Application Development (RAD) is a software development methodology that focuses on building applications in a very short amount of time.

It was originally intended to describe a process of development that involves application prototyping and iterative development


Core elements of RAD:

o Prototyping/demo
o Iterative development
o Time boxing
o RAD tools (Erwin, CASE tools, Rational Rose, Visio)

RAD strengths

What mainly gets reduced by using the RAD model?
o Reduced cycle time and improved productivity with fewer people means lower costs

How does customer involvement in the SDLC cycle help?
o The customer is involved throughout the complete cycle, which minimizes the risk of not achieving customer satisfaction and meeting business needs

RAD weaknesses

What is the risk associated with this model?
o Due to time boxing, where features are pushed off to later versions in favor of delivering an application in a short time frame, RAD may produce applications that are less full-featured than traditionally developed applications

What is the drawback of the customer being involved in the development cycle?
o Developers and customers must be committed to rapid-fire activities in an abbreviated time frame

When to use RAD:

o Reasonably well-known requirements
o User involved throughout the life cycle
o Project can be time-boxed
o Functionality delivered in increments
o High performance not required
o Low technical risks
o System can be modularized

1.8 Prototype Model

We can see that the incremental, iterative, and spiral models all still suffer from the problem of the length of time from the start of the SDLC to delivery of software for use by the customers.


The evolutionary/prototype model was developed because it helps break the software into chunks that can be delivered to the customer earlier; this means that the real-life problem is at least partly resolved more quickly.

Structured Evolutionary Prototyping Steps

1. A preliminary project plan is developed.
2. A partial high-level paper model is created.
3. The model is the source for a partial requirements specification.
4. A prototype is built with basic and critical attributes.
5. The designer builds:
   i. the database
   ii. the user interface
   iii. algorithmic functions
6. The designer demonstrates the prototype; the user evaluates it for problems and suggests improvements.
7. This loop continues until the user is satisfied.

1.9 V-Model


In the V-model, V stands for Verification and Validation. It is a verification and validation model that places more emphasis on the testing activity. In this model, testing is carried out throughout the life cycle. Change request management, however, is not completely implemented in this model.

A brief description of the various phases in this model:

Project and Requirements Planning – allocate resources

Product Requirements and Specification Analysis – complete specification of the software system

Architecture or High-Level Design – defines how software functions fulfill the design

Detailed Design – develop algorithms for each architectural component

Production, operation and maintenance – provide for enhancement and corrections

System and acceptance testing – check the entire software system in its environment

Integration and Testing – check that modules interconnect correctly

Unit testing – check that each module acts as expected

Coding – transform algorithms into software

Advantages of V-Model

Which phase is more emphasized when using the V-shaped model?
o Planning for verification and validation of the product is emphasized in the early stages of product development
o Each deliverable must be testable – more emphasis is laid on the testing phase

Ease of use?
o Easy to use

Disadvantages

Can the V-model handle concurrent events?
o It does not easily handle concurrent events

Can the V-model handle dynamic changes in requirements?
o It does not easily handle dynamic changes in requirements


Does the V-model contain risk analysis activities?
o It does not contain risk analysis activities

1.10 Agile SDLC

Agile means fast. With an Agile SDLC you will understand how to speed up the process of development and how to reduce defects through continuous testing across the life cycle.

Agile SDLC can be defined as:

o Speeding up or bypassing one or more life cycle phases
o Usually less formal and reduced in scope
o Used for time-critical applications
o Used in organizations that use disciplined methods

1.10.0 Agile Methods

o Adaptive Software Development (ASD)
o Feature Driven Development (FDD)
o Dynamic Systems Development Method (DSDM)
o Rapid Application Development (RAD)
o Scrum
o Extreme Programming (XP)
o Rational Unified Process (RUP)

1.10.1 Rational Unified Process

This is a process defined by IBM Rational. "Unified" stands for a single, common approach. RUP has four phases:

o Inception
o Elaboration
o Construction
o Transition


In each of these phases a set of activities is executed. There is no hard and fast rule that all the activities in a phase have to be executed; depending on the requirements, we execute the ones needed. In this model, the testing activity starts in Inception and ends in Transition, when the product is delivered to the customer.

End of every phase is known as a milestone. End of every iteration is also known as a milestone.

The change management system and the configuration management system are completely implemented in this process.

1.10.2 Extreme Programming

o For small-to-medium-sized teams developing software with vague or rapidly changing requirements
o Coding is the key activity throughout a software project
o Communication among teammates is done with code
o The life cycle and behavior of complex objects are defined in test cases – again, in code


XP-Practices

1 Planning game – determine scope of the next release by combining business priorities and technical estimates

2 Small releases – put a simple system into production, then release new versions in very short cycle

3 Metaphor – all development is guided by a simple shared story of how the whole system works

4 Simple design – system is designed as simply as possible (extra complexity removed as soon as found)

5 Testing – programmers continuously write unit tests; customers write tests for features

6 Refactoring – programmers continuously restructure the system without changing its behavior to remove duplication and simplify

7 Pair-programming -- all production code is written with two programmers at one machine

8 Collective ownership – anyone can change any code anywhere in the system at any time.

9 Continuous integration – integrate and build the system many times a day – every time a task is completed.

10 40-hour week – work no more than 40 hours a week as a rule

11 On-site customer – a user is on the team and available full-time to answer questions

12 Coding standards – programmers write all code in accordance with rules emphasizing communication through the code

1.10.4 XP is “extreme” because

o Common-sense practices taken to extreme levels
o If code reviews are good, review code all the time (pair programming)
o If testing is good, everybody will test all the time
o If simplicity is good, keep the system in the simplest design that supports its current functionality (the simplest thing that works)
o If design is good, everybody will design daily (refactoring)
o If architecture is important, everybody will work at defining and refining the architecture (metaphor)


o If integration testing is important, build and integration-test several times a day (continuous integration)
o If short iterations are good, make iterations really, really short (hours rather than weeks)

1.10.5 Feature Driven Development (FDD)

Five FDD process activities

1 Develop an overall model – Produce class and sequence diagrams from chief architect meeting with domain experts and developers.

2 Build a features list – Identify all the features that support requirements. The features are functionally decomposed into Business Activities steps within Subject Areas.

Features are functions that can be developed in two weeks and expressed in client terms with the template <action> <result> <object>, e.g., "Calculate the total of a sale".

3 Plan by feature -- the development staff plans the development sequence of features

4 Design by feature – the team produces sequence diagrams for the selected features

5 Build by feature – the team writes and tests the code

Summary:

At the end of this module, you will be able to identify a suitable model for given requirements, explain why an Agile SDLC is used, describe the advantages of iterative development, and compare the advantages and disadvantages of the different models.


Module 2: Introduction to Software Testing

2.0 Why Software testing?

Testing should intentionally attempt to make things go wrong, to determine whether things happen when they shouldn't or fail to happen when they should. It is oriented toward 'detection'.

“Primary role of testing is not demonstration of correct performance, but the exposure of hidden defects”.

2.0.1 Definition of Software Testing

Software testing has different definitions:

1. Software testing is a process of verification and validation

2. Software testing is to test whether the system satisfies all customer requirements.

3. Software testing is a destructive process for developing a better quality product.

4. Software testing is to define the quality of the system in terms of:
   a. Functionality
   b. Reliability
   c. Robustness
   d. Efficiency
   e. Best performance, etc.

Let us now discuss some real-time scenarios that help us visualize why software testing is important.

2.0.2 Scenario 1:


A software problem contributed to a rail car fire in a major underground metro system in April of 2007 according to newspaper accounts.

The software reportedly failed to perform as expected in detecting and preventing excess power usage in equipment on a new passenger rail car, resulting in overheating and fire in the rail car, and evacuation and shutdown of part of the system.

2.0.3 Scenario 2:

In August of 2006 a U.S. government student loan service erroneously made public the personal data of as many as 21,000 borrowers on its web site, due to a software error.

The bug was fixed and the government department subsequently offered to arrange for free credit monitoring services for those affected

2.0.4 Scenario 3:

A software error reportedly resulted in overbilling of up to several thousand dollars to each of 11,000 customers of a major telecommunications company in June of 2006.

It was reported that the software bug was fixed within days, but that correcting the billing errors would take much longer

From the scenarios discussed above, it is very clear that testing is one of the important aspects of the development process. To ensure that the product meets all the defined criteria, we need to test the software. We also understand that the testing activity plays a very important role in defining the quality of the system in terms of:

o Robustness
o Efficiency
o Safety
o Stability
o Reliability, and so on

2.1.0 Why do we need Software Testing?

All software contains defects/bugs – there is no such thing as error-free software.

In the absence of Software Testing, the following can be affected:


o Companies: loss of money, time, and reputation; inability to meet contractual (SLA) or legal requirements

o Environment: e.g., water tank overflow, radiation leak

o People: e.g., medical devices and safety-critical systems


2.1.1 Insight to Development

A software development process is a structure imposed on the development of a software product. Similar terms include software life cycle and software process. There are several models for such processes, each describing approaches to a variety of tasks or activities that take place during the process. Some people consider a lifecycle model a more general term and a software development process a more specific term.

It is a constructive process. The quality of the product depends entirely on the type of process we choose to develop an application. The development process starts with a requirements phase, followed by design, implementation, build, and test. From this, we understand that testing is one of the phases of the development process.

2.2 Testing terminology

The terminology used in the software testing field differs from that used in other areas of software:

o Test Requirements
o Test Scenarios
o Test Cases
o Test Steps
o Testers
o Test Manager
o Test Lead
o Test Metrics
o Test Coverage
o Test Driven Development
o Documents used: Test Plan, Test Artifacts, Test Deliverables, Test Procedures, SFS [System/Software Test Specifications], Test Strategy, etc.


Module 3: Software Test Life Cycle

3.0 Test Life Cycle

Like the development life cycle, there is a test life cycle defined for testing an application. The test life cycle defines the various phases of testing and the rules that can be adopted for testing an application.

Every model includes testing as an activity; to execute this activity, we need to understand the test life cycle.

3.1 Testing Phases

o Planning Process
o Test Design
o Implementation / Performing Test
o Execution
o Evaluation
o Defect Tracking and Management
o Quantitative Measurement
o Test Reporting

The figure below represents the test Life Cycle


Planning Process: The activities in this phase are

o Pre-Planning Activities
o Test Planning
o Post-Planning Activities

3.1.0 Test Design Phase: The activities in this phase are

Design Preparation

o Test Bed / Test Lab

o Test Coverage

Design Execution

o Specifications

o Cases

o Scripts

o Data

Design Preparation

o Test Bed / Test Lab

Adaptation or development of the approach to be used for test design and test execution.

o Test Coverage

Adaptation of the coverage objectives in the test plan to specific system components.

Design Execution

o Specifications

Creation of test design requirements, including purpose, preparation and usage.

o Cases


Development of test objectives, including techniques and approaches for validation of the product. Determination of the expected result for each test case.

Design Execution

o Scripts

Documentation of the steps to be performed in testing, focusing on the purpose and preparation of procedures; emphasizing entrance and exit criteria.

o Data

Development of test inputs, use of data generation tools. Determination of the data set or sub-set needed to ensure a comprehensive test of the system. The ability to determine data that suits boundary value analysis and stress testing requirements.
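As a concrete illustration of the boundary-value data mentioned above, here is a minimal sketch. The 18–60 age field and the function names are hypothetical, not from the course:

```python
# Hypothetical validation rule under test: a form field that accepts
# ages from 18 to 60 inclusive.
def accepts_age(age: int) -> bool:
    return 18 <= age <= 60

def boundary_values(low: int, high: int) -> list:
    # Boundary value analysis: pick test data at, just below, and just
    # above each edge of the valid range.
    return [low - 1, low, low + 1, high - 1, high, high + 1]

for age in boundary_values(18, 60):
    print(age, accepts_age(age))
```

Running the six boundary cases exercises both edges of the range, which is exactly where off-by-one defects tend to hide.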

3.1.1 Execution/Performing test

Execute Tests

o Perform the activities necessary to execute tests in accordance with the test plan and test design (including setting up tests, preparing databases, obtaining technical support, and scheduling resources).

Compare Actual vs. Expected Results

o Determine if the actual results met expectations (note: comparisons may be automated).

Test Log

o Logging tests in the desired form. This includes incidents that are not related to testing but still prevent testing from proceeding.

Record Discrepancies

o Documenting defects as they happen including supporting evidence.

Evaluation


o Evaluate the test results to identify defects

Defect Tracking

o Defect Recording

o Defect Reporting

o Defect Tracking

Testing Defect Management

o Validation

o Regression Testing

o Verification

Test Completion Criteria

Test Metrics

Management By Fact

Test Completion Criteria

o Code Coverage

Purpose, methods, and test coverage tools used for monitoring the execution of software and reporting on the degree of coverage at the statement, branch, or path level.

o Requirement Coverage

Monitoring and reporting on the number of requirements exercised, and/or tested to be correctly implemented.

Test Metrics

o Metrics Unique to Test

Includes metrics such as Defect Removal Efficiency, Defect Density, and Mean Time to Last Failure.

o Complexity Measurements


Quantitative values accumulated by a predetermined method, which measure the complexity of a software product.
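The test-specific metrics named above are simple ratios; here is a sketch with made-up counts (the function names and numbers are illustrative, not from the course):

```python
def defect_removal_efficiency(found_before_release: int,
                              found_after_release: int) -> float:
    # DRE = defects removed before release / total defects found
    total = found_before_release + found_after_release
    return found_before_release / total

def defect_density(defects: int, size_kloc: float) -> float:
    # Defects per thousand lines of code (KLOC)
    return defects / size_kloc

print(defect_removal_efficiency(90, 10))  # 90 of 100 defects caught before release
print(defect_density(45, 30.0))           # 45 defects in a 30 KLOC product
```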

3.2 Test Process:

Software testing can also be stated as the process of validating and verifying that a software program/application/product:

o meets the business and technical requirements that guided its design and development;
o works as expected; and
o can be implemented with the same characteristics.

3.2.0 Test Process in collaboration with SDLC

Let us understand the testing activity in a process. We will take the example of V-model

V- Model provides a structured testing framework to guide work throughout the development process.


It is designed to support the achievement of stage containment by organizing verification and validation in and across all of the methodology elements throughout the delivery phase of the methodology.

3.2.1 Benefits of V-Model

• Integrates inspection and test activities throughout.

• Finds errors earlier.

• Enables stage containment.

• Puts quality assurance back in the hands of the developers

The V-model framework is based upon the following Guiding principles:

• Stage Containment

• Verification & Validation

• Entry and Exit Criteria

Stage Containment:

• Identify problems at each stage in a product’s development, before the problems can be passed on to the next stage

• Minimize Problems and Maximize efficiency


3.3 Agile Testing

Some Agile Principles:

Satisfy the customer through early and continuous delivery of valuable software.


Working software is the primary measure of progress.

Deliver working software frequently, from a couple of weeks to a couple of months.

Welcome changing requirements, even late in development.

The most efficient and effective method of conveying information to and within a development team is face-to-face conversation.

Business people and developers must work together daily throughout the project.

Simplicity--the art of maximizing the amount of work not done--is essential

3.4 Agile methodologies and testing

eXtreme programming

Scrum

DSDM

FDD

LD

ASD

MS Synch-and-Stabilize

Crystal family

Some define strict disciplined testing practices

Some do not say much about testing approach

E.g. FDD: “… processes used for testing are not the main process issues with which the organisations are struggling … and most organizations already have reasonable testing processes in place”


Value of testing in agile development

Testing in agile software development should provide the information that stakeholders need to make decisions and steer the development into the right direction

This information must be provided promptly

Testing provides data on the status of the deliverables and generates project intelligence

Project intelligence is knowledge of risks and benefits

Knowledge of risks, benefits, test records and results are more valuable than test documentation and infrastructure

We can increase the value of testing most by

Improved intelligence

Providing intelligence earlier

3.4.1 Challenges for testing in agile software Development

What information is the testing based on?

o What to test and what are the expected results?

o How to make testing, development and business collaborate?

o How to involve customer and business people in testing?

How do we know where we are?

o Working software is the primary measure of progress

o Testing should provide the data

o If it’s not tested it doesn’t exist

How to keep up with the pace of the development?


o How to produce and communicate relevant information promptly?

o How to test early but not do anticipatory test design?

3.4.2 What is Test Driven Development?

Test Driven development is a practice that adds reliability to the development process.

3.4.2.0 Why Test Driven Development?

Many projects fail because they lack a good testing methodology.

It's common sense, but it isn't common practice.

The sense of continuous reliability and success gives you a feeling of confidence in your code, which makes programming more fun.

3.4.2.1 How does it work

Have a requirement. Let’s say “create a new random card, as for a card game”.

Think about what could be a reasonable test to verify compliance with the requirement.

Write the test before writing the code. Of course, the test will fail, and that’s ok.

Keep things simple.

Then, write only the minimally necessary code to make the test pass.

This is a process of discovery; as you identify possible improvements, refactor your code relentlessly to implement them.

Build, keep, and frequently run your cumulative collection of tests.
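The cycle above can be sketched in Python. This is an illustrative sketch only: the card encoding, the function name `new_random_card`, and the unittest-style test are invented for this example, not taken from the course.

```python
import random
import unittest

# Steps 1-2: the requirement is "create a new random card"; write the test first.
# It fails until new_random_card() below exists -- and that's OK.
class TestRandomCard(unittest.TestCase):
    def test_new_card_has_valid_rank_and_suit(self):
        card = new_random_card()
        self.assertIn(card[0], "23456789TJQKA")  # rank
        self.assertIn(card[1], "SHDC")           # suit

# Step 3: write only the minimally necessary code to make the test pass.
def new_random_card():
    return random.choice("23456789TJQKA") + random.choice("SHDC")

# Run the cumulative test collection frequently: python -m unittest <module>
```

Refactoring then proceeds under the safety of the accumulated tests.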

Summary:

At the end of this module, you will be able to understand

Need for V-Model testing.


Testing in different phases of the V-Model.

Different verification methods.

A brief idea of what TDD is, the need for TDD, and the methods of TDD.


Module 4: Types of Testing and testing in SDLC

4.1 Definitions in Testing

What is White Box Testing?

White-box testing is a verification technique that software engineers can use to examine whether their code works as expected. White-box testing (WBT) is also known as:

Glass Box testing

Structural testing

What is Black Box Testing?

Black-box test design treats the system as a "black-box", so it doesn't explicitly use knowledge of the internal structure.

Black-box test design is usually described as focusing on testing functional requirements. Synonyms for black-box include:

Behavioral,

Functional,

Opaque-box, and

Closed-box.

What is Verification?

Verification Testing: Doing it the right way:

Checks that a deliverable is correctly derived and internally consistent.

Checks that output and the process conform to the standards.


Is performed by inspecting and reviewing

What is Validation?

Validation Testing: Right things working right:

Checks that a specification is properly implemented.

Is performed by executing the code.

What is Test Coverage?

The aim of a coverage-based testing method is to 'cover' the program with test cases that satisfy some fixed coverage criteria.

4.2 Test Input Artifacts:

Test input documents used for any kind of testing are known as test input artifacts.

The major test input artifacts are

SRS [System Requirements Specification]

Design Documents like

Architecture design Docs

Database design Docs

Interface design Docs

Component Design docs

Test Plan

Test Strategy

Testing can be done at various phases of SDLC

- Requirements Phase

- Design Phase

- Coding Phase


- Maintenance Phase

4.3 Testing in Requirements Phase:

- Verify the Problem definition and Requirements definition

- Requirements must be “Clear”, “Complete”, “Adequate”, “Unambiguous”, “Verifiable”, “Feasible”, “Traceable” etc

- Test Strategy/Test Approach preparation

- Test Conditions should be generated during this phase of testing (Requirements Testability)

4.4 Testing in Design Phase:

- Verify the design documents against the requirements definition

- The design must be "Complete", "Consistent" and "Traceable" to the requirements

- Test Plan preparation

- Test Cases should be generated from the test conditions during this phase of testing (Design Testability)

4.5 Testing during Coding phase:

- Static analysis : Code walkthroughs and Inspections

- Dynamic Analysis : Execution of the code

- Manual or Automated

4.6 Testing during Maintenance Phase:

- Post-implementation, all defects are handled via the maintenance phase, whether by correcting defects or enhancing the system

- The system needs to be re-tested after each maintenance change

- Documentation related to the project needs to be updated


- Previously executed Test cases are useful during Regression

4.7 Principles of Testing:

Testing is done throughout the SDLC to detect defects as early as possible and prevent defect migration to subsequent phases.

-Software Testing = Verification + Validation

(Static testing) + (Dynamic Testing)

4.8 Risk based Testing

Identify risks in the software with specific reference to its quality characteristics, namely:

o low reliability,

o compatibility,

o portability,

o efficiency,

o security,

o accessibility, etc.

Prepare test cases for such identified characteristics

Summary:

At the end of this module, you will understand:

What is verification and validation?

Test principles and the coverage matrix

The need for testing in different phases of the SDLC


Module 5: Test Levels and test Types

5.1 Test Levels:

The different levels at which testing is performed for a given application are known as test levels.

Unit Testing:

lowest level

tested in isolation

most thorough look at detail

Error handling

Interfaces

usually done by programmer

also known as unit, module, program testing

5.2 Sanity Testing or Smoke Testing

Typically an initial testing effort to determine if a new software version is performing well enough to accept it for a major testing effort. For example, if the new software is crashing systems every 5 minutes, bogging down systems to a crawl, or corrupting databases, the software may not be in a 'sane' enough condition to warrant further testing in its current state.

System Testing:

Business process-based testing

Expected user profiles


What will be used most often?

What is critical to the business?

Business scenarios

Typical business transactions (catalogue to purchase)

Use cases

Prepared cases based on real situations

User Acceptance Testing

Final stage of validation. The customer (user) should perform it or be closely involved.

Customer can perform any test they wish, usually based on their business processes

Final user sign-off

Approach

Mixture of scripted and unscripted testing

5.3 Test Types

Static vs. Dynamic Test

Regression Test

Performance Test

Security Test

Load Test

Stress Test

Volume Test

Usability Test


Cross-browser test

Exploratory test

Alpha and beta test

Operation readiness test

5.4 Static vs. Dynamic Testing

Static testing is a technique which does not execute the code.

Dynamic testing is a technique where we execute the code and test the AUT (Application Under Test) for functionality, performance, etc.

Static testing is done during the inception and the elaboration phase of SDLC

Dynamic testing is done during the Construction and the Transition phases.

Load Testing

Testing an application under heavy loads; for example, testing a web site under a range of loads to determine at what point the system's response time degrades or fails.

Stress Testing:

Testing the system under extreme load conditions until the system breaks down.

For example, if a lift can take a load of only 10 people, then we test the system with a load of more than 10.

Volume Testing

This type of testing is performed to see the volume of data transferred from the client side to the server side and also to test the scalability of the servers

Usability Testing

This type of testing is performed to see the user friendliness of the Application interface. The interface design document will be an input.

Cross-Browser testing


This type of testing is performed to check the look and feel of the application across various browsers, and the browser support.

Exploratory testing

This type of testing is performed by a layman/developer/anybody to find other bugs in the system. It is also known as ad-hoc testing/monkey testing.

Exploratory testing involves simultaneously learning, planning, running tests, and reporting / troubleshooting results.

Alpha and beta test

These are the two types of Acceptance testing.

Alpha testing is performed by the developers and the QA people in the presence of the client in the development environment. This is performed to find variances in the software.

Beta testing is performed by the end-user after the application goes online.

Operation readiness testing

Tests the readiness of the production environment to handle the new system

Summary:

On Completion of this module, you will be able to define

Different types of testing

Why exploratory testing

What is ORT?

Differences between static and dynamic testing


Module 6: Overview of Unit Testing

6.1 Unit testing

We now have an understanding of unit testing; let us look at the techniques used for unit testing.

Unit Testing Techniques/White Box testing Techniques

1. Basis Path Analysis/Path Coverage
2. Code Coverage/Statement Coverage
3. Branch Coverage/Conditional Coverage/Decision Coverage
4. Loop Coverage

Control Flow Testing:

Statement coverage: All statements in the code are executed at least once

Decision coverage: Code constructs (e.g. if-else, do-while, for) are evaluated for all decision outcomes

Condition coverage: All conditions using relational and logical operators are checked for all outcomes.

Multiple Condition coverage: Consider combinations of multiple conditions to derive test cases. Optimize test cases to cover all such permutations and combinations.
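The difference between these criteria can be illustrated with a hypothetical `grant_discount` function (invented for this example, not from the course material); each criterion demands a different set of test cases:

```python
def grant_discount(age, is_member):
    # Decision: the whole condition; atomic conditions: age >= 65, is_member
    return age >= 65 or is_member

# Decision coverage: the decision as a whole is True once and False once (2 tests).
decision_tests = [((70, False), True), ((30, False), False)]

# Condition coverage: each atomic condition takes both True and False values.
condition_tests = [((70, False), True), ((30, True), True), ((30, False), False)]

# Multiple condition coverage: all 2^2 combinations of the atomic conditions.
multiple_condition_tests = [
    ((70, True), True), ((70, False), True),
    ((30, True), True), ((30, False), False),
]

for args, expected in decision_tests + condition_tests + multiple_condition_tests:
    assert grant_discount(*args) == expected
```

Note that with two atomic conditions, multiple condition coverage already needs four combinations; the number grows exponentially, which is why optimizing such test sets matters.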

– Basis path testing

Linear Combination of Paths: All paths through the code can be built as a linear combination of a set of independent (basis) paths

Cyclomatic Complexity: Number of test cases = (e - n + 2), where e is the number of edges and n the number of nodes in the flow graph; also known as the complexity measure


Test Coverage

Statement Coverage

Percentage of executable statements exercised by a test suite

= number of statements exercised / total number of statements

Example:

- program has 100 statements
- tests exercise 87 statements
- statement coverage = 87%

Example of statement coverage

1 Read(a)
2 If a > 6 then
3   b = a
4 End if
5 Print b

Test Case | Input | Expected Output
1 | 7 | 7

As all 5 statements are ‘covered’ by this test case, we have achieved 100% statement coverage
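Full statement coverage can still miss a defect: in the example, b is never assigned when a is 6 or less. A Python rendering (illustrative; the function name is invented) makes the gap visible:

```python
def print_b(a):
    # Python rendering of the 5-line example above.
    if a > 6:
        b = a
    return b  # defect: b is never assigned when a <= 6

assert print_b(7) == 7   # this single test executes every statement (100% coverage)

try:
    print_b(5)           # ...yet this input fails: b was never assigned
except UnboundLocalError:
    pass
```

This is why statement coverage is considered the weakest of the coverage criteria: it says every statement ran, not that every outcome was exercised.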

Decision Coverage:

Percentage of decision outcomes exercised by a test suite

= number of decision outcomes exercised / total number of decision outcomes

Example:

- program has 120 decision outcomes
- tests exercise 60 decision outcomes
- decision coverage = 50%

Example for Decision coverage:

1 Read A
2 If A > 0 then
3   If A = 21 then
4     Print "key"
5   End if
6   Print "key1"
7 End if

Cyclomatic Complexity: 3

Minimum tests to achieve:


Statement Coverage: 1
Branch Coverage: 3
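The same example in Python (illustrative; the function name `keys` is invented) shows the counts: a single test reaches every statement, while covering every decision outcome takes three tests, matching the cyclomatic complexity of 3.

```python
def keys(a):
    out = []
    if a > 0:
        if a == 21:
            out.append("key")
        out.append("key1")
    return out

# Statement coverage: one test reaches every statement.
assert keys(21) == ["key", "key1"]

# Branch coverage: each decision outcome needs to be exercised -- 3 tests.
assert keys(21) == ["key", "key1"]   # both decisions True
assert keys(5)  == ["key1"]          # outer True, inner False
assert keys(-1) == []                # outer False
```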

6.2 Non-systematic test techniques

1. Trial and error / Ad hoc
2. Error guessing / Experience-driven
3. Unscripted testing

6.2.0 Error Guessing

- Always worth including, after systematic techniques have been used
- Can find faults that systematic techniques miss


6.2.1 Exploratory testing

What is Exploratory Testing?

"Exploratory testing involves simultaneously learning, planning, running tests, and reporting / troubleshooting results." Dr. Cem Kaner (2001)

"Exploratory testing is an interactive process of concurrent product exploration, test design and test execution." "To the extent that the next test we do is influenced by the result of the last test we did, we are doing exploratory testing." James Bach, Satisfice (2001)

6.2.2 When to use Exploratory Testing?

A common goal of exploration is to probe for weak areas of the program.

Test team’s resource consumption per week:

25% of the group’s time developing new tests

50% executing old tests (including bug regression)

25% on exploratory testing

When there is little or no specifications and / or requirements

When you have little or no domain knowledge

When you don’t have time to specify, script and test

Uncertainty and Time Pressure!

Exploratory Testing is extremely useful when faced with software that is

Untested

Unknown or

Unstable


The tester must create a map of the application as he goes on testing it.

Take a more scripted approach when:

There is little uncertainty about how to test

New tests are relatively unimportant

The need for efficiency and reliability in executing tests is worth the effort of scripting

We are prepared to pay the cost of documenting and maintaining tests


Module 7: Overview of Integration Testing

7.1 Integration testing

After the modules are developed, they are integrated/joined. Once the modules are integrated they need to be tested; that is when we perform integration testing. There are two approaches to integration testing, namely

1. Top-Down Approach
2. Bottom-Up Approach

7.1.0 Top-Down Approach

The top-down approach to integration testing requires the highest-level modules to be tested and integrated first.

This allows high-level logic and data flow to be tested early in the process and it tends to minimize the need for drivers.

However, the need for stubs complicates test management and low-level utilities are tested relatively late in the development cycle.

Another disadvantage of top-down integration testing is its poor support for early release of limited functionality

Stubs:

Stubs are dummy modules that stand in for missing lower-level modules; they are used in the top-down approach to integration testing.

7.1.1 Bottom-up Approach

The bottom-up approach requires the lowest-level units to be tested and integrated first.

These units are frequently referred to as utility modules.


By using this approach, utility modules are tested early in the development process and the need for stubs is minimized.

The downside, however, is that the need for drivers complicates test management and high-level logic and data flow are tested late.

Like the top-down approach, the bottom-up approach also provides poor support for early release of limited functionality

Drivers:

The approach is to write a program that passes input data to the unit under test and compares the output to the expected results. Test drivers are higher-level routines that call lower-level subprograms.
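A minimal sketch of both ideas in Python (the module names, the 10% tax rate, and the test data are all invented for this example): the stub stands in for a missing lower-level module, and the driver feeds inputs to the unit under test and checks the outputs.

```python
# Unit under test: a mid-level routine that normally calls a lower-level
# tax service which is not yet available.
def price_with_tax(amount, tax_service):
    return amount + tax_service(amount)

# Stub (top-down): a dummy stand-in for the missing lower-level module,
# returning canned behaviour rather than the real tax rules.
def tax_service_stub(amount):
    return round(amount * 0.10, 2)

# Driver (bottom-up): a throwaway routine that passes input data to the
# unit under test and compares the output to the expected results.
def driver():
    cases = [(100.0, 110.0), (20.0, 22.0)]
    for amount, expected in cases:
        actual = price_with_tax(amount, tax_service_stub)
        assert actual == expected, (amount, actual, expected)
    return "all cases passed"
```

Calling `driver()` runs the two cases and returns "all cases passed".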

There are two types of Integration Testing, they are:

1. Big-Bang Integration
2. Incremental Integration

7.1.2 Big-Bang Integration

In theory:

- If we have already tested the components, why not just combine them all at once? Wouldn't this save time?

- (This is based on the false assumption that there are no faults)

In practice:

- takes longer to locate and fix faults
- re-testing after fixes is more extensive
- end result? takes more time

7.1.3 Incremental Integration

Baseline 0: tested component

Baseline 1: two components


Baseline 2: three components, etc.

Advantages:

-easier fault location and fix

-easier recovery from disaster / problems

-add to tested baseline

7.2 Input Artifacts for Integration Testing

Requirements

Use Case Model

User Interface Design

Requirements Traceability Matrix

7.2.0 Deliverables

Test Scenarios (Usually created by Lead)

Test Conditions

Expected Results

Summary:

At the end of this module, you will be able to

Define the test process

Understand various testing types

Understand various testing techniques

Understand different artifacts for testing, and Deliverables


Module 8: Test Requirements

8.0 What is SRS?

An SRS is basically an organization's understanding (in writing) of a customer or potential client's system requirements and dependencies at a particular point in time, (usually) prior to any actual design or development work.

Background

"Test Requirements Hierarchy" is a term coined from Rational's SQA Team Test software.

The principle of identifying, organizing, and measuring test requirements is universal to many test processes and methodologies

Test Requirements

What exactly is a Test Requirement?

Why identify Test Requirements?

Where does a Test Requirement come from?

What exactly is a Test Requirement?

Identifies the “WHAT” of testing

What needs to be tested, AND

What are you going to validate about it

Includes both normal and error conditions

Covers business rules, functionality, non-functional standards


Do NOT have case-specific data values assigned to them yet (data appears in test cases, the "How" of testing)

Examples:

Example 1: Testing the inserting of a record to a table

Test Requirements Identified (among others):

“Validate that you can insert an entry”

“Validate that insertion fails if entry already present”

“Validate that insertion fails if table already full”

“Validate that you can insert an entry to an empty table (initial)”

These are test requirements NOT tests because they do not describe the data element being inserted

The data is irrelevant at this level, it will appear in the test cases used to cover these test requirements

“Validate you can insert ‘John Doe’” is a test case not a test requirement
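To make the separation concrete, here is a hypothetical fixed-capacity table (invented for this example) with test cases that cover the test requirements above; the data values such as "John Doe" appear only at the test-case level.

```python
class Table:
    """Hypothetical fixed-capacity table used to cover the test requirements."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.entries = []

    def insert(self, entry):
        if entry in self.entries:               # insertion fails if entry already present
            return False
        if len(self.entries) >= self.capacity:  # insertion fails if table already full
            return False
        self.entries.append(entry)
        return True

# Test cases covering the test requirements (data values belong here).
t = Table(capacity=2)
assert t.insert("John Doe") is True    # insert into an empty (initial) table
assert t.insert("John Doe") is False   # insertion fails: entry already present
assert t.insert("Jane Roe") is True    # validate that you can insert an entry
assert t.insert("Ann Poe") is False    # insertion fails: table already full
```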

Why identify Test Requirements?

The QC workbench is all about requirements-based or function-based testing

It’s the basis for establishing the completion of testing

Helps determine the scale of the testing effort

Governs the types of resources you will need

Serves to identify automation strategies you can use

Becomes a roadmap for your testing effort

Can be a tool for leverage and dialog with developers and business analysts

Dev Team can sign off on them (Verification!)

Where does a TR come from?

Traditional: Business Requirements, functionality, internal logic…

Marketing specs, Functional specs, Technical specs


Reality:

“Interview Analysis”, Non-Functional Checklists (standards & compliance), Use Cases (from business scenarios and users), Discovery during testing, any other deliverables from previous workbenches (diagrams, modeling, flowcharts, etc.)

How do Test Requirements relate to the Test Plan?

Traditionally, the Test Plan has represented “what” was going to be tested, even including test cases.

Paradigm is shifting: Test Plan should relate what your testing process (and deliverables) will be for a particular project.

A Test Plan should build confidence in the testing process for a project, making approaches clear.

A Test Plan can include the Test Requirements

However, if the Test Requirements are too lengthy, they should become their own document: A Test Requirements Hierarchy

Drilling down: Where test requirements fit into the picture


Drilling Down:

ATM Example: Practice Writing Test Requirements


Example 2: Testing Withdrawals on an ATM

Test Requirements Identified (among others):

“Validate that a withdrawal option is available”

"Validate that a withdrawal of a multiple of $20, between $20-$300 can be done"

"Validate that <$20 is not allowed"

"Validate that >$300 is not allowed"

"Validate that $20 multiples >$300 is not allowed"

"Validate that non-$20 multiples between $20-$300 not allowed"

"Validate strange numeric amounts/combinations not allowed (all zero's, all 9's, 20.0000)"

“Validate that the withdrawal received is equal to the amount requested”

"Validate that a valid withdrawal amount must be below the account balance”

These are test requirements NOT tests because they do not describe the data element being used (like $20, $40, $60, $1)

The data is irrelevant at this level, it will appear in the test cases used to cover these test requirements
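Those requirements can later be covered by test cases against a validation function. The sketch below is hypothetical (the function name is invented, and "below the account balance" is assumed here to mean "not above"); the specific dollar amounts appear only at this test-case level.

```python
def valid_withdrawal(amount, balance):
    """Hypothetical rule set from the test requirements: a withdrawal must be
    a multiple of $20, between $20 and $300, and not above the account balance."""
    return (20 <= amount <= 300
            and amount % 20 == 0
            and amount <= balance)

# Test cases derived from the test requirements (data values appear only here).
assert valid_withdrawal(20, 500) is True      # lower bound allowed
assert valid_withdrawal(300, 500) is True     # upper bound allowed
assert valid_withdrawal(10, 500) is False     # < $20 not allowed
assert valid_withdrawal(320, 500) is False    # $20 multiple > $300 not allowed
assert valid_withdrawal(25, 500) is False     # non-$20 multiple not allowed
assert valid_withdrawal(100, 50) is False     # above account balance not allowed
```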

Entrance Criteria for Business Requirements to generate Test Requirements:

Visible?

Clear? (unambiguous)

Complete?

Consistent? (conflicting requirements must be prioritized)


Reasonable? (achievable)

Measurable? (quantifiable)

Modifiable? (Will it change or is it stable?)

Traceable? (the source is known)

Dependent requirements identified?

Testable? (given current environment, resources, skills)

Exit Criteria for Test Requirements:

Can another tester create test cases/scenarios from these?

Does a Test Requirement specify what is being tested and what about it we are validating? (Clear?)

Are the Test Requirements…?

Complete?

Consistent? (conflicting requirements must be prioritized)

Reasonable? (achievable)

Measurable? (quantifiable for measuring test coverage)

Modifiable? (Will it change or is it stable?)

Traceable? (the source is known)

Testable? (given current environment, resources, skills)

Do the test requirements cover the complete scope of the project?

Are all the test requirements verified and signed off by the Development Team?

When creating Test Requirements ("Do")...

Use "action" verbs & words

o “Validate that…”

o “Check for…”


o “Test that…”

Trace them back to the source

Remember that different applications arrange in different ways

o Think of MF, batch, C/S, web, e-commerce, GUI, etc.

o Use “testing considerations” checklists that generally cover what kinds of things should be considered when testing your specific situation

Make your Test Requirements document a “living document”

Maintain a level of balance between too much & too little

o Too High level: won’t be useful, vague, can’t generate test cases from it.

o Too low level: Over-process, over documentation, no productivity

o General rule: 5-7 levels deep in an outline format

Organize them!

o Outline/Hierarchy format recommended

o Look at how the BA breaks down the project into areas

o Look at how the PA breaks down the project into areas

o Organize by functional areas

o Organize by “types” of testing (Function vs. System vs. Non-Functional)

Organizing by Functional areas

Most testers also perform User Interface Style Tests

These are generally separate from the functionality that the software will provide

Usually encompass the architectural standards & compliance (like Windows Design Standards)


Also includes tests of navigation, menus, admin functions (like printing, saving)

Remember this…Drilling down

Decomposing: Drilling down within a Test Requirement


Test Requirement Decomposition


You can start on the high level functional areas early into the project & build in lower level areas as they become available

Any level can contain a test requirement which can also be made up of (or broken down into) lower level test requirements

Business Function Level

Try to identify "groups" of functions or functions connected by similar themes

File management functions, printing functions, help functions, car rental functions, reservation functions, ticket purchase functions, claim reporting functions

Be sure all areas of the system are covered. If something is left out or doesn’t fit into a group, it becomes its own group.

It may be easier to identify functional areas by “window” instead of by function.

At this level, the idea is seeing the connections, integration, and interactions of the system’s functionality.

May not necessarily be identifying a test requirement at this level as much as just identifying the functional area.

Task Level: Break down each Function area into the tasks within the function

For complex tasks, it is possible to break them down further into sub-tasks

Some Business Functions cannot be decomposed into further tasks (example: a Check Writing function)


8.2 Transaction Level

From this level down, we start to address the internal things that occur to make the functions and tasks happen

Identify any logical transactions that tie the task to the database, or any other transactions necessary to perform the task.

Identify any data processing, calculation, data formatting type transactions

Note: A screen or window may cause the execution of several different logical transactions

8.2.0 Transaction Data Type Level

Identify which of the four types the transaction can become: Add, Change, Delete, Inquire

It is entirely possible that a transaction can be multiple types.

If a transaction is only one type, you can wrap this level up into the higher level.

8.2.1 Field Validation Level

Most testers like to jump directly to this level. It’s the most obvious at times.

Field validation covers all edits & cross edits on fields and data.

Be careful of the detail you document at this level. Remember the separation between test requirement and test case.


Not all test requirements (especially at this level) fit well to be documented in this manner.

Example: Testing all the stated properties of windows objects

Use “Validate that the properties of all the objects in this window match the properties listed on the Object Properties Reference Table in Appendix B upon initialization of the window”

Don’t list each property check as a separate test requirement if it can be summarized under one test requirement

This is a judgment call YOU make for your given project.

8.2.2 Example 3: Rental Car Application

1. Validate that a Rental can occur.

1.1 Check Customer policy coverage

1.2 Query Car availability

1.3 Query Car rates

1.4 Open a Rental ticket

1.4.1 Validate that a customer record can be entered

1.4.2 Validate that credit card approval is obtained

1.4.3 Validate that status on the car record is changed from “waiting” to “rented”

2. Billing Function

3. Reservation Function

1. Validate that a Rental can occur.


1.4 Open a Rental ticket

1.4.1 Validate that a customer record can be entered

1.4.1.1 Validate that a new customer can be added to the customer table

1.4.1.1.1 Validate that the first name is all alpha

1.4.1.1.2 Validate that the age is > 21.

1.4.1.1.3 Validate that the phone number is numeric

1.4.1.1.4 Validate area code is an existing area code number.

1.4.1.2 Validate changing an existing customer

1.4 Open a Rental ticket

1.4.2 Validate that credit card approval is obtained

…fill in the lower level test requirements!

First, identify any sub-areas (further tasks, or even separate transactions within this)

Then, identify the lowest level field validation test requirements (think about what is typically involved with credit card authorizations)

Possible Test Requirements...

Validate that a Rental can occur.

1.4 Open a Rental ticket

1.4.2 Validate that credit card approval is obtained

1.4.2.1 Validate the expiration date is a valid future date

1.4.2.2 Validate the expiration date is not within 1 month of expiring.

1.4.2.3 Validate that the CC# is 12 digits


1.4.2.4 Validate that the $ amount is <= credit balance available

1.4.2.5 Validate that authorization # is received

8.3 Test Coverage Measures

Test Requirements are the “what” of testing & are the basis for establishing the completion of testing

TR’s give us the point of measurement for test coverage

Each TR should receive a Priority, Risk, and Weight

Each TR should be tracked for Verification (P) and Validation (%)

Summary:

On Completion of this module, you will be able to

Define test requirements from an SRS

Identify the business requirements

Drill down to the child requirements

Build the test requirements coverage matrix


Module 9: Static and Dynamic Testing

9.0 Static Testing:

People techniques:

- individual: desk-checking, proof-reading

- group:

Reviews: informal / formal

Walkthrough: for education

Inspection: to find faults (most formal)

Static analysis: control flow & data flow

10 times reduction in faults reaching test, testing cost reduced by 50% to 80% (Freedman & Weinberg, Handbook of Walkthroughs, Inspections & Technical Reviews)

25% reduction in schedules, removal of 80%-95% of faults at each stage, 28 times reduction in maintenance cost, and many others (Gilb & Graham, Software Inspection)

9.0.1 What can be inspected?

policy, strategy, business plans, marketing or advertising material, contracts

system requirements, feasibility studies, acceptance test plans

test plans, test designs, test cases, test results

system designs, logical & physical

software code


user manuals, procedures, training material

9.0.2 What can be reviewed?

Anything which could be inspected, i.e. anything written down

plans, visions, “big picture”, strategic directions, ideas

project progress: work completed to schedule, etc.

Cost of Reviews

Rough guide: 5%-15% of development effort

Effort required for reviews

planning (by leader / moderator)

preparation / self-study checking

meeting

fixing / editing / follow-up

recording & analysis of statistics / metrics

Process improvement (should!)

9.1 Types of Review

Informal Review undocumented

Widely viewed as useful and cheap (but no one can prove it!); a helpful first step for chaotic organisations.

Technical Review (or peer review)

Includes peer and technical experts, no management participation. Normally documented, fault-finding.

Decision-making Review


Group discusses document and makes a decision about the content, e.g. how something should be done, go or no-go decision, or technical comments.

9.1.0 Walkthrough

- author guides the group through a document and his or her thought processes, so all understand the same thing, consensus on changes to make

9.1.1 Inspection

- formal individual and group checking, using sources and standards, according to generic and specific rules and checklists, using entry and exit criteria, Leader must be trained & certified, metrics required

9.2 What can static analysis do?

A form of automated testing

Check for violations of standards

Check for things which may be a fault

Descended from compiler technology

A compiler statically analyses code, and “knows” a lot about it, e.g. variable usage; finds syntax faults

Static analysis tools extend this knowledge

Can find unreachable code, undeclared variables, parameter type mis-matches, uncalled functions & procedures, array bound violations, etc.

9.2.0 Static Analysis

Data flow analysis

This is the study of program variables

n := 0

read (x)

n := 1


while x > y do

begin

z := 0

read (y)

write( n*y)

x := x - n

end
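A toy static analyser sketched with Python's standard `ast` module catches one such anomaly, a variable that is assigned but never read, in a Python version of the fragment above (here it flags `z`; the `read_y` helper is a placeholder). Real data-flow tools also detect anomalies this simple check misses, such as `n` being re-assigned before its first value is ever used.

```python
import ast

def defined_but_unused(source):
    """Report names that are assigned (Store context) but never read (Load context)."""
    stored, loaded = set(), set()
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, ast.Name):
            if isinstance(node.ctx, ast.Store):
                stored.add(node.id)
            elif isinstance(node.ctx, ast.Load):
                loaded.add(node.id)
    return stored - loaded

fragment = """
n = 0
n = 1
while x > y:
    z = 0
    y = read_y()
    print(n * y)
    x = x - n
"""
print(defined_but_unused(fragment))   # flags {'z'}: assigned, never read
```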

Control flow analysis

Highlights:

nodes not accessible from start node

infinite loops

multiple entry to loops

whether code is well structured

whether code conforms to a flowchart grammar

any jumps to undefined labels

any labels not jumped to

cyclomatic complexity and other metrics

Unreachable code example

Macro definitions:

Buffsize: 1000
Mailboxmax: 1000

IF Buffsize < Mailboxmax THEN
  Error-Exit
ENDIF

Since both macros expand to 1000, the condition is always false. Static analysis finds the THEN clause unreachable, so will flag a fault.

9.2.1 Cyclomatic complexity

Cyclomatic complexity is a metric measuring the complexity of a flow graph (and therefore of the code that the flow graph represents).

The more complex the flow graph, the greater the measure. It can most easily be calculated as:

complexity = number of decisions + 1

Other metrics: lines of code (LOC), fan-in / fan-out, nesting levels.
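The decisions-plus-one rule can be approximated for Python source by counting branch points in the syntax tree — a simplified sketch of the metric, not a full flow-graph construction:

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """Approximate cyclomatic complexity as number of decisions + 1."""
    decisions = 0
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.If, ast.While, ast.For, ast.IfExp)):
            decisions += 1  # each branch or loop is a decision point
        elif isinstance(node, ast.BoolOp):
            # 'and' / 'or' add short-circuit decision points
            decisions += len(node.values) - 1
    return decisions + 1

code = """
def classify(x):
    if x < 0:
        return "negative"
    if x == 0:
        return "zero"
    return "positive"
"""
print(cyclomatic_complexity(code))  # two 'if' decisions -> 3
```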

Which flow graph is most complex?

Limitations and advantages


Limitations:

Cannot distinguish "fail-safe" code from programming faults or anomalies

Does not execute the code, so not related to operating conditions

Advantages:

Can find faults difficult to "see"

Gives objective quality assessment of code

9.3 Dynamic Testing

In this type of testing the code must be ready, so that we can execute a condition in the application and check the actual result. In dynamic testing, for every input we expect an output.

Dynamic testing is implemented with the help of testing techniques. Now let us understand what a testing technique is.

Exhaustive testing (use of all possible inputs and conditions) is impossible

Must use a subset of all possible test cases

Must have high probability of detecting faults

Need thought processes that help us select test cases more intelligently

Test case design techniques are such thought processes

9.3.0 What is a testing Technique?

A procedure for selecting or designing tests

Based on a structural or functional model of the software

Successful at finding faults

A way of deriving good test cases


Advantages of Testing Techniques

Different people achieve a similar probability of finding faults, and gain some independence of thought.

Effective testing:

More faults with less effort

Focus attention on specific types of fault

Know you're testing the right thing

Efficient use of time:

Avoid duplication

Systematic techniques are measurable

9.3.1 Three types of systematic technique

Static (non-execution)

Examination of documentation, source code listings, etc.

Functional (Black Box)

Based on behaviour / functionality of software

Structural (White Box)

Based on structure of software

Black box vs. white box:

Black box testing is appropriate at all levels but dominates the higher levels of testing; white box testing is used predominantly at the lower levels to complement black box testing.


9.4 Black Box testing Techniques

Specification derived tests

Equivalence partitioning

Boundary value analysis

Risk Based Testing

Decision Tables

9.4.0 Specification derived tests

Consider the specification for a function to calculate the square root of a real number:

→ Input - real number

→ Output - real number

→ When given an input of zero or greater, the positive square root of the input shall be returned.

→ When given an input of less than 0, the error message "Square root error - negative input" shall be displayed and a value of 0 returned.

→ The library routine Print_Line shall be used to display the error message.


1. There are three statements in this specification, which can be addressed by two test cases.

2. Print_Line conveys structural information in the specification.

Test Case 1: Input 4, Return 2

• Exercises the first statement in the specification ("When given an input of zero or greater, the positive square root of the input shall be returned.").

Test Case 2: Input -10, Return 0, Output "Square root error - negative input" using Print_Line

• Exercises the second and third statements in the specification ("When given an input of less than 0, the error message 'Square root error - negative input' shall be displayed and a value of 0 returned. The library routine Print_Line shall be used to display the error message.").
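The specification above can be sketched in Python, with the two specification-derived test cases as assertions; print_line is a stand-in for the Print_Line library routine named in the specification:

```python
import math

def print_line(message: str) -> None:
    # Stand-in for the Print_Line library routine.
    print(message)

def square_root(x: float) -> float:
    """Implements the square-root specification above."""
    if x < 0:
        print_line("Square root error - negative input")
        return 0.0
    return math.sqrt(x)

# Test Case 1: exercises the first statement of the specification
assert square_root(4) == 2
# Test Case 2: exercises the second and third statements
assert square_root(-10) == 0
```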

9.4.1 Equivalence partitioning (EP)

Divide (partition) the inputs, outputs, etc. into areas which are the same (equivalent).

Assumption: if one value works, all will work.

One value from each partition is better than all values from one partition.


9.4.2 Boundary value analysis (BVA)

faults tend to lurk near boundaries

good place to look for faults

test values on both sides of boundaries

Example: Loan application


Customer Name (number of characters):

Valid partitions: 2 to 64 chars; valid chars
Invalid partitions: < 2 chars; > 64 chars; invalid chars
Valid boundaries: 2 chars; 64 chars
Invalid boundaries: 1 char; 65 chars; 0 chars

Account Number


Account number:

Valid partitions: 6 digits, first digit non-zero
Invalid partitions: < 6 digits; > 6 digits; first digit = 0; non-digit
Valid boundaries: 100000; 999999
Invalid boundaries: 5 digits; 7 digits; 0 digits
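The boundary values from these condition tables can be exercised directly. A minimal sketch, assuming hypothetical validator functions for the two fields:

```python
# Illustrative validators for the two loan-application fields, exercised at
# the boundary values from the tables above (function names are assumptions).
def valid_customer_name(name: str) -> bool:
    return name.isalpha() and 2 <= len(name) <= 64

def valid_account_number(acct: str) -> bool:
    return len(acct) == 6 and acct.isdigit() and acct[0] != "0"

assert valid_customer_name("ab")            # 2 chars: valid boundary
assert valid_customer_name("a" * 64)        # 64 chars: valid boundary
assert not valid_customer_name("a")         # 1 char: invalid boundary
assert not valid_customer_name("a" * 65)    # 65 chars: invalid boundary
assert not valid_customer_name("")          # 0 chars: invalid boundary

assert valid_account_number("100000")       # lowest valid account number
assert valid_account_number("999999")       # highest valid account number
assert not valid_account_number("99999")    # 5 digits: invalid boundary
assert not valid_account_number("1000000")  # 7 digits: invalid boundary
assert not valid_account_number("012345")   # first digit zero: invalid
```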

9.4.2.0 Condition template


9.4.2.1 Design test cases

9.4.2.2 Decision tables

Decision tables explore combinations of inputs, situations, or events; it is very easy to overlook specific combinations of input.

Start by expressing the input conditions of interest so that each is either TRUE or FALSE, for example:

record found

file exists

code valid

policy expired

account in credit

due date > current date

Determine input combinations:

Add columns to the table for each unique combination of input conditions.

Each entry in the table may be either ‘T’ for true or ‘F’ for false.
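The columns of such a table can be enumerated mechanically. A small sketch using three of the conditions listed above:

```python
from itertools import product

# Enumerate every T/F combination of three input conditions, as the
# columns of a decision table would.
conditions = ["record found", "file exists", "code valid"]

columns = list(product("TF", repeat=len(conditions)))
for column in columns:
    print(", ".join(f"{c}={v}" for c, v in zip(conditions, column)))

print(len(columns))  # 2^3 -> 8 combinations
```

With n conditions the full table has 2^n columns, which is why combinations are so easily overlooked when derived by hand.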


9.4.3 Cause-effect graphing

In too many instances, an attempt to translate a policy or procedure stated in a natural language into software causes frustration and problems. Cause-effect graphing is a test case design approach that offers a concise depiction of logical conditions and their associated actions.

9.5 Procedure (Scenario) testing

Understanding Scenario Based Testing

Scenario Based Tests (SBT) are best suited when tests need to concentrate on the functionality of the application more than anything else.

What do you do for deriving Scenarios?

We can use the following as the basis for deriving scenarios:


From the requirements, list out all the functionalities of the application.

Using a graph notation, draw depictions of various transactions which pass through various functionalities of the application.

Convert these depictions into scenarios.

Run the scenarios when performing the testing.

9.5.0 Characteristics of Good Scenarios

A scenario test has five key characteristics.

It is

(a) a story that is

(b) motivating,

(c) credible,

(d) complex, and

(e) easy to evaluate.

9.5.1 Non-systematic test techniques

Trial and error / ad hoc testing

Error guessing / experience-driven testing

Unscripted testing

9.5.2 Error Guessing

Error guessing is always worth including after systematic techniques have been used; it can find some faults that systematic techniques can miss.


9.6 Functional vs. Non-functional Testing

Functional Testing

Testing the application against business requirements. Functional testing is done using the functional specifications provided by the client, or design specifications such as use cases provided by the design team. Non-functional testing addresses the performance aspects of the system, such as load, stress, volume, and configuration.

9.6.0 Functional Testing covers:

1. Unit Testing
2. Smoke Testing / Sanity Testing
3. Integration Testing (Top-Down, Bottom-Up)
4. Interface & Usability Testing
5. System Testing
6. Regression Testing
7. Pre User Acceptance Testing (Alpha & Beta)
8. User Acceptance Testing
9. White Box & Black Box Testing
10. Globalization & Localization Testing

9.7 Summary:

On the completion of this module, you will understand

Need for testing techniques

Different types of testing techniques

Static and dynamic testing techniques

Functional and non-functional testing

Scenario based testing


Module 10: Manual Test Scripts

10.1 What is Test Execution?

Test execution is the phase that follows test planning and test preparation and produces a set of required deliverables. Test execution is conducted with either manual or automated test scripts.

Test Execution Objective:

The main objective of test execution is to demonstrate that the actual test results for each test step match the expected test results.

Alternatively, test execution may identify that the system configuration does not meet requirements, which causes the test team to log defects and fix them, thereby increasing the quality of the product.

A Developer’s Perspective:

Testers and Developers work closely together. What do you think a developer will be concerned about?

Whether all the business requirements have been properly implemented.

If end users are able to successfully execute end-to-end business scenarios.

If the system is fully integrated, stable, and operates according to requirements.


Tester’s role in Test Execution:

10.1 What are the key responsibilities of a Tester?

Primary Tester Responsibilities:

Execute test scripts.

Record potential defects as SIRs.

Work with the application team to resolve identified defects.

Participate in the release control process to ensure that solutions meet business requirements.

Validate application fixes.

Inform the test lead of any issues that may affect the schedule, budget, or quality of the application or testing process.


10.1.0 Who Should Execute the Test?

Responsible group by test level:

Testers: Integration, System, Acceptance
Developers: Unit, Integration, System
End-Users: System, Acceptance

10.2 Testing Process Overview

10.2.0 Test Process Flow Overview - Execute Test

First Pass: “BaseLine”

• Confirm the test environment has been established properly, modules have been installed, and connectivity is working


• Execute all test scripts for the test phase at least once to drive out defects as early as possible

• Resolve any defects that are impeding test execution

Second Pass: “Retest”

• Retest defects identified from the First Pass and new defects found in the Second Pass based on priority (iterative)

• Re-execute other previously passed, selective test scripts, at the discretion of the test lead

Third Pass: “Freeze”

• Retest all test scripts for the test phase

• Identify open defects that can be deferred to a later test phase or to post-production

• Address all remaining defects


10.3 Test Steps/Test Scripts:

Test script creation and execution happen during the Perform Test stage of the STLC.

Deliverables:

The deliverables required are

Test Script

Test Data

SIR [System Investigation Request]

Test Results

Issue Log [Details of the Faults in the system/Environment]

10.3.0 Test Execution Entry and Exit Criteria

As you pass from one phase of testing into the next, there is a need for control.

There is a need for the first phase (or the supplier) to retain control of their testing phase, until such time as it is deemed ready for release.

The second phase (or the recipient), on the other hand, needs to ensure that the testing performed by the supplier has achieved a sufficiently high standard to be acceptable.

The means of achieving this is referred to as Exit Criteria (for the supplier) and Entry Criteria (for the recipient). These criteria define the standards that must be achieved when entering and exiting the test phase described by the document.

10.3.1 Entry Criteria:


A set of decision-making guidelines that indicate whether the project is ready to enter a particular phase of testing.

10.3.2 Exit Criteria:

A set of decision-making guidelines that indicate whether the project is ready to exit a particular phase of testing.

10.4 Application Product Test Entry Criteria: The following are the set of entry criteria to control the quality of the deliverables related to application product test.

10.4.0 Plan and Prepare

Plan and prepare inputs to the application product test to meet the exit criteria: Test Approach and Requirements.

Meet with users to (re)establish conditions.

Application Product Test Entry Criteria – Execute

10.4.1 Execute

Fully test assemblies.

Complete root cause analysis of assembly test.

Ensure inputs to execute the application product test meet their exit criteria: Test Approach and component and assembly tested work units.

Set up test environment hardware.


Configure test environment software.

Use latest release of operations architecture and execution architecture.

Ensure the application product test environment models the production environment (database size, LAN configurations, automated processes, and manual processes) and is independent of the assembly test environment.

Configure environment for production efficiency.

Reserve back-up hardware and equipment.

Install and test the test execution and version control tools.

Obtain test data.

Populate databases.

Establish responsibility for introducing code into the environment.

Refine promotion procedures.

Define responsibility for running batch jobs.

Run a mini-pilot of the test stage to ensure a stable environment.


Application Product Test Entry Criteria – Approach

10.4.2 Approach

Complete the Test Approach deliverable, including:


Test objectives and scope

Risks

Regression test approach

Test environment requirements

Metrics

Entry and exit criteria

Test resources and work plan

Reference existing test models.

Define the test strategy.

Preliminarily define test cycles.

Ensure the work plan includes all tasks, resources, and budgets.

Application Product Test Entry Criteria – Plan

10.4.3 Plan

Complete the Test Plan deliverable.

Ensure text for cycle description is understandable.

Ensure all cycles test an appropriate number of conditions and are tied to business processes.

Write conditions in functional terms.

Ensure conditions thoroughly test the business function.

Cross-reference conditions and test cycles.

Tie conditions to business benefits.

Identify functional, quality, and technical test conditions.

Logically group conditions into test cycles.


Ensure interfaces account for all data sources and destinations.

Make sure audit trails can reconstruct processing.

Ensure conditions test the following:

Communication

Data conversion

End-to-end processing (horizontal)

Entire function (vertical)

Error/restart/recovery

Loading of database

Performance

Periodic functions

Primary interfaces

Secondary interfaces

Stress

Ensure the application complies with the organization’s policies and external regulations.

Ensure the security of all appropriate data and transactions.

Ensure documents identify their purpose and audience.

Present reports and documents in a timely manner.

Make the application and documentation consistent.

Include quality and technical conditions in one or more cycles.

Group cycles by above condition types.

Application Product Test Entry Criteria – Prepare


10.4.4 Prepare

Complete the Test Script and Test Conditions and Expected Results deliverables.

Augment the test conditions, if necessary.

Ensure the expected results are easy to understand by the executor.

Cross-reference the test conditions with test cycles, input data, and output data.

Ensure one can understand what characteristics the data need in order to re-execute.

Ensure all output references demonstrate the conditions being tested.

Define the test configuration.

Ensure all inputs test specified conditions.

Document expected results.

Create scripts.

Document input data, output data, and the test configuration.

Have the test model reviewed by functional experts and the test stage manager.

Ensure the planner walks through the test model with the executor.

Ensure the test is repeatable based on the information given.

Make test scripts appropriately modular (high-level, detailed, error processing) and at an appropriate level for the executor.

10.5 Application Product Test Exit Criteria: The following are the set of exit criteria to control the quality of the deliverables related to application product test.


10.5.0 Application Product Test Exit Criteria – Complete Deliverables

Complete Deliverables

View the actual results.

Identify any unresolved problems.

Resolve those problems.

Gain user sign-off.

10.5.1 Application Product Test Exit Criteria – Meet Standards

Meet Standards

Ensure the actual results show proof of testing.

Create cross-references to the test cycles and conditions.

Ensure the flow of the test is clear.

Complete point sheets with problems, and include resolutions.

Ensure the environment is clean for the next cycle.

10.5.2 Application Product Test Exit Criteria – Follow the Process

Follow the Process

Ensure actual results match expected results.

Ensure all conditions tested successfully.


Ensure the application product test manager and the original analyst review the application product test packet after the test.

Fill out the sign-off sheet completely.

Submit data and migration requests as necessary.

Update test cycles and test conditions with testing status.

Update the product’s status.

Update the checklist cross-references.

Collect the appropriate metrics.

Obtain the final sign-off by test management.

10.5.5 Application Product Test Exit Criteria – Meet Criteria

Meet Criteria

Ensure the product performs all aspects as defined.

Determine whether one can gain a basic understanding of the product from the documentation.

Make sure the product meets quality requirements.

10.5.6 Prerequisites for Test Execution:

Planning and preparation are complete.


Inspections have occurred.

Test data are ready.

Environment is well established and ready.

Test team is trained.

Migration procedures exist.

Test tools are in place.

Test Execution exit criteria:

Results from completed test plan indicating exit criteria are met.

The application’s final version is saved for upcoming development/test activities.

The actual results against original work estimates are saved for use in future estimates.

The end-of-testing database content is saved to a back-up file

10.5.7 Test Execution deliverables include:

Test Steps

High Level Scenarios.

Data Set up.

Environment Set up.

Configuration Set up.

Test summary reports.

Test Completion report

Test Steps:

Test Steps define the tests to be performed and should always match the development stages they relate to, alongside the requirements and designs they intend to prove.


A test condition is defined as an item or event that could be verified by one or more test cases (e.g. a function, transaction, quality characteristic, or structural element).

Test Data:

Can be defined as system transactions that are created for the purpose of testing the application system.

Usually documented in the test setup section of the test script.

Can be used at the different stages of test.

Test Environment Considerations:

Determine environments required for testing.

Determine set-up requirements and lead time.

Emphasize configuration management.

Verify readiness of the test environment prior to starting the test.

Develop standards to ensure that the promotion process is conducted thoroughly and properly.

Implement checklists and test scripts in test standards.

Execute the Test

Execute test.

Execute the test scripts, compare the actual results to the expected results, and identify and resolve any discrepancies.

The deliverables are documented actual results, fixed defects, and a fully executed Test Plan.

Support Test Environment.

Ensure that the test environment is effectively supported.

Manage test.

Periodically measure and report the test progress.


Document this with test progress metrics and weekly progress reports (issues, actions, risks, change requests, etc., as appropriate).

10.6 Steps in Test Execution:

10.6.0 Test Execution

What is a Test Script?

A test script is a granular, step-by-step representation of a test condition.

It describes an input, action, or event and the expected response, to determine whether a requirement has been partially or fully satisfied.

It may take many test scripts to determine that a requirement is fully satisfied.

Test Script activity:

Testing environment/configuration: information about the hardware or software configuration that must be in place while executing the test script.

Actions: the steps to be performed, one by one, to complete the test.

Input data: a description of the data required.

Results:

Expected results contains a description of what the tester should see after all test steps have been completed.

Actual results contains a brief description of what the tester actually saw after the test steps were completed.
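One step of a manual test script can be represented as a record with the fields just described; a sketch with purely illustrative values:

```python
# One manual test-script step; field names follow the structure above,
# and all values are illustrative.
test_step = {
    "step_no": 1,
    "environment": "standard test configuration",
    "action": "Enter a valid account number and press Submit",
    "input_data": {"account_number": "100000"},
    "expected_result": "Account summary page is displayed",
    "actual_result": "Account summary page is displayed",
}

# The step passes when the actual result matches the expected result.
status = "Pass" if test_step["actual_result"] == test_step["expected_result"] else "Fail"
print(status)  # -> Pass
```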


10.6.1 Test script execution involves the following key tasks:

Step-by-step execution of scripts

Defect management.

Re-test.

Test execution result management.

Adjust sequence/schedule as appropriate.

10.6.2 Tools for Building Test Reports

Cause and Effect Diagram

Useful tools to visualize, clarify, link, identify, and classify possible causes of a problem.

Also referred to as a "fishbone diagram," an "Ishikawa diagram," or a "characteristics diagram."

A diagnostic approach for complex problems, this technique begins to break down the root causes of a problem into manageable pieces of a process.

10.6.3 Check Sheets

A technique or tool to record the number of occurrences over a specified interval of time.

Data, surveys, or samples are recorded to objectively support or validate the significance of the event.

Usually follows the Pareto analysis and cause and effect diagram to validate and verify a problem or cause and is often used to establish a frequency or histogram chart.
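A check sheet is essentially a tally over an interval; a small sketch using sample defect categories (the data are illustrative):

```python
from collections import Counter

# A check sheet as a tally of defect categories recorded over an interval.
observations = ["UI", "Functionality", "UI", "Security", "UI", "Integration"]

tally = Counter(observations)
for category, count in tally.most_common():
    print(f"{category}: {count}")
# The most frequent category tops the list, feeding a Pareto-style analysis.
```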

10.6.4 Run Charts


A run chart is a graph of data (observation) in chronological order displaying shifts or trends in the central tendency (average).

The data represents measures, counts or percentages of outputs from a process (products or services).

Established for measuring or tracking events or observations in a time or sequence order.

10.6.5 Control Charts

A statistical technique to assess, monitor and maintain the stability of a process.

The objective is to monitor a continuous repeatable process and the process variation from specifications.

The intent of a control chart is to monitor the variation of a statistically stable process where activities are repetitive.

Two types of variation are being observed:

1. Common, or random, variation.
2. Special, or unique, events.

10.6.6 Results Sign-off:

This occurs when test execution exit criteria goals are met.

Stakeholders formally review and sign off on test execution results to verify exit criteria are met.

This formally indicates that test execution is completed.

Sign-off process typically includes, but is not limited to the following:

Confirm that all test conditions/scripts executed successfully and are signed off.

Confirm that all defects required for test completion are closed and regression tested, and that any remaining defects are deferred for future reference.

10.6.7 Test Completion:


This occurs after the test execution exit criteria are verified. At this point all test planning, test preparation, test execution, and defect fixing are complete, and the product under test is ready to go live.

Perform a final close-out on the activities after verifying the exit criteria:

Save or flag the final tested code set, configurations, and other technical components.

Close or defer any open defects.

Save database content to a back-up file.

10.6.8 Test Metrics:

Planned completed test conditions/scripts per day/week.

Planned earned value for test execution.

Planned defect rate.

Defects as a percentage of steps completed.

Defects per test script.
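Two of the metrics above are simple ratios; a sketch with made-up sample numbers:

```python
# Sample computations for two of the test metrics listed above
# (the input numbers are illustrative).
steps_completed = 400
defects_found = 20
scripts_executed = 50

defect_pct_of_steps = 100 * defects_found / steps_completed
defects_per_script = defects_found / scripts_executed

print(f"{defect_pct_of_steps:.1f}% defects per completed step")  # 5.0%
print(f"{defects_per_script:.2f} defects per test script")       # 0.40
```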

10.7 Summary:

On completion of this module, you will be able to

Define the test inputs

Deliverables

Test Script creation

Test script execution

Test Metrics


Module 11: Defect tracking

11 What is a Defect?

A program P is considered correct with respect to a specification S if and only if:

• For each valid input, the output of P is in accordance with the specification S.

11.0 What is the meaning of Defect in the Software Terminology?

• A variance from the desired product quality.

• Any situation where the system does not behave as indicated in the specification.

The software does not do something that the requirement specifications mention

A defect is a manifestation of an error within the software.

It can be defined with respect to the following three categories:

Error – A human action that produces an incorrect result.

Fault – A manifestation of an error within the software, also known as a defect or bug.

Failure– The departure of operational system behaviour from the user requirements.

Defects can be classified as follows:


11.1 Defect Classification by Category

Defects generally fall into the following three categories:

Wrong

The specification has been implemented incorrectly

This defect is a variance from customer/user specification

Missing

A specified or wanted requirement is not present in the built product

This can be:

A variance from specification

An indication that the specification was not implemented

A requirement of the customer identified during or after the product was built

Extra

A requirement included in the product that was not specified

This is always a variance from specification


11.2 Defect Classification by Phase of Origin

Defects generally involve the following:

Poor programming

Poor requirements

Missing requirements

Missed defects in testing

Misinterpreted customer requests

Untested usage scenarios

Lack of error check and program unit interface definition in the design process


11.2.0 Defect Classification by Severity

Severity indicates the impact of the defect on the business:

Blocker (show stopper)

Critical

Major

Minor

Enhancement


11.2.1 Defect Classification by Priority

Priority is used to indicate the precedence of the defect to be fixed:

Critical

High

Medium

Low

Defects are scheduled for fixing according to their priority level and the time available.
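The severity and priority scales above can be encoded as enumerations; an illustrative sketch (the numeric orderings are assumptions, lower meaning more urgent):

```python
from enum import Enum

class Severity(Enum):
    BLOCKER = 1      # show stopper
    CRITICAL = 2
    MAJOR = 3
    MINOR = 4
    ENHANCEMENT = 5

class Priority(Enum):
    CRITICAL = 1
    HIGH = 2
    MEDIUM = 3
    LOW = 4

# Defects are picked up for fixing in ascending priority order.
defects = [("cosmetic typo", Priority.LOW), ("login crash", Priority.CRITICAL)]
defects.sort(key=lambda d: d[1].value)
print(defects[0][0])  # -> login crash
```

Keeping severity (business impact) and priority (fix order) as separate scales matters: a minor-severity defect can still be high priority, for example a typo on the home page.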

11.2.2 Defect Classification by Type

Common types of Product Test defects:

System Functionality

Impact how the application operates as a whole.

User Interface

Impact the appearance or use of the interface.

Integration

Impact the system’s interaction with external systems.

Security

Impact the fulfillment of security requirements.

Technical Architecture

Impact how the application’s architecture works.

11.3 Causes of Defects

The following are the possible cause categories for defects found during various types of testing:


The cause categories are:

Omission

Lack of knowledge

Miscommunication

Accidental

Others:

Specification may be wrong

Specification may be a physical impossibility

Faulty program design

Incorrect program

Omission: Failure to include or address an item which results in a defect.

Reasons and solutions:

Reason: The source document is unclear or incomplete.
Solutions:
- Revise the source document.
- Use review procedures, such as document reviews and inspections, to ensure clarity and completeness.

Reason: No direction is given as to what should be included when producing the deliverable.
Solutions:
- Provide standards or guidelines.
- Provide training.
- Provide checklists.

Reason: Changes are not included.
Solution:
- Establish a formal change mechanism, such as a change request procedure.

Reason: Standards, guidelines, or checklists do not identify all necessary items.
Solution:
- Revise the standards, guidelines, or checklists.

Lack of Knowledge

Staff incorrectly address an item due to a lack of knowledge or understanding.

Reasons and solutions:

Reason: Staff have not been given the necessary information, or are novices.
Solutions:
- Provide training.
- Provide checklists.

Reason: Information is not shared.
Solutions:
- Conduct regular status and informational meetings.
- Establish a communication mechanism, such as programmer bulletins.

Miscommunication

Incorrect communication causes a defect.

Reasons and solutions:

Reason: Incorrect information is communicated.
Solutions:
- Establish communication through a central, accountable source, such as a team leader or project manager.
- When information affects multiple staff, communicate to all staff at once so that everyone gets the same message (for example, through training or a staff meeting, as applicable).
- Communicate critical information in writing (for example, use programmer bulletins and checklists).

Reason: Information is misinterpreted.
Solutions:
- When giving directions, ask staff to repeat what was said.
- When giving an assignment, request a small sample to ensure understanding.

Reason: Written instructions, such as the source document, standards, guidelines, or checklists, are unclear or incorrect.
Solutions:
- Revise the source document, standards, guidelines, or checklists.
- Use review procedures, such as document reviews and inspections, to ensure clarity and accuracy.

Accidental

A staff member has the necessary knowledge and information but inadvertently makes a mistake.

Reason: Working conditions prevent the necessary concentration.
Solutions: Provide a more protected environment (for example, allow a person to work temporarily in an isolated space, such as a conference room, or provide blocks of protected phone time). Automate activities where possible.

Wrong specification

Requirements may be incorrectly translated into the specification.

Specification may be a physical impossibility

The requirements may not be achievable.

Faulty program design

The program design may miss steps needed to meet the requirements, or may introduce problems such as infinite loops.

Incorrect program

Requirements may have been misinterpreted, so the program did not achieve what was required.

11.4 Defect Life Cycle

What is a Defect Life Cycle?

The period between the detection of a defect and its successful closure is called the software defect life cycle.

Testers, test leads, developers, and development leads all play a vital role in this life cycle.

11.4.0 High Level Defect Life Cycle:


11.4.1 Detailed Defect Life Cycle

11.5 Status of Defects


New

The defect is in the “New” state when it is first detected.

The tester logs the defect with the status as “New” in the defect report.

Assigned

Here the defect is assigned to the developer to fix.

The development lead logs the status as “Assigned” in the defect report.

Open

The developer changes the status to “Open” when starting to fix the defect.

Fix In progress

Development team members set this status while working on the fix.

Fixed

Once the developer has addressed the defect, the status is changed to “Fixed”; the fix is reviewed by the development lead and forwarded to the test lead.

At this point, the fix is not yet deployed in the target environment.

Ready to Test

The status is changed to “Ready to Test” once the fix is deployed in the target environment.

Retest

The test lead changes the status to “Retest” and sends the defect to a tester to verify that it has been fixed.

Reopen

When a “Ready to Test” defect fails the retest, it is reopened and assigned back to development.

Closed

The status is changed to “Closed” when the tester confirms that the fix has been retested successfully in the target environment.


Rejected

The state is changed to “Rejected” when the test lead, or the development team, reviews the defect and finds it invalid.

Withdrawn

A test team member withdraws the defect when it is found to be a user error or a duplicate.

Deferred

Defects that are insignificant for the current release can be deferred or moved to the next release.

Awaiting Information

Insufficient information was provided with the defect for its analysis and reproduction.
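The statuses above form a simple state machine. As an illustrative sketch only (real defect-management tools differ in the exact states and moves they allow), the flow can be expressed as a transition table that a tracking tool would consult before accepting a status change:

```python
# Hypothetical transition table for the defect statuses described above.
# Real defect-management tools define their own states and rules.
ALLOWED_TRANSITIONS = {
    "New": {"Assigned", "Rejected", "Withdrawn", "Deferred", "Awaiting Information"},
    "Assigned": {"Open"},
    "Open": {"Fix In Progress"},
    "Fix In Progress": {"Fixed"},
    "Fixed": {"Ready to Test"},
    "Ready to Test": {"Retest"},
    "Retest": {"Closed", "Reopen"},
    "Reopen": {"Assigned"},
}

def can_transition(current, new):
    """Return True if moving from `current` to `new` is an allowed status change."""
    return new in ALLOWED_TRANSITIONS.get(current, set())

print(can_transition("Retest", "Closed"))  # True
print(can_transition("New", "Closed"))     # False
```

A defect that fails retest, for example, moves Retest → Reopen → Assigned, re-entering the cycle.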

11.6 What is Defect Management?

Defect management is the process of tracking and managing the discovery, resolution, and re-testing of system defects identified during test execution.

This process involves recording, reviewing, and prioritizing defects, assigning defects to developers for fixing, and assigning testers to re-test fixes.

It is essential for allowing managers to accurately monitor the number, priority, and status of defects, so they can best manage the continued progress of the system development project.

11.6.0 Objectives of Defect Management

The primary objective is to prevent defects.

Find defects early in the Software Development Life Cycle and minimize the impact on the software application.

To ensure defect resolution is undertaken in a controlled and timely manner.

Use the defect information for process improvement and increase predictability.

Treat defect management as a risk-driven approach to reduce the risk of failure.


Align relevant tools for data collection, analysis, measurement and reporting.

11.6.1 Defect Management Team

Roles and Responsibilities

Test Analyst

Performs testing and reports test results (issues, defects).

Re-executes test scripts to validate application defect fixes.

Test Lead / Test Manager

Coordinates, reviews, and tracks the status of test execution.

Reviews defects, filters out duplicates, and prioritizes and assigns defects to the development lead.

Attends defect review meetings, interacts with the development manager, and assigns defects to the test team for retest.

Development Lead / Project Manager

Attends defect review meetings.

Reviews, approves fixes and assigns defect logs to developers.

Confirms and communicates deployment of fixes to test environments.

Acts as the single point of contact for the test and business teams for defect fixes.

Development Team Member

Analyzes defects assigned by the development lead / manager.

Applies the defect resolution and informs the development lead.

Performs testing of the fix applied before deploying to test environments.

11.7 Scope of Defect Management

Which factors define the scope of Defect Management?


Defect Identification

Defect Logging

Defect Tracking

Defect Resolution

Defect Closure

Process Optimization

Analysis of Defect Metrics

Defect Prevention Planning

Process Improvement

11.7.0 Defect Identification

Identify

Find the deviation from the expected result, as defined by the requirement specifications.

Analyze

Understand the impact on testing as well as on the business, and rate the severity and priority.

Record

Document the details of the deviation.

• Is it the test, the tester, or the product?

• Reproducible versus Repeatable

• Can it be isolated?

• Find related problems


• Look for alternate paths to the problem

• Is it worth reporting?

11.7.1 Defect Logging

It is also known as Defect Recording.

A Defect is best presented when it is:

Clear without ambiguity.

Consistent with respect to its representation.

Correct with respect to information provided.

Complete with all information, including screenshots and steps to reproduce.

Defects are generally recorded / logged in the Defect Management tools where the complete defect life-cycle is managed from its inception till its closure.

11.7.2 Major defect fields providing defect information

• Defect ID

• Summary

• Description

• Status (as discussed in part 1)

• Severity

• Priority

• Test Stage

• Detected By

• Detected Date

• Assigned To

• Environment


• Module

• Defect Category

• Closed Date

• Additional Fields, which can be added:

• Project

• Detected in Version

• Closed in Version

• Estimated Fix Time

• Actual Fix Time

• Related Test Case ID

• Any other field, depending on the needs of the project, for tracking purposes.
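The fields above map naturally onto a simple record type. A minimal sketch (the field names are illustrative, not the schema of any particular tool):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Defect:
    """Illustrative defect record built from the major fields listed above."""
    defect_id: str
    summary: str
    description: str
    status: str = "New"            # as discussed in the status section
    severity: int = 3              # 1 (critical) .. 5 (enhancement)
    priority: int = 3              # 1 (critical) .. 5 (change request)
    test_stage: str = ""
    detected_by: str = ""
    assigned_to: Optional[str] = None
    environment: str = ""
    module: str = ""

# A hypothetical logged defect.
d = Defect("D-101", "Login fails", "Clicking Submit on the login page does nothing",
           severity=1, priority=1, detected_by="tester1")
print(d.status)  # New
```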

11.8 Defect Reporting Essentials

• Look for duplicates

• Talk with the developer

• Enter it into the system

• Make sure it will get noticed


11.8.0 Components of Defect Reporting

Identifying information – Components

Defect ID / Number

Release Name

Project Name

Module Name

Submitter

Submit Date

Program or product the report is filed against

Version or revision of the product

Description of the problem – Components

Title


It must convey enough information in a limited number of words for the problem to be understood. Any abbreviations must be generally recognized or explained.

Description

This is the problem itself: what happened, and what should have happened?

The Test Case used

Any other helpful information

Particularly any attachments

Description of the problem – Points to remember

Explain it in three to four lines.

Avoid vague or confusing terms such as “frequently” or “sometimes”.

Avoid uncommon abbreviations.

Use standard terminology.

Pay attention to spelling and grammar.

Status Indicators – Components

Report Status

Severity

Priority

Resolution Status

Status Indicators – Components (cont.)

Severity: How “bad” is it?

Generally related to the defect’s effect on testing

Priority: How urgent is it?


When does this have to be fixed?

Severity 1: Critical / Showstopper!

Testing cannot continue until the defect is addressed; there is no possible workaround.

Severity 2

The functionality is very important in production – there is a workaround in production, but it is very difficult.

Severity 3

The functionality is important in production, but there is a workaround.

Severity 4

The functionality is “nice to have” in production, but is not necessary prior to go live.

Severity 5: Enhancement!

If an item is defined as an enhancement, a change proposal must be completed and the form submitted to the designated change control board (CCB) for approval.

Status Indicators

Priority 1: Critical

Generally reserved for fatal errors that mean testing cannot continue without a fix, and/or that the service cannot go live. Must be fixed before go-live.

Priority 2: High

Used when a problem means that testing can continue on the scenario only with difficult workarounds, and/or significantly impacts the business's ability to use the application or AB's ability to operate the service. Must be fixed before go-live.


Priority 3: Medium

Used when a problem means that testing can continue with relatively straightforward workarounds, and/or has a minor impact on the business's ability to use the application. Must be fixed before go-live unless agreed otherwise.

Priority - 4: Low

Used to highlight minor SIRs that will be fixed only if time permits and that do not impact the business's ability to use the application or AB's ability to operate the service (for example, cosmetic issues).

Priority -5: Change Request

A tester raises an SIR, but when a subject-matter expert (for example, a designer) later reviews it, it is determined to be a change in scope. The SIR is then prioritized as a CR; once the CR is raised formally, the SIR is closed.
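Taken together, the priority definitions above imply a simple go-live rule: Priority 1 and 2 defects always block release, Priority 3 blocks unless otherwise agreed, and Priority 4 and 5 never do. A hedged sketch of that rule:

```python
def blocks_go_live(priority, medium_waived=False):
    """Illustrative go-live rule from the priority definitions above:
    P1/P2 always block, P3 blocks unless a waiver is agreed, P4/P5 never block."""
    if priority in (1, 2):
        return True
    if priority == 3:
        return not medium_waived
    return False  # 4 = low, 5 = change request

print(blocks_go_live(2))                       # True
print(blocks_go_live(3, medium_waived=True))   # False
```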

Supporting information – Components

Environment

Test case id/ name

Test Case type

Expected result

Actual result

Data files or dataset files

Memory dump

Trace files

Error logs

Error printouts

Screenshots of the error/ defects


Miscellaneous – Components

Steps to reproduce

Include setup information.

Anyone with the appropriate setup should be able to reproduce the problem.

Could be as simple as a Test Case ID or lengthy procedure.

Pay attention to whether problems are merely repeatable or fully reproducible.

Environment

Target Release

Closed Release

Closed Date

Discovered by

Defect Type

Software component affected

Fix Hours

Test Hours

Workarounds

11.9 Defect Tracking

Defect tracking must have a process in place.

The purpose is to make sure that the defect progresses from being submitted to getting closed.


Examples of Defect Management Tools

Rational Test Manager (IBM-Rational)

Test Director / Quality Centre (HP-Mercury)

TrackRecord (Compuware)

Silk Central Issue Manager (Segue)

11.10 Summary:

On completion of this module, you will be able to:

Define a defect

Identify a defect

Describe the various components of a defect


Describe the defect life cycle

Track defects


Module 12: Test Planning and strategy

12 What is Test Strategy?

The test strategy defines the applicable test process to achieve the quality promised to the user. It is the overall testing strategy for this test plan.

12.1 Test Strategy changes from project to project

Are any special tools to be used, and what are they? (Mention the tools being used in the current project.)

Will the tool require special training?

Software (mention any special software being used).

If this is a master test plan the overall project testing approach and coverage requirements must also be identified.

12.1.0 Test Plan

A document that indicates what testing will occur, how it will occur, and what resources will be necessary for it to occur.

A test plan also details how results and defect reporting will occur.

12.1.1 Purpose of test plan:

This document describes the general approach, along with the specific methodologies and techniques used by the Quality Assurance testing team, to plan, organize, and manage the testing activities for the overall test effort.

12.1.2 When Test Plan comes into picture

After the test process is finalized for the corresponding project, the author (test manager or test lead) prepares the document. The test plan is prepared based on the BRS and SRS documents.

Before writing the test plan, the workbench below is followed:


1. Team Formation

2. Identify Tactical Risks

3. Test Plan writing

4. Test Plan Review

12.1.3 Identifying Tactical risks

While writing the test plan, the author concentrates on identifying risks with respect to team formation:

Lack of knowledge of Testers in the domain.

Lack of Budget.

Lack of Resources.

Delays in Delivery.

Lack of Test Data.

Lack of Development Process Rigor.

Lack of communication (between the testing team and the development team).

After team formation and risk identification are complete, the author (test manager or test lead) starts writing the test plan document.

Here are the steps for writing the test plan.

12.1.4 References:

The following documents are useful for reference in preparing the Test Plan Document.

Documents that can be referenced include:

Business Requirements Specification (BRS)

Functional Requirements Specification (FRS)

Project Plan

High level design document (HLD)

Low level design document (LLD)


Introduction:

Describes the testing policy, test strategy (the applicable test process), company standards, and the purpose of the test plan.

Test Item (Feature as a module or function or service)

In this section, we are going to describe the things we intend to test.

Features to be tested

This is a listing of what is to be tested from the users' viewpoint of what the system does.

(Based on the BRS, the QA manager decides the features to be tested.)

We can set a level of risk for each feature that needs testing, using a simple rating scale such as H (High), M (Medium), and L (Low). These levels are understandable to a user. Be prepared to discuss why a particular level was chosen.
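The H/M/L ratings lend themselves to ordering test effort, highest risk first. A small sketch (the feature names are invented for illustration):

```python
# Hypothetical features rated with the H/M/L scale described above.
RISK_ORDER = {"H": 0, "M": 1, "L": 2}

features = [("Reporting", "L"), ("Payments", "H"), ("Search", "M")]
by_risk = sorted(features, key=lambda f: RISK_ORDER[f[1]])
print(by_risk)  # highest-risk features first
```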

Features not to be tested

This is a listing of what is NOT to be tested, from both the users' viewpoint of what the system does and from a configuration management/version control view.

We need to identify and justify why a feature is not to be tested; there can be any number of reasons, some of which are listed below.

It is not included in this build or release of the software.

It is low risk, has been used before, and is considered stable (if the priority is low).

(The QA manager decides which features are not to be tested, based on the BRS.)

Test Design Specifications for responsible modules:

There are 2 Types of Test Design Specifications

1) Business logic based test case design

2) Input Domain based test case design


12.2 Business logic based test case design

A tester prepares a list of test cases based on the use cases or functional requirements in the SRS (System Requirements Specification).

A use case describes how a user accesses a specific piece of functionality in the system.

Testers write the test cases using the process below.


12.2.0 Input Domain based Test case design

Sometimes use cases or functional specifications do not provide the size and type of input objects, because the main purpose of a use case is to define functionality.

To cover the sizes and types of such input objects, testers turn to the input-domain-based test case design method. In this method, test engineers follow the approach below to write test cases:

1) Study the data model of the application (ER diagrams in the LLD).

2) Identify and study the attributes of each entity in terms of size, data type, and constraints.

3) Identify the critical attributes that are used for data manipulation and retrieval.

Example:
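As a small sketch of the approach (the attribute and its constraints are invented for illustration), boundary values can be derived from an attribute's size constraint — here a field restricted to 4–16 characters:

```python
def boundary_lengths(min_len, max_len):
    """Classic boundary values for a size-constrained input attribute:
    just below, at, and just above each limit."""
    return [min_len - 1, min_len, min_len + 1,
            max_len - 1, max_len, max_len + 1]

# Hypothetical "username" attribute constrained to 4..16 characters.
print(boundary_lengths(4, 16))  # [3, 4, 5, 15, 16, 17]
```

A test engineer would then build input strings of each of these lengths, expecting the out-of-range ones to be rejected.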


12.3 Test Life Cycle [Planning Process]

Pre-Planning Activities

Test Planning

Post-Planning Activities

12.3.0 Testing Life Cycle (Planning Process - Pre-Planning Activities)

Success Criteria / Acceptance Criteria

Test Objectives

Assumptions

Entrance Criteria / Exit Criteria

12.3.1 Testing Life Cycle (Planning Process – Test Planning)

Test Plan

Requirements / Traceability

Estimating

Scheduling

Staffing

Approach

Test Check Procedures


12.3.2 Testing Life Cycle (Planning Process - Test Planning)

Test Plan

The deliverables to meet the test’s objectives; the activities to produce the test deliverables; and the schedule and resources to complete the activities.

Test Objectives

Entrance Criteria / Exit Criteria

Schedule

Test Approach

What is tested and what is not tested

12.3.3 Testing Life Cycle (Planning Process – Test Planning)

Requirements / Traceability

Defines the tests needed and relates those tests to the requirements to be validated.

Written or verbal

Informal user requirements

Organized by requirement or test case

12.3.4 Testing Life Cycle (Planning Process - Test Planning)

Estimating

Determines the amount of resources required to accomplish the planned activities.

Personnel / Equipment / Space

Time Frame (hours, days)

Estimating tools (MS Project)


12.3.5 Testing Life Cycle (Planning Process - Test Planning)

Test Check Procedures

A set of procedures, based on the test plan and test design and incorporating test cases, that ensures tests are performed correctly and completely.

Checklist

Milestone Review

Defect Tracking Analysis

Traceability Matrix
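A traceability matrix relates each requirement to the test cases that validate it; any requirement with no linked test case is a coverage gap. A minimal sketch (the requirement and test-case IDs are invented for illustration):

```python
# Hypothetical requirement-to-test-case links; IDs are invented for illustration.
traceability = {
    "REQ-001": ["TC-01", "TC-02"],
    "REQ-002": ["TC-03"],
    "REQ-003": [],  # no linked test case -> coverage gap
}

uncovered = [req for req, cases in traceability.items() if not cases]
print(uncovered)  # ['REQ-003']
```

A milestone review would flag REQ-003 as untested until a test case is linked to it.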

12.3.6 Testing Life Cycle (Planning Process - Post-Planning Activities)

Change Management

Modifies and controls the plan in relation to actual progress and the scope of the system development.

Versioning / Change Control / Change Management / Configuration Management

Methods to control, monitor, and achieve change.

12.3.7 Team Building

The test manager considers the factors below when forming the testing team for the corresponding project, and checks the availability of test engineers (selection is made at a 3:1 ratio).

12.3.8 Staffing and Training needs

In this section we specify the names of the testing team members involved in the project, along with their training needs: training on the application/system, domain training if the testers are not familiar with the domain, and training on any test tools to be used.

A schedule must be provided for training the testers on the tools required for the project.

12.3.9 Roles and Responsibilities


In this section, the roles and responsibilities of the entire testing team, from the QA manager to the testers, are specified.

For example, consider the responsibilities of a tester:

Understanding Requirements

Writing test cases

Executing test cases

Preparing test log

Defect reporting

12.3.10 Schedule (Dates and Times)

The schedule should be based on realistic and validated estimates. If the estimates for developing the application are inaccurate, the entire project plan will slip; testing is part of the overall project plan.

How slippages in the schedule will be handled should also be addressed here.

If users know in advance that a slippage in development will cause a slippage in testing and in the overall delivery of the system, they may be a little more tolerant, knowing it is in their interest to get a better-tested application.

12.3.11 Test Environmental Needs

In this section we specify the test hardware requirements, web/application servers, and database servers required for the corresponding project.

12.3.12 Test Deliverables

What is to be delivered as part of this plan?

Test Plan.

Test Cases.

Test Scripts.


Defect Reports.

Test Summary Report.

12.3.13 Risks (list of Finding Risks)

We need to identify what software is to be tested and what the critical areas are, such as:

A. Delivery of a third party product.

B. Ability to use and understand a new package/tool, etc.

C. Extremely complex functions.

D. Poorly documented modules or change requests.

12.3.14 Approvals

The final QA authority (for example, the test manager) has to approve the test plan, test cases, and so on.

12.3.15 Review Test plan

After the test plan is written, its author and the corresponding test engineers review the document for completeness and correctness, based on the following factors:

Business requirement based coverage (BRS)

Functional specification based coverage (SRS)

12.4 Summary:

At the end of this module, you will be able to:

Define the test plan

Define test strategy

Describe the different components of a test plan

Describe the testing staff requirements


Describe the test bed
