Software testing
Manual Testing v1.0
Software Testing
Testing is the process of executing a program with the intention of finding errors.
It is a quality-measurement activity aimed at evaluating a software item against the given system requirements.
This includes, but is not limited to, executing a program or application with the intent of finding software bugs.
Definitions – Software Testing
Testing is a process of gathering information by making observations and comparing them to expectations.
"A test is an experiment designed to reveal information, or answer a specific question, about the software or system." -- Dale Emery & Elisabeth Hendrickson
"Testing is an empirical technical investigation conducted to provide stakeholders with information about the quality of a product or service." -- Cem Kaner
Why Testing is needed ?
Errors occur because we are not perfect and, even if we were, we work under constraints such as delivery deadlines. Testing is the measurement of software quality: we measure how closely we have achieved quality by testing the relevant factors, such as a) correctness, b) reliability, c) usability, d) maintainability, e) reusability, f) testability, etc. (Continued)
Why Testing is needed ?
The cost of failure associated with defective products being shipped and used by customers is enormous.
To find out whether the integrated product works as per the customer requirements.
To identify as many defects as possible before the customer finds them.
Evolution of Software Testing
In the 1950s, testing was considered part of debugging.
In the 1960s, testing came to be considered an activity separate from debugging.
In 1968, the term "Software Engineering" was first used, at the NATO workshop in West Germany.
In the 1970s, software testing came to be considered a technical discipline in its own right.
For more information, visit http://www.testingreferences.com/testinghistory.php
Chapter I
Process
Contents
Introduction to Process
Introduction to ISO
History of CMM
CMM Levels
Deming PDCA Cycle
Introduction to Process What is a Process?
A framework for the tasks required to build high-quality software.
Why is it important? It provides stability, control, and organization to an activity that, if left uncontrolled, becomes quite chaotic.
Example: an ATM card transaction.
Process Standards ( ISO ) What is ISO?
The International Organization for Standardization.
The ISO standards are structured around the Process Approach concept.
Process Approach - understand and organize company resources and activities to optimize how the organization operates. E.g., the System Approach to Management: determine the sequence and interaction of processes and manage them as a system. Processes must meet customer requirements.
ISO 9001 and 14001
ISO 9001 defines the rules and guidelines for implementing a quality management system in organizations of any size or description. The standard includes process-oriented quality management standards that have a continuous-improvement element. Strong emphasis is given to customer satisfaction.
ISO 14001 defines Environmental Management best practices for global industries. The standard is structured like the ISO 9001 standard. ISO 14001 gives management the tools to control environmental aspects, improve environmental performance, and comply with regulatory standards. The standards apply uniformly to organizations of any size or description.
CMM History
The Capability Maturity Model (CMM) is a collection of instructions an organization can follow with the purpose of gaining better control over its software development process.
History of CMM
1991: SW-CMM v1.0 released. 1993: SW-CMM v1.1 released. 1997: SW-CMM revisions halted in favor of CMMI. 2000: CMMI v1.02 released. 2002: CMMI v1.1 released.
Capability Maturity Model
Introduction to CMM
CMM Levels Diagram
CMM Level Description
Introduction to CMMI
Different KPAs
CMM Diagram
CMM Level Description
Level 1 : Initial / Ad hoc - the company has no standard process for software development, nor a project-tracking system that enables developers to predict costs or finish dates with any accuracy.
Level 2 : Repeatable - the company has installed basic software management processes and controls, but there is no consistency or coordination among different groups.
Level 3 : Defined - the company has pulled together a standard set of processes and controls for the entire organization, so that developers can move between projects more easily and customers can begin to get consistency from different groups. (Cont..)
CMM Level Description
Level 4 : Managed - in addition to implementing standard processes, the company has installed systems to measure the quality of those processes across all projects.
Level 5 : Optimizing - the company has accomplished all of the above and can now begin to see patterns in performance over time, so it can tweak its processes to improve productivity and reduce defects in software development across the entire organization.
Key Process Areas ( KPA )
CMM : 18 Key Process Areas.
CMMI : 25 Key Process Areas.
KPA for CMMI - Defined ( 15 KPA )
Decision Analysis and Resolution
Integrated Project Management
Integrated Supplier Management
Integrated Teaming
Measurement and Analysis
Organizational Environment for Integration
Organizational Process Definition
Organizational Process Focus
Organizational Training
Product Integration
Requirements Development
Risk Management
Technical Solution
Validation
Verification (Cont…)
KPA for CMMI - Managed ( 2 KPA )
Organizational Process Performance
Quantitative Project Management
Optimizing ( 2 KPA )
Causal Analysis and Resolution
Organizational Innovation and Deployment
Deming PDCA Cycle To effectively manage and improve your processes, use the PDCA cycle as a guide.
PLAN: Design or revise business process components to improve results.
DO: Implement the plan and measure its performance.
CHECK: Assess the measurements and report the results to decision makers.
ACT: Decide on changes needed to improve the process.
Chapter II
Quality Introduction
Contents Quality Introduction
Quality Assurance
Quality Control
Difference between QA & QC
Introduction to Quality What is Quality?
"Quality is the totality of features and characteristics of a product or service that bear on its ability to satisfy stated or implied needs." - ISO 8402
Conformance to requirements - the producer's view.
Quality is "degree of excellence."
Definition of Quality: quality is defined as meeting the customer's requirements the first time and every time.
Quality is also the absence of defects, and meeting customer expectations.
QA & QC Quality Assurance
All those planned and systematic actions necessary to provide adequate confidence that a product or service will satisfy given requirements for quality.
Quality Control
The operational techniques and activities that are used to fulfill requirements for quality.
Difference – QA & QC
Quality Assurance | Quality Control
Prevention based | Detection based
Process oriented | Product oriented
Organization-level responsibility | Producer responsibility
Phase-building activity | End-of-phase activity
QA Activities
Quality assurance activities include:
Conduct of Formal Technical Reviews (FTR)
Enforcement of standards (customer-imposed or management-imposed standards)
Control of change (assess the need for change, document the change)
Measurement (software metrics to measure quality; quantifiable)
Static Testing
What is Static Testing? Verification performed without executing the system's code.
Different types of Static Testing:
Code Walkthrough
Inspection
Reviews
Code Walkthrough
What is a Code Walkthrough? A 'walkthrough' is an informal meeting held for evaluation or informational purposes.
Little or no preparation is usually required.
Conducted by the development team.
Knowledge of the programming language is required.
Inspection
What is an inspection?
An inspection is more formalized than a 'walkthrough'.
Typically 3-8 people attend, including a moderator, a reader, the author of whatever is being reviewed, and a recorder to take notes.
The subject of the inspection is typically a document such as a requirements spec or a test plan.
The purpose is to find problems and see what is missing, not to fix anything. (Cont..)
Attendees should prepare for this type of meeting by reading through the document.
The result of the inspection meeting should be a written report.
Among the most cost-effective methods of ensuring quality, though the skill may have low visibility in the software development organization.
Bug prevention is far more cost effective than bug detection.
Reviews: a process or meeting during which a work product, or set of work products, is presented to project personnel, managers, users, or other interested parties for comment or approval.
Review the product, not the producer.
Set an agenda and maintain it.
Limit debate.
Take written notes.
Types of Reviews
Informal Reviews - one-on-one meeting ( Peer Review ). Request for input. No agenda required. Occurs as needed throughout each phase.
Semi-Formal Reviews - facilitated by the author of the material. No solutions are discussed for issues. Occurs one or more times during a phase. ( Cont..)
Types of Reviews
Formal Reviews
Facilitated by a knowledgeable person (the moderator).
The moderator is assisted by a recorder.
Planned in advance; material is distributed.
Issues raised are captured and published.
Defects found are tracked through resolution.
A formal review may be held at any time.
Chapter III
Introduction to Software Testing
Introduction to Software Testing
A quality control activity involving execution of code.
What is Software Testing? Software testing is the process used to help identify the correctness, completeness, security, and quality of developed computer software.
Definition: Software testing is the process of executing a program with the intent of finding bugs.
Testing is a process of exercising or evaluating a system or component, by manual or automated means, to verify that it satisfies specified requirements.
Goals of Testing
Determine the quality of the executable work product. Provide input regarding the readiness of the application for launch.
Help identify defects (via associated failures).
Provide input to improve the development process.
Who Tests the Software
Dev Team: understands the system, but will test gently, and is driven by "delivery."
Testing Team: must learn about the system, but will attempt to break it, and is driven by quality.
Verification (Process Oriented)
Are we building the "system right"?
Verification involves checking whether the program conforms to its specification.
It focuses on process correctness.
Verification typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications. This can be done with checklists, issues lists, walkthroughs, and inspection meetings.
Validation (Product Oriented)
Are we building the "right system"? Validation is concerned with whether the right functions of the program have been properly implemented, and whether they produce the correct output for a given input.
Validation typically involves actual testing and takes place after verification is completed.
The term 'IV & V' refers to Independent Verification and Validation.
Context of V&V
V&V splits into static and dynamic techniques:
Static: Reviews, Inspection, Code Walkthrough.
Dynamic (execution of code): Unit Testing, Integration Testing, System Testing.
Software Testing Methods
Methods and strategies: white-box methods and black-box methods.
Testing Methods
Functional or Black Box Testing - checking the functionality of the application.
Logical or White Box Testing - checking the logic and structure of the program; developers do the white box testing.
Gray Box Testing - a combination of white box and black box testing.
White-Box Testing
Also called 'Structural Testing'; used for testing the code while keeping the system specs in mind.
Different methods in White Box Testing: Path Coverage, Statement Coverage, Decision Coverage, Condition Coverage.
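A minimal sketch of what statement and decision (branch) coverage mean in practice. The function `sign` and the chosen inputs are hypothetical, not from the course material:

```python
# Hypothetical function under test.
def sign(n):
    if n > 0:
        return "positive"
    elif n < 0:
        return "negative"
    return "zero"

# Statement coverage asks: does some test execute every statement?
# Decision coverage asks: does some test take every branch both ways?
# For this function, these three inputs achieve both.
coverage_tests = {3: "positive", -2: "negative", 0: "zero"}

def run_tests(tests):
    # True only if every input yields its expected output.
    return all(sign(n) == expected for n, expected in tests.items())
```

With only the inputs 3 and -2, every branch of the first `if` would still not be exercised both ways; adding 0 closes that gap, which is the kind of reasoning these coverage criteria formalize.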
Structures covered: sequence, selection (if-then-else, case), and loops (while-do, repeat-until).
White Box Testing
Does white box testing lead to a 100% correct program?
The answer is NO: it is not possible to exhaustively test every program path, because the number of paths is simply too large.
Black Box Testing
What is Black Box Testing? Testing based on external specifications, without knowledge of how the system is constructed.
The system is a black box: known inputs go in, known expected outputs come out; the internals are unknown.
Chapter V
Introduction To SDLC
Introduction to SDLC
What is SDLC?
The software development life cycle (SDLC) is the entire process of formal, logical steps taken to develop a software product.
Phases of the SDLC: Requirements Gathering, System Design, Code Generation, Testing, Maintenance. (Cont..)
How Software is developed today (diagram)
Types of SDLC
Waterfall Model
Prototyping Model
Incremental Model
Spiral Model
Agile Model
Waterfall Model Diagram
Waterfall Model
Introduction: the first proposed software development model, by W. W. Royce in 1970.
Work flows steadily down through the phases.
A linear, sequential model.
Each phase is well defined, with a start and end point.
Waterfall Model – Pros & Cons
Pros
Minimizes planning overhead, since planning can be done up front.
Structure minimizes wasted effort, so it works well for technically weak or inexperienced staff.
Cons
Inflexible: the analyst must collect all requirements up front.
The customer must wait until the final phases end to see a working deliverable.
Adding new requirements at a later phase is difficult.
Prototype Model
No detailed requirements; build a prototype; customer evaluation; a mechanism for identifying requirements; the prototype is then thrown away.
A "first system" that lets developers build something immediately.
Prototype Model – Pros & Cons
Pros
Better understanding of requirements.
Good starting point for other process models (e.g. waterfall).
The prototype may be used as a starting point rather than thrown away.
Cons
Prototypes typically have poor design and quality.
Bad decisions made during prototyping may propagate to the real product.
Incremental Model
Combines elements of the waterfall and prototyping models.
The first increment is the core functionality.
Successive increments add or fix functionality; the final increment is the complete product.
Outcome of each iteration: a tested, integrated, executable system.
Incremental Model Diagram
Incremental Model – Pros & Cons
Pros
Operational products within weeks.
Less traumatic to the organization.
Small capital outlay, rapid ROI.
Cons
Too many builds means overhead; too few builds means build-and-fix.
Needs an open architecture; no overall design at the start.
Spiral Model
Defined by Barry Boehm. Each loop in the spiral represents a phase in the process (customer communication, planning, risk analysis, development, evaluation). Risks are explicitly assessed and resolved throughout the process. Uses prototyping.
Spiral Model – Pros & Cons
Pros
Good for large and complex projects.
Customer evaluation.
Risk evaluation.
Cons
Difficult to convince some customers that the evolutionary approach is controllable.
Needs considerable risk-assessment expertise.
If a risk is not discovered, problems will surely occur.
Agile Model
A conceptual framework that attempts to minimize risk by developing software in short time boxes, called iterations, typically one to four weeks. Emphasizes face-to-face communication. Critics sometimes dismiss it as "cowboy coding."
V – Model
The V-Model illustrates that testing activities (verification and validation) can be integrated into each phase of the product life cycle. The verification part of testing is integrated into the earlier phases of the life cycle, which include reviewing end-user requirements, design documents, etc. There are variants of the V-Model; here we take a common type of V-Model as an example. The V-Model generally has four test levels: Unit Testing, Integration Testing, System Testing, and UAT. In practice the V-Model may have more granular test levels, such as unit integration testing after unit testing.
V – Model (diagram)
Testing Levels
Testing is expanded across the SDLC phases into different levels:
Unit Testing
Integration Testing
System Testing
Acceptance Testing
NOTE: All these levels are discussed in detail below.
Documents Prepared in the V Model
Requirements Phase: SRS Document, Functional Specification Document.
High-Level Design Phase: Architecture Design Document.
Low-Level Design Phase: Detailed Design Document.
Testing Activities in the V Model
System Testing: once the SRS or FSD documents are prepared, system testing activities start. Testing artifacts: System Test Plan, System Test Cases.
Integration Testing: once the architecture documents are prepared, integration testing activities start. Testing artifacts: Integration Test Plan, Integration Test Cases.
Chapter VI
Levels of Testing
Levels of Testing Unit Testing
Integration Testing
System Testing
UAT
Unit Testing What is called a Unit?
A module, a screen / program, or a backend database object.
Who does unit testing? Unit testing is primarily carried out by the developers themselves.
What is Unit Testing? Testing an individual unit of the software in isolation. It is the lowest level of testing, and deals with the functional correctness and completeness of individual program units.
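As a sketch of a developer-written unit test, here is a hypothetical function from an imaginary billing module exercised in isolation with Python's standard `unittest` framework (both the function and the module are illustrative assumptions, not from the slides):

```python
import unittest

# Hypothetical unit under test: a pricing helper.
def apply_discount(price, percent):
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

class ApplyDiscountTest(unittest.TestCase):
    # Each test exercises the unit in isolation, per the definition above.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(200.0, 10), 180.0)

    def test_zero_discount(self):
        self.assertEqual(apply_discount(99.99, 0), 99.99)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)
```

Running `python -m unittest` in the module's directory discovers and executes these tests; note that both the expected behaviour and the error path are checked.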
Integration Testing What is Integration?
Integration is the process of assembling unit-tested modules.
What is integration testing? Testing of a partially integrated application to identify defects involving the interaction of collaborating components.
Objectives of integration testing: determine whether components work properly together; identify defects that are not easily found during unit testing; verify data dependency between modules and data transfer between modules.
Testing Approaches
Big Bang approach
Incremental approach: Top-Down approach, Bottom-Up approach
Examples of integration follow.
Big Bang Approach
The Big Bang approach tests each module individually and links all the modules together only when every module in the system has been tested.
Pros & Cons of Big Bang
Pros
Advantageous when independent modules are constructed concurrently.
Cons
The approach is quite challenging and risky, as all modules are integrated in a single step and the resulting system tested.
Locating interface errors, if any, becomes difficult.
Incremental Approach
Software units are gradually built, spreading the integration testing load more evenly through the construction phase.
The incremental approach can be implemented in two distinct ways: top-down and bottom-up.
Top-Down Integration
The program is merged and tested from top to bottom.
Modules are integrated by moving downward through the control hierarchy, beginning with the main control module.
A module is integrated into the system only when the module that calls it has already been integrated successfully.
Stubs What is a 'Stub'?
A dummy routine that simulates the behavior of a subordinate module.
If a particular module is not completed or not started, we can simulate it by developing a stub.
For example, to simulate the responses of modules M2, M3 and M4 whenever they are invoked from M1, "stubs" are created for them.
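A minimal sketch of the M1-with-stubs idea. The module names follow the slide's example; the order-processing behaviour and canned responses are hypothetical:

```python
# Stub for the (unwritten) inventory module M2: always reports in stock.
def m2_stub(order_id):
    return {"order_id": order_id, "in_stock": True}

# Stub for the (unwritten) payment module M3: always approves.
def m3_stub(order_id):
    return "approved"

# M1, the module under test: its subordinates are injected as callables,
# so stubs can stand in until the real M2/M3 are built.
def m1_process_order(order_id, check_stock=m2_stub, charge=m3_stub):
    if not check_stock(order_id)["in_stock"]:
        return "rejected: out of stock"
    if charge(order_id) != "approved":
        return "rejected: payment failed"
    return "accepted"
```

Because the stubs are injected, M1's error paths can also be exercised by passing a stub with a different canned response, e.g. one that reports the item out of stock.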
Pros and Cons of Top-Down
Pros
It is done in an environment that closely resembles reality, so the tested product is more reliable.
Stubs are functionally simpler than drivers, so they can be written with less time and labor.
Cons
Unit testing of lower modules can be complicated by the complexity of the upper modules.
Bottom-Up Approach
The program is merged and tested from bottom to top.
The terminal modules are tested in isolation first, then the next set of higher-level modules is tested with the previously tested lower-level modules.
Here we have to write 'drivers'. A driver is simply a program that accepts test data and passes it to the module under test.
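A minimal driver sketch: the lower-level module exists, but its caller does not yet, so a driver feeds it data and checks the results. The interest-calculation module and its test data are hypothetical:

```python
# Completed lower-level module under test.
def monthly_interest(balance, annual_rate):
    return round(balance * annual_rate / 12, 2)

# Driver: stands in for the not-yet-written calling module by supplying
# inputs and checking outputs.
def driver():
    cases = [
        (1200.0, 0.12, 12.0),  # (balance, annual rate, expected interest)
        (0.0, 0.12, 0.0),
    ]
    failures = [(balance, rate) for balance, rate, expected in cases
                if monthly_interest(balance, rate) != expected]
    return failures  # an empty list means the module passed
```

Once the real calling module is written, the driver is discarded, which is why bottom-up integration tends to generate driver-writing effort at every level.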
Pros & Cons of Bottom-Up
Pros
Unit testing of each module can be done very thoroughly.
Cons
Test drivers have to be written for modules at all levels, except the top controlling module.
System Testing What is System Testing?
System testing is a black-box type of testing that is based on the overall requirements specification and covers all combined parts of the system.
Objective of System Testing
In system testing, we need to ensure that the system does what the customer wants it to do.
System testing consists of performance and functional testing.
System Testing Types
Types of testing involved in system testing: Sanity Testing, Compatibility Testing, Exploratory Testing, Stress Testing, Volume Testing, Load Testing, Acceptance Testing, Ad-hoc Testing, Alpha Testing, Benchmark Testing, End-to-End Testing.
For more types, visit http://blog.enfocussolutions.com/Powering_Requirements_Success/bid/173061/Types-of-System-Testing
System Testing Types
Sanity testing: typically an initial testing effort to determine whether a new software version is performing well enough to accept it for a major testing effort.
Compatibility testing: testing how well software performs in a particular hardware/software/OS/network environment.
Exploratory testing: test design and test execution at the same time.
Stress testing: testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
System Testing Types
Volume testing: testing where the system is subjected to large volumes of data.
Load testing: testing conducted to evaluate the compliance of a system or component with specified performance requirements.
Acceptance testing: formal testing conducted to determine whether or not a system satisfies its acceptance criteria, enabling the customer to decide whether or not to accept the system. Usually performed by customer representatives.
System Testing Types
Ad-hoc testing: testing performed without planning or documentation; the tester tries to 'break' the system by randomly exercising its functionality. Performed by testing teams.
Alpha testing: testing of a software product or system conducted at the developer's site. Usually performed by end users.
Benchmark testing: a technique that uses representative sets of programs and data designed to evaluate the performance of computer hardware and software in a given configuration. Performed by testing teams.
System Testing Types
End-to-end testing: similar to system testing; involves testing a complete application environment in a situation that mimics real-world use, such as interacting with a database, using network communications, or interacting with other hardware, applications, or systems where appropriate. Performed by QA teams.
Functional testing: a type of black box testing that bases its test cases on the specifications of the software component under test. Performed by testing teams.
Non-functional testing: a technique that focuses on testing a software application against its non-functional requirements. Can be conducted by performance engineers or by manual testing teams.
System Testing Types
What is Acceptance Testing? Acceptance testing allows customers to ensure that the system meets their business requirements.
What is Regression Testing? Testing of a previously verified program or application following modification, extension, or correction, to ensure no new defects have been introduced.
What is Retesting? Retesting is testing the new version of the application under test (AUT): once the new version is ready, the previously found bugs are re-tested to confirm they have actually been fixed.
Chapter VII
What to Test ?
Field-Level Checks & Validation
Seven types of field-level check:
Null characters, unique characters, length, number, date, negative values, default values.
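Several of these checks can be sketched as input validators. The field names (`quantity`, a date field) and the concrete limits are illustrative assumptions; the uniqueness check is omitted because it needs a data store:

```python
from datetime import datetime

def validate_quantity(raw, default="1"):
    # Default-value check: an empty/null input falls back to the default.
    value = raw if raw not in (None, "") else default
    errors = []
    if len(str(value)) > 5:                     # length check
        errors.append("too long")
    if not str(value).lstrip("-").isdigit():    # number check
        errors.append("not a number")
    elif int(value) < 0:                        # negative-value check
        errors.append("negative not allowed")
    return errors

def validate_date(raw):
    # Date check: the field must parse in the expected format.
    try:
        datetime.strptime(raw, "%Y-%m-%d")
        return []
    except ValueError:
        return ["invalid date"]
```

Each validator returns a list of error messages, so a test case for the field simply asserts on the expected list for a given input.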
Functionality Checks
Buttons, drop-downs, functionality of screens, links.
GUI Checks
Shortcut keys, help, tab movement, arrow keys, message boxes, readability of controls, tool tips, consistency.
Chapter VIII
STLC / Test Plan
Software Testing Life Cycle
Test Plan - road map for testing activities.
Test Case - a document with actions and expected reactions.
Test Execution - testing the application for defects using test cases.
Defect Reporting - reporting defects.
Test Plan
What is a Test Plan?
Different types of Test Plan
Who creates the Test Plan
When to create the Test Plan
Attributes of a Test Plan
Test Plan Why plan?
To define the road map for testing activities.
What is a Test Plan? A test plan is the document which specifies the test conditions, features, and functions that will be tested for a specific level of testing.
Different Types of Test Plan Unit Test Plan
Integration Test Plan
System Test Plan
Acceptance Test Plan
Test Plan Creation
When to Create the Test Plan? Test plans should be prepared as soon as the corresponding document in the development cycle is produced.
The preparation of the test plan itself validates the corresponding development document.
Who Creates the Test Plan?
The test plan is created by the Test Lead or Test Manager: they have knowledge of the testing approach, are experienced in effort estimation and scheduling, and handle communication between the different teams.
Attributes of a Test Plan
Document Scope, Project Overview, Document Reference, Intended Audience, Assumptions, Dependencies and Constraints, Environment, Features to be tested, Features not to be tested, Suspension & Resumption Criteria, Tools, Metrics, Defect Reporting & Tracking, Deliverables, Entry & Exit Criteria, Schedule, Roles & Responsibilities, Test Approach, Risks
Test Plan Attributes details
Document Scope
The scope of this document is to explain the testing strategies, test environment, and resource usage for testing the <Product Name> application.
Project Overview: in this section, provide an overview of the product's functional specifications, which will be explained in detail later.
Document Reference: the documents used to create the test plan and testing activities. Example: Functional Specification Document v1.0, Design Document v1.0.
Intended Audience: the audience for this document includes the Project Leader, Team Members, and Test Engineers.
Assumptions, Dependencies, Constraints
List any assumptions being made with regard to the application, e.g., the user is planning 25% growth across the board on all transaction types.
List any dependencies that may impact testing, e.g., critical system resources such as database availability.
List any constraints that impact testing, e.g., live production data for parallel tests will only be available after 5 P.M. each day.
Test Environment
Describe the test environments for unit or integration or system or acceptance test. Describe any interfaces that must be established. Reference the Technical Design or other documents where this information can be found.
Roles & Responsibilities: project and test team members' roles, and the responsibilities of each member.
Test Approach: describe the overall approach to testing: who does it, the main activities, techniques, and tools used for each major group of features. How will you decide that a group of features is adequately tested?
Entry & Exit Criteria
List the set of conditions that determine when system testing can begin and end.
Features to be tested: cross-reference them to test design specifications.
Features not to be tested: which ones are not part of testing.
Risks: schedule, personnel, requirements, technical, management.
Schedule
Dates of the activities in the test phase: test case completion date, execution date, defect review and closure date, deployment date into production.
Suspension and Resumption Criteria
List anything that would cause you to stop testing until it is fixed, and what would have to be done to get you to restart testing.
Tools
List the tools used for testing the application: functional testing tools, performance testing tools, defect tracking tools, configuration tools.
Metrics: list all metrics collected and the person responsible for each metric.
Defect Tracking & Reporting
Defect severity, defect life cycle, defect tracking, defect reporting process, roles & responsibilities.
Deliverables
List all the documents and scripts to be delivered, in this section.
Chapter IX
STLC / Test Cases
Contents
What is a Test Case?
What is a good Test Case?
Positive and Negative Test Cases
Test Case Template
Test Case Design Techniques
What is a Test Case?
A test case is a document that describes an input, action, or event and an expected response, to determine whether a feature of an application is working correctly.
A test case is a document which describes the test steps to execute and their corresponding expected results.
What is a good Test Case?
Accurate - tests what it's designed to test.
Economical - no unnecessary steps.
Repeatable, reusable - keeps on going.
Traceable - to a requirement.
Appropriate - for the test environment and testers.
Self-standing - independent of the writer.
Self-cleaning - picks up after itself.
Positive & Negative Scenarios
Positive testing checks that the software does what it should. Negative testing checks that the software doesn't do what it shouldn't.
Negative testing should always come first in the test case scenario, with positive testing following after.
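As a sketch, here are negative and positive test cases for a hypothetical login check (the `login` function and its credentials are illustrative stand-ins for a real system under test):

```python
# Toy implementation standing in for the system under test.
def login(username, password):
    valid = {"alice": "s3cret"}
    return valid.get(username) == password

# Negative cases first, per the guideline above: the software must NOT
# accept what it shouldn't (wrong password, empty input, unknown user).
negative_cases = [("alice", "wrong"), ("", ""), ("bob", "s3cret")]

# Positive case: the software DOES what it should.
positive_cases = [("alice", "s3cret")]

def run():
    neg_ok = all(login(u, p) is False for u, p in negative_cases)
    pos_ok = all(login(u, p) is True for u, p in positive_cases)
    return neg_ok and pos_ok
```

Note that the negative list is deliberately longer than the positive list: there are usually many more ways an input can be invalid than valid.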
Attributes of a Test Case
Project Name, Project Version, Test Case ID, Test Case Version, Test Case Name, Designer, Creation Date, Step Design Status, Test Description, Expected Result, Actual Result.
Test Case Template Project Name : Name of the project
Project Version : v1.0
Test Case ID : TC_1
Test Case Name : <Project Name>_<Module Name>_<Screen Name>
Test Case Version : v1.1
Status : Design / Review / Complete
Designer : Sivakumar Krishnan
Creation Date : 08/March/2013
Execution Status : Design
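The template above can be captured as a simple record type; this dataclass sketch mirrors the slide's fields, and the sample values (project name, step text) are illustrative assumptions, not part of the template:

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """One record per test case, mirroring the slide's template fields."""
    project_name: str
    project_version: str
    test_case_id: str
    test_case_name: str
    test_case_version: str
    status: str            # Design / Review / Complete
    designer: str
    creation_date: str
    execution_status: str
    steps: list = field(default_factory=list)  # (description, expected_result) pairs

tc = TestCase(
    project_name="Yahoo Mail",
    project_version="v1.0",
    test_case_id="TC_1",
    test_case_name="YahooMail_Login_LoginScreen",
    test_case_version="v1.1",
    status="Design",
    designer="Sivakumar Krishnan",
    creation_date="08/March/2013",
    execution_status="Design",
)
tc.steps.append(("Enter a valid user name and password", "User is logged in"))
```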
Manual Testing v1.0
Test Case Design Techniques
Black Box Techniques
Equivalence Class Partitioning
Boundary Value Analysis
Error Guessing
Cause Effect Graphing
State Transition Testing
White Box Techniques
Branch Testing
Coverage Testing
Manual Testing v1.0
ECP – Equivalence Class Partitioning
Equivalence partitioning is a method for deriving test cases. In this method, classes of input conditions called equivalence classes are identified such that each member of the class causes the same kind of processing and output to occur.
A software testing technique that involves identifying a small set of representative input values that invoke as many different input conditions as possible.
Manual Testing v1.0
ECP – For Test Case
A group of tests forms an equivalence class if:
They all test the same thing
If one test finds a defect, the others will
If one test does not find a defect, the others will not
Tests are grouped into one equivalence class when:
They involve the same input variables
They result in similar operations in the program
They affect the same output variables
Manual Testing v1.0
Finding ECP
Identify all inputs
Identify all outputs
Identify equivalence classes for each input
Identify equivalence classes for each output
Ensure that test cases test each input and output equivalence class at least once
ECP Diagram
Manual Testing v1.0
Eg: ECP
A program takes in marks and student type and returns the grade for the student in that subject.
Inputs are:
Marks: 0 - 100
Student Type: First Time / Repeat
Outputs are:
Grade: F, D, C, B, A
Rules:
Student type First Time: 0-40 = F, 41-50 = D, 51-60 = C, 61-70 = B, 71-100 = A
Student type Repeat: 0-50 = F, 51-60 = D, 61-70 = C, 71-80 = B, 81-100 = A
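A minimal Python sketch of these grading rules (the `grade` function and its band tables are drawn from the slide's rules; the function itself is an assumption for illustration). One representative mark per equivalence class is enough to cover every class:

```python
def grade(marks, student_type):
    """Return the grade for the given marks and student type, per the slide's rules."""
    if student_type == "First Time":
        bands = [(40, "F"), (50, "D"), (60, "C"), (70, "B"), (100, "A")]
    else:  # "Repeat"
        bands = [(50, "F"), (60, "D"), (70, "C"), (80, "B"), (100, "A")]
    for upper, g in bands:
        if marks <= upper:
            return g

# One representative value per equivalence class:
assert grade(20, "First Time") == "F"   # class 0-40
assert grade(45, "First Time") == "D"   # class 41-50
assert grade(55, "First Time") == "C"   # class 51-60
assert grade(65, "First Time") == "B"   # class 61-70
assert grade(85, "First Time") == "A"   # class 71-100
assert grade(55, "Repeat") == "D"       # same marks fall in a different class for a repeat student
```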
Manual Testing v1.0
Boundary Value Analysis leads to the selection of test cases that exercise boundary values. BVA is a test case design technique that complements equivalence partitioning. Rather than selecting arbitrary elements of an equivalence class, BVA leads to the selection of test cases at the 'edges' of the class.
Identify all inputs and outputs
Identify equivalence classes for each input
Identify equivalence classes for each output
For each input equivalence class, ensure that test cases include:
one interior point
all extreme points
all epsilon points
Boundary Value Analysis (BVA)
Manual Testing v1.0
Example
If an input condition is a range bounded by values 'a' and 'b', test cases should be designed with values 'a' and 'b', and just above and just below a and b.
If an input condition specifies a number of values, test cases should be developed that exercise the minimum and maximum numbers. Values just above and just below the maximum and minimum should be tested.
Eg : BVA
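The rule above can be sketched as a small helper that generates the classic BVA input set for an integer range; `boundary_values` is a hypothetical name used only for illustration:

```python
def boundary_values(low, high):
    """Return BVA test inputs for an integer range [low, high]:
    each edge, one step below it, and one step above it."""
    return sorted({low - 1, low, low + 1, high - 1, high, high + 1})

# For the marks input (0-100) this yields the classic BVA set:
assert boundary_values(0, 100) == [-1, 0, 1, 99, 100, 101]
```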
Manual Testing v1.0
Pros
Very good at exposing potential user interface/user input problems
Very clear guidelines on determining test cases
Very small set of test cases generated
Cons
Does not test all possible inputs
Does not test dependencies between combinations of inputs
Pros & Cons of BVA
Manual Testing v1.0
A test case design technique where the experience of the tester is used to postulate what faults exist, and to design tests specially to expose them.
This is a tester's 'intuition' skill that can be applied in all other testing techniques to produce more effective tests.
Error Guessing
Manual Testing v1.0
A graphical representation of inputs or stimuli (causes) with their associated outputs (effects), which can be used to design test cases. A testing technique that aids in selecting, in a systematic way, a high-yield set of test cases that logically relates causes to effects to produce test cases.
Cause Effect Graphing
Manual Testing v1.0
What is a State Transition?
Changes in the attributes of an object or in the links an object has with other objects.
When to use them
State models are ideal for describing the behavior of a single object.
What is a State Transition Diagram?
State-based behavior of the instances of a class.
State Transition
Manual Testing v1.0
Operation of an Elevator
The elevator has to serve all 5 floors in a building. Consider each floor as one state. Let the lift initially be at the 0th floor (initial state). When a request comes from the 5th floor, the lift has to respond to that request and move to the 5th floor (next state). If a request then comes from the 3rd floor (another state), it has to respond to that request as well. Likewise, requests may come from other floors. Each floor is a different state; the lift has to handle requests from all states and transition to each state in the sequence the requests arrive.
Eg : State Transition
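The elevator behaviour above can be sketched as a tiny state machine; the `Elevator` class and its method names are hypothetical, used only to illustrate states (floors) and transitions (requests):

```python
class Elevator:
    """Minimal state machine for the elevator example: each floor is a state,
    each request triggers a transition to the requested floor."""

    def __init__(self, floors=5):
        self.floors = floors
        self.current = 0          # initial state: ground floor
        self.history = [0]        # sequence of states visited

    def request(self, floor):
        if not 0 <= floor <= self.floors:
            raise ValueError("no such floor")
        self.current = floor      # transition to the next state
        self.history.append(floor)

lift = Elevator()
lift.request(5)   # request from the 5th floor
lift.request(3)   # then a request from the 3rd floor
assert lift.history == [0, 5, 3]
```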
Manual Testing v1.0
State Transition Diagram
Manual Testing v1.0
Test Case Writing
Seven Common Mistakes in Test Case Writing
Making cases too long
Incomplete, incorrect, or incoherent setup
Leaving out a step
Naming fields that changed or no longer exist
Unclear whether the tester or the system does the action
Unclear what is a pass or fail result
Failure to clean up
Manual Testing v1.0
Chapter X
SDLC / Test Execution
Manual Testing v1.0
Test Execution
What is Test Execution?
The processing of a test case suite by the software under test, producing an outcome.
Test Execution Today
Execution is often undisciplined, poorly quantified, and difficult to duplicate. The impact goes right to the bottom line, costing unnecessary time and expense.
Test Set
A test set is a collection of test cases.
A test set can be formed for every release of testing.
Each cycle of testing in a release can have an individual test set.
(Cont..)
Manual Testing v1.0
Test Execution
Testing is potentially endless. We cannot test until all the defects are unearthed and removed; that is simply impossible. At some point, we have to stop testing and ship the software. Realistically, testing is a trade-off between budget, time, and quality. It is driven by profit models.
Pessimistic Approach
Unfortunately the most often used approach: stop whenever some, or any, of the allocated resources (time, budget, or test cases) are exhausted.
Optimistic Approach
The stopping rule is to stop testing when either reliability meets the requirement, or the benefit from continuing testing cannot justify the testing cost.
Manual Testing v1.0
Chapter XI
Defect Tracking & Reporting.
Manual Testing v1.0
Why does software have defects or bugs?
Software complexity
Programming errors
Changes in requirements
Poor business understanding
Miscommunication between the groups
Software version upgrades
Lack of skill set
Manual Testing v1.0
Defect Reporting
What is a defect?
A defect is a failure to conform to requirements.
Any type of undesired result is a defect.
A failure to meet one of the acceptance criteria of your customers.
In short: the difference between what was expected and what actually happened at run time.
Manual Testing v1.0
What is Defect Severity
A classification of a software error or fault based on an evaluation of the degree of impact that the error or fault has on the development or operation of a system (often used to determine whether or when a fault will be corrected).
The five levels of severity:
Critical
Major
Average
Minor
Cosmetic
Defect Severity
Manual Testing v1.0
Critical
The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system.
Major
The defect results in the failure of the complete software system, of a subsystem, or of a software unit (program or module) within the system. There is no way to make the failed component work; however, there are acceptable processing alternatives which will yield the desired result.
Average
The defect does not result in a failure, but causes the system to produce incorrect, incomplete, or inconsistent results, or the defect impairs the system's usability.
Defect Severity level Description
Manual Testing v1.0
Minor
The defect does not cause a failure, does not impair usability, and the desired processing results are easily obtained by working around the defect.
Cosmetic
The defect is the result of non-conformance to a standard, is related to the aesthetics of the system, or is a request for an enhancement. Defects at this level may be deferred or even ignored.
Defect Severity level Description
Manual Testing v1.0
What is Defect Priority
Defect priority level can be used with severity categories to determine the immediacy of defect repair or fix
Levels of Defect Priority:
Urgent
High
Medium
Low
Defer
Defect Priority
Manual Testing v1.0
Urgent
Further development and/or testing cannot occur until the defect has been repaired. The system cannot be used until the repair has been effected.
High
The defect must be resolved as soon as possible because it is impairing development and/or testing activities. System use will be severely affected until the defect is fixed.
Medium
The defect should be resolved in the normal course of development activities. It can wait until a new build or version is created.
Defect Priority level Description
Manual Testing v1.0
Low
The defect is an irritant which should be repaired, but which can be repaired after more serious defects have been fixed.
Defer
The defect repair can be put off indefinitely. It can be resolved in a future major system revision or not resolved at all.
Defect Priority level Description
Manual Testing v1.0
Defect Life Cycle
Manual Testing v1.0
Defect Status
New
A new defect reported to the Development Team.
Open
A defect which is reproducible and accepted by the AD team, and assigned to the concerned developers by the AD Manager.
Not A Defect
If a reported defect is out of scope or a future enhancement, the defect will be marked 'Not A Defect' by the AD Manager.
Not Clear
If the defect description is unclear, the Project Manager has the option of assigning it the 'Not Clear' state and sending it back to whoever raised the defect for clarification. Once clarified and accepted by the AD Manager, the defect moves into 'Open' status.
Development (Work in Progress)
When a developer has started to work on the assigned defect. (Cont..)
Manual Testing v1.0
Defect Status
Fixed
Defects worked on by the AD team and sent to QA for re-testing.
Reject
Unsatisfactory test results during retest will result in QA rejecting the defect and sending it back to the AD team.
Closed
Satisfactory test results during retest (i.e., actual results match expected results) will result in the defect being marked 'Closed' by QA.
Reopened
A previously closed defect found during regression testing can be assigned to 'Reopened' rather than created as a 'New' defect.
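The defect life cycle can be sketched as a transition table; this is an assumption-laden sketch of the statuses described above (real trackers such as JIRA or Bugzilla configure their own workflows, and "Development (Work in Progress)" is shortened here to "Work In Progress"):

```python
# Allowed transitions between defect statuses, per the life cycle above (a sketch).
TRANSITIONS = {
    "New":              {"Open", "Not A Defect", "Not Clear"},
    "Not Clear":        {"Open", "Not A Defect"},
    "Open":             {"Work In Progress"},
    "Work In Progress": {"Fixed"},
    "Fixed":            {"Closed", "Reject"},
    "Reject":           {"Work In Progress"},
    "Closed":           {"Reopened"},
    "Reopened":         {"Work In Progress"},
}

def move(status, new_status):
    """Validate and perform one life-cycle transition."""
    if new_status not in TRANSITIONS.get(status, set()):
        raise ValueError(f"illegal transition {status} -> {new_status}")
    return new_status

# A defect flowing through the happy path:
s = move("New", "Open")
s = move(move(s, "Work In Progress"), "Fixed")
assert move(s, "Closed") == "Closed"
```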
Manual Testing v1.0
Defect Report Template
Attributes of Defect Report Template
Defect ID
Assigned To
Subject
Project
Severity
Status
Priority
Description
Detected By
Detected in Version
Detected Date
Environment
Module
Manual Testing v1.0
Defect Report Template
Manual Testing v1.0
Defect Tracking
What is Defect Tracking?
Defect tracking is the process of checking whether a new defect found by a tester has already been reported by another tester in the defect database.
Example:
The defect is checked in the database using filter options on defect report attributes such as Module, Sub Module, Severity, Environment, etc.
Manual Testing v1.0
Discussion
Questions & Answer Session
Manual Testing v1.0
Chapter XII
Traceability Matrix &
Software Metrics
Manual Testing v1.0
Traceability Matrix
What is a Traceability Matrix?
Mapping of customer requirements to each phase of testing using a matrix is called a Traceability Matrix. The different phases in testing are:
Test Case
Test Execution
Automation Script Creation
Defect Reporting
Eg: House Relocation
Identify items, Packing, Loading, Unloading, Coverage
Manual Testing v1.0
Traceability Matrix Template
Requirement ID | Feature Tested | Test Case ID | Test Case Description | Automated Test Script | Script ID | Defect ID
1 | Login Screen | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
1.1 | User Name Validation | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 1
1.1.1 | Alphanumeric Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 3
1.1.2 | Length Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
1.1.3 | Null Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 4
1.1.4 | Default Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
1.2 | Password Field Validation | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
1.2.1 | Null Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 12
1.3 | GUI object validation | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
1.4 | Logo Color Check | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 13
1.5 | Login & Logoff Button validation | STC_001 | Yahoo Login Screen | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2 | Inbox | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.1 | Inbox Main Screen | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 2
2.1.1 | Select All button | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.1.2 | Delete Button check | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.1.3 | Mark button | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 5
2.1.4 | Move Button | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.1.5 | View Message | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.2 | Inbox Mail Check Screen | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 6
2.2.1 | Reply button Check | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.2.2 | Add to Address | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 7
2.2.3 | Next Link | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.2.4 | Previous Link | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 8
2.2.5 | Back to Message | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
2.2.6 | Flag Message | STC_002 | Yahoo Inbox | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 9
3 | Compose | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
3.1 | Compose Screen GUI | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 10
3.1.1 | To field validation | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
3.1.2 | Subject field validation | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
3.1.3 | Button check (Save, Send, Cancel) | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | 11
3.1.4 | Attach Button | STC_003 | Yahoo Compose | Yahoo_AutomationScripts_Mail | ATS_YML_001 | -
Manual Testing v1.0
Software Metrics are numerical data collected during software development and testing activities for analysis.
Software Metrics support 4 main functions:
Planning
Organizing
Controlling
Improving
Software Metrics
Manual Testing v1.0
Metrics Types
Planning metrics
Cost estimating
Training planning
Resource planning
Scheduling
Budgeting
Organizing metrics
Size
Schedule
Controlling metrics
Status
Track
Improving metrics
Process Improvement
Effort Improvement
Manual Testing v1.0
Sample Testing Metrics
% Effort Variation
% Effort Variation = (Actual Effort - Estimated Effort) / Estimated Effort * 100
Example: Actual Effort = 100 days, Estimated Effort = 80 days
Effort Variation = (100 - 80) / 80 * 100 = 25 %
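The formula above as a one-line helper (`effort_variation_pct` is a hypothetical name for illustration):

```python
def effort_variation_pct(actual, estimated):
    """% Effort Variation = (Actual - Estimated) / Estimated * 100."""
    return (actual - estimated) / estimated * 100

# The slide's example: 100 actual vs. 80 estimated person-days.
assert effort_variation_pct(100, 80) == 25.0
```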
Manual Testing v1.0
Sample Testing Metrics
Defect Density (defects per person-day)
Defect Density = (Total number of defects in the cycle) / (Actual effort in person-days for the cycle)
Example:
Total number of defects in cycle 1 = 75
Actual effort spent in testing = 5 person-days
Defect Density = 75 / 5 = 15 defects per person-day
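The same calculation as a helper (`defect_density` is a hypothetical name for illustration):

```python
def defect_density(total_defects, effort_person_days):
    """Defects found per person-day of testing effort in a cycle."""
    return total_defects / effort_person_days

# The slide's example: 75 defects over 5 person-days.
assert defect_density(75, 5) == 15.0
```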
Manual Testing v1.0
Sample Testing Metrics
Defect Removal Efficiency %
DRE = (Total no. of defects found by tester) / (Total no. of defects found by tester + Total no. of defects found by customer and others) * 100
Example:
Total no. of defects found by tester = 120
Total no. of defects found by others = 20
Defect Removal Efficiency = 120 / (120 + 20) * 100 = 85.71 %
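And DRE as a helper (`defect_removal_efficiency` is a hypothetical name for illustration):

```python
def defect_removal_efficiency(found_by_tester, found_by_others):
    """Percentage of all known defects that the test team caught before release."""
    return found_by_tester / (found_by_tester + found_by_others) * 100

# The slide's example: 120 defects found in testing, 20 escaped to others.
assert round(defect_removal_efficiency(120, 20), 2) == 85.71
```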
Manual Testing v1.0
Effort Estimation
Software estimation is the process of predicting the duration of an activity in a project.
Different Estimation Techniques:
Function Point Estimation
COCOMO
Test Case Point Estimation
Metrics Based Estimation
Manual Testing v1.0
Process in Estimation
Identification of requirements
Understand the tasks from the document and separate them into groups
Categorize the requirements by criticality as Simple, Average, and Complex
Login - Simple, Inbox - Complex, Compose - Average
Multiply the requirements by the metrics value for each category
Simple task - 5 min for test case creation
Average task - 10 min for test case creation
Complex task - 15 min for test case creation
Add the adjustment factor
A buffer time of 20 % should be added to the estimated time
Report the effort estimation
Manual Testing v1.0
Example in Estimation
Test Estimation for Yahoo Mail application Identification of requirements
Login Screen Inbox Screen Compose Screen Address Screen
Categorize them based on criticality Login ~ Simple Inbox ~ Complex Compose, Address ~ Average ( Cont.. )
Manual Testing v1.0
Example in Estimation
Multiply with metrics value
Login – 5 mins, Inbox – 15 mins, Compose – 10 mins, Address – 10 mins
Simple: 1 * 5 = 5, Average: 2 * 10 = 20, Complex: 1 * 15 = 15, Total = 40 mins
Add the adjustment factor: 40 mins * 20 % = 8 mins
The total time required to write test case for Yahoo Mail application is 48 Mins for functionality like Login, Inbox, Compose and Address.
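The estimation steps above can be sketched as a short calculation; the rate table and buffer come from the slides, while `estimate` and the requirement mapping are illustrative assumptions:

```python
# Minutes per test case by complexity category, as given in the process above.
RATE = {"Simple": 5, "Average": 10, "Complex": 15}
BUFFER = 0.20   # 20 % adjustment factor

def estimate(requirements):
    """requirements: mapping of requirement name -> complexity category.
    Returns total minutes including the buffer."""
    base = sum(RATE[category] for category in requirements.values())
    return base * (1 + BUFFER)

yahoo_mail = {"Login": "Simple", "Inbox": "Complex",
              "Compose": "Average", "Address": "Average"}
assert estimate(yahoo_mail) == 48.0   # 40 mins base + 8 mins buffer
```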
Manual Testing v1.0
Chapter XIII
Introduction to Automation Testing
Manual Testing v1.0
Contents
Testing Techniques
Why to Automate?
Different Testing Tools
Which Test Case to Automate
Which Test Case Not to Automate
Benefits of Automation
Manual Testing v1.0
Testing Techniques
Manual Testing
Automation Testing
Manual Testing v1.0
Why to Automate ?
Automated testing is recognized as a cost efficient way to increase application reliability, while reducing the time and cost of software quality programs
Manual Testing v1.0
Different Type of Tools
Functional Testing Tools
Quick Test Pro
Selenium
Test Management Tools
Quality Center
Performance Testing Tools
Load Runner
QA Load
Manual Testing v1.0
Which Test Cases to Automate?
Tests that need to be run for every build of the application (sanity check, regression test)
Tests that use multiple data values for the same actions (data driven tests)
Tests that require detailed information from application internals (e.g., SQL, GUI attributes)
Stress/load testing
More repetitive execution? Better candidate for automation.
Manual Testing v1.0
Which Test Cases Not to Automate?
Usability testing
"How easy is the application to use?"
Tests without predictable results
"ASAP" testing
"We need to test NOW!"
Ad hoc/random testing
Based on intuition and knowledge of the application
One-time testing
Improvisation required? Time required for automation.
Manual Testing v1.0
Benefits Of Automation Testing
Test Repeatability and ConsistencyAutomated tests should provide the same inputs and test conditions no matter how many times they are run. Human testers can make errors and lose effectiveness, especially when they are fairly convinced the test won’t find anything new.
Expanded and Practical Test Reuse Automated tests provide expanded leverage due to the negligible cost of executing the same test multiple times in different environments and configurations, or of running slightly modified tests using different input records or input variables, which may cover conditions and paths that are functionally quite different.
Manual Testing v1.0
Benefits Of Automation Testing
Practical Baseline Suites
Automated tests make it feasible to run a fairly comprehensive suite of tests as an acceptance or baseline suite to help ensure that small changes have not broken or adversely impacted previously working features and functionality. As tests are built, they are saved, maintained, and accumulated. Automation makes it practical to run the tests again for regression testing even small modifications.
Manual Testing v1.0
It’s Time to Conclude ….