SOFTWARE TESTING: Test Lifecycle and Test Process

Test Level Process
• Test Planning and Control
• Test Analysis and Design
• Test Implementation and Execution
• Evaluating Exit Criteria and Reporting
• Test Closure Activities
Test Planning and Control
• Define test scope, objectives and goals
• Test strategy
• Risk analysis
• Make strategy
• Exit criteria
• Estimation
• Organization
• Scheduling
• Test management and control
• Monitoring (reporting)
• Reporting planning / design
• Status reporting

Test Analysis and Design
• Review test basis
• Identify test conditions
• Decide test design techniques
• Evaluate testability
• Set up environment

Test Implementation and Execution
• Specify test cases, priority, data and procedures
• Pre-test
• (Re-)test execution

Evaluating Exit Criteria
• Check exit criteria
• Write summary report

Test Closure Activities
• Testware configuration
• Evaluate test process
Test Planning and Control Activities
• Determine scope / objectives / levels
• Risk analysis
• Test strategy
• Test estimation
• Test planning
• Relationship with stakeholders
• Test activities
• Managing the test process (control)
• Reporting
Level’s Test Plan
• Describes specific activities for each test level when the test level needs to be extended from the Master Test Plan
• Provides detailed tasks, work schedules and milestones for each level that are not dealt with in the Master Test Plan
• Describes standards and templates for test specifications for each test level
• Planning is influenced by the test policy and strategy of the organization and is aligned with the Master Test Plan
• Maintenance test management documentation can replace a level test plan
Master Test Plan
A master test plan is necessary when multiple test levels are used; it directs and controls each test level:
• Acceptance test plan
• System test plan
• Integration test plan
• Component test plan
Test Plan Document and Template
A test plan documentation template has been developed and applied for commonality and readability in many organizations. IEEE 829, "Standard for Software Test Documentation", provides testing documentation templates, including the test plan and planning activities.
Test Planning Activities
• Determining scope and risk, and identifying the objectives of testing
• Defining the overall approach of testing (test techniques, test items, coverage, interfaces between stakeholders, testware)
• Assigning resources for the different test activities (e.g. HR, test environment, PCs)
• Implementing the test strategy and aligning it with the test policy
• Scheduling test analysis and design activities
• Scheduling test implementation, execution and evaluation
• Defining the exit criteria
Test Policy
• Describes the philosophy of the testing organization
• Applies to all testing performed on all projects within the organization (generally short and simple; an executive-level document)
• Definition of testing
• Mission and targets of testing (quality level, main quality characteristics)
• Strategic high-level view on testing and testing duties
• Core roles of the testing organization
• Defines the test process (including the level of independence)
• Testing approach for customer satisfaction
• Test process improvement (goals, KPIs, model)
• Aligned with the quality policy of the organization
Exit Criteria
• Limited time and budget
• Number of defects not yet fixed (by severity)
• Number of retests
• Number of defects per hour -> 0
• All test cases executed at least once (when they are well designed and consider the related risk & coverage), plus no major defects
• Prevented damage < cost of testing
• A proper combination of the above
Test Control Activities
• Measure and analyze test results
• Monitor and report the status of test progress, test coverage and exit criteria
• Corrective actions against the test plan
• Deciding whether to change or hold testing
Traditional Approaches for Test Strategy
• Analytical approaches: risk-based testing
• Model-based approaches: stochastic testing (reliability growth models, operational profiles)
• Methodical approaches: error guessing, fault attacks, checklist-based, quality-characteristics-based
• Process- or standard-compliant approaches: IEEE 829
• Dynamic and heuristic approaches: exploratory testing, bug-based attacks
Test Strategy - 1
• Describes approaches, including product risks and project risks
• High-level documentation of testing (aligned with the test policy)
• Explains the test levels to be executed, entry and exit criteria for each level, and overall guidance on the relationship between levels
• Describes project and product risks and plans risk management; explains the relationship between risk and testing clearly
• A test strategy can be established for an organization or a project; a project test strategy should be aligned with the organizational test strategy
Test Strategy - 2
The test strategy includes:
• Integration procedure
• Test specification techniques
• Level of test independence
• Mandatory/optional standards
• Test achievement
• Test automation
• Testware reusability
• Retesting, regression testing
• Test control, reporting
• Measures and metrics for testing
• Incident (defect) management
• Testware configuration
Risk at Testing
What is risk? Consider it against limited time and budget: defect -> failure -> risk.
• Defect: a specific cause of failure (related to the product)
• Failure: the actual failure of a component or system to deliver its expected function (related to events)
• Risk: the cost caused by failure (a factor that could result in future negative consequences)

RISK = likelihood of failure x damage
Likelihood of failure = frequency of use x chance of fault
Risk-Based Strategy - Procedure
• Identify items where risks are possible
• Analyze whether items are important, complex and potential sources of defects (determine the priority): RISK = LIKELIHOOD x IMPACT
• Plan to mitigate risk based on the risk analysis (a test strategy to reduce or mitigate risk)
• Monitor the risks and the risk mitigation actions
Risk Management: Identify Risk -> Analyze Risk -> Planning Risk -> Tracking Risk
Risk-Based Strategy – Identify Risk
• Classify functional / technical items
• High-level test items based on the requirements
• Low-level test items based on the architecture
• A brainstorming session would be helpful
• Fewer than 35 risk items are recommended
Risk item      Likelihood    Impact
Risk item 1
Risk item 2
Risk item 3
Risk-Based Strategy – Analyze Risk
• Risk = likelihood x impact
• Determine risk factors based on the defect patterns of previous projects
• Classify risks into technical risk (unit/integration) and business risk (acceptance)
Likelihood = technical risk (addressed in development testing)
Impact = business risk (addressed in acceptance testing)
Risk-Based Strategy – Analyze Risk
Risk factors (experience-based; from defect patterns / history):
• Factors for likelihood: complexity, new development (level of re-use), interrelationship (number of interfaces), size (lines of code), new technology (difficulty), inexperience of the development team
• Factors for business impact: user importance (selling item), financial damage (e.g. safety), usage intensity, external visibility, customization needed
• Weighting can be applied
Risk-Based Strategy – Analyze Risk
• All possible stakeholders participate; identify stakeholders (internal and external to the project)
• Each stakeholder rates how risky each item is
• Risk level: 9 = critical, 5 = high, 3 = normal, 0 = none
• Try to reach a consensus on the risk level through the meeting
[Table: the risk factors applied to each risk item, producing a likelihood score and an impact score per item.]
Risk-Based Strategy – Analyze Risk: Risk Matrix
The risk matrix plots likelihood against impact and divides the items into four areas (the quadrant boundaries need discussion):
• STA – Severe Test Area
• STTA – Strong Test Area
• ITA – Intensive Test Area
• FTA – Fundamentals Test Area
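The quadrant assignment above can be sketched in a few lines. This is a minimal illustration, not the course's tool: the 0-9 scale matches the stakeholder rating scale from the analysis step, but the threshold of 5 and the mapping of the two off-diagonal quadrants to STTA (high impact) vs. ITA (high likelihood) are assumptions for the example.

```python
# Sketch: classify risk items into the four test areas of the risk matrix.
# Scale 0-9 follows the stakeholder ratings; the threshold (5) and the
# STTA/ITA axis assignment are assumptions for illustration.

def classify(likelihood: int, impact: int, threshold: int = 5) -> str:
    """Map one risk item onto a test area of the risk matrix."""
    high_l = likelihood >= threshold
    high_i = impact >= threshold
    if high_l and high_i:
        return "STA"   # Severe Test Area: test first and deepest
    if high_i:
        return "STTA"  # Strong Test Area: high impact, lower likelihood
    if high_l:
        return "ITA"   # Intensive Test Area: high likelihood, lower impact
    return "FTA"       # Fundamentals Test Area: basic testing only

# Hypothetical risk items with (likelihood, impact) scores:
risks = {"login": (9, 9), "report export": (3, 9), "tooltip": (2, 2)}
for item, (l, i) in risks.items():
    print(f"{item}: {classify(l, i)}")
```

Sorting items by `likelihood * impact` inside each area then gives the test priority order used in the planning step.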
Risk-Based Testing – Planning Risk
Example of test design techniques per area (low-level test; exact allocations need discussion):
• STA: formal testing with boundary value analysis and decision testing; decision coverage 70%; code inspection
• STTA: decision coverage 70%; peer review
• ITA: decision coverage 70%
• FTA: indirect testing (test more if possible)
Risk-Based Testing – Planning Risk
Example of test design techniques per area (high-level test; exact allocations need discussion):
• STA: use case testing (all flows included), decision table testing, boundary value analysis, pairwise testing
• STTA: use case testing (basic flow only), equivalence partitioning
• ITA: use case testing (all flows included), equivalence partitioning
• FTA: use case testing (basic flow only)
Risk-Based Testing – Planning Risk

Risk level    Priority
1. STA        1
2. STTA       2
3. ITA        3
4. FTA        4

For each risk level, the plan then fixes: design technique, exit criteria, test design review, HR allocation, and regression testing.
Risk-Based Testing Strategy
• Test specification techniques
• Exit criteria
• Test basis / test design review
• Positioning test members
• Re-testing
• Regression testing
• Test control, reporting
• Level of test independence
• Determining priority
• Integration procedure
• Mandatory/optional standards
• Test environment
• Test automation
• Testware reusability
• Measures & metrics for testing
• Incident management
• Testware configuration
Risk-Based Testing – Planning Risk
• Risk matrix
• Analyze requirement coverage by test case
Risk-Based Test Strategy - Macroscopic
A macroscopic risk-based test strategy in the test plan: each test level has its own risk-based strategy, weighted by quality characteristic.

Test level                 | Functionality (40) | Connectivity (10) | Reliability (20) | Recoverability (5) | Performance (15) | Suitability (20)
Unit test                  |
SW integration testing     |
SW/HW integration testing  |
System test                |
Acceptance test            |
Field test                 |
Test Estimation
Estimation methods:
• Estimation of test effort based on metrics from similar or previous projects
• Estimation by the task owners and experts: based on the test assignment, risk analysis and test strategy; collection of experience-based estimates from task owners and experts for the detailed tasks; consensus on the estimates reached with task owners and experts collectively
• Other estimation methods: TPA (test point analysis), estimation tools
Test Environment
Define the physical and hardware test environments:
• CPU, RAM, VGA card, NIC, etc.
• Other products or equipment that the system interacts with
• Test equipment and infrastructure (servers, chambers, network traffic generators, telecommunication network facilities, broadcasting networks, etc.)
• Test automation tools (optional)
Define the software test environment:
• OS, browsers, DirectX, application software
Test Analysis & Design Activities
• Review the test basis
• Identify test requirements, test conditions and test data, based on analysis of the test items, their specification, behavior and structure
• Design and prioritize test cases
• Evaluate the testability of the requirements and test objects
• Design the test environment set-up and identify any required infrastructure and tools
Test Design Procedure
• Design tests against test conditions. Test condition: an item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
• Specify test cases
• Specify test procedures (or test scripts)
Test requirement documentation (test basis): requirements, architecture, design, interfaces, programming specifications, program code, user manual
Test Cases
Definition: a set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition.
• Describes the test condition to be tested, plus the input values and expected results
• A set of information, or an entity, for test execution
Objectives of test cases:
• To detect as many defects as possible with the test cases
• To guarantee test coverage
Test case - example (columns: ID, Class, Pre-Cond., Step, Expected Result, Pass/Fail, Traceability, Importance, Others)

ID: xx.1.1
• Class: searching phone number by name
• Pre-conditions: app loaded; test data (kwon wonil, k wonil, 03476, *&^$%#@, null)
• Steps: move to the phone book page; start searching for a phone number using the input data
• Expected result: a normal name is found with its phone number; an abnormal name is not found
• Pass/Fail: not tested
• Traceability: design spec. yy.7.3
• Importance: medium

ID: xx.1.2
• Class: searching phone number by name
• Pre-conditions: app loaded; test data (kwon wonil, k wonil, 03476, *&^$%#@, null)
• Steps: click the menu button, find "4. phone book" and click to open that page; select "searching phone number" from the menu and click; input each test data item and click "OK"
• Expected result: when the input name is "kwon wonil", the name is found; when the input name is "K wonil", a "can't find the name" message displays for 2 seconds; "03739"…
• Pass/Fail: blocked
• Traceability: design spec. yy.7.3
• Importance: medium
Test Case and Execution Scope
The requirements act as the test oracle for the implemented system. Comparing the two distinguishes four situations:
• The requirement exists but is not implemented
• Implemented according to the requirement, but malfunctions
• Implemented without a requirement
• Implemented without a requirement, and malfunctions
Categories of Test Design Techniques
• Specification-based techniques: models, either formal or informal, are used for the specification; from these models, test cases can be derived systematically
• Structure-based techniques: information about how the software is constructed is used to derive the test cases, for example code and design; the extent of coverage of the software can be measured for the existing test cases, and further test cases can be derived systematically to increase coverage
• Experience-based techniques: the knowledge of testers, developers, users and other stakeholders about the software, its usage and its environment; knowledge about likely defects and their distribution; documentation needed
Specification-Based Techniques
• Equivalence partitioning
• Boundary value analysis
• Pairwise testing
• Decision table testing
• State transition testing
• Use case testing
Write Test Cases
[Specification] "People aged 16 to 65 have to pay tax. If income is less than RM20,000, tax is due at 20%; otherwise it is due at 50%. If they have a child, a 10% tax reduction is applied."
Equivalence Partitioning (EP)
Input / output spaces are divided into groups that are expected to exhibit similar behavior, so they are likely to be processed in the same way.
• Tests can be designed to cover the partitions
• EP can be applied to both valid and invalid data
Application procedure:
• Divide the input into equivalence classes that share a characteristic, using the given information about the system (e.g. the program specification)
• Select a data value from each partition
• Make test cases to cover all valid equivalence classes
• Make test cases to cover all invalid equivalence classes
EP - Example
Requirement partitions: V < 2.50; 2.50 <= V < 2.80; 2.80 <= V < 3.30; 3.30 <= V < 3.80; 3.80 <= V < 4.30; 4.30 <= V
(Partition boundaries: 2.50 V, 2.80 V, 3.30 V, 3.80 V, 4.30 V)

Test case | Input | Partition          | Expected result
1         | 2.75  | 2.50 <= V < 2.80   |
2         | 2.98  | 2.80 <= V < 3.30   |
3         | 3.57  | 3.30 <= V < 3.80   |
4         | 4.03  | 3.80 <= V < 4.30   |
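The partition lookup in the example above can be sketched directly from the requirement. This is a minimal illustration of the EP idea (one representative input per partition), not part of the original material:

```python
# Sketch: map each voltage input to its equivalence partition, so one
# representative value per partition suffices as a test input.

PARTITIONS = [
    ("V < 2.50",         lambda v: v < 2.50),
    ("2.50 <= V < 2.80", lambda v: 2.50 <= v < 2.80),
    ("2.80 <= V < 3.30", lambda v: 2.80 <= v < 3.30),
    ("3.30 <= V < 3.80", lambda v: 3.30 <= v < 3.80),
    ("3.80 <= V < 4.30", lambda v: 3.80 <= v < 4.30),
    ("4.30 <= V",        lambda v: v >= 4.30),
]

def partition_of(v: float) -> str:
    """Return the name of the partition a voltage falls into."""
    for name, member in PARTITIONS:
        if member(v):
            return name
    raise ValueError(f"no partition for {v}")

# The four representatives from the example table:
for v in (2.75, 2.98, 3.57, 4.03):
    print(v, "->", partition_of(v))
```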
EP - Practice
Use EP to design test cases for the specification given previously.

Equivalence classes:

Attribute | Rule           | Valid ECs          | Invalid ECs
Age       | Range          | 16-65              | <16, >65
Income    | Boundary value | <20,000; >=20,000  | -
Child     | Boolean        | Y, N               | -

Test cases:

Attribute | 1 | 2 | 3 | 4 | 5 | 6
Age       |
Income    |
Child     |
Tax rate  |
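A hypothetical implementation of the tax specification makes the expected results of the practice cases concrete. The spec leaves open how the 10% child reduction combines with the base rate; the sketch below assumes a flat 10-percentage-point reduction, which is an assumption, not part of the original exercise:

```python
# Hypothetical implementation of the tax practice specification:
# ages 16-65 pay tax; income below RM20,000 taxed at 20%, otherwise 50%;
# a child reduces the rate by 10 percentage points (assumed interpretation).

def tax_rate(age: int, income: float, has_child: bool) -> float:
    if not 16 <= age <= 65:
        return 0.0                 # outside the taxable age range
    rate = 0.20 if income < 20_000 else 0.50
    if has_child:
        rate -= 0.10               # assumed: flat 10-point reduction
    return rate

# One check per equivalence class combination:
assert tax_rate(30, 10_000, False) == 0.20
assert tax_rate(30, 25_000, False) == 0.50
assert abs(tax_rate(30, 25_000, True) - 0.40) < 1e-9
assert tax_rate(10, 25_000, False) == 0.0   # invalid age class
```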
Boundary Value Analysis
Test the maximum and minimum values of each partition.
Partitions: V < 2.50; 2.50 <= V < 2.80; 2.80 <= V < 3.30; 3.30 <= V < 3.80; 3.80 <= V < 4.30; 4.30 <= V
Boundaries: 2.50 V, 2.80 V, 3.30 V, 3.80 V, 4.30 V

TC        | 1    | 2    | 3    | 4    | 5    | 6    | 7    | 8    | 9
Input     | 2.49 | 2.50 | 2.79 | 2.80 | 3.29 | 3.30 | 3.79 | 3.80 | 4.29
Exp. res. |
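The boundary inputs above can be derived mechanically from the partition edges. This sketch uses the two-value variant of BVA (the boundary and the value one step below it); the step size of 0.01 is an assumption based on the precision used in the table:

```python
# Sketch: derive BVA test inputs from partition edges. For each boundary b
# we test b - step and b itself (two-value boundary value analysis).

def boundary_values(edges, step=0.01):
    """Return the BVA test inputs for a sorted list of partition edges."""
    values = []
    for b in edges:
        values.extend([round(b - step, 2), b])
    return values

edges = [2.50, 2.80, 3.30, 3.80, 4.30]
print(boundary_values(edges))
```

Note this yields ten values (it also includes 4.30 itself), one more than the nine-case table above.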
Pairwise Testing
According to observation, interactions of two factors cause most defects, so it suffices to cover all pairs of values between any two factors. Each value of a parameter has to be paired with each value of every other parameter at least once.
Parameters:
• Mode: Sequence, Sequential repeat
• Setting: Hold, Standby
• Equalizer: Off, Live

All combinations (2 x 2 x 2 = 8):

Mode              | Setting | Equalizer
Sequence          | Hold    | Off
Sequence          | Hold    | Live
Sequence          | Standby | Off
Sequence          | Standby | Live
Sequential repeat | Hold    | Off
Sequential repeat | Hold    | Live
Sequential repeat | Standby | Off
Sequential repeat | Standby | Live

Pairwise reduction (every pair of values still appears at least once, in only 4 cases):

Mode              | Setting | Equalizer
Sequence          | Hold    | Off
Sequence          | Standby | Live
Sequential repeat | Hold    | Live
Sequential repeat | Standby | Off
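The defining property of the reduced set, that every value pair across any two parameters is still covered, can be checked mechanically. A small sketch of that check (not part of the original material):

```python
from itertools import combinations, product

# Sketch: verify that the 4 reduced test cases cover every pair of values
# across the three parameters, which full combination needs 8 cases for.

parameters = {
    "Mode":      ["Sequence", "Sequential repeat"],
    "Setting":   ["Hold", "Standby"],
    "Equalizer": ["Off", "Live"],
}

pairwise_set = [
    ("Sequence", "Hold", "Off"),
    ("Sequence", "Standby", "Live"),
    ("Sequential repeat", "Hold", "Live"),
    ("Sequential repeat", "Standby", "Off"),
]

names = list(parameters)
for (i, a), (j, b) in combinations(enumerate(names), 2):
    required = set(product(parameters[a], parameters[b]))
    covered = {(tc[i], tc[j]) for tc in pairwise_set}
    assert covered == required, f"uncovered pairs for ({a}, {b})"
print("all value pairs covered by", len(pairwise_set), "cases")
```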
Decision Table Testing
If the input conditions (or actions) can each be either true or false, the decision table contains the triggering conditions, often combinations of true and false for all input conditions, and the resulting actions for each combination of conditions.
Advantages:
• It detects problems in the test basis, such as the requirements, while the test cases are being designed (enables meaningful participation in review meetings)
• It detects incompleteness or ambiguity in the test basis
Disadvantages:
• It may cost much effort and time to build
• It is hard to design when the system is complicated, and it is easy to make mistakes in the test cases
DT - Example: ATM Logic

Test condition / Test case ID   1  2  3  4  5
Card availability               N  Y  Y  Y  Y
Password match                  -  N  N  Y  Y
3-times password mismatch       -  N  Y  N  N
Deposit availability            -  -  -  N  Y

Expected result
Card return                     Y  N  N  N  N
Request password input          N  Y  N  N  N
Card retained                   N  N  Y  N  N
Request withdrawal amount       N  N  N  Y  N
Withdraw                        N  N  N  N  Y
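The ATM decision table above can be expressed as executable rules, which makes the don't-care entries ('-') explicit. A minimal sketch, with '-' modelled as `None` (this encoding is an illustration, not part of the original slides):

```python
# Sketch: the ATM decision table as a rule list. Each rule pairs a tuple of
# condition values (None = don't care) with the action marked 'Y' for it.

RULES = [
    # (card_ok, pwd_match, three_mismatches, deposit_ok) -> action
    ((False, None,  None,  None),  "card return"),
    ((True,  False, False, None),  "request password input"),
    ((True,  False, True,  None),  "card retained"),
    ((True,  True,  False, False), "request withdrawal amount"),
    ((True,  True,  False, True),  "withdraw"),
]

def decide(card_ok, pwd_match, three_mismatches, deposit_ok):
    """Return the action of the first rule whose conditions all match."""
    actual = (card_ok, pwd_match, three_mismatches, deposit_ok)
    for conditions, action in RULES:
        if all(c is None or c == a for c, a in zip(conditions, actual)):
            return action
    raise ValueError("no rule matches")

# One test case per column of the decision table:
assert decide(False, False, False, False) == "card return"
assert decide(True,  True,  False, True)  == "withdraw"
```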
DT – Example 2 – Logical TC

Business rule / Cause (test condition)                                  1  2  3  4  5  6
Valid paper money                                                       N  Y  -  -  -  -
Valid card                                                              -  -  N  Y  Y  Y
Valid password (debit card) or no password needed (credit card)         -  -  -  N  Y  Y
Valid money                                                             -  -  -  -  N  Y

Effect (expected result)
Reject paper money                                                      Y  N  N  N  N  N
Reject card                                                             N  N  Y  Y  N  N
Request lower money input                                               N  N  N  N  Y  N
Sell internet access                                                    N  Y  N  N  N  Y
Example 2 - Physical TC

TC#    | Test condition                              | Expected result
D.DT.1 | Invalid paper money                         | Reject the money; user can't access the internet
D.DT.2 | Valid money                                 | Accept the money; user is allowed to use the internet
D.DT.3 | Invalid card                                | Reject the card; display "invalid cash card"
D.DT.4 | Valid cash card, invalid password entered   | Reject the card; display "password is wrong"; user can't access the internet
State Transition Testing
A test technique to verify the relationships between transitions, events, actions, states and conditions (to cover all conditions between states and events, both valid and invalid).
• Verifies whether the system or software fits the state transition model
• The defects found by state transition testing are classified into state, transition, guard and event defects
• Defects can be found in a system (or software) or in a specification
STT – Classification of Defects
Faults in the model:
• Missing initial state
• A guard placed in a state instead of in a transition
• Overlapping guards
• Can be found by inspection or static analysis
Faults in the implementation:
• Extra / missing / corrupt states
• Missing / wrong actions
• Sneak paths, trap doors, back doors
• Can be found by dynamic testing
STT - Example: Vending Machine (Service the Customer)
States: Standby, Accepting Coins, Accepting Soda Selection
Transitions:
• evPowerOn -> Standby
• Standby: evCoinDrop [deposit < price] -> Accepting Coins
• Standby: evCoinDrop [deposit >= price] -> Accepting Soda Selection
• Accepting Coins: evCoinDrop [deposit < price] -> Accepting Coins
• Accepting Coins: evCoinDrop [deposit >= price] -> Accepting Soda Selection
• Accepting Coins: evCancel / ReturnDeposit() -> Standby
• Accepting Soda Selection: evSodaSelectButton / ReleaseCan(); ReturnChange(); UpdateStock() -> Standby
• Accepting Soda Selection: evCancel / ReturnDeposit(); ResetLights() -> Standby
STT – State Transition Table

Event \ State                 | Standby                  | Accepting Coins          | Accepting Soda Selection
evCoinDrop [deposit >= price] | Accepting Soda Selection | Accepting Soda Selection | -
evCoinDrop [deposit < price]  | Accepting Coins          | Accepting Coins          | -
evSodaSelectButton            | -                        | -                        | Standby
evCancel                      | -                        | Standby                  | Standby
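The state transition table above is directly executable as a table-driven state machine, which is how the V01..V16 cases can be automated. A sketch (actions are recorded as strings rather than performed; the price of 70 comes from the test-case tables that follow):

```python
# Sketch: the vending-machine model as an executable state machine.
# Illegal events (empty cells in the state transition table) raise an error.

PRICE = 70

class VendingMachine:
    def __init__(self):
        self.state = "Standby"
        self.deposit = 0
        self.log = []                      # recorded actions

    def coin_drop(self, amount):
        assert self.state in ("Standby", "Accepting Coins"), "illegal event"
        self.deposit += amount
        if self.deposit >= PRICE:
            self.log.append("SetLights")
            self.state = "Accepting Soda Selection"
        else:
            self.state = "Accepting Coins"

    def soda_select(self):
        assert self.state == "Accepting Soda Selection", "illegal event"
        self.log += ["ReleaseCan", f"ReturnChange({self.deposit - PRICE})",
                     "UpdateStock", "ResetLights"]
        self.deposit = 0
        self.state = "Standby"

    def cancel(self):
        assert self.state in ("Accepting Coins",
                              "Accepting Soda Selection"), "illegal event"
        self.log.append(f"ReturnDeposit({self.deposit})")
        self.deposit = 0
        self.state = "Standby"

# Test case V01: one coin >= price, then select a soda.
m = VendingMachine()
m.coin_drop(100)
assert m.state == "Accepting Soda Selection"
m.soda_select()
assert m.state == "Standby" and "ReturnChange(30)" in m.log
```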
STT – Transition Tree
[Diagram: a transition tree rooted at (1) Standby, expanding through (2) Accepting Soda Selection and (3) Accepting Coins, and enumerating paths 1-16.]
STT – Valid (Legal) Test Cases (soda price = 70)

V01: Standby --evCoinDrop [deposit >= price] (deposit -> 100) / SetLights()--> Accepting Soda Selection --evSodaSelectButton / ReleaseCan(), ReturnChange(30), UpdateStock(9), ResetLights()--> Standby
V02: Standby --evCoinDrop [deposit >= price] (deposit -> 100) / SetLights()--> Accepting Soda Selection --evCancel / ReturnDeposit(100), ResetLights()--> Standby
V03: Standby --evCoinDrop [deposit < price] (deposit -> 50)--> Accepting Coins --evCoinDrop [deposit >= price] (deposit -> 50) / SetLights()--> Accepting Soda Selection
V04: Standby --evCoinDrop [deposit < price] (deposit -> 50)--> Accepting Coins --evCoinDrop [deposit < price] (deposit -> 10)--> Accepting Coins
::
V16: Accepting Coins --evCancel / ReturnDeposit()--> Standby
STT – Invalid (Illegal) Test Cases (soda price = 70)

TC   | Precondition | State                    | Event              | Expected result
IV01 | V01          | Standby                  | evSodaSelectButton | -
IV02 | V01          | Standby                  | evCancel           | -
IV03 | V04          | Accepting Coins          | evSodaSelectButton | -
IV04 | V03..V15     | Accepting Soda Selection | evCoinDrop         | -
IV05 | V03..V15     | Accepting Soda Selection | evCoinDrop         | -

Guard test cases:

ID  | Precondition | State                    | Event                      | Guard                           | Expected result
G01 | V04          | Accepting Coins          | evCoinDrop (deposit -> 10) | deposit = price                 | Accepting Soda Selection
G02 | G01          | Accepting Soda Selection | evCoinDrop (deposit -> 10) | deposit >= price (deposit = 80) | Accepting Soda Selection
STT – Test Procedures (Scripts)
Each test procedure chains the test cases into an executable sequence:
• TP1: V01, IV1, IV2, V02, V03, V06, V07, V10
• TP2: V04, IV03, V05, V11
• TP3: V08, V09, V12, V13, V16
• TP4: V04, V14, G01, G02
• TP5: V15, IV3, IV4
Use Case Testing
Use case testing is applicable when use case models exist; test cases are derived from the use case specifications or activity diagrams.
• Use case specification: the basic unit for the system. A use case describes interactions between actors (including users) and the system -> individual use cases for unit testing
• Activity diagram: describes the interaction between use cases -> used for integration testing of use cases

Use Case Testing (cont'd)
• A use case describes the action sequences of a system (the order of actions)
• Each action sequence provides a specific actor with an observable result
• A use case is described in sentences
• A use case describes the interaction between an actor and the system
Use Case Testing - UML
• UML does not describe how events are structured
• Usability and testability depend on how well the use case model is established; a poorly established model is hard to test
• Basic flow
• Alternative flows
Use Case Testing - Example
Transfer (mobile banking) use case: event flows and test scenarios
• Basic flow: S0 – normal transfer
• Alternative flows:
  S1 – invalid bank chip
  S2 – invalid password (request try again)
  S3 – 3 times invalid password
  S4 – invalid account
  S5 – invalid transfer password (request try again)
  S6 – 3 times invalid transfer password
  S7 – insufficient balance
Test cases for the transfer use case (V = valid, I = invalid)

TC    | Scenario                | Chip | Internet pwd | Account | Transfer pwd | Transfer vs. balance | Expected result
TC0   | S0                      | V    | V            | V       | V            | transfer < balance   | successful transfer
TC1   | S1                      | I    | N/A          | N/A     | N/A          | N/A                  | msg: invalid chip
TC2   | S2 (trial chances > 1)  | V    | I            | N/A     | N/A          | N/A                  | msg: pwd error, try again
TC2.1 | S2 (trial chances = 1)  | V    | I            | N/A     | N/A          | N/A                  | msg: pwd error, try again
TC3   | S3 (trial chances = 0)  | V    | I            | N/A     | N/A          | N/A                  | msg: pwd error
TC4   | S4                      | V    | V            | I       | N/A          | N/A                  | msg: account error
TC5   | S5 (trial chances > 1)  | V    | V            | V       | I            | N/A                  | msg: pwd error, try again
TC5.1 | S5 (trial chances = 1)  | V    | V            | V       | I            | N/A                  | msg: pwd error, try again
TC6   | S6 (trial chances = 0)  | V    | V            | V       | I            | N/A                  | msg: pwd error
TC7   | S7                      | V    | V            | V       | V            | transfer > balance   | msg: insufficient balance error
Structure-Based Techniques (White-Box Testing)
• Derive test cases from the software code or design, based on an identified structure of the software or system
• Create more test cases to increase test coverage: control flow testing, basis path testing, elementary comparison testing
• Component level – the structure of the code itself (statements, decisions or branches)
• Integration level – the call tree structure (call structure diagram)
• System level – the structure of the menus, business processes or web pages
Coverage
Coverage is the extent to which a structure has been exercised by a test suite:
• Used to measure the thoroughness of testing
• If coverage is not 100%, more tests may be designed to cover the missing items and thereby increase coverage
Classification of coverage:
• Structure-based (formal) – statement coverage, branch/decision coverage, condition coverage, condition/decision coverage, MC/DC, MCC
• Extension of the concept: the coverage that a formal technique guarantees
• Informal coverage: requirement coverage, functional coverage, entry/exit coverage, etc.
Informal Coverage
Create test cases based on the requirements and functional specification:
• Equivalence coverage: number of covered partitions / number of all partitions
• Boundary value coverage: number of covered boundary values / number of all boundary values
• Decision table testing coverage: number of covered combinations / number of all combinations
Requirement / test case coverage (traceability matrix)
Columns: TC1.1, TC1.2, TC1.3, TC1.4, TC1.5, TC1.1, TC1.6
• RS1.1: 1 1 1 2 1 (total 6)
• RS1.2: 1 2 1 1 1 (total 6)
• RS1.3: 1 1 2 1 2 (total 7)
• RS1.4: 2 1 2 (total 5)
• Column totals: 3 1 3 3 5 4 5
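Requirement coverage can be computed directly from such a traceability matrix: a requirement counts as covered when at least one test case touches it. A minimal sketch; since the original table's cell placement is not fully recoverable, the zero positions below are illustrative:

```python
# Sketch: requirement coverage from a traceability matrix. Cells count how
# many times a test case touches a requirement; a requirement is covered
# when at least one of its cells is non-zero. Cell placement is illustrative.

matrix = {                      # requirement -> hits per test case column
    "RS1.1": [1, 1, 1, 2, 1, 0, 0],
    "RS1.2": [1, 2, 1, 1, 1, 0, 0],
    "RS1.3": [1, 1, 2, 1, 2, 0, 0],
    "RS1.4": [2, 0, 0, 0, 0, 1, 2],
}

covered = [req for req, hits in matrix.items() if any(hits)]
coverage = len(covered) / len(matrix)
print(f"requirement coverage: {coverage:.0%}")
```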
Condition vs. Decision Coverage
Does "condition" coverage include "decision (branch)" coverage?

if (x >= -2 and y < 4) then
    x = y - 7
    y = x + y - 5

(x, y)            | (-3,-2) | (0,6) | (2,1)
x >= -2           |    F    |   T   |   T
y < 4             |    T    |   F   |   T
x >= -2 and y < 4 |    F    |   F   |   T

The pair (-3,-2) and (0,6) already gives each condition both a true and a false outcome (100% condition coverage), yet the decision is false in both cases, so the true branch is never executed. Condition coverage therefore does not include decision coverage; the third point (2,1) is needed to cover the decision.
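The table above can be replayed in a few lines, confirming that the first two data points give full condition coverage but no decision coverage. A small sketch (not part of the original slides):

```python
# Sketch: evaluate the three data points against the decision
# `x >= -2 and y < 4`, and show that the first two points alone give
# full condition coverage while leaving the True branch untested.

points = [(-3, -2), (0, 6), (2, 1)]

for x, y in points:
    c1, c2 = x >= -2, y < 4
    print((x, y), c1, c2, c1 and c2)

# Using only the first two points:
outcomes_c1 = {x >= -2 for x, y in points[:2]}
outcomes_c2 = {y < 4 for x, y in points[:2]}
decisions   = {(x >= -2) and (y < 4) for x, y in points[:2]}
assert outcomes_c1 == {True, False}    # condition 1: both outcomes seen
assert outcomes_c2 == {True, False}    # condition 2: both outcomes seen
assert decisions == {False}            # decision coverage NOT achieved
```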
Classification of Coverage (Dpoint = A AND B)

Decision coverage:
Dpoint  A  B
0       1  0
1       1  1

Condition coverage:
Dpoint  A  B
0       1  0
0       0  1

Condition/decision coverage:
Dpoint  A  B
0       0  0
1       1  1

Modified condition/decision coverage (MC/DC):
Dpoint  A  B
0       1  0
0       0  1
1       1  1

Multiple condition coverage (MCC):
Dpoint  A  B
1       1  1
0       1  0
0       0  1
0       0  0
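The MC/DC property, that each condition independently affects the decision outcome, can be checked mechanically against the three-row table above: for each condition there must be a pair of rows where only that condition changes and the decision changes too. A small sketch of that check (not part of the original slides):

```python
# Sketch: check the MC/DC property for D = A AND B using the three
# test rows from the MC/DC table above.

rows = [(1, 0), (0, 1), (1, 1)]          # (A, B) test rows
decision = lambda a, b: a and b

def independently_affects(index):
    """True if some pair of rows differs only at `index` and flips D."""
    for r1 in rows:
        for r2 in rows:
            others_equal = all(r1[i] == r2[i]
                               for i in range(len(r1)) if i != index)
            if (others_equal and r1[index] != r2[index]
                    and decision(*r1) != decision(*r2)):
                return True
    return False

assert independently_affects(0)   # A: rows (0,1) vs (1,1) flip the decision
assert independently_affects(1)   # B: rows (1,0) vs (1,1) flip the decision
```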
MCC Practice
[Condition] A userID can be a combination of characters and numbers, with more than 5 and under 15 letters, and the first letter must be a character (not a number). A password has to be more than 5 and under 15 letters and has to combine characters and numbers.

userID = CheckUserID(strUserID);
userPassword = CheckUserPassword(strUserPassword);
if (userID && userPassword) then
    RegisterUserProfile(strUserID, strUserPassword);
else
    errMessage(msgID_101);
MCC - Practice
Dpoint = A AND B; decision table for multiple condition coverage:

Dpoint (A AND B) | A (userID) | B (userPassword)
1                | 1          | 1
0                | 1          | 0
0                | 0          | 1
0                | 0          | 0

Dpoint = A AND B; physical test cases:

TC   | (A, B) | Input                                     | Expected result
TC-1 | (1,1)  | UserID = st1234, Password = pw100000test  | user can access
TC-2 | (1,0)  | UserID = st1234, Password = testing       | error message
TC-3 | (0,1)  | UserID = 1234st, Password = pw100000test  | error message
TC-4 | (0,0)  | UserID = st, Password = test              | error message
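Hypothetical implementations of `CheckUserID` and `CheckUserPassword` make the four MCC cases executable. The spec's "more than 5 and under 15 letters" is read here as lengths 6..14, which is an assumption; the functions themselves are illustrations, not the original exercise code:

```python
# Hypothetical CheckUserID / CheckUserPassword for the MCC practice.
# Length bounds 6..14 are an assumed reading of "more than 5, under 15".

def check_user_id(s: str) -> bool:
    return (5 < len(s) < 15
            and s.isalnum()
            and s[0].isalpha()                   # first letter a character
            and any(c.isalpha() for c in s)
            and any(c.isdigit() for c in s))     # letters AND digits

def check_user_password(s: str) -> bool:
    return (5 < len(s) < 15
            and s.isalnum()
            and any(c.isalpha() for c in s)
            and any(c.isdigit() for c in s))

# The four MCC test cases from the table:
assert check_user_id("st1234") and check_user_password("pw100000test")  # TC-1
assert check_user_id("st1234") and not check_user_password("testing")   # TC-2
assert not check_user_id("1234st")                                      # TC-3
assert not check_user_id("st") and not check_user_password("test")      # TC-4
```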
Classification of Coverage

Criterion (covered at least once)                                                        | SC | DC | CC | C/DC | MC/DC | MCC
All statements in the program are executed                                               | √  | √  | √  | √    | √     | √
All possible decision-point outcomes (true/false)                                        |    | √  |    | √    | √     | √
All possible outcomes (true/false) of each condition in a decision point                 |    |    | √  | √    | √     | √
Each condition in a decision point independently affects the decision outcome,           |    |    |    |      | √     | √
regardless of the other conditions
All possible combinations of the conditions in a decision point                          |    |    |    |      |       | √
Classification of Coverage
Subsumption hierarchy (each level includes the ones below it): Multiple Condition Coverage ⊃ Modified Condition/Decision Coverage ⊃ Condition/Decision Coverage ⊃ Decision Coverage ⊃ Statement Coverage
Experience-Based Testing
• Error guessing
• Checklist-based
• Exploratory testing approach
• Script-based testing approach
• Classification tree method
Experience-Based Testing
• An intuitive testing approach based on experience with similar software or technology
• Use it together with formal techniques (complementary)
• Can detect defects that are hard to find with formal techniques (different types of defects)
• The detected defects are not consistent; they depend on the experience of the tester (hard to manage)
Error Guessing
• Design test cases using experience with similar software, intuition, and the ability of the tester
• Try to record all defects or failures from previous stages or versions, and design test cases to attack those defects or failures
• Test cases can be designed from likely defects, common sense, experience, defect-list data, and the reasons for past software failures
• To reinforce formal techniques, apply it after the formal techniques (supplementary)
Checklist-Based
• General checklist: a list of test items to be performed
• Functional (black-box) checklist: has abstraction layers and grouping, from a top-level function of the system structure down to a bottom-level unit or component
• System component checklist: has different levels of grouping, from high-level subsystems or modules down to statements or data
Checklist vs. Test Cases
• Checklists generally reflect experience and know-how
• Test cases are based on design techniques
• Documented test cases need to reflect the experience and know-how captured in checklists
• A checklist can be used for reviewing the test basis (development documentation)
• A checklist is a tool for static testing (a test case is a tool for dynamic testing)
Exploratory Testing
• Not a technique, but an approach: you test while you explore
• A testing approach in which you design, execute, record and learn within a given period of time, using a test charter that includes the test objectives and items
• Instead of designing test cases first, you execute the test items (program) first and build the test design from the knowledge learned while executing them. Useful when: there is no specification; there is a lack of time to test; you need to complement formal test design; you want confidence that you have detected all fatal defects
• It is similar to ad-hoc testing, guerrilla testing or intuitive testing, but it has tasks, objectives and deliverables

Contents of exploratory testing:
• Product exploration
• Test design
• Test execution
• Heuristics
• Reviewable results (deliverables): notes or records about the tested items; notes or records about detected defects or failures; a summary description of how the testing was performed
Test procedure: identify the purpose of the product; identify functions; identify areas of potential instability; test each function and record problems.
Test Level Process
 Test Planning and Control
 Test Analysis and Design
 Test Implementation and Execution
 Evaluating Exit Criteria and Reporting
 Test Closure Activities

Test Planning and Control
 Define test scope, objectives and goals
 Test strategy: risk analysis, make strategy
 Exit criteria, estimation, organization, scheduling
 Test management and control
 Monitoring (reporting): report planning/design, status reporting

Test Analysis and Design
 Review test basis; identify test conditions; decide test design techniques; evaluate testability; set up environment

Test Implementation and Execution
 Specify test cases, priorities, data and procedures; pre-test; (re-)test execution

Evaluating Exit Criteria
 Check exit criteria; write summary report

Test Closure Activities
 Testware configuration; evaluate test process
Activities of test implementation
 Developing, implementing and prioritizing test cases
 Creating test data
 Creating test procedures or suites for efficient test execution
 Preparing test harnesses (drivers/stubs)
 Writing automated test scripts
 Evaluation of testability

Evaluation of testability
 Completeness: verify that everything needed for test execution is prepared
 Primary functionality: do the main functions perform well?
 Verifying entry criteria for test execution
 Verifying the test environment
  Test infrastructure (hardware, tools, test data)
  Other applications (stubs, drivers)
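The driver/stub harness mentioned above can be sketched as follows: a stub replaces a dependency that is not yet available, and a driver exercises the unit under test. All names (`OrderService`, `PaymentGatewayStub`) are hypothetical examples, not part of the slides.

```python
# Sketch of a test harness with a stub and a driver.
# All class and method names are illustrative assumptions.

class PaymentGatewayStub:
    """Stub: returns canned responses instead of calling a real gateway."""
    def charge(self, amount):
        return {"status": "approved", "amount": amount}

class OrderService:
    """Unit under test: depends on a payment gateway."""
    def __init__(self, gateway):
        self.gateway = gateway

    def place_order(self, amount):
        result = self.gateway.charge(amount)
        return result["status"] == "approved"

def driver():
    """Driver: sets up the stub, calls the unit, checks the result."""
    service = OrderService(PaymentGatewayStub())
    assert service.place_order(100) is True
    print("order placed: PASS")

driver()
```

The same shape applies in the other direction: when a lower-level module is finished but its caller is not, the driver stands in for the missing caller.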
Activities of test execution
 Pre-test
 Executing test procedures either manually or by using test execution tools, according to the planned sequence
 Logging the outcome and duration of test execution
 Updating test cases, scripts or test scenarios
 Recording the identities and versions of the software under test, the test tools and the testware
 Comparing actual results with expected results
 Reporting defects and test logging (resource usage, test cases, automation logging)
 Analyzing defect cause, type, severity and priority
 Repeating confirmation testing or regression testing
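The core of the execution activities above — run each case in sequence, compare actual with expected results, and log outcome and duration — can be sketched as a minimal loop. The test cases and the `add` function are illustrative assumptions.

```python
# Minimal sketch of a test execution loop: run each test case, compare
# the actual result with the expected result, log verdict and duration.
import time

def run_tests(test_cases):
    log = []
    for name, func, expected in test_cases:
        start = time.perf_counter()
        actual = func()
        duration = time.perf_counter() - start
        verdict = "PASS" if actual == expected else "FAIL"
        log.append({"case": name, "expected": expected, "actual": actual,
                    "verdict": verdict, "duration_s": round(duration, 6)})
    return log

# Hypothetical unit under test
def add(a=2, b=3):
    return a + b

results = run_tests([
    ("add_small", add, 5),               # actual == expected -> PASS
    ("add_wrong_expectation", add, 6),   # mismatch -> FAIL (incident)
])
for r in results:
    print(r["case"], r["verdict"])
```

A FAIL here is only an incident to analyze; as the next slides note, the cause may be the application, the test case itself, or the environment.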
Objects of the pre-test
 Object: software/hardware, prototype, etc.
 Test infrastructure: input value generator, output value analyzer, simulator; test tools/equipment, etc.
 Development documentation
Analyzing test results
 Comparing actual results with expected results
 Analyzing the cause of a difference between actual and expected results:
  Defect in the test case: the failure is caused not by a defect in the application but by the test case itself (script, scenario, wrong test data)
  Defect in the test context: test execution error (script, scenario), test data error, input error
  Defect in the application: defect in the specification, the software, the hardware, the test procedure or the installation
  Defect in the application context: usage error; defects in the test environment such as the OS, DBMS, etc.
Test Execution (diagram)
 Controlled test environment / test activity: controlled inputs are fed to the test object; the test output is compared with the expected results, and any mismatch is raised as an incident (issue)
 Uncontrolled test environment / test activity: with uncontrolled inputs, the output is likewise compared with the expected results, and mismatches are raised as incidents (issues)
Definition of Defect
What is a software defect?
 The software does not work as specified
 The software allows an action that the specification says must not happen
 The software executes actions that are not mentioned in the specification
 The software does not execute actions that must be executed but are not mentioned in the specification
 The software is hard to understand, hard to use, slow, or otherwise inconvenient
Defect
 Pre-release defect: a defect found before the product has been released to the customer
 Post-release defect: a defect found after the product has been released to the customer
Lifecycle of Defects (diagram)
 Find → Evaluate → Distribute → Fix → Confirm → Regression Test → Close
 A found defect needs to be documented (review & documentation), reported to the tester, distributed to the developer, and a retest requested
 At each step the defect may be rejected, pass or fail; when a confirmation or regression test fails, or the defect remains, it is re-opened and the cycle repeats
Lifecycle of a defect

Status           | Description                                                                                  | Who changes the status
Open             | Defect is found                                                                              | Tester, QA
Reviewed         | Defect is reviewed with the stakeholders                                                     | QA, reviewer
Assigned         | Defect is assigned to a developer to be fixed                                                | QA, reviewer
Fixed / Resolved | Fixing is completed                                                                          | Developer
Deferred         | Cannot be fixed at the moment for a specific reason; re-opened when it can be fixed later    | QA, reviewer
Closed           | The fix is confirmed and the defect is finished                                              | QA, reviewer
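The status table above can be sketched as a small state machine that only allows the described transitions. The exact transition set is an assumption read off the lifecycle (e.g. that a Deferred defect re-opens to Assigned, and a failed retest re-opens a Fixed defect).

```python
# Sketch of the defect lifecycle as a state machine.
# The allowed-transition map is an assumption based on the table above.

TRANSITIONS = {
    "Open":     {"Reviewed"},
    "Reviewed": {"Assigned", "Deferred", "Closed"},  # Closed if rejected
    "Assigned": {"Fixed"},
    "Fixed":    {"Closed", "Open"},                  # Open again if retest fails
    "Deferred": {"Assigned"},                        # re-open when fixable
    "Closed":   set(),
}

class Defect:
    def __init__(self, title):
        self.title = title
        self.status = "Open"
        self.history = ["Open"]

    def move_to(self, new_status):
        if new_status not in TRANSITIONS[self.status]:
            raise ValueError(f"illegal transition {self.status} -> {new_status}")
        self.status = new_status
        self.history.append(new_status)

d = Defect("login button unresponsive")
for s in ("Reviewed", "Assigned", "Fixed", "Closed"):
    d.move_to(s)
print(d.history)
```

Rejecting illegal transitions in code mirrors what a defect tracking tool enforces through its workflow configuration.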
Defect status flow (diagram)
 Defects found → Open → review meeting → Reviewed → distribute → Assigned → resolving the problem (fixing the defect) → Resolved → confirmation test / regression test → Closed
 A defect disapproved at the review meeting may be Postponed; it is distributed again when work on it resumes
Defect (or incident) reporting
 Be structured: write reports so that they are easy to produce
 Isolate: the report should not require debugging to understand
 Generalize: forecast the possible damage
 Compare: against previous versions
 Summarize: keep it short
 Clear meaning: help the developer understand
 Neutral: do not blame any developer
 Review: to prevent wrong or poorly written defect reports
Defect metrics
 Type of defect
 Severity of defect
 Priority of defect
 Number of defects per unit or module
 Number of defects per stage (development, test level)
 Number of defects not closed
 Number of defects not suspended

Sample metric            | How to calculate
Average # of defects     | # of defects / # of modules/scenarios/cases
# of defects by severity | # of defects of each severity / total # of defects
Rate of defect types     | # of each type / total # of defects
Rate of defects fixed    | # of closed defects / total # of defects
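The sample metrics in the table above can be computed directly from a list of defect records. The record fields and numbers below are illustrative assumptions.

```python
# Sketch computing the sample defect metrics from defect records.
# Record fields and values are illustrative assumptions.

defects = [
    {"id": 1, "severity": "critical", "type": "crash",    "status": "closed"},
    {"id": 2, "severity": "major",    "type": "crash",    "status": "open"},
    {"id": 3, "severity": "minor",    "type": "cosmetic", "status": "closed"},
    {"id": 4, "severity": "major",    "type": "data",     "status": "closed"},
]
modules_tested = 2

total = len(defects)

# Average # of defects = # of defects / # of modules
avg_defects_per_module = total / modules_tested

# # of defects by severity = count of each severity / total
by_severity = {}
for d in defects:
    by_severity[d["severity"]] = by_severity.get(d["severity"], 0) + 1
severity_rate = {s: n / total for s, n in by_severity.items()}

# Rate of defects fixed = # of closed defects / total
fixed_rate = sum(d["status"] == "closed" for d in defects) / total

print(avg_defects_per_module)  # 2.0
print(severity_rate["major"])  # 0.5
print(fixed_rate)              # 0.75
```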
Defect Type
 Incorrect operation, system crash, unexpected behavior, missing feature, documentation issue, unfriendly behavior, cosmetic flaw, slow performance, data corruption, data loss, installation problem
Defect Type (ctd.)
Classify defect types to improve the development process or the quality assurance activities:
 Lack of consideration in planning (requirements: not complying with standards, untestability, unclear sentences, incompleteness, inconsistency)
 Lack of consideration in design (not complying with standards, untestability, unclear sentences, incompleteness, inconsistency, interface defects, etc.)
 Lack of consideration in coding (not complying with standards, inconsistency, incompleteness, interface defects, data defects, etc.)
 Lack of testing
 Lack of finishing
 Lack of communication among team members
 Mistakes in coding
Severity Level
 Best practice: show stopper, fatal, no workaround, cosmetic
 Example (5-point scale):
  Critical defects: HW/SW failure, system crash, lock-up, data loss
  Major defects: missing functions, major incorrect operation
  Average defects: incomplete functions, minor incorrect operation, wrong interface
  Minor defects: typing errors, user inconvenience, violation of screen standards
  Enhancement: not errors, but in need of some improvement
 Example (2-point scale):
  Major: critical, major, average
  Minor: defects which do not cause system failures
Defect Priority
Resolve immediately Give high attention Normal queue Low priority
Defect reporting automation tool
 Defect (or incident) tracking tool:
  Determining priority among reported incidents
  Assigning defects to developers to fix and to testers to test for confirmation
  Managing defect status (fixing, ready to test, closed, suspended, etc.)
  Showing statistics derived from monitoring incident status
Activities of evaluating exit criteria
 Checking test logs against the exit criteria specified in test planning
 Assessing whether more tests are needed or whether the specified exit criteria should be changed
 Writing a test summary report for stakeholders
Test Exit Criteria
 Thoroughness measures, such as coverage of code, functionality or risk
 Estimates of defect density (number of defects/KLOC) or reliability (defect discovery rate) measures
 Residual risk, such as defects not fixed or lack of test coverage in certain areas
 Cost
 Schedules (time to market/release)
 Defect discovery rate
 Remaining-defects estimation criteria
 When the cost of testing exceeds the value of testing
Test exit criteria (chart): defects and cost plotted over time — the number of newly found defects declines while the cumulative cost of testing rises; the region where finding further defects no longer justifies the cost marks the release decision point ("Release?").
Test Exit Criteria – examples
 When all test cases have been executed and all fatal defects are closed
 When the quality goal of the test plan is achieved, i.e. the system meets pre-defined goals (e.g. no open severity 1 or 2 defects)
  Coverage of code/functionality/requirements reaches a specified point
  Bug rate falls below a certain level
 Each test level has its own exit criteria:
  Unit test level: tests cover 100% of branches, basis paths or MC/DC
  Integration test level: tests cover 100% of function calls
  System test level: tests cover 100% of functions, 1-switch coverage of the state transition table, or all transitions of the use cases
 When the test objectives and scope are satisfied, e.g. testing for certification: all requirements and high-risk components are tested
 When all risks are brought down: the risk of continuing to test exceeds the risk of releasing the product with its remaining defects
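Criteria of the kind listed above can be checked mechanically once they are quantified. The sketch below is a hypothetical exit-criteria check; the thresholds and metric field names are illustrative assumptions, not a standard.

```python
# Sketch of an automated exit-criteria check based on the examples above.
# Thresholds and field names are illustrative assumptions.

def exit_criteria_met(metrics):
    """Return (decision, reasons) for a release decision."""
    reasons = []
    if metrics["open_sev1_defects"] > 0 or metrics["open_sev2_defects"] > 0:
        reasons.append("open severity 1/2 defects remain")
    if metrics["requirement_coverage"] < 1.0:
        reasons.append("requirement coverage below 100%")
    if metrics["branch_coverage"] < 1.0:
        reasons.append("branch coverage below 100% (unit level)")
    return (len(reasons) == 0, reasons)

ok, why = exit_criteria_met({
    "open_sev1_defects": 0,
    "open_sev2_defects": 1,
    "requirement_coverage": 1.0,
    "branch_coverage": 0.97,
})
print(ok)   # False
print(why)
```

Returning the list of unmet criteria, not just a yes/no verdict, matches the slide's point that the team may instead decide to change the criteria.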
Considerations for exit criteria
 Risk level: set up a different test strategy for each risk level
 Test level: each test level has its own exit criteria
Test Reporting
 Usage:
  Determination of application release
  Examining the severities or types of defects
  Confirming test coverage of the requirements
  Determination of application quality
  Determination of test features for the next test level
Good Test Reporting

Requisite for a good test report           | Description
Observe test objectives and requirements   | Support the development project (decision making)
Alignment across the whole testing effort  | Test plan (strategy) aligned with execution; traceability
Measure more than simple defect counts     | Risk, coverage, environment, etc.
Report trends                              | Avoid reporting simple point-in-time measurements
Limit the quantity of reports              | To maximize the impact of reporting
Consistency of reporting                   | Keep reporting content, style and manner of expression consistent
Explain the meaning of each metric         | Explain every detail for the reader
Good Test Reporting — maturity of test reporting

Maturity of test reporting                         | Description
Detected and fixed defects                         | Regularly reporting detected and fixed defects
Defect priorities + progress (test status and product quality) | Defect priorities plus information about the planned, used and still-needed budget and time
Risk and advice (backed by metrics)                | Aligned with the test strategy; analyzing trends in metrics for budget, time and quality (defects)
Testing advice related to SPI                      | Testing influences software process improvement rather than only the product, so it is regarded as a defect-prevention activity; advice reaches beyond testing activities (e.g. regular reviews of certain functional specifications; the project plan's release timing is informed by test results)
Classification of Reports
 Progress report / status report
 Quality / risk report
 Release advice
 Final report (report for a test level):
  Report for unit testing
  Report for integration testing (n-th integration test)
  Report for system testing
  Report for acceptance testing
Contents of a Status Report
 Software product quality
 Performance of testing: progress vs. milestones
 Software product risk
 Quality of the test process
 Problems or comments on:
  Test scope
  Assumptions / preconditions
  Changes in available resources
  Agreements made during the testing process
  Testing activities to be carried out in the next stage
Release Advice
 An official report after closing the testing process (acceptance test; release advice for the production level)
 Differences between plan and execution
 Remaining product risks (parts planned for testing but not executed, or parts under-tested compared to plan) and their expected impact
 Remaining open defects and their expected impact on the product and its parts
 Current status of each system part and (un)covered risks
 Without a test strategy, you cannot make a proper release decision
Final Report
 Summary of the testing status reports
 Release advice
 Quality of the test process
 Experience from the test project
 Advice for the next round of testing
Designing a Test Report
Considerations when designing a test report:
 Number of defects by type/severity/priority
 Number of defects per defect status
 Number of defects relative to software size (defects per x KLOC; defect density analysis)
 Test effort per requirement (in days)
 Unresolved defects and their impact (hot-spot analysis)
 Defect age analysis
 Status of testing (not started, designing, executing, number of retests, completed)
Test Reporting Design (example)

Quality of product
 Contents: not-closed defects (severity X); number of defects detected per hour → 0; number of retests; all test cases executed at least once (no major defect); coverage (based on the test basis); evaluation of quality
 Reporting basis: exit criteria; test plan
 Form: detail/graph; indirect/general; detail, direct/indirect
 Others: include the number of defects detected per requirement

Performance of test
 Contents: test achievement compared to plan (total tests planned − tests executed = remaining tests); test achievement compared to budget
 Reporting basis: exit criteria
 Form: detail, graph
 Others: take the changed plan into account

Risk
 Contents: risk analysis; prevented damage < cost of testing; resources and environment
 Reporting basis: exit criteria; test plan
 Form: detail/graph; indirect/general

Quality of testing process
 Contents: effectiveness in detecting defects = # of detected defects / (# of detected defects + # of defects not found); efficiency in detecting defects = # of defects detected / resources used
 Form: detail, graph
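The two "quality of testing process" metrics defined above can be computed as follows; the numbers are illustrative, and "defects not found" is typically approximated by defects that escaped to a later test level or to the field.

```python
# Sketch computing defect-detection effectiveness and efficiency as
# defined above. All numbers are illustrative assumptions.

detected = 45        # defects found during this test level
escaped = 5          # defects not found here (found later / in the field)
person_days = 30     # testing resource spent

# effectiveness = detected / (detected + not found)
effectiveness = detected / (detected + escaped)   # 45 / 50 = 0.9

# efficiency = detected defects per unit of resource
efficiency = detected / person_days               # 1.5 defects per person-day

print(f"effectiveness: {effectiveness:.0%}")      # 90%
print(f"efficiency: {efficiency} defects/person-day")
```

Note that effectiveness can only be computed retrospectively, once escaped defects are known, which is why trend reporting across releases matters.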
Test Reporting Automation Tool
 Supports managing executed tests and activities
 Supports the interface between the test execution tool, the defect management tool and the requirements management tool
 Provides traceability between tests, test results, defects/incidents and the requirement specifications or source documents
 Logs test results and writes progress reports
 Supports measuring test data (passed test cases / executed test cases), detected defects, etc.
 Incident (defect) tracking (management) tool
Reporting Example
Activities of Test Closure
 Testware configuration:
  Check the testware; close incident reports (defect reports); create documentation for accepting the system
  Close and archive the testware, the test environment and the test infrastructure
  Hand the testware over to the maintenance organization
  Analyze lessons learned about the release project and test maturity
 Evaluate the testing process:
  Evaluate test items: defects, quality characteristics (use ISO/IEC 9126)
  Formal final report: test results, lessons learned from the test project, testing-technique recommendations for the next test project; a summary of the progress reports
Need for Test Process Improvement
(Chart) Development vs. testing effort, 198x era vs. 200x era: the share of effort devoted to testing has grown, driven by demand for software quality and rising SW/system complexity.
 Development side: OO, CBD, RUP, XP, SPI, PSP
 Testing side: testing process improvement, test automation, reusability, specialized testing, TMMi