Post on 03-Apr-2018
Unlocking the Secrets of Test Metrics
Presented By: Shaun Bradshaw, Questcon Technologies
sbradshaw@questcon.com
Slide 2
Objectives
The primary objectives of this class are to instruct Test Leads on how to improve the overall quality of each project by establishing a set of fundamental test metrics used to manage a software test effort. Attendees will learn how to:
- Track and report the progress of the test effort through objective test metrics.
- Manage the resources necessary to complete the test effort in a timely manner.
- Increase the ability to understand and account for the scope of the test effort.
- Prove the contributions and value of QA and software testing to the organization.
- Assess the risk of component or application failure prior to release to production.
- Improve customer and end-user confidence and satisfaction.
Slide 3
Why Measure?
“Software bugs cost the U.S. economy an estimated $59.5 billion per year. An estimated $22.2 billion could be eliminated by improved testing that enables earlier and more effective identification and removal of defects.”
- U.S. Department of Commerce (NIST)
Slide 4
Why Measure?
It is often said, “You cannot improve what you cannot measure.”
Slide 5
Collecting Metrics
Slide 6
Collecting Metrics
Collecting test metrics is the process of tracking data and statistics that can explain the progress and prove the success and effectiveness of a test effort. This is done for two purposes:
- Management is more likely to understand the total work effort and to trust future estimates.
- Management is able to quantify the effectiveness of the test methodology.
Test metrics are collected by Test Analysts and Test Leads on a regular basis throughout the test effort. Some metrics are used during the test effort to report progress or application quality, while others are used at the end of the project to evaluate effectiveness.
Slide 7
Definition
Test metrics:
- Are a standard of measurement.
- Gauge the effectiveness and efficiency of several software development activities.
- Are gathered and interpreted throughout the test effort.
- Provide an objective measurement of the success of a software project.
Slide 8
The Metrics Philosophy
- Keep It Simple
- Make It Meaningful
- Track It
- Use It
Slide 9
The Metrics Philosophy: Keep It Simple
- Measure the basics first
- Clearly define each metric
- Get the most “bang for your buck”
Slide 10
The Metrics Philosophy: Make It Meaningful
- Metrics are useless if they are meaningless (use the GQM model)
- Must be able to interpret the results
- Metrics interpretation should be objective
Slide 11
The Metrics Philosophy: Track It
- Incorporate metrics tracking into the Run Log or defect tracking system
- Automate the tracking process to remove time burdens
- Accumulate throughout the test effort and across multiple projects
Slide 12
The Metrics Philosophy: Use It
- Interpret the results
- Provide feedback to the Project Team
- Implement changes based on objective data
Slide 13
Metrics Interpretation
Solution:
- Closely examine all available data.
- Use the objective information to determine the root cause.
- Compare to other projects: Are the current metrics typical of software projects in your organization? What effect do changes have on the software development process?
Result: Future projects benefit from a more effective and efficient application development process.
Slide 14
Metrics to Track
Slide 15
Metrics To Track
Base Measurements:
- Raw data gathered by the Test Analyst
- Tracked throughout test execution
- Used to provide project status reports and evaluations/feedback at the end of the project
Questcon has identified several key measurements to be maintained for each test effort. Each of the following measurements should be accumulated throughout the project.
Slide 16
Metrics To Track
Measurement: Definition
- # TCs Created: number of distinct test cases created by the test team for execution in a test effort
- # TCs to be Executed: number of distinct test cases selected for execution in a test effort
- # TCs Executed: number of distinct test cases executed, not including re-execution of individual test cases
- # TCs Passed: number of distinct test cases that currently meet all test criteria
- # TCs Failed: number of distinct test cases that currently fail to meet all test criteria
- # TCs Blocked: number of distinct test cases that have not been executed during the testing effort due to an application or environmental constraint
Slide 17
Metrics To Track
Measurement: Definition
- # TCs Under Investigation: number of test cases being investigated, regardless of the state of execution or completion
- Total Executions: total number of test case executions, including re-executions of the same test case
- Total Passes: total number of test case passes, including re-executions of the same test case
- Total Failures: total number of test case failures, including re-executions of the same test case
- 1st Run Failures: total number of test cases that failed on the first execution
- # TCs Re-executed: number of distinct test cases that were re-executed, regardless of the number of times executed
Slide 18
Metrics To Track
Management Metrics:
- Developed and tracked by the Test Lead
- Convert Base Metric data into useful information
- Allow Test Leads and Project Managers to determine the effects of different processes on the project

- % Complete = # Passed / # TCs to be Executed. The percentage of distinct test cases that currently have a status of ‘Passed’. This metric is used to help measure the progress of the test effort; the goal of most efforts is to bring this number as close to 100% as possible.
- % Test Coverage = # Executed / # TCs to be Executed. Shows how many of the planned test cases were executed. This metric can be used in conjunction with "% Defects Corrected" and "% Test Effectiveness" to explain why a defect made it into production. As a test effort nears its conclusion, the test team must be able to report the percentage of planned test cases that have not been executed; management should be reminded that every unexecuted test represents a potential undiscovered defect.
- % TCs Passed = # Passed / # Executed. This metric, along with "% Complete" and "% Test Coverage", is used to evaluate the current status of the test effort.
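The three progress metrics above are simple ratios; the sketch below illustrates them in Python (the function names are mine, not part of the course material):

```python
def percent_complete(num_passed, num_to_be_executed):
    """% Complete = # Passed / # TCs to be Executed."""
    return 100.0 * num_passed / num_to_be_executed

def percent_test_coverage(num_executed, num_to_be_executed):
    """% Test Coverage = # Executed / # TCs to be Executed."""
    return 100.0 * num_executed / num_to_be_executed

def percent_tcs_passed(num_passed, num_executed):
    """% TCs Passed = # Passed / # Executed."""
    return 100.0 * num_passed / num_executed

# Using the base measurements from the worked example later in the deck
# (10 planned test cases, 9 executed, 8 passed):
print(percent_complete(8, 10))             # 80.0
print(percent_test_coverage(9, 10))        # 90.0
print(round(percent_tcs_passed(8, 9), 1))  # 88.9
```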
Slide 19
Metrics To Track
- % Defects Corrected = (Total Failures - # Failed) / Total Failures. Shows the current percentage of known defects that have been corrected. As the test effort nears the completion date, this value should move toward 100%.
- % First Run Failures = 1st Run Failures / # Executed. The percentage of executed test cases that failed the first time they were executed. This metric is useful in determining the effectiveness of the current analysis and development process. Comparing it across projects may show how different QA procedures have impacted the quality of the product at the end of the development phase. For larger projects, it may be useful to track this metric for separate components of the system; this provides an indication of component complexity or may be indicative of the performance of individual business analysts or developers.
- % Failures = Total Failures / Total Executions. Measures the percentage of ALL test executions that resulted in a failure. It can be used to measure the usefulness of other QA processes, with the expectation that this percentage will go down as the effectiveness of those processes increases.
- % TCs Blocked = # Blocked / # TCs to be Executed. Shows the percentage of test cases that cannot be executed until specific defects are corrected and resubmitted to test; this reveals the effect of certain defects on the test effort.
Slide 20
Metrics To Track
- Defect Removal Costs = (Total Test Costs - Fixed Test Costs) / Total Failures. Identifies the average cost of removing a defect in a test effort. It can be used to show the value of instituting improved development and testing processes by reducing the amount of re-testing necessary when failures occur. Note that "Fixed Test Costs" are the costs of executing all test cases once, not the cost of "fixing" a defect.
- % Test Effectiveness = Total Failures / (Total Failures + # Failures In Production). Measures how effective the test team was at identifying defects in the code. Calculating this metric requires tracing the number of defects discovered in production.
- % Rework = (Total Executions - # Executed) / # Executed. Indicates the percentage of effort required to re-execute tests multiple times. Typically, there is a correlation between % Rework and % Failures.
- % Test Efficiency = # of Requirements Tested / Test Time. Tells how efficient the test team is at analyzing, creating, and executing test cases based on the number of requirements to be tested. A higher efficiency rating can be indicative of several things, including clear requirements, clean code, quick development turnaround for defect correction, and test team experience.
- % Bad Fixes = 1 - (# Re-executed / Total Failures). Shows the percentage of failures that were found, sent back to development for correction, and, on re-execution, failed again.
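The efficiency metrics above translate directly into code. A minimal sketch (function names and the sample numbers are mine, chosen only to illustrate the formulas):

```python
def percent_rework(total_executions, num_executed):
    """% Rework = (Total Executions - # Executed) / # Executed."""
    return 100.0 * (total_executions - num_executed) / num_executed

def percent_test_effectiveness(total_failures, failures_in_production):
    """% Test Effectiveness = Total Failures /
    (Total Failures + # Failures In Production)."""
    return 100.0 * total_failures / (total_failures + failures_in_production)

def percent_bad_fixes(num_reexecuted, total_failures):
    """% Bad Fixes = 1 - (# Re-executed / Total Failures)."""
    return 100.0 * (1 - num_reexecuted / total_failures)

def defect_removal_cost(total_test_costs, fixed_test_costs, total_failures):
    """Average cost of removing a defect in a test effort."""
    return (total_test_costs - fixed_test_costs) / total_failures

# Illustrative numbers (not from the deck): 15 executions over 10 distinct
# test cases, 18 failures with 2 escaping to production, $50,000 total test
# cost of which $20,000 is the fixed cost of one pass through all tests.
print(percent_rework(15, 10))                    # 50.0
print(percent_test_effectiveness(18, 2))         # 90.0
print(defect_removal_cost(50_000, 20_000, 15))   # 2000.0
```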
Slide 21
Metrics To Track
These metrics can be used to derive valuable information to answer questions like:
- Is the test effort on schedule?
- Was the system thoroughly tested?
- How many defects were found?
- How many defects were corrected?
- How efficient was the test effort?
- Was the test effort effective?
Slide 22
Metrics To Track
As seen below, the data can be tracked as part of the run log in a spreadsheet and automatically totaled using simple spreadsheet calculations.

Run Log:
TC ID    Current Status  # of Runs
001-001  P               5
001-002  F               1
001-003  P               6
001-004  P               6
001-005  P               3
001-006  P               1
001-007  P               1
001-008  P               1
001-009  P               2
001-010  -               0

Base Measurements:
Total # of TCs     10
# Executed          9
# Passed            8
# Failed            1
# UI                0
# Blocked           0
# Unexecuted        1
# Re-executed       5
Total Executions   26
Total Passes        8
Total Failures     18
1st Run Failures    6

Management Metrics:
% Complete           80.0%
% Test Coverage      90.0%
% TCs Passed         88.9%
% TCs Blocked         0.0%
% 1st Run Failures   66.7%
% Failures           69.2%
% Defects Corrected  94.4%
% Rework            240.0%
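The same totals a spreadsheet produces can be derived programmatically from per-test-case run histories. The sketch below is illustrative only: the deck's run log records current status and run counts, whereas this toy log (its data is mine) records the full status sequence per test case so the base measurements can be counted directly:

```python
# Each entry: (tc_id, list of run statuses in order, "P"/"F"; empty = never run).
run_log = [
    ("001-001", ["F", "F", "F", "F", "P"]),
    ("001-002", ["F"]),
    ("001-010", []),  # never executed
]

num_executed       = sum(1 for _, runs in run_log if runs)
num_passed         = sum(1 for _, runs in run_log if runs and runs[-1] == "P")
num_failed         = sum(1 for _, runs in run_log if runs and runs[-1] == "F")
num_reexecuted     = sum(1 for _, runs in run_log if len(runs) > 1)
total_executions   = sum(len(runs) for _, runs in run_log)
total_passes       = sum(runs.count("P") for _, runs in run_log)
total_failures     = sum(runs.count("F") for _, runs in run_log)
first_run_failures = sum(1 for _, runs in run_log if runs and runs[0] == "F")

print(num_executed, total_executions, total_failures)  # 2 6 5
```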
Slide 23
Questions
- What is a test metric?
- Describe the GQM model and its importance in a metrics program.
- How can test metrics be used to improve the development process?
- Describe the difference between the metrics ‘# Executed’ and ‘Total Executions’.
- Describe the difference between ‘% Complete’ and ‘% Coverage’. Why is this an important difference?
Slide 24
Exercises
Collecting Metrics: Individual Exercise
Using the Sample Test Run Log, calculate the Base Measurements related to the test effort.

Sample Test Run Log:
TC ID   Run Date  Actual Results                                                    Run Status  Current Status  # of Runs
AA-001  04/01/02  Actual results met expected results.                              P           P               1
AA-002  04/03/02  Error occurred on 04/01/02. Actual results met expected results.  F P         P               2
AA-003  04/04/02  Error occurred on 04/01/02. Error occurred on 04/03/02. Actual results met expected results.  F F P  P  3
AA-004  04/03/02  Actual results met expected results.                              P P         P               2
AA-005  04/06/02  Error occurred on 04/04/02. Error occurred on 04/05/02. Error occurred on 04/05/02.  F F F  F  3
AA-006  04/04/02  Actual results met expected results.                              P           P               1
AA-007  04/04/02  Actual results did not match expected results, but seem correct.  UI          UI              1
AA-008  -         Functionality not delivered with release.                         ND          ND              -
AA-009  -         Functionality not delivered with release.                         ND          ND              -
AA-010  -         Blocked due to AA-005.                                            B           B               -

# TCs to be Executed _____    # Re-executed _____
# Executed _____              Total Executions _____
# Passed _____                Total Passes _____
# Failed _____                Total Failures _____
# Blocked _____               1st Run Failures _____
# Under Investigation _____
Slide 25
Exercises
Collecting Metrics: Individual Exercise
Using the Base Measurements, calculate the following metrics (refer to the Sample Test Run Log from the previous exercise):

% Complete _____              % First Run Failures _____
% Test Coverage _____         % Rework _____
% Test Cases Passed _____     % Bad Fixes _____
% Test Cases Blocked _____    % Defects Corrected _____
% Failures _____
Slide 26
Exercises
Test Metrics: Individual Exercise
Using information in the Sample Run Log, chart the cumulative progression of passed test cases during the test effort.
[Blank chart: Y-axis 0-10 test cases, X-axis Day 1 through Day 8]
Slide 27
Managing the Test Process
Slide 28
Managing the Test Process
During test execution, the Test Lead manages the test effort to:
- Calculate how much time is required to complete the planned testing and defect-repair activities,
- Communicate the impact of events on the testing schedule, and
- Adjust planned activities to control any changes.
A successful test effort is accomplished by:
- Comparing the test team’s schedule and Test Plan to the actual test results,
- Reporting the status of the testing,
- Identifying problems, and
- Recommending solutions as quickly as possible.
Slide 29
Reporting Test Progress
The test metrics that are collected throughout the test effort help ensure accurate status reports can be provided quickly. To make the information more accessible, it is recommended that the metrics be tracked in such a way that they can be easily sorted and graphed.
When reporting test metrics, some of the preferred items to track are:
- Test cases executed versus planned execution time
- Test cases passed versus planned execution time
- Total test failures versus planned execution time
These can be easily plotted in a graph with planned execution time on the X-axis and the evaluation metric on the Y-axis.
Slide 30
Analyzing the Graphs
Plotting the metrics displays the progress of the test effort over time, making it possible to determine whether the effort is on schedule; if it is not, the cause of the delay should be determined.
These graphs also make it possible to measure the effectiveness of the test effort, and they provide a valuable basis for estimating timelines, resources, and application quality.
Slide 31
The S-Curve
Slide 32
Test Management Using S-Curves
Successfully managing a test effort requires the ability to make objective and accurate estimates of the time and resources needed to stay on schedule. The S-Curve is one method for doing this.
- What is an S-Curve?
- What makes it an “S” shape?
- How is it used?
Slide 33
Test Management Using S-Curves: What is an S-Curve?
An S-Curve is a graphical representation of cumulative work effort. S-Curves can be used to describe projects as a whole, development efforts, test efforts, as well as defect discovery rates.
Slide 34
Test Management Using S-Curves: What makes it an “S” shape?
- Test efforts typically start out slowly as testers run into a few major defects.
- As the initial issues are resolved, the testers are able to execute more tests covering more functionality.
- As the test effort nears its end, there are typically a few leftover issues that must be resolved, thus slowing the process down again.
Slide 35
Test Management Using S-Curves: How is it used?
- Plot the progress of test metrics to quickly see the effectiveness of the test effort (TCs Passed vs. Planned Execution Time; Total Failures vs. Planned Execution Time).
- Measure test progress by comparing the actual test curve to a theoretical S-Curve.
- Use the curve to determine if the application is stable enough to be released.
Slide 36
The Theoretical S-Curve
The first step in utilizing an S-Curve for test management involves deriving a theoretical curve, that is, a uniformly distributed curve indicating “optimum” test progress.
The theoretical S-Curve is calculated as follows:

(Day Number / Total Days) / ((Day Number / Total Days) + e^(3 - 8 * (Day Number / Total Days)))

This formula returns the cumulative percentage of tests passed or defects found (depending on the metric being tracked).
Note 1: “e” is the base of the natural logarithm (2.71828182845904...).
Note 2: the “3” and “8” in the formula set the location of the logarithmic curves.
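The formula above is straightforward to implement; a minimal sketch (function name is mine):

```python
import math

def theoretical_s_curve(day, total_days):
    """Cumulative fraction (0-1) of planned tests expected to pass by `day`,
    using the slide's formula: x / (x + e^(3 - 8x)) where x = day/total_days."""
    x = day / total_days
    return x / (x + math.exp(3 - 8 * x))

# For a 15-day effort (matches the table on the next slide):
for day in (1, 8, 15):
    print(day, round(100 * theoretical_s_curve(day, 15), 2))
# 1 0.56
# 8 65.43
# 15 99.33
```

Multiplying the returned fraction by the number of planned test cases gives the expected cumulative count of passed tests for each day.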
Slide 37
The Theoretical S-Curve
Here is an example of how a theoretical curve looks for a 15-day test effort with 100 test cases to be executed (S-Curve Calculations - Passed):

Day  Cumulative %  # TCs
1      0.56%         1
2      1.89%         2
3      4.70%         5
4     10.08%        10
5     19.28%        19
6     32.82%        33
7     49.28%        49
8     65.43%        65
9     78.40%        78
10    87.30%        87
11    92.80%        93
12    96.00%        96
13    97.79%        98
14    98.78%        99
15    99.33%        99

[Chart: "Test Metrics Graph - Passed" - theoretical curve, test cases (0-100) over days 1-15]
Slide 38
The Actual Test Curve
By plotting the actual cumulative number of test cases passed or the cumulative number of defects found during a test effort and comparing the resulting graph to the theoretical curve, we are able to quickly and objectively identify risks and/or issues in the test effort, as explained later.
[Chart: "Test Metrics Graph - Passed" - Num Passed vs. Theoretical Curve, test cases passed (0-110) over days 1-10]
[Chart: "Test Metrics Graph - Defects" - Total Failures vs. Theoretical Curve, failures (0-30) over days 1-10]
Slide 41
Analyzing the Graph
Some general rules of thumb for analyzing the graphs:
- The degree to which the actual test curve complies with the theoretical S-shape becomes the basis for measuring test progress.
- The actual performance curves for a test effort become the basis for adjustments to subsequent test effort sizing and estimation activities.
- These graphs can be used to estimate the number of errors remaining based on the number found so far.
- If defects are uniformly distributed, approximately 70%-90% of the expected defects will be found during the first 60% of the test effort.
- Many diverse factors impacting test efforts prevent optimal test progress from occurring. However, the graph is a valuable test progress measurement tool that can be used to identify quality and/or resource issues.
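The remaining-defect estimate implied by the rules of thumb above can be sketched as follows. This helper and its default discovery rate are my own illustration of the 70%-90% rule, not a formula from the course:

```python
def estimated_defects_remaining(defects_found, fraction_of_effort_elapsed,
                                discovery_rate=0.8):
    """Rough back-projection: if roughly `discovery_rate` of all defects
    (70%-90%; 80% used here) have surfaced once 60% of the effort has
    elapsed, the total defect population, and hence the remainder, can
    be estimated from the count found so far.  Assumes defects are
    uniformly distributed, per the rule of thumb."""
    if fraction_of_effort_elapsed < 0.6:
        raise ValueError("rule of thumb applies from 60% of the effort onward")
    estimated_total = defects_found / discovery_rate
    return estimated_total - defects_found

# 40 defects found by the 60% mark -> about 10 more expected at an 80% rate:
print(estimated_defects_remaining(40, 0.6))  # 10.0
```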
Slide 42
Analyzing the Graph
[Chart: "S-Curve: DEFECTS" - Cumulative Defects vs. Theoretical Curve, defects (0-40) over days 1-46]
Slide 43
Questions
- Why does a typical cumulative test progress graph take the shape of an “S”?
- How do we use the S-Curve?
- When using an S-Curve for defect tracking, what other information do you need?
- Which metric is best for tracking test progress on an S-Curve?
- Should different types of testing have their own S-Curve?
Slide 44
Exercises
Managing the Test Process: Class Exercise
For each of the following sample S-Curves, interpret what is happening in the test effort and describe some corrective actions you might take as a Test Lead.
[Chart: "Test Metrics Graph - TCs Passed" - TCs Passed vs. Theoretical Curve, TCs (0-175) over days 1-10]
Slide 45
Exercises
Managing the Test Process: Class Exercise
[Chart: "Test Metrics Graph - Defects" - Total Defects vs. Theoretical Curve, defects (0-140) over days 1-15]
Slide 46
Exercises
[Chart: "Test Metrics Graph - Failures" - Total Failures vs. Theoretical Curve, failures (0-80) over days 1-10]
Slide 47
Tracking Defect Metrics
Slide 48
Tracking Defects
Defect tracking is the process of monitoring what happens to a defect when it is found during the test effort.
Without proper control over this process, it can be difficult to ensure that all of the objectives of the test effort have been met and to determine when it is complete.
Slide 49
Tracking Defects
Defect tracking allows us to evaluate our ability to adhere to the schedule based on the number of defects discovered and the amount of time needed to correct them.
Through this process we can track:
- Which defects must be fixed,
- When defects are corrected, and
- When the system is ready for production.
Slide 50
Tracking Defect Metrics
There are four parts to effectively tracking defect metrics:
1. Establish criteria for setting the severity of a defect.
If the criteria for setting the severity or priority of a defect have not already been set, use the table below as a guideline for establishing your own standards.

Severity 1 (High priority). Potential causes:
- System or component fails completely.
- All operations cease without completing intended processing.
- Speed of processing is completely unacceptable.
Notes: These types of defects must be fixed prior to releasing the system into production.

Severity 2 (Medium priority). Potential causes:
- System or component error occurs (incorrect calculation or output), but processing continues or the system shuts down gracefully (recoverable).
- Speed of processing is near or just below the acceptable range.
Notes: These are the most common types of defects discovered and are typically corrected before the system is released to production.

Severity 3 (Low priority). Potential causes:
- There are inconsistencies in the interface.
- There are misspelled words in non-end-user files.
- A suggestion for improvement in the interface, calculations, or speed of processing.
- Speed of execution is slower than expected, but within an acceptable range.
Notes: These types of defects do not affect the functionality of the system or component and are typically used to generate new enhancements for the system. They are fixed only when time allows.
Slide 51
Tracking Defect Metrics
2. Help determine if an issue is a defect and, if so, its severity.
There may be instances where a Tester discovers a potential defect, but:
- The severity level is not clear.
- The potential defect is inconsistent in nature.
- There is disagreement between the Test Analyst and Developer or Business Analyst as to whether the issue is a defect.
In these situations, resolve the problem by:
- Clarifying the criteria of each severity level so the Test Analyst clearly understands what priority is appropriate for the discovered defect.
- Reviewing the circumstances under which the problem occurred (e.g., environment settings, data inconsistencies, procedural errors).
- Acting as a mediator between the involved parties and, if necessary, escalating issues to management for a final decision.
Slide 52
Tracking Defect Metrics
3. Track the status of defects after they have been discovered and categorized.
- Ensure that all known defects are properly handled based on their status.
4. Track the number and types of defects found, as well as the amount of time to correct them.
- Track metrics relating to the number and type of defects found.
- Pay special attention to any defects that were found but not corrected.
- Report any delays in the test effort caused by the number of defects found or the amount of time necessary to correct them.
Slide 53
The Zero Bug Bounce
Slide 54
Defect Management with the Zero Bug Bounce
What is the Zero Bug Bounce?
The Zero Bug Bounce (ZBB) is a defect management technique made popular by Microsoft. Strictly speaking, it is the point in the test effort of a project when the developers have corrected ALL open defects and have essentially “caught up” with the test team’s defect discovery rate. The “bounce” occurs when the test team finds additional defects and the development team must again begin defect correction activities.
After the initial bounce occurs, peaks in open defects become noticeably smaller and should continue to decrease until the application is stable enough to release to production. This is what I call the ripple effect of the ZBB.
Slide 55
Defect Management with the Zero Bug Bounce
How do you track the ZBB?
The Zero Bug Bounce is tracked by charting the number of open defects at the end of each day during test execution.
[Chart: "Zero Bug Bounce" - Open Defects (0-16) over days 1-25]
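The daily open-defect count that feeds a ZBB chart can be derived from defect open/close dates. A minimal sketch (the record layout and sample dates are mine, not from the course):

```python
from datetime import date

# Hypothetical defect records: (opened, closed) with closed=None if still open.
defects = [
    (date(2002, 4, 1), date(2002, 4, 3)),
    (date(2002, 4, 2), None),
    (date(2002, 4, 3), date(2002, 4, 4)),
]

def open_defects_on(day, defect_dates):
    """Defects open at the end of `day`: opened on or before that day and
    not yet closed by the end of it."""
    return sum(1 for opened, closed in defect_dates
               if opened <= day and (closed is None or closed > day))

print(open_defects_on(date(2002, 4, 3), defects))  # 2
```

Plotting this value for each day of test execution produces the bounce (and ripple) pattern described on the next slide.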
Slide 56
Defect Management with the Zero Bug Bounce
Some Notes on the ZBB:
- The “bounce” does not always happen at zero.
- The initial “bounce” typically occurs near the end of test execution.
- There IS a ripple effect.
- Use the height and length of the ripple effect, in addition to the timing of the initial bounce, to determine if the application is stable enough to be released to production.
Slide 57
Zero Bug Bounce
[Chart: "Zero Bug Bounce" - All Defects vs. Sev 1 Defects (0-50) over days 1-35]
[Chart: "Zero Bug Bounce" - Open Defects (0-16) over days 1-25]
Slide 58
Questions
- Provide three reasons why defects should be tracked throughout the test effort.
- What is the difference between a High, Medium, and Low severity defect?
- What is the Zero Bug Bounce graph? How do testers use it to manage a test effort?
- Why is there a “bounce” in the ZBB?
- What is an “open” defect?
Slide 59
Exercises
Managing the Test Process: Class Exercise
Is the application under test stable enough to release into the production environment?
[Chart: "Zero Bug Bounce" - Open Defects (0-100) over days 1-33]
Slide 60
Exercises
Managing the Test Process: Class Exercise
What is wrong with this picture? Can the application be released in 2 days?
[Chart: "Zero Bug Bounce" - Open Failures (0-60) over days 1-10]
Slide 61
ROI Metrics
Slide 62
Return on Investment (ROI)
What is ROI?
- ROI is a calculation that attempts to determine the actual or perceived future value of an expense or investment.
- By calculating ROI, an organization can assess whether the expense/investment is justified by the resulting savings/revenue.
How is ROI calculated? In its most basic form:

ROI = (Benefits - Costs) / Costs

That is, the financial benefit after an investment or improvement is made, minus the cost of the investment or improvement, expressed as a percentage of those costs.
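The basic formula above can be sketched in one line; the sample figures are illustrative, not from the course:

```python
def roi(benefits, costs):
    """ROI = (Benefits - Costs) / Costs, as a fraction; multiply by 100 for %."""
    return (benefits - costs) / costs

# e.g. $150,000 of savings against a $100,000 testing investment -> 50% ROI:
print(roi(150_000, 100_000))  # 0.5
```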
Slide 63
Return on Investment (ROI)
What makes testing valuable? How do we get to a positive ROI?
- Produces information reliably grounded in observed system behavior [1]
- Functions like a credit check [1]
- Exposes risk and improves decision-making
- Extends the life of the software
- Improves the development process
- Increases end-user confidence and satisfaction

Benefits: revenue generated, cost reduction, cost avoidance, productivity improvements.
Costs: labor expense, time expense, tool expense.
Slide 64
Defect Prevention vs. Defect Detection
Slide 65
Defect Prevention vs. Defect Detection
Importance of Testing:
- Organizations rely on testers to do QA.
- We cannot “test” quality into an application.
- Test needs to be the “backdoor to QA.”
- This requires a shift in the testing paradigm: from testing to find defects toward measuring quality.
Slide 66
Defect Prevention vs. Defect Detection
QA & Testing:
- Testing is part of QA.
- Audits, reviews & inspections are forms of testing.
- QA and testing expose risks before they become unmanageable or too costly to correct.
- QA and IV&V promote best practices that reduce defects (cost) throughout the SDLC.
Slide 67
Defect Prevention vs. Defect Detection
Rules & Results of Defect Prevention:
- 60% Rule [2]: the percentage of defects introduced in requirements and design.
- 85% Rule [2]: the percentage of defects removed through reviews, inspections, and testing.
- 368:1 Rule [3]: see the cost comparison on the following slides, where a defect costs 368X to remove in production versus X in requirements.
Slide 68
Testing Only (assume X is $100 & 100 total defects)

                          Requirements   Design    Code      Test       Production
Repair cost multiplier    X              5X        10X       50X        368X+
# of defects introduced   30             30        30        10         0
Accumulated defects       30             60        90        100        15
Defects removed           0              0         0         85         15
Cost                      $0             $0        $0        $166,600   $212,100
Accumulated Cost          $0             $0        $0        $166,600   $378,700

Defect Prevention vs. Defect Detection
Slide 69
Reviews, Inspections & Testing (assume X is $100 & 100 total defects; the requirements, design and code deliverables are each verified)

                          Requirements   Design    Code      Test       Production
Repair cost multiplier    X              5X        10X       50X        368X+
# of defects introduced   30             30        30        10         0
Accumulated defects       30             34        35        15         2
Defects removed           26             29        30        13         2
Cost                      $2,600         $14,500   $30,000   $65,000    $73,600
Accumulated Cost          $2,600         $17,100   $47,100   $112,100   $185,700

Defect Prevention vs. Defect Detection
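The cost figures on this slide follow directly from the multipliers: each phase's cost is the number of defects removed times the phase multiplier times X. This Python sketch reproduces the slide's arithmetic (phase counts taken from the slides' 100-defect example):

```python
# Reproduces the per-phase cost arithmetic from the "Reviews, Inspections
# & Testing" slide: cost = defects removed x multiplier x X, with X = $100.
X = 100  # base repair cost per defect, in dollars
phases = ["Requirements", "Design", "Code", "Test", "Production"]
multiplier = {"Requirements": 1, "Design": 5, "Code": 10, "Test": 50, "Production": 368}
introduced = {"Requirements": 30, "Design": 30, "Code": 30, "Test": 10, "Production": 0}
removed = {"Requirements": 26, "Design": 29, "Code": 30, "Test": 13, "Production": 2}

open_defects = 0
total_cost = 0
for phase in phases:
    open_defects += introduced[phase]  # the "Accumulated defects" row
    cost = removed[phase] * multiplier[phase] * X
    open_defects -= removed[phase]
    total_cost += cost                 # the "Accumulated Cost" row
    print(f"{phase:<13} removed {removed[phase]:>2}  cost ${cost:>7,}  cumulative ${total_cost:>8,}")
```

Running it yields the slide's figures exactly, ending at a cumulative cost of $185,700, versus $378,700 when all defect removal is deferred to test and production.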
Slide 70
Quantitative Benefits
Slide 71
• Reduced Defect Repair Costs
  – Test involvement earlier in the lifecycle reduces repair costs
  – Automation improves defect detection
  – Good test practices increase detection likelihood
Quantitative Benefits
Slide 72
• Decreased Production Costs
  The cost of a defect in production can be significantly higher than 368x - the “Annuity Nightmare”:
  – A small rounding error goes undetected at a major financial institution
  – Months pass until it is discovered
  – The code repair is cheap
  – The production cost - NOT CHEAP!
• Increased Revenue & Profits
  – Reduced time-to-market for functioning software
  – Increased market share
  – Higher customer retention/goodwill
  – Reduced maintenance costs (easier to enhance)
Quantitative Benefits
Slide 73
Calculating Test Value
Slide 74
• Defect Injection Rate
  To calculate it, you need:
  – The total number of defects introduced into an application
  – When each defect was introduced (using Root Cause Analysis)
  – The number of defects introduced in each phase of the SDLC

  Calculate the defect injection rate for a phase as:

  Defect Injection Rate = (# of defects introduced in the phase) / (total defects)
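Using the per-phase counts from the slides' 100-defect example, the calculation looks like this in Python:

```python
# Defect injection rate per phase, using the 100-defect example from the
# slides (30/30/30/10 across Requirements/Design/Code/Test).
introduced = {"Requirements": 30, "Design": 30, "Code": 30, "Test": 10}
total_defects = sum(introduced.values())

for phase, count in introduced.items():
    print(f"{phase}: {count / total_defects:.0%}")
```

Note that Requirements plus Design account for 60% of the injected defects, consistent with the 60% Rule cited earlier.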
Key Metrics to Calculate Test Value
Slide 75
• Defect Repair Cost
  To calculate it, you need:
  – The hourly rate of each resource involved in the repair (PM, BA, Developer, Tester, etc.)
  – The defect repair time for each resource, by phase of the SDLC
  – The number of defects repaired in each phase of the SDLC

  Calculate the defect repair costs as shown in the example*:

  *Data in the example was drawn from [4].
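The example table itself is not reproduced here, but the arithmetic it implies can be sketched. The rates, hours, and defect count below are hypothetical placeholders, not the NIST data [4]:

```python
# Illustrative sketch only: the rates, hours, and defect count below are
# hypothetical placeholders, not the NIST data the slides reference [4].
hourly_rate = {"BA": 85.0, "Developer": 100.0, "Tester": 75.0}   # $/hour
repair_hours = {"BA": 1.0, "Developer": 4.0, "Tester": 2.0}      # hours per defect
defects_repaired = 13                                            # in this phase

cost_per_defect = sum(hourly_rate[r] * repair_hours[r] for r in hourly_rate)
phase_repair_cost = cost_per_defect * defects_repaired
print(f"Repair cost for the phase: ${phase_repair_cost:,.2f}")
```

Summing rate-times-hours across every involved resource gives a per-defect cost for the phase; multiplying by the defects repaired in that phase gives the phase's total repair cost.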
Key Metrics to Calculate Test Value
Slide 76
• Test Effectiveness
  To calculate it, you need:
  – The number of defects found during testing
  – The number of defects found in production (use a “warranty period”)

  Calculate test effectiveness as:

  Test Effectiveness = (# of defects found in test) / (# of defects found in test + # of defects found in production)
Key Metrics to Calculate Test Value
Slide 77
ROI Example
Slide 78
ROI Example
Slide 79
Established a test team made up of:
• 1 test lead
• 1 test analyst
• 1 test consultant

Total 1st-year investment: $237,500
ROI Case Study
Slide 80
Conducted 2 projects in the first 8 months
Realized $173,000 savings in project-over-project defect repair costs
Calculated savings do not include other benefits such as:
• Reduced production downtime
• Reduced maintenance costs
• Increased customer satisfaction & trust
Anticipated 46% ROI at the end of year 1
ROI Case Study
Slide 81
References
1. Bullock, James. “Calculating the Value of Testing.” Software Testing & Quality Engineering: May/June, 2000. <www.stickyminds.com/>
2. Jones, Capers. “Software Cost Estimating Methods for Large Projects.” CrossTalk: April, 2005. <www.stsc.hill.af.mil/crosstalk/2005/04/0504Jones.html>
3. Dabney, JB. “Return on Investment of Independent Verification and Validation Study Preliminary Phase 2B Report.” Fairmont, WV: NASA IV&V Facility, 2003. <sarpresults.ivv.nasa.gov/ViewResearch/289/24.jsp>
4. US Dept. of Commerce, National Institute of Standards & Technology (NIST). “Planning Report 02-3: The Economic Impacts of Inadequate Infrastructure for Software Testing.” Technology Program Office, Strategic Planning & Economic Analysis Group. May, 2002. <www.nist.gov/director/prog-ofc/report02-3.pdf>
Slide 82
Q & A