Dangerous decisions - Assumption traps
Copyright 2011 STAG Software Private Limited www.stagsoftware.com
Abstract

We collect and analyze metrics to make decisions without understanding the underlying assumptions, and therefore make potentially dangerous decisions. Some of the aspects we track are (1) quality, (2) progress, and (3) adequacy/effectiveness.

Typical measurements include defect rates, distributions, density, coverage, distribution of test cases by test entity, test cases executed, and so on. Though each of these seems logical, my contention is that we may not remember the assumptions underlying them.

For example, does a low defect rate mean a good product or poor testing? Does high code coverage mean great testing? How do we know what the desired density is? Also, some of the measurements focus on low-level activity (for example, test case execution progress), which can be useless because we lose sight of the overall goal (i.e., how is quality progressing?).

Measurements like test case immunity, test case growth, and quality growth are possibly better indicators, ensuring that we stay focused on the goal of effective testing. The intent is to understand the assumptions, ensure we do not forget them, and also look at interesting measurements that could be better indicators.
Some typical measurements
Design
• Code coverage
• Requirement coverage
• # Test cases (TC) categorized by attributes

Execution
• Test productivity
• TC execution rate

Assessment
• Pass/Fail rate
• Defect arrival rate
• Defect distribution
• Defect density
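As a concrete illustration of how these typical measurements are usually computed, here is a minimal sketch over invented test-run data. The field layout, test case IDs, requirement names, and numbers are all assumptions for demonstration only, not from the deck:

```python
# Hypothetical test-run data: (id, requirement, executed, passed)
test_cases = [
    ("TC1", "R1", True, True),
    ("TC2", "R1", True, False),
    ("TC3", "R2", True, True),
    ("TC4", "R3", False, False),
]
all_requirements = {"R1", "R2", "R3", "R4"}
defects_found = 5
kloc = 2.0  # assumed size: thousand lines of code under test

# Requirement coverage: fraction of requirements touched by an executed TC
covered = {req for (_, req, executed, _) in test_cases if executed}
requirement_coverage = len(covered) / len(all_requirements)

# TC execution rate and pass rate
executed = [tc for tc in test_cases if tc[2]]
execution_rate = len(executed) / len(test_cases)
pass_rate = sum(1 for tc in executed if tc[3]) / len(executed)

# Defect density: defects per KLOC
defect_density = defects_found / kloc

print(requirement_coverage)  # 0.5  (2 of 4 requirements touched)
print(execution_rate)        # 0.75
print(pass_rate)             # ~0.67 (2 of 3 executed TCs passed)
print(defect_density)        # 2.5
```

Note that every number here looks objective, yet each one silently carries the assumptions the next slides question.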
Assumptions / Dangers
Design metrics: code coverage, requirement coverage, # test cases (TC) categorized by attributes.

• Code coverage: the intent is to ensure that the behaviors of all LOC have indeed been examined. Danger: only coded behaviors are examined; it is assumed that *all* behaviors have been coded.
• Requirement coverage: the intent is to ensure that all requirements will indeed be examined. Danger: what if we have just *one TC* per requirement? A necessary condition, but not a sufficient one.
• # Test cases by attributes: the intent is to have just enough test cases to ensure that our examination will be successful; we categorize these in multiple ways, e.g. (1) by feature, (2) by priority. Danger: the assumption is that *many* is good, but how do we measure the quality of a TC, i.e. its *defect-type yielding* ability?
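The "one TC per requirement" trap above can be made concrete in a few lines: requirement coverage reads 100% even when every requirement has a single, shallow test. The requirement names, counts, and the depth threshold are illustrative assumptions:

```python
# One test case per requirement (assumed data)
tc_by_requirement = {"R1": 1, "R2": 1, "R3": 1}

# Coverage counts a requirement as "covered" if it has any TC at all
coverage = sum(1 for n in tc_by_requirement.values() if n > 0) / len(tc_by_requirement)
print(coverage)  # 1.0 -- looks perfect

# A depth check exposes the gap: flag requirements whose TC count falls
# below an (assumed) minimum needed to probe different defect types.
MIN_TCS_PER_REQ = 3  # purely illustrative threshold
shallow = [req for req, n in tc_by_requirement.items() if n < MIN_TCS_PER_REQ]
print(shallow)  # ['R1', 'R2', 'R3'] -- every requirement is under-tested
```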
Assumptions / Dangers
Execution metrics: test productivity, TC execution rate.

• We focus on the test activity here, but how are we doing with respect to the intended goal? The assumption is that the more quickly we execute, the closer we are to the goal. But what is the *yield*? Executing *all* TCs gives us comfort, but is it effective?
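The activity-versus-yield distinction can be sketched with two invented teams; all numbers are assumptions chosen to show how a perfect execution rate can coexist with a poor defect yield:

```python
def execution_rate(executed, total):
    # Activity metric: fraction of planned TCs that were run
    return executed / total

def yield_rate(defects_found, executed):
    # Outcome metric: defects uncovered per executed test case
    return defects_found / executed if executed else 0.0

# Team A runs everything quickly but uncovers little;
# Team B runs fewer TCs, but each one probes harder.
print(execution_rate(100, 100), yield_rate(2, 100))  # 1.0 0.02
print(execution_rate(60, 100), yield_rate(12, 60))   # 0.6 0.2
```

Measured by activity alone, Team A looks better; measured by yield, Team B is uncovering ten times as many defects per test executed.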
Assumptions / Dangers
Assessment metrics: pass/fail rate, defect arrival rate, defect distribution, defect density.

• Pass/Fail rate: the assumption is that *more failures* means high risk to delivery, i.e. that the *number of failures* is indicative of risk. Should we not analyze from a defect-type perspective instead?
• Defect arrival rate: the higher the rate of arrival of defects, the worse it seems; the lower, the better it seems. The assumption is that *defect acceleration* is what matters, which presupposes we have the right goal, i.e. that the test cases are complete. How sure are we about this?
• Defect distribution: distribution by severity says the more severe the defect, the worse it is. But do we know what a *good* distribution looks like?
• Defect density: the denser the defect *clump*, the higher the risk, which seems logical. But do we know what kind of clump it is: a simple 'fungal infection' or a 'cancerous' one? A distribution or generalized metric (e.g. density) needs a clear UCL/LCL before we can draw an inference.
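A small sketch of the defect-type perspective the slide argues for: instead of a raw count, break defects down by type and look at where they cluster. The defect IDs, type names, and counts are invented for illustration:

```python
from collections import Counter

# Assumed defect log: (id, defect type)
defects = [
    ("D1", "boundary"), ("D2", "boundary"), ("D3", "timing"),
    ("D4", "data-validation"), ("D5", "timing"), ("D6", "timing"),
]

by_type = Counter(dtype for _, dtype in defects)
print(by_type.most_common())
# [('timing', 3), ('boundary', 2), ('data-validation', 1)]

# A raw count of 6 says only "six failures"; the type breakdown shows a
# cluster of timing defects, i.e. what kind of clump we are looking at.
dominant, n = by_type.most_common(1)[0]
print(dominant, n / len(defects))  # timing 0.5
```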
Think...
• Shift from measuring pure activities to outcomes/goals.
• Do we measure "Test Case Immunity", i.e. over time, which of the test cases have lost the power to uncover defects?
• Would test case growth (suitably qualified) be an interesting metric, indicating that we are constantly expanding the net?
• Rather than using pure defect metrics as indicators of quality, should we not shift to defect-type metrics to give us an objective sense of quality?
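One way the "Test Case Immunity" idea above could be operationalized is to track, per test case, which runs uncovered a defect and flag TCs that have gone silent. The history data and the window size are assumptions for illustration only:

```python
# Assumed per-run defect history: 1 = this TC uncovered a defect in that run
defect_history = {
    "TC1": [1, 1, 0, 0, 0, 0],  # found defects early, silent since
    "TC2": [0, 1, 0, 1, 0, 1],  # still yielding
    "TC3": [0, 0, 0, 0, 0, 0],  # never yielded anything
}

IMMUNITY_WINDOW = 4  # assumed: silent for the last 4 runs => "immune"

immune = [tc for tc, runs in defect_history.items()
          if sum(runs[-IMMUNITY_WINDOW:]) == 0]
print(immune)  # ['TC1', 'TC3'] -- candidates for retirement or redesign
```

Test cases flagged as immune are exactly where the net has stopped expanding; they are candidates for redesign rather than blind re-execution.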
Thank you!
Follow us @stagsoft