Software Effort Estimation: Planning to Meet Schedule. Lewis Sykalski, 5/01/2010.
How Managers View Software Effort Estimation
• Nebulous
• Vague
• No correlation to reality
• Why bother?
Effects of Bad/No Estimation
• Underestimation can lead to schedule/cost over-runs:
– Diminished confidence from upper management
– Customer upset
– Can affect schedules/costs downstream
• Overestimation can lead to schedule/cost under-runs:
– Adversely affects competitive positioning
– De-motivator for employees
– More profitable opportunities passed over
Effect of Estimation
• Clearer expectation of level of effort
• Allows the SPM to better allocate resources
• Helps the SPM account for change
• Shows upper-level management that the manager has a plan
Abstract
Most managers today are unaware of established software effort estimation methodologies, or they fail to account for unforeseen consequences when using one. This paper attempts to reconcile this by surveying several effort estimation approaches and gauging both the utility and the inherent pitfalls of each. Additionally, this paper presents a refined method for software effort estimation based on expert judgment and describes the benefits of using it.
Current Methodologies
• Expert Judgment
• Consensus-based Estimation
• Price-to-Win
• Analogy Costing
• Function Point Analysis
• Algorithmic Models
– COCOMO
– SLIM
– PriceS
Expert Judgment
• Consulting one or more experts
• Experts leverage their own past experiences or methods
• Typically arrive at a task duration; sometimes at a size, which can be converted to effort with an assumed productivity
• Sometimes averaging occurs between multiple experts' estimates to smooth out results
Expert Judgment Utility
Advantages:
• Not much time/effort required
• Not conceptually difficult
Disadvantages:
• Not repeatable or explicit
• No consistent rationale for estimates
• Prone to error and very subjective
• If the task does not match the size of the experts' historical experiences, the estimate can be way off base
• Experts with the right experience can be hard to find
Consensus-based Estimation
• Logical extension of expert judgment
• Multiple experts/developers seek to reach consensus on an estimate
• Wideband Delphi is most popular:
– Short discussion to define tasks
– Secret ballots
– Any deviations must be resolved by discussion & revote
• Planning Poker (Agile method):
– A card representing the duration of a task is shown
– Resolution follows until consensus is reached
Consensus-Based Estimation Utility
Advantages:
• Same advantages as parent (Expert Judgment)
• Experts have discussed and agreed on the estimate
• Hidden tasks are often discovered
Disadvantages:
• Same disadvantages as parent (Expert Judgment)
• Amount of time required to reach consensus
• "Anchoring": when the process is loosened and someone affects the group's predispositions (e.g. "It can't be more than 20 days…")
Price-to-Win
• The estimate is set at:
– whatever the optimum value is in order to win the contract, or
– whatever funds or time the customer has available
Price-to-Win Utility
Advantages:
• Win the contract
• Effort could contract to fill the difference?
• Not a lot of time required
Disadvantages:
• Considered poor practice
• Large discrepancies between effort anticipated and effort required might result in severe overruns
• Quality of the product may suffer in trying to reach the deadline
• Profit loss?
Analogy Costing
• Estimates effort by analogizing to past project(s)
• ∆Effort = difference of the new project from past project(s) in terms of requirements, reuse opportunity, process, etc.

Effort_new = Effort_old + ∆Effort
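The adjustment above can be sketched in a few lines; all the delta names and hour values below are hypothetical illustrations, not data from the paper:

```python
# Analogy costing sketch: start from a past project's actual effort and
# apply signed effort deltas (Effort_new = Effort_old + sum of deltas).

def analogy_estimate(effort_old, deltas):
    """Return the new-project estimate in hours given signed deltas."""
    return effort_old + sum(deltas.values())

# Hypothetical deltas relative to the past project, in hours:
deltas = {
    "additional requirements": 120.0,  # new features the old project lacked
    "reuse opportunity": -80.0,        # code carried over unchanged
    "process improvements": -20.0,     # better tooling this time around
}
print(analogy_estimate(1000.0, deltas))  # 1020.0
```

The subjectivity noted in the disadvantages lives entirely in choosing those delta values.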
Analogy Costing Utility
Advantages:
• Grounded in past history
• Full effort need not be estimated, only the delta
• Not a lot of time required (just meaningful analysis of deltas)
Disadvantages:
• Meaningful data may not be present
• Subjectivity in the deltas
• Past historical data may not be representative (innovation efforts)
• Abnormal conditions in past projects (that the estimator is unaware of) may throw off results
Function Point Analysis (Albrecht)
• A function point represents a piece of functionality of the program:
– User input
– User output
– Inquiry (interactive inputs requiring a response)
– External (files shared or used externally)
– Internal (files shared or used internally)
Function Point Analysis (Albrecht)
UFC = Σ(i=1..5) Σ(j=1..3) N_ij * W_ij

where,
• i = type of FP (user input, output, etc.)
• j = complexity level of the FP (1-3)
• N_ij is the number of FPs of type ij
• W_ij is the weight of the FPs of type ij

LOC = a * UFC^b

where a & b come from historical data/curve fitting

Effort = LOC * Productivity
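The three formulas chain together as sketched below; the counts, weights, a, b, and productivity values are hypothetical placeholders, not Albrecht's published tables:

```python
# Function point sketch: UFC = sum over 5 FP types (i) and 3 complexity
# levels (j) of N_ij * W_ij, then LOC = a * UFC**b,
# then Effort = LOC * productivity (hours per LOC).

def ufc(counts, weights):
    """counts[i][j] and weights[i][j]: i = FP type, j = complexity level."""
    return sum(n * w
               for row_n, row_w in zip(counts, weights)
               for n, w in zip(row_n, row_w))

# Hypothetical project: N_ij counts and W_ij weights, 5 types x 3 levels.
counts  = [[3, 2, 1], [4, 1, 0], [2, 0, 0], [1, 1, 0], [5, 2, 1]]
weights = [[3, 4, 6], [4, 5, 7], [3, 4, 6], [5, 7, 10], [7, 10, 15]]

u = ufc(counts, weights)   # unadjusted function count
loc = 30.0 * u ** 1.05     # LOC = a * UFC^b (a, b assumed from curve fitting)
effort = loc * 0.1         # hours, with an assumed hours-per-LOC productivity
```

In practice a and b would come from the organization's own historical data, which is where much of the calibration effort goes.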
Function Point Analysis Utility
Advantages:
• Can be formulated at requirements time, very early in the software lifecycle
• Provides good traceability in mapping tasks to effort
• No subjectivity is required as to task duration
• Less prone to subjective bias than expert-judgment-based techniques
Disadvantages:
• Detailed requirements & consensus on complexity are required
• Very nebulous
• Time involved to arrive at such an estimate
• Requires a good handle on the requirements up front (prone to requirements creep/hidden requirements)
Algorithmic Models
Effort = F(C1, C2, …, Cn)

where,
• {C1, C2, …, Cn} denote cost factors
• F represents the function used
COCOMO (Boehm)
• Regression formula
• Historical data inputs
• Current project characteristics (nominal 1.0):
– Product: reliability, documentation needs, etc.
– Computer: performance constraints, volatility, etc.
– Personnel: capability, familiarity with language, etc.
– Project: co-location, tool productivity gains, etc.
• Project categorization (different historical data):
– Organic: small team / flexible process
– Embedded: large team / tight process
– Semi-detached: somewhere in between
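A minimal sketch of the Basic COCOMO regression, using Boehm's published coefficients for the three categories (the KLOC value is a hypothetical input; the full model would further multiply in the cost-driver ratings listed above):

```python
# Basic COCOMO: Effort (person-months) = a * KLOC**b,
# where a and b depend on the project category.

COEFFICIENTS = {
    "organic":       (2.4, 1.05),  # small team, flexible process
    "semi-detached": (3.0, 1.12),  # somewhere in between
    "embedded":      (3.6, 1.20),  # large team, tight process
}

def cocomo_effort(kloc, category):
    a, b = COEFFICIENTS[category]
    return a * kloc ** b

# Hypothetical 32-KLOC organic project:
estimate = cocomo_effort(32.0, "organic")
```

Note how the same size yields very different effort across categories, which is why miscategorizing the project is one of the easiest ways to skew the result.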
SLIM (Putnam)
• Putnam spent much time doing curve fitting and came up with the following equation:

Effort = [Size / (Productivity * Time^(4/3))]^3 * B

where,
• Size is ESLOC, or effective SLOC (new + modified)
• B is a scaling factor indicative of project size
• Productivity is the Process Productivity factor
• Time is the duration of the project schedule (years)
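The equation above translates directly into code; the project inputs below are hypothetical:

```python
# SLIM (Putnam) sketch: Effort = [Size / (Productivity * Time**(4/3))]**3 * B

def slim_effort(size_esloc, productivity, time_years, b_factor):
    """Effort from effective SLOC, process productivity, schedule, and B."""
    return (size_esloc / (productivity * time_years ** (4.0 / 3.0))) ** 3 * b_factor

# Hypothetical project: 8000 ESLOC, process productivity 1000,
# a 1-year schedule, and B = 0.16.
effort = slim_effort(8000.0, 1000.0, 1.0, 0.16)
```

Because schedule appears with a 4/3 exponent inside a cubed term, compressing the timeline inflates effort dramatically, which is the model's central message.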
PriceS
• Parametric cost-estimation system
• Can accept a wide variety of inputs:
– Use cases, function points, predictive object points, functional size, SLOC, and fast function points
• Effort estimates also factor in:
– Software lifecycle process (Waterfall, Agile, etc.)
– Software language choice (Java, C++, etc.)
Algorithmic Models Utility
Advantages:
• Objective
• Repeatable results
• Historical data often represents MANY past projects
Disadvantages:
• Complex formulas are hard to comprehend
• Requires faith on the user's end
• Subjective historical data
• Subjective cost factors may throw off the equations
• May require difficult/time-consuming tailoring using the estimator's historical data
New Model: Predictive Expert Judgment
Combines the inherent simplicity of the expert judgment method with the feedback control provided for in other models.

Requires:
• Diligent tracking of actual times for past tasks
• Records of the experts' estimates for those tasks
Predictive Expert Judgment (cont.)
Steps:
• Solicit effort estimates for each task from each expert
• As tasks are completed, build a repository tracking how close each expert's estimates were to the actual times
• Weight each expert's future estimates by how well he has historically estimated
Predictive Expert Judgment Equation
E_Total = Σ(i=1..N) W_i * E_i

where,
– W_i corresponds to each expert's trust weight based on historical performance
– E_i corresponds to each expert's estimate for the current task being estimated
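The equation is a straightforward weighted sum; a minimal sketch:

```python
# Predictive expert judgment: E_Total = sum over i of W_i * E_i.

def weighted_estimate(weights, estimates):
    """Combine expert estimates using their trust weights."""
    return sum(w * e for w, e in zip(weights, estimates))

# With no historical data, equal weights reduce to a plain average:
print(weighted_estimate([0.2] * 5, [40, 20, 5, 20, 30]))  # 23.0
```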
Predictive Expert Judgment (cont.)
• W_i can be calculated by:
– a simple formula involving standard deviations
– an intermediate custom formula where one disregards some experts if they are outside a target range, or based on external factors
– a weighted variance equation
Simple Formula
W_i = (1/σ_i) / Σ(n=1..N) (1/σ_n)

where,
– W_i corresponds to each expert's trust weight based on historical performance
– N is the number of experts with historical data
– σ_i is the current expert's historical stdev
– σ_n is each expert's historical stdev
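A sketch of the simple inverse-stdev weighting; the σ values below are hypothetical:

```python
# Trust weights: W_i = (1/sigma_i) / sum over n of (1/sigma_n).
# A smaller historical stdev earns a larger weight, and weights sum to 1.

def trust_weights(stdevs):
    """Return normalized inverse-stdev trust weights, one per expert."""
    inverses = [1.0 / s for s in stdevs]
    total = sum(inverses)
    return [inv / total for inv in inverses]

# Hypothetical historical stdevs for four experts:
weights = trust_weights([5.0, 10.0, 20.0, 20.0])  # best estimator weighted most
```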
Expert Judgment Example
Expert A: 40 hours
Expert B: 20 hours
Expert C: 5 hours
Expert D: 20 hours
Expert E: 30 hours

No historical data (equal weights):

40*0.2 + 20*0.2 + 5*0.2 + 20*0.2 + 30*0.2 = 23.0 hours
Predictive Expert Judgment Example
E_Total = Σ(i=1..N) W_i * E_i

0.35*40 + 0.16*20 + 0.11*5 + 0.22*20 + 0.17*30 = 27.25 hours
Everybody counts methodology…
Predictive Expert Judgment – W/Constraints
If we had a rule where we threw out experts' estimates when their historical stdev was wildly off (> 12.0):

E_Total = Σ(i=1..N) W_i * E_i   (where σ_i < 12.0)

0.47*40 + 0.29*20 + 0.23*30 = 31.5 hours (Could be closer?)
Predictive Expert Judgment – W/Constraints (cont.)
You could alternatively tighten the standard deviation constraint to trust only the leading expert…

E_Total = E_i   (where σ_i is best)

1.0*40 = 40.0 hours
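Both constrained variants amount to filtering experts by their historical σ and renormalizing the surviving weights; a sketch with hypothetical weights, estimates, and stdevs:

```python
# Constrained predictive estimate: drop experts whose historical stdev is
# at or above a threshold, then renormalize the remaining trust weights.

def constrained_estimate(weights, estimates, stdevs, max_stdev):
    """Weighted estimate over only the experts with stdev < max_stdev."""
    kept = [(w, e) for w, e, s in zip(weights, estimates, stdevs)
            if s < max_stdev]
    total = sum(w for w, _ in kept)
    return sum(w * e for w, e in kept) / total

# Hypothetical: a loose threshold keeps two experts; tightening it until
# only one survives trusts the leading expert alone (E_Total = E_i).
loose = constrained_estimate([0.6, 0.3, 0.1], [40, 20, 5], [4.0, 9.0, 15.0], 12.0)
tight = constrained_estimate([0.6, 0.3, 0.1], [40, 20, 5], [4.0, 9.0, 15.0], 5.0)
```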
Predictive Expert Judgment – W/Constraints (cont.)
You could also adjust for deviations in the estimate (how far each expert is normally off, and in which direction):

E_Total = Σ(i=1..N) W_i * (E_i - ∆_i)

0.35*38.75 + 0.16*20 + 0.11*3.75 + 0.22*13.75 + 0.17*26.25 = 24.66 hours
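The bias-adjusted variant subtracts each expert's typical signed deviation ∆_i before weighting; a sketch with hypothetical numbers:

```python
# Bias-adjusted estimate: E_Total = sum over i of W_i * (E_i - delta_i),
# where delta_i is how far, and in which direction, expert i is normally off.

def bias_adjusted_estimate(weights, estimates, biases):
    """Weighted estimate with each expert's historical bias removed."""
    return sum(w * (e - d) for w, e, d in zip(weights, estimates, biases))

# Hypothetical: expert A normally overestimates by 2 h, expert B under by 2 h.
adjusted = bias_adjusted_estimate([0.5, 0.5], [40.0, 20.0], [2.0, -2.0])
```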
Results & Analysis
• No model can estimate the cost of software with a high degree of accuracy
• Expert Judgment was found to be just as good as algorithmic models (in 15 case studies)
• Uncertainty and Probability should be added to most models
• More historical data needs to be collected from industry