Post on 04-Jul-2015
+
how to avoid drastic software process change (using stochastic stability)
WVU: Tim Menzies, Steve Williams, Ous Elwaras. USC: Barry Boehm. JPL: Jairus Hihn.
Apr 6, 2009
+
this talk
background
digression
internal vs drastic changes
what space do we explore?
what is interesting/different here?
results (on 4 projects)
related work
conclusion & future work
questions? comments?
2
+stochastic stability
“For all is but a woven web of guesses.” -- Xenophanes (570 – 480 BCE)
Seek what holds true over the space of all guesses. Surprisingly, happily, such stable conclusions exist.
Bad idea for: The safety-critical guidance system of a manned rocket.
Good idea for: Exploring the myriad space of possibilities associated with
software project management.
4
+stochastic stability and option exploration
Software project managers have more options than they think.
It is possible (even useful) to push back against drastic change.
Q: can we find local options that out-perform drastic change? A: yes, we can.
5
Fire people
Deliver late
Do it all again, in ADA
Adjust current project options
+expectation management
We explore models built from real project data.
We perform experiments with models of software projects, not experiments with actual projects.
Such experiments are hard to do. Gone are the days of Victor Basili-style SEL experimentation, where the researchers could tell the project what to do. Software developers are now more aggressive in selecting their own methods.
We hope to apply this to real “lab rats” soon. Meanwhile, we sharpen our tools and publicize our results to date (see below).
6
+timm = ?
8
+timm = nerd hippy (type 3)
9
+timm = nerd hippy (type 3)
12
Type 1. Habitat: Haight-Ashbury, San Francisco. Examples: happily, very few. Goals: “Goals are a construct, man. Free your mind!”
Type 2. Habitat: MIT, Boston. Example: Richard Stallman. Goals: lecturing you on how to do it better.
Type 3. Habitat: (a) Berkeley (b) Portland (c) Mum’s living room, Helsinki. Example: Linus Torvalds. Goals: finding out how we can better build tools, together.
+hippies share (the “PROMISE” project)
Repeatable, improvable, (?refutable) software engineering experiments. “Put up or shut up”: submit the paper AND the data.
Activities:
• Annual conference: this year, co-located with ICSE
• Journal special issues: 2008, 2009, Empirical Software Engineering
• On-line repository: http://promisedata.org/data (contributions welcome!)
13
+plays well with others
You have data? Ok then…
• With NASA IV&V: SE research chair 2001-2008, predicting software defects
• With Dan Port: ASE’08: software process models to assess agile programming
• With Andrian Marcus: ICSM’08: SEVERIS, automatic audits for text reports of software bugs; ICSM’09 (submitted): incorporating user feedback for better concept location
• With Barry Boehm: see below
• With Jamie Andrews: TSE’09 (submitted): genetic algorithms to design test cases that maximize code coverage
14
+plays well with others
15
+Plays well with others (but not as well as some)
16
+internal vs drastic changes
Internal changes: within the space of current project options
Drastic change: cry havoc and let slip the dogs of war
21
Internal choices
Drastic changes
Can internal choices out-perform drastic change?
+ 23 Estimates = model(P, T)
P = project
T = tunings
G = goals
+ 26
Estimates = model(P, T)
P = project; T = tunings; G = goals
Note: controllability assumption
Some project options:
• cplx, data, docu, pvol, rely, ruse, stor, time: increase effort
• acap, apex, ltex, pcap, pcon, plex, sced, site, tool: decrease effort
Some tuning options: ranges seen in 161 projects, learned via regression [Boehm 2000]
An objective function:
• Find the least p from P that reduces effort (E), defects (D), and time to complete in months (M)
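This setup can be sketched as a Monte Carlo loop. The attribute names below are from the slides, but the effort/defects/months formulas and rating ranges are toy stand-ins for illustration, not Boehm's actual models:

```python
import random

random.seed(0)

# Hypothetical stand-ins for the effort (E), defect (D), and months (M)
# models; toy formulas for illustration, not Boehm's real equations.
def effort(p):  return 1.7 * sum(p.values())
def defects(p): return sum(v * v for v in p.values())
def months(p):  return 3.0 * sum(p.values()) ** 0.5

ATTRIBUTES = ["rely", "cplx", "acap", "pcap", "tool", "sced"]  # assumed subset

def random_project():
    # Each project option ranges over its legal ratings (here: 1..6).
    return {a: random.randint(1, 6) for a in ATTRIBUTES}

def score(p):
    # The objective: lower combined effort, defects, and months is better.
    return effort(p) + defects(p) + months(p)

# Explore the space of guesses: sample Estimates = model(P, T) many times.
samples = [score(random_project()) for _ in range(1000)]
print(min(samples), max(samples))
```

The spread between the best and worst sampled scores is the space the search later exploits.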
+what is interesting/different here?
It is possible to predict software development effort [Boehm81, Chulani99]
Software development models can be used to debate trade-offs between different management options: [Boehm00], and many others besides.
Such decision making need not wait on detailed local data domain collection [Fenton08, Menzies08]
AI is useful for software engineering
AI tools can explore and rank more options than humans [Menzies00], and many more besides
AI tools might be better than standard methods (many papers).
The options found by the AI tools are better than (at least some) management repair actions (this paper)
28
+what is interesting/different here?
30
e = b + 0.01 * sum(scale factors)
Effort = a * KLOC^e * prod(effort multipliers)
Defaults: <a,b> = <2.94, 0.91>
Local calibration [Boehm81]: tune <a,b> using local data
Bayesian calibration [Chulani99]: combine expert intuition with historical data
[Bar chart: relative impact above lowest value (scale 0.00 to 6.00) for each attribute: ruse, plex, data, ltex, sced, stor, pvol, tool, apex, docu, site, rely, pcon, time, pcap, acap, cplx, prec, pmat, resl, flex, team]
Estimates are within 30% of actual, 69% of the time
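The two equations translate directly to code. The <a, b> defaults are from the slide; the scale-factor and multiplier values below are illustrative only (real projects take them from the published COCOMO II rating tables):

```python
# The COCOMO equations from the slide, with the default (pre-calibration)
# tunings <a, b> = <2.94, 0.91>.
A, B = 2.94, 0.91

def cocomo_effort(kloc, scale_factors, effort_multipliers):
    e = B + 0.01 * sum(scale_factors)      # e = b + 0.01 * sum(scalers)
    effort = A * kloc ** e                 # Effort = a * KLOC^e ...
    for em in effort_multipliers:          # ... * prod(multipliers)
        effort *= em
    return effort  # person-months

# Illustrative inputs only, not real rating-table values.
pm = cocomo_effort(100, [3.72, 3.04, 4.24, 4.38, 4.68], [1.0, 1.1, 0.87])
print(round(pm, 1))
```

Local or Bayesian calibration would replace A and B; the point of the talk is what can be done when no such local data exists.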
+what is interesting/different here?
32
Boehm’s value-based SE challenge:
• Most SE techniques are “value-neutral” (a euphemism for “useless”)
• Tune recommendations and process decisions to the particulars of the company
e.g. [Huang06]: mapped a business into Boehm’s models; developed a “risk exposure” measure combining (a) racing delivery to market and (b) delivered software defects. Ran two scenarios.
+what is interesting/different here?
34
Fenton07: “....much of the current software metrics research is inherently irrelevant to the industrial mix ... any software metrics program that depends on some extensive metrics collection is doomed to failure.”
e.g. after 26 years, Boehm collected fewer than 200 sample projects for the COCOMO effort database
Yet another victim of the data drought
+what is interesting/different here?
35
For project options P and internal model tunings T: Estimates = model(P, T)
Tuning uses local data to constrain T
• e.g. Boehm’s local calibration constrains <a,b>
If T’s variance dominates P’s variance, then you must tune.
But what if P’s variance dominates?
Then control estimates by controlling P:
• Keep “t” random (no local data for tuning)
• Find the smallest “p” from P (random project) that most changes estimates
[Menzies08]:
• var(P) dominates in Boehm’s COCOMO models
• just changing P yields estimates similar to standard methods
• local data collection is useful, not mandatory
UAI finds “p” in Boehm’s effort/time/defect predictors.
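The variance argument can be sketched with a small Monte Carlo experiment. The model shape follows the slide's Estimates = model(P, T); the tuning ranges and project-size range below are assumptions for illustration, not measured values:

```python
import random
import statistics

random.seed(0)

# A toy model in the COCOMO shape, effort = a * KLOC^b, used to ask the
# slide's question: which dominates the estimate, the tuning variance
# (over <a, b>) or the project variance (over KLOC)?
def model(kloc, a, b):
    return a * kloc ** b

# Spread from T alone: fix one project, vary the tunings (assumed ranges).
t_estimates = [model(100, random.uniform(2.2, 3.7), random.uniform(0.88, 1.09))
               for _ in range(2000)]
# Spread from P alone: fix the default tuning, vary the project size.
p_estimates = [model(random.uniform(20, 500), 2.94, 0.91)
               for _ in range(2000)]

spread_t = statistics.pstdev(t_estimates)
spread_p = statistics.pstdev(p_estimates)
# When spread_p dominates, controlling P controls the estimate,
# even with "t" left random.
print(spread_p > spread_t)
```

Under these assumed ranges the project spread dominates, which is the [Menzies08] observation in miniature.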
+what is interesting/different here?
36
Mark Harman: search-based SE
Many management decisions are over-constrained: no solution satisfies all users, all criteria. E.g. better, faster, cheaper: pick any two.
Many tools:
• Data mining to learn defect predictors (see Jan IEEE TSE ’07)
• Genetic algorithms for test case generation (recall work with Andrews)
• Simulated annealing for software process planning (my ASE’07 paper)
• AI search for project planning (see below)
• Abduction (a.k.a. partial evaluation + constraints)
+what is interesting/different here?
37
The “one slide” rule
+what is interesting/different here?
38
At each “x”, AI search (*) finds and takes next best decision.
After each decision, run Boehm’s models 100 times (for the as-yet undecided, select their values at random).
Prune spurious final decisions with a back-select.
+what is interesting/different here?
39
Search methods = simulated annealing, beam, issamp, keys, a-star, maxwalksat, dfid, LDS…
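The three steps (greedy forward select, a 100-run Monte Carlo per candidate decision, then a back-select) can be sketched as follows. Everything here is a hypothetical miniature: six binary options and a toy scoring model standing in for Boehm's effort/time/defect models:

```python
import random

random.seed(1)

# Hypothetical simplified setting: six two-valued project decisions,
# scored by a toy model (lower is better).
OPTIONS = ["rely", "cplx", "acap", "pcap", "tool", "sced"]

def model(project):
    return sum((i + 1) * project[o] for i, o in enumerate(OPTIONS))

def simulate(decided, runs=100):
    # Run the model 100 times; as-yet undecided options get random values.
    total = 0.0
    for _ in range(runs):
        project = {o: decided.get(o, random.choice((0, 1))) for o in OPTIONS}
        total += model(project)
    return total / runs

def forward_select():
    decided = {}
    while len(decided) < len(OPTIONS):
        # At each step, find and take the next best single decision.
        candidates = [(o, v) for o in OPTIONS if o not in decided for v in (0, 1)]
        o, v = min(candidates, key=lambda c: simulate({**decided, c[0]: c[1]}))
        decided[o] = v
    return decided

def back_select(decided):
    # Prune spurious final decisions: drop any whose removal does not hurt.
    for o in list(decided):
        trimmed = {k: v for k, v in decided.items() if k != o}
        if simulate(trimmed) <= simulate(decided):
            decided = trimmed
    return decided

plan = back_select(forward_select())
print(plan)
```

The surviving decisions form the small "p" the talk recommends; swapping `min` and `simulate` for other strategies gives the listed search variants.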
+what is interesting/different here?
40
IMPORTANT: this representation allows managers to perform trade-off analysis on our recommendations
+what is interesting/different here?
41
Mark Harman: “Solution robustness may be as important as solution functionality. For example, it may be better to locate an area of the search space that is rich in fit solutions, rather than identifying an even better solution that is surrounded by a set of far less fit solutions.”
Conventional optimizers: (1) one solution; (2) constrain all choices.
AI search: (1) N solutions; (2) provides neighborhood information.
+what is interesting/different here?
42
• Other work on uncertainty in SE estimates: typically Bayes nets; usually a single goal:
• defects [Fenton08]
• effort [Pendharkar05]
• Little (?no) trade-off analysis to understand the neighborhood of a solution, or the minimal solution
• We explore multiple, possibly competing, goals: better AND faster AND cheaper
• We offer neighborhood solutions
• We offer trade-offs between solution size and effectiveness
• We can work in very high-dimensional spaces
+what is interesting/different here?
43
Even advanced visualization methods fail after 5-10 dimensions
+what is interesting/different here?
44
Our tools are like optimizers that make no assumptions of linearity, continuity, single maxima, or even smoothness.
Traditional gradient descent optimizers assume smooth surfaces. But what of local maxima? And what if our shapes are not smooth?
[Baker07]: learned <a,b> for Boehm’s model on NASA data 1000 times, picking 90% of the data at random. [Scatter plot of the learned a, b values]
[Coarfo00] and [Gu97]: AI methods are 100 times faster than (e.g.) integer programming for this kind of task.
+what is interesting/different here?
46
Generate the consequences of drastic change:
• Monte Carlo simulation of the above
Contrast with a good selection of internal choices: control estimates by controlling P:
• Keep “t” random (no local data for tuning)
• Find the smallest “p” from P (random project) that most improves the estimated scores
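A toy version of this contrast, with made-up weights: drastic change re-rolls every option at random, while the internal strategy pins only a few high-impact options and keeps the rest random:

```python
import random
import statistics

random.seed(7)

# Toy contrast between the two strategies. A project is ten binary options;
# score is a weighted sum (lower is better). Weights are made up.
WEIGHTS = [5, 4, 4, 3, 3, 2, 2, 1, 1, 1]

def score(opts):
    return sum(w * o for w, o in zip(WEIGHTS, opts))

def drastic():
    # "Cry havoc": discard the project and re-roll every option at random.
    return score([random.choice((0, 1)) for _ in WEIGHTS])

def internal(fixed):
    # Internal change: pin only a few high-impact options, keep "t" random.
    return score([fixed.get(i, random.choice((0, 1)))
                  for i in range(len(WEIGHTS))])

# Pin the three biggest cost drivers to their cheap setting.
fixed = {0: 0, 1: 0, 2: 0}
drastic_scores = [drastic() for _ in range(2000)]
internal_scores = [internal(fixed) for _ in range(2000)]

print(statistics.mean(internal_scores) < statistics.mean(drastic_scores))
```

In this miniature, a few well-chosen internal choices beat the average drastic re-roll, which is the shape of the result claimed for the four projects.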
+
this talk
background
digression
internal vs drastic changes
what space do we explore?
what is interesting/different here?
results (on 4 projects)
related work
conclusion & future work
questions? comments?
47
+Related Work
Other parametric cost models: PRICE-S, SLIM, SEER-SEM
• Not open source (have secrets); possibly over-elaborated
• May generate a range of estimates, but offer no search to find better options
Other instance-based effort tools: ANGEL (Shepperd’s nearest neighbor)
• No parametric form; no way to describe the shape of the theory, no way to explore perturbations of that theory
Other search-based SE: focuses on a few tools: SA, GA, tabu (we started with SA, but moved on)
• Exciting new generation of tools: constraint satisfaction algorithms, stochastic SAT solvers
Other COCOMO work: very focused on regression methods & tuning
• Problems with data drought, tuning variance, performance variance
• Discrete methods: insightful? fun!
55
+Conclusion
Software project managers have more options than they think.
It is possible (even useful) to push back against drastic change
Q: can we find local options that out-perform drastic change? A: yes, we can
57
Fire people
Deliver late
Do it all again, in ADA
Adjust current project options
Use models whose estimates are dominated by project variance;
Estimates = model(P, T)
Control estimates via P:
• Keep “t” random (no need for local tuning data)
• Using AI, find the fewest parts of P that most change estimates
+Future work
Constraint logic programming
• We haven’t shown the dark side of our models: procedural kludges regarding conditional co-dependencies
• Much recent interest in constrained regression: users offer hints on what kinds of theories they’d accept; these hints bias the search algorithms
• Is CLP a general/useful framework for adding “hinting” to our current tools?
58
+my research question: why are humans so successful?
The world is a very complex place: how do dumb humans get by?
How did dummies like me (?and you) build things as complex as: The internet?
The international and domestic airline network?
The Apollo moon rocket? (400K parts, 2K contractors, worked flawlessly)
60
+why do dummies get by?
Answer #1: we don’t get by. Computers crash; economic systems fail. Sure, we get some failures. But why don’t we fail all the time?
Answer #2: some of us aren’t so dumb: Kepler, Descartes, Newton, Planck. So few of them, too many “Menzies”.
Answer #3: the world is not as complex as it appears.
• Key variables: a few things set the rest
• Most possible differences aren’t
• More regularities than you might expect
61
Distribution of change-prone classes: KOffice, Mozilla
Zipf’s law: reuse frequency of library functions: Linux, Sun OS, Mac OS
+applications of “keys”
Rewrite all your algorithms, assuming keys:
• Reduce complex problems to simpler ones: simpler, faster code
• Reduce complex answers to simpler ones: shorter, clearer theories
62
+what is interesting/different here?
63
In effort estimation, useful for bridging expert vs model-based estimation methods.
Supports Jorgensen’s Expert Judgment Best Practices:
1. evaluate estimation accuracy
2. avoid conflicting estimation goals
3. ask the estimators to justify and criticize their estimates
4. avoid irrelevant and unreliable estimation information
5. use documented data from previous development tasks
6. find estimation experts with relevant domain background
7. estimate top-down and bottom-up
8. use estimation checklists
9. combine estimates of many experts and estimation strategies
10. assess the uncertainty of the estimate
11. provide feedback on estimation accuracy
12. provide estimation training opportunities
(M. Jorgensen. A review of studies on expert estimation of software development effort. Journal of Systems and Software, 2004.)
+what is interesting/different here?
64
Cross-val
+what is interesting/different here?
65
Feature selection, Instance-based learning
+what is interesting/different here?
66
Data mining
+what is interesting/different here?
67
Ensemble learning
+what is interesting/different here?
68
Report variance in cross-validation