Programmatic risk management workshop (handbook)


Description: A step-by-step process of building a risk-adjusted Integrated Master Schedule (IMS)

Transcript of Programmatic risk management workshop (handbook)

Page 1: Programmatic risk management workshop (handbook)

Programmatic Risk Management:

A “not so simple” introduction to the complex but critical process of building a “credible” schedule

Program Planning and Controls Workshop, Denver, Colorado

October 6th and October 14th 2008

1/69

Programmatic Risk Management Workshop (Handbook)

Page 2: Programmatic risk management workshop (handbook)

Agenda

Duration Topic

20 Minutes Risk Management in Five Easy Pieces

15 Minutes Basic Statistics for programmatic risk management

15 Minutes Monte Carlo Simulation (MCS) theory

20 Minutes Mechanics of Microsoft Project and Risk+

15 Minutes Programmatic Risk Ranking

15 Minutes Building a Credible schedule

20 Minutes Conclusion

120 Minutes

2/69

Page 3: Programmatic risk management workshop (handbook)

When we say “Risk Management”

What do we really mean?

3/69

Page 4: Programmatic risk management workshop (handbook)

Five Easy Pieces†: The Essentials of Managing Programmatic Risk

Managing the risk to cost, schedule, and technical performance is the basis of a successful project management method.

† With apologies to Carole Eastman and Bob Rafelson for their 1970 film starring Jack Nicholson

Risk in Five Easy Pieces 4/69

Page 5: Programmatic risk management workshop (handbook)

Hope is Not a Strategy

A Strategy is the plan to successfully complete the project

If the project’s success factors, the processes that deliver them, the alternatives when they fail, and the measurement of this success are not defined in meaningful ways for both the customer and managers of the project – Hope is the only strategy left.

When General Custer was completely surrounded, his chief scout asked, “General what's our strategy?” Custer replied, “The first thing we need to do is make a note to ourselves – never get in this situation again.”

Hope is not a strategy!

Risk in Five Easy Pieces 5/69

Page 6: Programmatic risk management workshop (handbook)

No Single Point Estimate can be correct without knowing the variance

Single Point Estimates use sample data to calculate a single value (a statistic) that serves as a “best guess” for an unknown (fixed or random) population parameter

Bayesian Inference is a statistical inference where evidence or observations are used to infer the probability that a hypothesis may be true

Identifying the underlying statistical behavior of the cost and schedule parameters of the project is the first step in forecasting future behavior

Without this information and the model in which it is used, any statements about cost, schedule, and completion dates are 50/50 guesses

When estimating cost and duration for planning purposes, using Point Estimates produces the least likely result – a result with a 50/50 chance of being true.
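A small numeric illustration (made-up task durations, Python with NumPy) of why a point estimate carries well under a 50/50 chance when the underlying distributions are skewed:

```python
# Minimal sketch with assumed task data: why a single-point estimate hides the variance.
# Three tasks in series, each with a right-skewed triangular duration (optimistic, most likely, pessimistic) in days.
import numpy as np

rng = np.random.default_rng(1)
tasks = [(8, 10, 18), (4, 5, 12), (9, 12, 25)]
point_estimate = sum(ml for _, ml, _ in tasks)           # 27 days, the "single point" answer

samples = sum(rng.triangular(lo, ml, hi, 100_000) for lo, ml, hi in tasks)
print("point estimate:", point_estimate)
print("P(finish <= point estimate):", np.mean(samples <= point_estimate))   # well under 50%
print("50th / 80th percentile:", np.percentile(samples, [50, 80]))
```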

Risk in Five Easy Pieces 6/69

Page 7: Programmatic risk management workshop (handbook)

Without Integrating $, Time, and TPM you’re driving in the rearview mirror

Addressing customer satisfaction means incorporating product requirements and planned quality into the Performance Measurement Baseline to assure the true performance of the project is made visible.

Technical Performance (TPM)

Risk in Five Easy Pieces 7/69

Page 8: Programmatic risk management workshop (handbook)

Without a model for risk management, you’re driving in the dark with the headlights turned off

Risk Management means using a proven risk management process, adapting this to the project environment, and using this process for everyday decision making.

The Risk Management process to the right is used by the US DOD and differs from the PMI approach in how the process areas are arranged. The key is to understand the relationships between these areas.

Risk in Five Easy Pieces 8/69

Page 9: Programmatic risk management workshop (handbook)

Risk Communication is …

An interactive process of exchange of information and opinion among individuals, groups, and institutions; often involving multiple messages about the nature of risk or expressing concerns, opinions, or reactions to risk messages or to legal or institutional arrangements for risk management.

Bad news is not wine. It does not improve with age — Colin Powell

Risk in Five Easy Pieces 9/69

Page 10: Programmatic risk management workshop (handbook)

Basic Statistics for Programmatic Risk Management

Since all point estimates are wrong, statistical estimates will be needed to construct a credible cost and schedule model

Basic Statistics 10/69

Page 11: Programmatic risk management workshop (handbook)

Uncertainty and Risk are not the same thing – don’t confuse them

Uncertainty stems from unknown probability distributions
– Requirements change impacts
– Budget perturbations
– Re-work and re-test phenomena
– Contractual arrangements (contract type, prime/sub relationships, etc.)
– Potential for disaster (labor troubles, shuttle loss, satellite “falls over”, war, hurricanes, etc.)
– Probability that if a discrete event occurs it will invoke a project delay

Risk stems from known probability distributions
– Cost estimating methodology risk resulting from improper models of cost
– Cost factors such as inflation, labor rates, labor rate burdens, etc.
– Configuration risk (variation in the technical inputs)
– Schedule and technical risk coupling
– Correlation between risk distributions

Basic Statistics 11/69

Page 12: Programmatic risk management workshop (handbook)

There are 2 types of Uncertainty encountered in cost and schedule

Static uncertainty is natural variation and foreseen risks

– Uncertainty about the value of a parameter

Dynamic uncertainty is unforeseen uncertainty and “chaos”

– Stochastic changes in the underlying environment

– System time delays, interactions between the network elements, positive and negative feedback loops

– Internal dependencies

Basic Statistics 12/69

Page 13: Programmatic risk management workshop (handbook)

The Multiple Sources of Schedule Uncertainty and Sorting Them Out is the Role of Planning

Unknown interactions drive uncertainty

Dynamic uncertainty can be addressed by flexibility in the schedule
– On ramps
– Off ramps
– Alternative paths
– Schedule “crashing” opportunities

Modeling of this dynamic uncertainty requires simulation rather than static PERT-based path assessment
– Changes in critical path are dependent on time and state of the network
– The result is a stochastic network

Basic Statistics 13/69

Page 14: Programmatic risk management workshop (handbook)

Statistics at a Glance

Probability distribution – A function that describes the probabilities of possible outcomes in a "sample space.”

Random variable – A variable whose value is a function of the result of a statistical experiment in which each outcome has a definite probability of occurrence.

Determinism – a theory that phenomena are causally determined by preceding events or natural laws.

Standard deviation (sigma value) – An index that characterizes the dispersion among the values in a population.

Bias – The deviation of the expected value of a statistical estimate from the quantity it estimates.

Correlation – A measure of the joint impact of two variables upon each other that reflects the simultaneous variation of quantities.

Percentile – A value on a scale of 100 indicating the percent of a distribution that is equal to or below it.

Monte Carlo sampling – A modeling technique that employs random sampling to simulate a population being studied.
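A short sketch of how these quantities are computed for a sample of task durations (the data is invented purely for illustration):

```python
# Minimal sketch: computing a few of the glossary quantities for made-up duration and cost samples.
import numpy as np

rng = np.random.default_rng(2)
durations = rng.triangular(10, 12, 30, 10_000)             # a skewed "population" of task durations (days)
costs = 5_000 * durations * rng.normal(1.0, 0.05, durations.size)   # cost loosely coupled to duration

print("standard deviation:", durations.std(ddof=1))
print("80th percentile   :", np.percentile(durations, 80))
print("correlation(cost, duration):", np.corrcoef(costs, durations)[0, 1])
```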

Basic Statistics 14/69

Page 15: Programmatic risk management workshop (handbook)

Statistics Versus Probability

In building a risk tolerant schedule, we’re interested in the probability of a successful outcome – “What is the probability of making a desired completion date?”

But the underlying statistics of the tasks influence this probability

The statistics of the tasks, their arrangement in a network of tasks, and their correlation define how this probability-based estimate is developed.

Basic Statistics 15/69

Page 16: Programmatic risk management workshop (handbook)

Each path and each task along that path has a probability distribution

Any path could be critical depending on the convolution of the underlying task completion time probability distribution functions

The independence or dependency of each task with others in the network greatly influences the outcome of the total project duration

Understanding this dependence is critical to assessing the credibility of the plan as well as the total completion time of that plan

Basic Statistics 16/69

Page 17: Programmatic risk management workshop (handbook)

Probability Distribution Functions are the lifeblood of good planning

Probability of occurrence as a function of the number of samples – “the number of times a task duration appears in a Monte Carlo simulation”

Basic Statistics 17/69

Page 18: Programmatic risk management workshop (handbook)

Statistics of a Triangle Distribution

Triangle distributions are useful when limited information about the characteristics of the random variables is all that is available. This is common in project cost and schedule estimates.

Values from the figure: Minimum = 1000 hrs, Mode = 2000 hrs, Median = 3415 hrs, Mean = 3879 hrs, Maximum = 6830 hrs

50% of all possible values fall under this area of the curve – this is the definition of the median
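A minimal sketch that computes these statistics by sampling a triangular distribution; the parameters are taken from the figure, and the figure’s values are illustrative, so the computed mean and median need not match exactly:

```python
# Minimal sketch: statistics of a triangular duration distribution (parameters assumed from the slide).
import numpy as np

rng = np.random.default_rng(3)
lo, mode, hi = 1_000, 2_000, 6_830                   # hours
x = rng.triangular(lo, mode, hi, 200_000)

print("mode (input) :", mode)
print("mean         :", x.mean())                    # analytically (lo + mode + hi) / 3
print("median       :", np.median(x))                # 50% of samples fall at or below this value
```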

Basic Statistics 18/69

Page 19: Programmatic risk management workshop (handbook)

Basics of Monte Carlo Simulation

Far better an approximate answer to the right question, which is often vague, than an exact answer to the wrong question, which can always be made precise. — John W. Tukey, 1962

Basics of Monte Carlo 19/69

Page 20: Programmatic risk management workshop (handbook)

Monte Carlo Simulation

Yes, Monte Carlo is named after the casino district of Monaco on the French Riviera

Advantages of Monte Carlo over PERT are that Monte Carlo…
– Examines all paths, not just the critical path
– Provides an accurate (true) estimate of completion
• Overall duration distribution
• Confidence interval (accuracy range)
– Sensitivity analysis of interacting tasks
– Varied activity distribution types – not restricted to Beta
– Schedule logic can include branching – both probabilistic and conditional
– When resource loaded schedules are used – provides an integrated cost and schedule probabilistic model

Basics of Monte Carlo 20/69

Page 21: Programmatic risk management workshop (handbook)

First let’s be convinced that PERT has limited usefulness

The original paper (Malcolm 1959) states
– The method is “the best that could be done in a real situation within tight time constraints.”
– The time constraint was one month

The PERT technique made the assumption that the standard deviation was about 1/6 of the range (b–a), resulting in the PERT formula.

It has been shown that the PERT mean and standard deviation formulas are poor approximations for most Beta distributions (Keefer 1983 and Keefer 1993).
– Errors up to 40% are possible for the PERT mean
– Errors up to 550% are possible for the PERT standard deviation
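A sketch of the kind of comparison involved (this is not the Keefer analysis): apply the PERT formulas to one skewed three-point task and compare them with a Monte Carlo estimate from a plausible distribution sharing the same three points. All values are assumed.

```python
# Minimal sketch: classic PERT approximations vs a sampled estimate for one skewed task.
import numpy as np

a, m, b = 10.0, 12.0, 40.0                        # optimistic, most likely, pessimistic (days, assumed)
pert_mean  = (a + 4 * m + b) / 6
pert_sigma = (b - a) / 6

rng = np.random.default_rng(4)
x = rng.triangular(a, m, b, 200_000)              # one plausible distribution consistent with (a, m, b)
print(f"PERT mean  {pert_mean:.1f} vs sampled mean  {x.mean():.1f}")
print(f"PERT sigma {pert_sigma:.1f} vs sampled sigma {x.std():.1f}")
```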

Basics of Monte Carlo 21/69

Page 22: Programmatic risk management workshop (handbook)

Critical Path and Most Likelies

Critical Paths are deterministic
– At least one path exists through the network
– The critical path is identified by adding the “single point” estimates
– The critical path predicts the completion date only if everything goes according to plan (we all know this of course)

Schedule execution is probabilistic
– There is a likelihood that some durations will comprise a path that is off the critical path
– The single number for the estimate – the “single point estimate” – is in fact a most likely estimate
– The completion date is not the most likely date, but a confidence interval in the probability distribution function resulting from the convolution of all the distributions along all the paths to the completion of the project
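A minimal sketch of this merge effect, using two assumed parallel paths with the same most-likely length:

```python
# Minimal sketch: deterministically both paths look 20 days long; probabilistically the merge
# pushes the completion date to the right of the "most likely" answer.
import numpy as np

rng = np.random.default_rng(5)
n = 100_000
path_a = rng.triangular(18, 20, 30, n)            # most likely 20 days
path_b = rng.triangular(15, 20, 35, n)            # most likely 20 days
finish = np.maximum(path_a, path_b)               # the project finishes when the longer path finishes

print("deterministic (most likely) answer:", 20)
print("P(finish <= 20 days):", np.mean(finish <= 20))     # well below 50%
print("80% confidence finish:", np.percentile(finish, 80))
```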

Basics of Monte Carlo 22/69

Page 23: Programmatic risk management workshop (handbook)

Deterministic PERT Uses Three Point Estimates In A Static Manner

Durations are defined as three point estimates
– These estimates are very subjective if captured individually by asking…
– “What is the Minimum, Maximum, and Most Likely?”

The critical path defined from these estimates is the algebraic addition of the three point estimates

Project duration is based on the algebraic addition of the times along the critical path

This approach has some serious problems from the outset
– Durations must be independent
– Most likely is not the same as the average

Basics of Monte Carlo 23/69

Page 24: Programmatic risk management workshop (handbook)

Foundation of Monte Carlo Theory

Georges-Louis Leclerc, Comte de Buffon, asked what was the probability that a needle of length l, dropped onto a floor ruled with parallel lines, would fall across one of the lines (marked in green in the figure).

That outcome occurs only if x ≤ (l/2)·sin A, where x is the distance from the needle’s center to the nearest line and A is the angle the needle makes with the lines.
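A minimal Monte Carlo sketch of Buffon’s needle using that crossing condition (needle length and line spacing are assumed):

```python
# Minimal sketch of Buffon's needle: needle of length l on lines spaced d apart (l <= d);
# it crosses a line when x <= (l/2)*sin(A). The crossing frequency estimates 2*l / (pi*d).
import numpy as np

rng = np.random.default_rng(6)
l, d, n = 1.0, 2.0, 1_000_000
x = rng.uniform(0, d / 2, n)                      # distance from needle centre to nearest line
A = rng.uniform(0, np.pi, n)                      # needle angle against the lines
crosses = x <= (l / 2) * np.sin(A)

p_hat = crosses.mean()
print("estimated crossing probability:", p_hat)   # ~ 2*l/(pi*d) ~ 0.318
print("implied estimate of pi        :", 2 * l / (p_hat * d))
```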

Basics of Monte Carlo 24/69

Page 25: Programmatic risk management workshop (handbook)

Mechanics of Risk+ integrated with Microsoft Project

Any credible schedule is a credible model of the project’s dynamic behavior. This starts with a Monte Carlo model of the schedule’s network of tasks

Mechanics of Risk+ 25/69

Page 26: Programmatic risk management workshop (handbook)

The Simplest Risk+ elements

– Task to “watch” (Number3)
– Most Likely (Duration3)
– Pessimistic (Duration2)
– Optimistic (Duration1)
– Distribution (Number1)
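The field mapping above is from the slide. The sketch below is hypothetical: it only illustrates how one simulation pass might sample a remaining duration from those fields. The distribution codes and sampling logic are assumptions, not Risk+’s actual implementation.

```python
# Hypothetical sketch: sampling a duration from the custom fields named on the slide
# (Duration1 = optimistic, Duration2 = pessimistic, Duration3 = most likely, Number1 = distribution code).
import numpy as np

rng = np.random.default_rng(7)

def sample_duration(task):
    lo, hi, ml = task["Duration1"], task["Duration2"], task["Duration3"]
    if task["Number1"] == 1:                       # assumed code: 1 = triangular
        return rng.triangular(lo, ml, hi)
    return rng.uniform(lo, hi)                     # assumed code: anything else = uniform

task = {"Duration1": 8, "Duration2": 20, "Duration3": 10, "Number1": 1}
print([round(sample_duration(task), 1) for _ in range(5)])
```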

Mechanics of Risk+ 26/69

Page 27: Programmatic risk management workshop (handbook)

The output of Risk+

The height of each bar indicates how often the project completed in a given interval during the run

The S-Curve shows the cumulative probability of completing on or before a given date.

The standard deviation of the completion date and the 95% confidence interval of the expected completion date are in the same units as the “most likely remaining duration” field in the schedule

Example output (from the figure): Date: 9/26/2005 2:14:02 PM, Samples: 500, Unique ID: 10, Name: Task 10, Completion Std Deviation: 4.83 days, 95% Confidence Interval: 0.42 days, each bar represents 2 days. The histogram and S-curve run from 2/10/06 to 3/17/06.

Completion Probability Table for the task to “watch” (probability of completing on or before the date):

Prob  Date        Prob  Date
0.05  2/17/06     0.55  3/1/06
0.10  2/21/06     0.60  3/2/06
0.15  2/22/06     0.65  3/3/06
0.20  2/22/06     0.70  3/3/06
0.25  2/23/06     0.75  3/6/06
0.30  2/24/06     0.80  3/7/06
0.35  2/27/06     0.85  3/8/06
0.40  2/27/06     0.90  3/9/06
0.45  2/28/06     0.95  3/13/06
0.50  3/1/06      1.00  3/17/06

From the table there is 80% confidence that the task will complete by 3/7/06
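A minimal sketch of how such a completion probability table and standard deviation can be produced from simulated finish dates (the start date, iteration count, and duration spread are invented):

```python
# Minimal sketch: build an S-curve style completion probability table from simulated finish offsets.
import numpy as np
from datetime import date, timedelta

rng = np.random.default_rng(8)
start = date(2006, 1, 2)
finish_days = rng.triangular(39, 58, 74, 500)              # 500 iterations, day offsets from start

for p in range(5, 101, 5):
    offset = np.percentile(finish_days, p)
    print(f"{p/100:0.2f}  {start + timedelta(days=float(offset))}")
print("std deviation of completion (days):", finish_days.std())
```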

Mechanics of Risk+ 27/69

Page 28: Programmatic risk management workshop (handbook)

A Well Formed Risk+ Schedule

For Risk+ to provide useful information, the underlying schedule must be well formed in some simple ways.

Mechanics of Risk+ 28/69

Page 29: Programmatic risk management workshop (handbook)

A Well Formed Risk+ Schedule

A good critical path network
– No constraint dates
– Lowest level tasks have predecessors and successors
– 80% of relationships are finish to start

Identify risk tasks
– These are “reporting tasks”
– Identify the preview task to watch during simulation runs

Define the probability distribution profile for each task
– Bulk assignment is an easy way to start
– A–F ranking is another approach
– Individual risk profile assignments are best but tedious

Mechanics of Risk+ 29/69

Page 30: Programmatic risk management workshop (handbook)

Analyzing the Risk+ Simulation

Risk+ generates one or more of the following outputs:

– Earliest, expected, and latest completion date for each reporting task

– Graphical and tabular displays of the completion date distribution for each reporting task

– The standard deviation and confidence interval for the completion date distribution for each reporting task

– The criticality index (percentage of time on the critical path) for each task

– The duration mean and standard deviation for each task

– Minimum, expected, and maximum cost for the total project

– Graphical and tabular displays of cost distribution for the total project

– The standard deviation and confidence interval for cost at the total project level
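A minimal sketch of the criticality index, the fraction of iterations in which a task sits on the critical path, for a tiny assumed network with two parallel predecessors feeding one successor:

```python
# Minimal sketch: criticality index for tasks A and B (parallel) feeding task C.
import numpy as np

rng = np.random.default_rng(9)
n = 50_000
a = rng.triangular(8, 10, 20, n)
b = rng.triangular(9, 12, 16, n)

a_critical = a >= b                               # the longer predecessor drives the finish of C
print("criticality index A:", a_critical.mean())
print("criticality index B:", 1 - a_critical.mean())
print("criticality index C:", 1.0)                # C is on every path
```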

Mechanics of Risk+ 30/69

Page 31: Programmatic risk management workshop (handbook)

Programmatic Risk Ranking

The variance in task duration must be defined in some systematic way.

Capturing three point values is the least desirable.

Programmatic Risk Ranking 31/69

Page 32: Programmatic risk management workshop (handbook)

Thinking about risk ranking

These classifications can be used to avoid asking the “3 point” question for each task (a sketch of this mapping follows the table)

This information will be maintained in the IMS

When updates are made the percentage change can be applied across all tasks

Classification | Uncertainty | Overrun
A – Routine, been done before | Low | 0% to 2%
B – Routine, but possible difficulties | Medium to Low | 2% to 5%
C – Development, with little technical difficulty | Medium | 5% to 10%
D – Development, but some technical difficulty | Medium High | 10% to 15%
E – Significant effort, technical challenge | High | 15% to 25%
F – No experience in this area | Very High | 25% to 50%
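A minimal sketch of applying the table: map a classification to its overrun range and produce a three-point spread for a task. Placing the most likely value at the middle of the range is an assumption made only for illustration.

```python
# Minimal sketch: classification letter -> (min %, max %) overrun -> (min, most likely, max) duration.
overrun = {
    "A": (0.00, 0.02), "B": (0.02, 0.05), "C": (0.05, 0.10),
    "D": (0.10, 0.15), "E": (0.15, 0.25), "F": (0.25, 0.50),
}

def three_point(baseline_days: float, cls: str):
    lo_pct, hi_pct = overrun[cls]
    most_likely = baseline_days * (1 + (lo_pct + hi_pct) / 2)   # assumed: mid-range overrun
    return baseline_days * (1 + lo_pct), most_likely, baseline_days * (1 + hi_pct)

print(three_point(20, "D"))                        # (min, most likely, max) for a class-D, 20-day task
```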

Programmatic Risk Ranking 32/69

Page 33: Programmatic risk management workshop (handbook)

Steps in characterizing uncertainty

Use an “envelope” method to characterize the minimum, maximum and “most likely”

Fit this data to a statistical distribution

Use conservative assumptions

Apply greater uncertainty to less mature technologies

Confirm the analysis matches intuition

Remember Sir Francis Bacon’s quote about beginning with uncertainty and ending with certainty. If we start with what we think is a valid number we will tend to continue with that valid number, when in fact we should speak only in terms of confidence intervals and probabilities of success.

Programmatic Risk Ranking 33/69

Page 34: Programmatic risk management workshop (handbook)

Sobering observations about 3 point estimates when asking engineers

In 1979, Tversky and Kahneman proposed an alternative to Utility theory. Prospect theory asserts that people make predictably irrational decisions.

The way that a choice of decisions is presented can sway a person to choose the less rational decision from a set of options.

Once a problem is clearly and reasonably presented, rarely does a person think outside the bounds of the frame.

Source:

– “The Causes of Risk Taking By Project Managers,” Proceedings of the Project Management Institute Annual Seminars & Symposium November 1–10, 2001 • Nashville, Tenn

– Tversky, Amos, and Daniel Kahneman. 1981. The Framing of Decisions and the Psychology of Choice. Science 211 (January 30): 453–458

Programmatic Risk Ranking 34/69

Page 35: Programmatic risk management workshop (handbook)

Building a Credible Schedule

A credible schedule contains a well formed network, explicit risk mitigations, proper margin for these risks, and clear and concise critical path(s). All of this is prologue to analyzing the schedule.

Building a Credible Schedule 35/69

Page 36: Programmatic risk management workshop (handbook)

Good schedules have contingency plans

The schedule contingency needed to make the plan credible can be derived from the Risk+ analysis

The schedule contingency is the amount of time added to (or subtracted from) the baseline schedule necessary to achieve the desired probability of an under run or over run.

The schedule contingency can be determined through (see the sketch below)
– Monte Carlo simulations (Risk+)
– Best judgment from previous experience
– Percentage factors based on historical experience
– Correlation analysis for dependency impacts

“Is This Our Contingency Plan?” (cartoon caption)
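A minimal sketch of the Monte Carlo route: contingency taken as the gap between the deterministic finish and the finish at the desired confidence level. The simulated durations and the baseline value are assumed.

```python
# Minimal sketch: schedule contingency from simulated project durations.
import numpy as np

rng = np.random.default_rng(11)
finish = rng.triangular(180, 200, 280, 100_000)    # simulated project durations, working days
deterministic = 200                                # the baseline "most likely" finish

contingency = np.percentile(finish, 80) - deterministic
print(f"contingency for 80% confidence: {contingency:.1f} working days")
```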

Building a Credible Schedule 36/69

Page 37: Programmatic risk management workshop (handbook)

Schedule quality and accuracy

Accuracy range
– Similar for each estimate class

Consistent with the estimate’s
– Level of project definition
– Purpose
– Preparation effort

Monte Carlo simulation
– Analysis of results shows the quality attained versus the quality sought (expected accuracy ranges)

Achieving specified accuracy requirements
– Select values at the end points of the confidence interval
– Calculate percentages from the base schedule completion date, including the contingency

Building a Credible Schedule 37/69

Page 38: Programmatic risk management workshop (handbook)

Technical Performance Measures

Technical Performance Measures are one method of showing risk buy-down
– Specific actions taken in the IMS to move the compliance forward toward the goal

Activities that assess the increasing compliance to the technical performance measure can be shown in the IMS
– These can be Accomplishment Criteria

Building a Credible Schedule 38/69

Page 39: Programmatic risk management workshop (handbook)

The Monte Carlo Process starts with the 3 point estimates

Estimates of the task duration are still needed, just like they are in PERT
– Three point estimates could be used
– But risk ranking and algorithmic generation of the “spreads” is a better approach

Duration estimates must be parametric rather than numeric values
– A geometric scale of parametric risk is one approach

Branching probabilities need to be defined
– Conditional paths through the schedule can be evaluated using Monte Carlo tools
– This also demonstrates explicit risk mitigation planning to answer the question “what if this happens?”

These three point estimates are not the PERT ones. They are derived from the ordinal risk ranking process. This allows them to be “calibrated” for the domain and correlated with the technical risk model.

Building a Credible Schedule 39/69

Page 40: Programmatic risk management workshop (handbook)

Expert Judgment is required to build a Risk Management approach

Expert judgment is typically the basis of cost and schedule estimates
– Expert judgment is usually the weakest area of process and quantification
– Translating from English (SOW) to mathematics (probabilistic risk model) is usually inconsistent at best and erroneous at worst

One approach
– Plan for the “best case” and preclude a self-fulfilling prophecy
– Budget for the “most likely” and recognize risks and uncertainties
– Protect for the “worst case” and acknowledge the conceivable in the risk mitigation plan

The credibility of the “best case” estimates is crucial to the success of this approach

Building the variance values for the ordinal risk rank is a technical process, requiring engineering judgment.

Building a Credible Schedule 40/69

Page 41: Programmatic risk management workshop (handbook)

Guiding the Risk Factor Process requires careful weighting of each level of risk

For tasks marked “Low” a reasonable approach is to score the maximum 10% greater than the minimum.

The “Most Likely” is then scored as a geometric progression for the remaining categories with a common ratio of 1.5

Tasks marked “Very High” are bounded at 200% of the minimum.
– No viable project manager would let a task grow to three times the planned duration without intervention

The geometric progression is somewhat arbitrary, but it should be used instead of a linear progression. A sketch of generating these factors follows the table.

Rank | Min | Most Likely | Max
Low | 1.0 | 1.04 | 1.10
Low+ | 1.0 | 1.06 | 1.15
Moderate | 1.0 | 1.09 | 1.24
Moderate+ | 1.0 | 1.14 | 1.36
High | 1.0 | 1.20 | 1.55
High+ | 1.0 | 1.30 | 1.85
Very High | 1.0 | 1.46 | 2.30
Very High+ | 1.0 | 1.68 | 3.00
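A minimal sketch that regenerates spreads of this shape with a common ratio of 1.5 on the increments; the published table appears hand-tuned, so the values match only approximately:

```python
# Minimal sketch: ordinal rank -> (min, most likely, max) factors via a geometric progression.
ranks = ["Low", "Low+", "Moderate", "Moderate+", "High", "High+", "Very High", "Very High+"]

ml_inc, max_inc, ratio = 0.04, 0.10, 1.5          # starting increments for "Low", common ratio 1.5
for rank in ranks:
    print(f"{rank:<11} min 1.00   most likely {1 + ml_inc:0.2f}   max {1 + max_inc:0.2f}")
    ml_inc *= ratio
    max_inc *= ratio
```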

Building a Credible Schedule 41/69

Page 42: Programmatic risk management workshop (handbook)

Assume now we have a well formed schedule – now what?

With all the “bone head” elements removed, we can say we have a well formed schedule

But the real role of Planning is to forecast the future, provide alternative Plans for this forecast, and actively engage all the participants in the project in the Planning Process

For the role of PP&C to move from “reporting past performance” to “forecasting future performance,” it must break the mold of using static models of cost and schedule

Building a Credible Schedule 42/69

Page 43: Programmatic risk management workshop (handbook)

We’re really after the management of schedule margin as part of planning

Plan the risk alternatives that “might” be needed
– Each mitigation has a Plan B branch
– Keep alternatives as simple as possible (maybe one task)

Assess the probability of the alternative occurring

Assign duration and resource estimates to both branches

Turn off the alternative for a “success” path assessment

Turn off the primary for a “failure” path assessment

In the figure: Plan A (70% probability of success) and Plan B (30% probability of failure), with current margin and future margin; 80% confidence for completion with current margin; duration of Plan B ≤ Plan A + margin. A sketch of this branching follows.
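A minimal sketch of the probabilistic branch; the durations, probabilities, and margin below are assumed for illustration:

```python
# Minimal sketch: 70% of iterations run Plan A, 30% run Plan B; margin is judged at 80% confidence.
import numpy as np

rng = np.random.default_rng(13)
n = 100_000
plan_a = rng.triangular(18, 20, 26, n)             # primary path (days)
plan_b = rng.triangular(28, 32, 45, n)             # mitigation path, only if the risk occurs
risk_occurs = rng.random(n) < 0.30

duration = np.where(risk_occurs, plan_b, plan_a)
margin = 10                                        # margin currently held after Plan A
print("80% confidence duration:", np.percentile(duration, 80))
print("P(finish within Plan A + margin):", np.mean(duration <= 20 + margin))
```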

Building a Credible Schedule 43/69

Page 44: Programmatic risk management workshop (handbook)

Successful margin management requires the reuse of unused durations

Programmatic Margin is added between the Development, Production, and Integration & Test phases

Risk Margin is added to the IMS where risk alternatives are identified

Margin that is not used in the IMS for risk mitigation will be moved to the next sequence of risk alternatives
– This enables us to buy back schedule margin for activities further downstream
– This enables us to control the ripple effect of schedule shifts on Margin activities

In the figure: the first identified risk alternative in the IMS (Plan A / Plan B, 5 days margin) uses 3 days of margin; because the duration of Plan B < Plan A + margin, downstream activities shift left 2 days, and those 2 days are added to the margin task of the second identified risk alternative (also 5 days margin) to bring the schedule back on track.

Building a Credible Schedule 44/69

Page 45: Programmatic risk management workshop (handbook)

Simulation Considerations

Schedule logic and constraints
– Simplify logic – model only paths which, by inspection, may have a significant bearing on the final result
– Correlate similar activities (see the sketch below)
– No open ends
– Use only finish-to-start relationships with no lags
– Model relationships other than finish-to-start as activities with base durations equal to the lag value
– Eliminate all date constraints
– Consider using branching for known alternatives
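“Correlate similar activities” can be sketched by driving related tasks with a shared risk factor so they do not vary independently; independent sampling understates the spread of their sum. All values below are assumed.

```python
# Minimal sketch: two similar tasks scaled by a common factor vs the same tasks sampled independently.
import numpy as np

rng = np.random.default_rng(14)
n = 100_000
common = rng.normal(1.0, 0.10, n)                     # shared factor, e.g. labour productivity
t1 = rng.triangular(9, 10, 14, n) * common
t2 = rng.triangular(18, 20, 28, n) * common

t1_ind = rng.triangular(9, 10, 14, n) * rng.normal(1.0, 0.10, n)
t2_ind = rng.triangular(18, 20, 28, n) * rng.normal(1.0, 0.10, n)

print("correlation(t1, t2):", np.corrcoef(t1, t2)[0, 1])
print("sigma of total, correlated :", (t1 + t2).std())
print("sigma of total, independent:", (t1_ind + t2_ind).std())
```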

Building a Credible Schedule 45/69

Page 46: Programmatic risk management workshop (handbook)

The contents of the schedule

Constraints

Lead/Lag

Task relationships

Durations

Network topology

Building a Credible Schedule 46/69

Page 47: Programmatic risk management workshop (handbook)

Simulation Considerations

Selection of Probability Distributions
– Develop schedule simulation inputs concurrently with the cost estimate
• Early in the process – use the same subject matter experts
• Convert confidence intervals into probability duration distributions (see the sketch below)
– The number of distributions available varies depending on the software
– It is difficult to develop the inputs required for distributions
– Beta and Lognormal are better than triangular; avoid exclusive use of the Normal distribution
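A minimal sketch of converting an expert’s confidence interval into a lognormal duration distribution (the interval values are assumed; uses SciPy):

```python
# Minimal sketch: fit a lognormal whose 10th/90th percentiles match an expert's 80% confidence interval.
import numpy as np
from scipy import stats

p10, p90 = 12.0, 30.0                               # expert: 80% confident duration is 12-30 days
z90 = stats.norm.ppf(0.90)                          # ~1.2816
mu    = (np.log(p10) + np.log(p90)) / 2             # log-space mean
sigma = (np.log(p90) - np.log(p10)) / (2 * z90)     # log-space standard deviation

dist = stats.lognorm(s=sigma, scale=np.exp(mu))
print("check p10, p90:", dist.ppf([0.10, 0.90]))    # recovers ~12 and ~30
print("mean, median  :", dist.mean(), dist.median())
```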

Building a Credible Schedule 47/69

Page 48: Programmatic risk management workshop (handbook)

Sensitivity Analysis describes which tasks drive the completion times

Concentrates on the inputs most likely to improve quality (accuracy)

Identifies the most promising opportunities where additional work will help to narrow input ranges

Methods (a sketch of one follows)
– Run multiple simulations
– Use the criticality index
– “Tornado” or Pareto graph
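A minimal sketch of a tornado-style ranking: correlate each task’s sampled duration with the sampled completion for a small assumed serial network.

```python
# Minimal sketch: rank tasks by correlation between their duration and the project completion.
import numpy as np

rng = np.random.default_rng(16)
n = 50_000
tasks = {
    "design":    rng.triangular(20, 25, 60, n),
    "build":     rng.triangular(30, 35, 45, n),
    "integrate": rng.triangular(10, 12, 40, n),
}
finish = sum(tasks.values())

sensitivity = {name: np.corrcoef(d, finish)[0, 1] for name, d in tasks.items()}
for name, r in sorted(sensitivity.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:<10} correlation with finish: {r:0.2f}")
```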

Building a Credible Schedule 48/69

Page 49: Programmatic risk management workshop (handbook)

What we get in the end is a Credible Model of the schedule

Concept generator from Ramon Llull’s Ars Magna (c. 1300)

All models are wrong. Some models are useful.
– George Box (1919 – )

Building a Credible Schedule 49/69

Page 50: Programmatic risk management workshop (handbook)

Conclusion

At this point there is too much information. Processing this information will take time, patience, and most of all practice with the tools and the results they produce.

Conclusion 50/69

Page 51: Programmatic risk management workshop (handbook)

Conclusions

Project schedule status must be assessed in terms of a critical path through the schedule network

Because the actual durations of each task in the network are uncertain (they are random variables following a probability distribution function), the project schedule duration must be modeled statistically

Conclusion 51/69

Page 52: Programmatic risk management workshop (handbook)

Conclusions

Quality (accuracy) is measured at the end points of the achieved confidence interval (an 80% level is suggested)

Simulation results depend on:
– Accuracy and care taken with the base schedule logic
– Use of subject matter experts to establish inputs
– Selection of appropriate distribution types
– Thorough analysis of multiple critical paths
– Understanding which activities and paths have the greatest potential impact

Conclusion 52/69

Page 53: Programmatic risk management workshop (handbook)

Conclusions

Cost and schedule estimates are made up of many independent elements.
– When each element is planned as best case – e.g. a probability of achievement of 10%
– The probability of achieving best case for a two-element estimate is 1%
– For three elements, 0.1%
– For many elements, infinitesimal
– In effect, it is zero.

In the beginning no attempt should be made to distinguish between risk and uncertainty
– Risk involves uncertainty, but it is more than that
– For initial purposes the distinction is unimportant
– The effect is combined into one statistical factor called “risk,” which can be described by a single probability distribution function

Conclusion 53/69

Page 54: Programmatic risk management workshop (handbook)

What are we really after in the end?

As the program proceeds so does:
– Increasing accuracy
– Reduced schedule risk
– Increasing visual confirmation that success can be reached

(Figure: Current Estimate Accuracy)

Conclusion 54/69

Page 55: Programmatic risk management workshop (handbook)

Points to remember

Good project management is good risk management

Risk management is how adults manage projects

The only thing we manage is project risk

Risks impact objectives

Risks come from the decisions we make while trying to achieve the objectives

Risks require a factual condition and have potential negative consequences that must be mitigated in the schedule

Conclusion 55/69

Page 56: Programmatic risk management workshop (handbook)

Usage is needed before understanding is acquired

Here and elsewhere, we shall not obtain the best insights into things until we actually see them growing from the beginning.
— Aristotle

Conclusion 56/69

Page 57: Programmatic risk management workshop (handbook)

The End

This is actually the beginning, since building a risk tolerant, credible, robust schedule requires constant “execution” of the plan.

A planning algorithm from Aristotle’s De Motu Animalium, c. 400 BC

Conclusion 57/69

Page 58: Programmatic risk management workshop (handbook)

Resources

1. “The Parameters of the Classical PERT: An Assessment of its Success,” Rafael Herrerias Pleguezuelo, http://www.cyta.com.ar/biblioteca/bddoc/bdlibros/pert_van/PARAMETROS.PDF

2. “Advanced Quantitative Schedule Risk Analysis,” David T. Hulett, Hulett & Associates, http://www.projectrisk.com/index.html

3. “Schedule Risk Analysis Simplified,” David T. Hulett, Hulett & Associates, http://www.projectrisk.com/index.html

4. “Project Risk Management: A Combined Analytical Hierarchy Process and Decision Tree Approach,” Prasanta Kumar Dey, Cost Engineering, Vol. 44, No. 3, March 2002.

5. “Adding Probability to Your ‘Swiss Army Knife’,” John C. Goodpasture, Proceedings of the 30th Annual Project Management Institute 1999 Seminars and Symposium, October, 1999.

6. “Modeling Uncertainty in Project Scheduling,” Patrick Leach, Proceedings of the 2005 Crystal Ball User Conference

7. “Near Critical Paths Create Violations in the PERT Assumptions of Normality,” Frank Pokladnik and Robert Hill, University of Houston, Clear Lake, http://www.sbaer.uca.edu/research/dsi/2003/procs/237–4203.pdf

Resources 58/69

Page 59: Programmatic risk management workshop (handbook)

Resources

8. “Teaching SuPERT,” Kenneth R. MacLeod and Paul F. Petersen, Proceedings of the Decision Sciences 2003 Annual Meeting, Washington DC, http://www.sbaer.uca.edu/research/dsi/2003/by_track_paper.html

9. “The Beginning of the Monte Carlo Method,” N. Metropolis, Los Alamos Science, Special Issue, 1987. http://www.fas.org/sgp/othergov/doe/lanl/pubs/00326866.pdf

10. “Defining a Beta Distribution Function for Construction Simulation,” Javier Fente, Kraig Knutson, Cliff Schexnayder, Proceedings of the 1999 Winter Simulation Conference.

11. “The Basics of Monte Carlo Simulation: A Tutorial,” S. Kandaswamy, Proceedings of the Project Management Institute Annual Seminars & Symposium, November, 2001.

12. “The Mother of All Guesses: A User Friendly Guide to Statistical Estimation,” Francois Melese and David Rose, Armed Forces Comptroller, 1998, http://www.nps.navy.mil/drmi/graphics/StatGuide–web.pdf

13. “Inverse Statistical Estimation via Order Statistics: A Resolution of the Ill–Posed Inverse problem of PERT Scheduling,” William F. Pickard, Inverse Problems 20, pp. 1565–1581, 2004

Resources 59/69

Page 60: Programmatic risk management workshop (handbook)

Resources

14. “Schedule Risk Analysis: Why It Is Important and How to Do It,” Stephen A. Book, Proceedings of the Ground Systems Architecture Workshop (GSAW 2002), Aerospace Corporation, March 2002, http://sunset.usc.edu/GSAW/gsaw2002/s11a/book.pdf

15. “Evaluation of the Risk Analysis and Cost Management (RACM) Model,” Matthew S. Goldberg, Institute for Defense Analysis, 1998, http://www.thedacs.com/topics/earnedvalue/racm.pdf

16. “PERT Completion Times Revisited,” Fred E. Williams, School of Management, University of Michigan–Flint, July 2005, http://som.umflint.edu/yener/PERT%20Completion%20Revisited.htm

17. “Overcoming Project Risk: Lessons from the PERIL Database,” Tom Kendrick, Program Manager, Hewlett Packard, 2003, http://www.failureproofprojects.com/Risky.pdf

18. “The Heart of Risk Management: Teaching Project Teams to Combat Risk,” Bruce Chadbourne, 30th Annual Project Management Institute 1999 Seminars and Symposium, October 1999, http://www.risksig.com/Articles/pmi1999/rkalt01.pdf

Resources 60/69

Page 61: Programmatic risk management workshop (handbook)

Resources

20. Project Risk Management Resource List, NASA Headquarters Library, http://www.hq.nasa.gov/office/hqlibrary/ppm/ppm22.htm#art

21. “Quantify Risk to Manage Cost and Schedule,” Fred Raymond, Acquisition Quarterly, Spring 1999, http://www.dau.mil/pubs/arq/99arq/raymond.pdf

22. “Continuous Risk Management,” Cost Analysis Symposium, April 2005, http://www1.jsc.nasa.gov/bu2/conferences/NCAS2005/papers/5C_-_Cockrell_CRM_v1_0.ppt

23. “A Novel Extension of the Triangular Distribution and its Parameter Estimation,” J. Rene van Dorp and Samuel Kotz, The Statistician 51(1), pp. 63–79, 2002, http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/TheStatistician2002.pdf

24. “A Distribution for Modeling Dependence Caused by Common Risk Factors,” J. Rene van Dorp, European Safety and Reliability 2003 Conference Proceedings, March 2003, http://www.seas.gwu.edu/~dorpjr/Publications/ConferenceProceedings/Esrel2003.pdf

Resources 61/69

Page 62: Programmatic risk management workshop (handbook)

Resources

25. “Improved Three Point Approximation To Distribution Functions For Application In Financial Decision Analysis,” Michele E. Pfund, Jennifer E. McNeill, John W. Fowler and Gerald T. Mackulak, Department of Industrial Engineering, Arizona State University, Tempe, Arizona, http://www.eas.asu.edu/ie/workingpaper/pdf/cdf_estimation_submission.pdf

26. “Analysis Of Resource-constrained Stochastic Project Networks Using Discrete-event Simulation,” Sucharith Vanguri, Masters Thesis, Mississippi State University, May 2005, http://sun.library.msstate.edu/ETD-db/theses/available/etd-04072005-123743/restricted/SucharithVanguriThesis.pdf

27. “Integrated Cost / Schedule Risk Analysis,” David T. Hulett and Bill Campbell, Fifth European Project Management Conference, June 2002.

28. “Risk Interrelation Management – Controlling the Snowball Effect,” Olli Kuismanen, Tuomo Saari and Jussi Vähäkylä, Fifth European Project Management Conference, June 2002.

29. The Lady Tasting Tea: How Statistics Revolutionized Science in the Twentieth Century, David Salsburg, W. H. Freeman, 2001

Resources 62/69

Page 63: Programmatic risk management workshop (handbook)

Resources

30. “Triangular Approximations for Continuous Random Variables in Risk Analysis,” David G. Johnson, The Business School, Loughborough University, Leicestershire.

31. “Statistical Dependence through Common Risk Factors: With Applications in Uncertainty Analysis,” J. Rene van Dorp, European Journal of Operations Research, Volume 161(1), pp. 240–255.

32. “Statistical Dependence in the Risk Analysis for Project Networks Using Monte Carlo Methods,” J. Rene van Dorp and M. R. Duffey, International Journal of Production Economics, 58, pp. 17–29, 1999, http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/Prodecon1999.pdf

33. “Risk Analysis for Large Engineering Projects: Modeling Cost Uncertainty for Ship Production Activities,” M. R. Duffey and J. Rene van Dorp, Journal of Engineering Valuation and Cost Analysis, Volume 2, pp. 285–301, http://www.seas.gwu.edu/~dorpjr/Publications/JournalPapers/EVCA1999.pdf

34. “Risk Based Decision Support Techniques for Programs and Projects,” Barney Roberts and David Frost, Futron Risk Management Center of Excellence, http://www.futron.com/pdf/RBDSsupporttech.pdf

Resources 63/69

Page 64: Programmatic risk management workshop (handbook)

Resources

35. Probabilistic Risk Assessment Procedures Guide for NASA Managers and Practitioners, Office of Safety and Mission Assurance, April 2002, http://www.hq.nasa.gov/office/codeq/doctree/praguide.pdf

36. “Project Planning: Improved Approach Incorporating Uncertainty,” Vahid Khodakarami, Norman Fenton, and Martin Neil, Track 15, EURAM2005: “Reconciling Uncertainty and Responsibility,” European Academy of Management, http://www.dcs.qmw.ac.uk/~norman/papers/project_planning_khodakerami.pdf

37. “A Distribution for Modeling Dependence Caused by Common Risk Factors,” J. Rene van Dorp, European Safety and Reliability 2003 Conference Proceedings, March 2003.

38. “Probabilistic PERT,” Arthur Nadas, IBM Journal of Research and Development, 23(3), May 1979, pp. 339–347.

39. “Ranked Nodes: A Simple and Effective Way to Model Qualitative Judgments in Large-Scale Bayesian Networks,” Norman Fenton and Martin Neil, Risk Assessment and Decision Analysis Research Group, Department of Computer Science, Queen Mary, University of London, February 21, 2005.

Resources 64/69

Page 65: Programmatic risk management workshop (handbook)

Resources

40. “Quantify Risk to Manage Cost and Schedule,” Fred Raymond, Acquisition Review Quarterly, Spring 1999, pp. 147–154

41. “The Causes of Risk Taking by Project Managers,” Michael Wakshull, Proceedings of the Project Management Institute Annual Seminars & Symposium, November 2001.

42. “Stochastic Project Duration Analysis Using PERT–Beta Distributions,” Ron Davis.

43. “Triangular Approximation for Continuous Random Variables in Risk Analysis,” David G. Johnson, Decision Sciences Institute Proceedings 1998, http://www.sbaer.uca.edu/research/dsi/1998/Pdffiles/Papers/1114.pdf

44. “The Cause of Risk Taking by Managers,” Michael N. Wakshull, Proceedings of the Project Management Institute Annual Seminars & Symposium, November 1–10, 2001, Nashville, Tennessee, http://www.risksig.com/Articles/pmi2001/21261.pdf

45. “The Framing of Decisions and the Psychology of Choice,” Amos Tversky and Daniel Kahneman, Science 211 (January 30, 1981): 453–458, http://www.cs.umu.se/kurser/TDBC12/HT99/Tversky.html

Resources 65/69

Page 66: Programmatic risk management workshop (handbook)

Resources

46. “Three Point Approximations for Continuous Random Variables,” Donald Keefer and Samuel Bodily, Management Science, 29(5), pp. 595–609.

47. “Better Estimation of PERT Activity Time Parameters,” Donald Keefer and William Verdini, Management Science, 39(9), pp. 1086–1091.

48. “The Benefits of Integrated, Quantitative Risk Management,” Barney B. Roberts, Futron Corporation, 12th Annual International Symposium of the International Council on Systems Engineering, July 1–5, 2001, http://www.futron.com/pdf/benefits_QuantIRM.pdf

49. “Sources of Schedule Risk in Complex Systems Development,” Tyson R. Browning, INCOSE Systems Engineering Journal, Volume 2, Issue 3, pp. 129–142, 14 September 1999, http://sbufaculty.tcu.edu/tbrowning/Publications/Browning%20(1999)--SE%20Sch%20Risk%20Drivers.pdf

50. “Sources of Performance Risk in Complex System Development,” Tyson R. Browning, 9th Annual International Symposium of INCOSE, June 1999, http://sbufaculty.tcu.edu/tbrowning/Publications/Browning%20(1999)--INCOSE%20Perf%20Risk%20Drivers.pdf

Resources 66/69

Page 67: Programmatic risk management workshop (handbook)

Resources

51. “Experiences in Improving Risk Management Processes Using the Concepts of the Riskit Method,” Jyrki Kontio, Gerhard Getto, and Dieter Landes, ACM SIGSOFT Software Engineering Notes, Proceedings of the 6th ACM SIGSOFT International Symposium on Foundations of Software Engineering, SIGSOFT '98/FSE-6, Volume 23, Issue 6, November 1998.

52. “Anchoring and Adjustment in Software Estimation,” Jorge Aranda and Steve Easterbrook, Proceedings of the 10th European Software Engineering Conference held jointly with the 13th ACM SIGSOFT International Symposium on Foundations of Software Engineering, ESEC/FSE-13.

53. “The Monte Carlo Method,” W. F. Bauer, Journal of the Society of Industrial Mathematics, Volume 6, Number 4, December 1958, http://www.cs.fsu.edu/~mascagni/Bauer_1959_Journal_SIAM.pdf

54. “A Retrospective and Prospective Survey of the Monte Carlo Method,” John H. Halton, SIAM Review, Volume 12, Number 1, January 1970, http://www.cs.fsu.edu/~mascagni/Halton_SIAM_Review_1970.pdf

Resources 67/69

Page 68: Programmatic risk management workshop (handbook)

Resources 68/69

Page 69: Programmatic risk management workshop (handbook)

Lewis & Fowler

8310 South Valley Highway

Suite 300

Englewood, Colorado 80112

www.lewisandfowler.com

303.524.1610

Deliverables Based Planning (SM)

Integrated Master Plan

Integrated Master Schedule

Earned Value

Risk Management

Proposal Support Service

Glen B. Alleman, VP, Program Planning and Controls

[email protected]

303.437.5226

69/69