Decision analysis

SPPT Course Sample

Transcript of Decision analysis

Page 1: Decision analysis


© 2005 Thomson/South-Western

Chapter 13: Decision Analysis

Problem Formulation
Decision Making without Probabilities
Decision Making with Probabilities
Risk Analysis and Sensitivity Analysis
Decision Analysis with Sample Information
Computing Branch Probabilities

Page 2: Decision analysis


Problem Formulation

A decision problem is characterized by decision alternatives, states of nature, and resulting payoffs.

The decision alternatives are the different possible strategies the decision maker can employ.

The states of nature refer to future events that may occur and are not under the control of the decision maker. States of nature should be defined so that they are mutually exclusive and collectively exhaustive.

Page 3: Decision analysis


Influence Diagrams

An influence diagram is a graphical device showing the relationships among the decisions, the chance events, and the consequences.

Squares or rectangles depict decision nodes.

Circles or ovals depict chance nodes.

Diamonds depict consequence nodes.

Lines or arcs connecting the nodes show the direction of influence.

Page 4: Decision analysis


Payoff Tables

The consequence resulting from a specific combination of a decision alternative and a state of nature is a payoff.

A table showing payoffs for all combinations of decision alternatives and states of nature is a payoff table.

Payoffs can be expressed in terms of profit, cost, time, distance or any other appropriate measure.

Page 5: Decision analysis


Decision Trees

A decision tree is a chronological representation of the decision problem.

Each decision tree has two types of nodes: round nodes correspond to the states of nature, while square nodes correspond to the decision alternatives.

Page 6: Decision analysis


The branches leaving each round node represent the different states of nature while the branches leaving each square node represent the different decision alternatives.

At the end of each limb of a tree are the payoffs attained from the series of branches making up that limb.

Page 7: Decision analysis


Decision Making without Probabilities

Three commonly used criteria for decision making when probability information regarding the likelihood of the states of nature is unavailable are:
• the optimistic approach
• the conservative approach
• the minimax regret approach

Page 8: Decision analysis


Optimistic Approach

The optimistic approach would be used by an optimistic decision maker.

The decision with the largest possible payoff is chosen.

If the payoff table were in terms of costs, the decision with the lowest cost would be chosen.

Page 9: Decision analysis


Conservative Approach

The conservative approach would be used by a conservative decision maker.

For each decision the minimum payoff is listed and then the decision corresponding to the maximum of these minimum payoffs is selected. (Hence, the minimum possible payoff is maximized.)

If the payoffs were in terms of costs, the maximum cost would be determined for each decision and then the decision corresponding to the minimum of these maximum costs would be selected. (Hence, the maximum possible cost is minimized.)

Page 10: Decision analysis


Minimax Regret Approach

The minimax regret approach requires the construction of a regret table or an opportunity loss table.

This is done by calculating for each state of nature the difference between each payoff and the largest payoff for that state of nature.

Then, using this regret table, the maximum regret for each possible decision is listed.

The decision chosen is the one corresponding to the minimum of the maximum regrets.

Page 11: Decision analysis


Example

Consider the following problem with three decision alternatives and three states of nature with the following payoff table representing profits:

                        States of Nature
                     s1      s2      s3
Decisions    d1       4       4      -2
             d2       0       3      -1
             d3       1       5      -3

Page 12: Decision analysis


Example: Optimistic Approach

An optimistic decision maker would use the optimistic (maximax) approach. We choose the decision that has the largest single value in the payoff table.

Decision    Maximum Payoff
   d1             4
   d2             3
   d3             5        <- maximax decision; maximax payoff = 5

Page 13: Decision analysis


Example: Optimistic Approach

Solution Spreadsheet

PAYOFF TABLE
                           State of Nature
Decision Alternative     s1    s2    s3    Maximum Payoff    Recommended Decision
d1                        4     4    -2          4
d2                        0     3    -1          3
d3                        1     5    -3          5                  d3
Best Payoff                                      5

Page 14: Decision analysis


Example: Conservative Approach

A conservative decision maker would use the conservative (maximin) approach. List the minimum payoff for each decision. Choose the decision with the maximum of these minimum payoffs.

Decision    Minimum Payoff
   d1            -2
   d2            -1        <- maximin decision; maximin payoff = -1
   d3            -3

Page 15: Decision analysis


Example: Conservative Approach

Solution Spreadsheet

PAYOFF TABLE
                           State of Nature
Decision Alternative     s1    s2    s3    Minimum Payoff    Recommended Decision
d1                        4     4    -2         -2
d2                        0     3    -1         -1                  d2
d3                        1     5    -3         -3
Best Payoff                                     -1

Page 16: Decision analysis


Example: Minimax Regret Approach

For the minimax regret approach, first compute a regret table by subtracting each payoff in a column from the largest payoff in that column. In this example, in the first column subtract 4, 0, and 1 from 4; etc. The resulting regret table is:

        s1    s2    s3
d1       0     1     1
d2       4     2     0
d3       3     0     2

Page 17: Decision analysis


Example: Minimax Regret Approach

For each decision, list the maximum regret. Choose the decision with the minimum of these values.

Decision    Maximum Regret
   d1             1        <- minimax decision; minimax regret = 1
   d2             4
   d3             3

Page 18: Decision analysis


Example: Minimax Regret Approach

Solution Spreadsheet

PAYOFF TABLE
                           State of Nature
Decision Alternative     s1    s2    s3
d1                        4     4    -2
d2                        0     3    -1
d3                        1     5    -3

OPPORTUNITY LOSS TABLE
                           State of Nature
Decision Alternative     s1    s2    s3    Maximum Regret    Recommended Decision
d1                        0     1     1          1                  d1
d2                        4     2     0          4
d3                        3     0     2          3
Minimax Regret Value                              1

Page 19: Decision analysis


Decision Making with Probabilities

Expected Value Approach
• If probabilistic information regarding the states of nature is available, one may use the expected value (EV) approach.
• Here the expected return for each decision is calculated by summing the products of the payoff under each state of nature and the probability of the respective state of nature occurring.
• The decision yielding the best expected return is chosen.

Page 20: Decision analysis


Expected Value of a Decision Alternative

The expected value of a decision alternative is the sum of weighted payoffs for the decision alternative.

The expected value (EV) of decision alternative d_i is defined as:

EV(d_i) = \sum_{j=1}^{N} P(s_j) \, V_{ij}

where:
N = the number of states of nature
P(s_j) = the probability of state of nature s_j
V_{ij} = the payoff corresponding to decision alternative d_i and state of nature s_j
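As a small illustration of this formula, here is a hedged Python sketch; the function name and arguments are my own, not from the slides:

```python
def expected_value(probabilities, payoffs):
    """EV(d_i) = sum over states j of P(s_j) * V_ij."""
    return sum(p * v for p, v in zip(probabilities, payoffs))

# Example: P(s1) = .4, P(s2) = .2, P(s3) = .4 with payoffs 10,000; 15,000; 14,000
# gives 12,600, matching the Model A calculation later in the deck.
print(expected_value([0.4, 0.2, 0.4], [10000, 15000, 14000]))  # 12600.0
```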

Page 21: Decision analysis


Example: Burger Prince

Burger Prince Restaurant is considering opening a new restaurant on Main Street. It has three different models, each with a different seating capacity. Burger Prince estimates that the average number of customers per hour will be 80, 100, or 120. The payoff table for the three models is on the next slide.

Page 22: Decision analysis


Payoff Table

               Average Number of Customers Per Hour
             s1 = 80      s2 = 100      s3 = 120
Model A     $10,000       $15,000       $14,000
Model B     $ 8,000       $18,000       $12,000
Model C     $ 6,000       $16,000       $21,000

Page 23: Decision analysis


Expected Value Approach

Calculate the expected value for each decision. The decision tree on the next slide can assist in this calculation. Here d1, d2, d3 represent the decision alternatives of Models A, B, and C, and s1, s2, s3 represent the states of nature of 80, 100, and 120 customers per hour.

Page 24: Decision analysis


Decision Tree

[Decision tree: decision node 1 branches to d1, d2, and d3 (chance nodes 2, 3, and 4). Each chance node branches to s1 (probability .4), s2 (.2), and s3 (.4). Payoffs: d1: 10,000; 15,000; 14,000. d2: 8,000; 18,000; 12,000. d3: 6,000; 16,000; 21,000.]

Page 25: Decision analysis


Expected Value for Each Decision

Model A: EMV(d1) = .4(10,000) + .2(15,000) + .4(14,000) = $12,600
Model B: EMV(d2) = .4(8,000) + .2(18,000) + .4(12,000) = $11,600
Model C: EMV(d3) = .4(6,000) + .2(16,000) + .4(21,000) = $14,000

Choose the model with the largest EV: Model C.

Page 26: Decision analysis


Expected Value Approach

Solution Spreadsheet

PAYOFF TABLE
                             State of Nature
Decision Alternative     s1 = 80   s2 = 100   s3 = 120   Expected Value   Recommended Decision
d1 = Model A              10,000     15,000     14,000       12,600
d2 = Model B               8,000     18,000     12,000       11,600
d3 = Model C               6,000     16,000     21,000       14,000        d3 = Model C
Probability                  0.4        0.2        0.4
Maximum Expected Value                                        14,000
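The same expected values can be reproduced with a short Python sketch (assuming NumPy; the array layout and names are illustrative):

```python
import numpy as np

# Burger Prince payoff table (rows = Models A, B, C; columns = 80, 100, 120 customers/hour).
payoffs = np.array([
    [10000, 15000, 14000],   # d1 = Model A
    [ 8000, 18000, 12000],   # d2 = Model B
    [ 6000, 16000, 21000],   # d3 = Model C
])
probs = np.array([0.4, 0.2, 0.4])    # P(s1), P(s2), P(s3)

ev = payoffs @ probs                  # expected value of each decision alternative
print(ev)                             # [12600. 11600. 14000.] -> recommend Model C
print(ev.max())                       # 14000.0
```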

Page 27: Decision analysis


Expected Value of Perfect Information

Frequently information is available which can improve the probability estimates for the states of nature.

The expected value of perfect information (EVPI) is the increase in the expected profit that would result if one knew with certainty which state of nature would occur.

The EVPI provides an upper bound on the expected value of any sample or survey information.

Page 28: Decision analysis


Expected Value of Perfect Information

EVPI Calculation
• Step 1: Determine the optimal return corresponding to each state of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision from the amount determined in Step 2.

Page 29: Decision analysis


Expected Value of Perfect Information

Calculate the expected value of the optimum payoff for each state of nature and subtract the EV of the optimal decision:

EVPI = .4(10,000) + .2(18,000) + .4(21,000) - 14,000 = $2,000

Page 30: Decision analysis


Expected Value of Perfect Information

Spreadsheet

PAYOFF TABLE
                             State of Nature
Decision Alternative     s1 = 80   s2 = 100   s3 = 120   Expected Value   Recommended Decision
d1 = Model A              10,000     15,000     14,000       12,600
d2 = Model B               8,000     18,000     12,000       11,600
d3 = Model C               6,000     16,000     21,000       14,000        d3 = Model C
Probability                  0.4        0.2        0.4
Maximum Expected Value                                        14,000
Maximum Payoff            10,000     18,000     21,000    EVwPI = 16,000   EVPI = 2,000
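A minimal Python sketch of the three EVPI steps, assuming the same payoff table and prior probabilities as above (NumPy-based; names are illustrative):

```python
import numpy as np

# Burger Prince payoff table and prior probabilities, as in the slides above.
payoffs = np.array([
    [10000, 15000, 14000],   # Model A
    [ 8000, 18000, 12000],   # Model B
    [ 6000, 16000, 21000],   # Model C
])
probs = np.array([0.4, 0.2, 0.4])

ev_best = (payoffs @ probs).max()      # EV of the optimal decision (14,000)
ev_wpi = payoffs.max(axis=0) @ probs   # expected value with perfect information (16,000)
evpi = ev_wpi - ev_best
print(ev_wpi, evpi)                    # 16000.0 2000.0
```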

Page 31: Decision analysis


Risk Analysis

Risk analysis helps the decision maker recognize the difference between:
• the expected value of a decision alternative, and
• the payoff that might actually occur.

The risk profile for a decision alternative shows the possible payoffs for the decision alternative along with their associated probabilities.

Page 32: Decision analysis


Risk Profile

Model C Decision Alternative

[Chart: probability (vertical axis, .10 to .50) versus profit in $ thousands (horizontal axis, 5 to 25).]
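A risk profile is just the list of possible payoffs with their probabilities. A minimal Python sketch for the Model C alternative, using the payoffs and prior probabilities from the example (the dictionary layout and variable name are my own):

```python
# Risk profile for the Model C decision alternative: possible payoffs and their
# probabilities, taken from the Burger Prince payoff table and prior probabilities.
model_c_risk_profile = {6000: 0.4, 16000: 0.2, 21000: 0.4}

for payoff, prob in sorted(model_c_risk_profile.items()):
    print(f"Profit ${payoff:>6,}: probability {prob:.1f}")
```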

Page 33: Decision analysis


Sensitivity Analysis

Sensitivity analysis can be used to determine how changes to the following inputs affect the recommended decision alternative:
• probabilities for the states of nature
• values of the payoffs

If a small change in the value of one of the inputs causes a change in the recommended decision alternative, extra effort and care should be taken in estimating the input value.
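One simple way to carry out such a sensitivity analysis is to re-solve the expected value problem while sweeping one of the inputs. The sketch below is an assumption-laden illustration, not part of the slides: it varies P(s1), holds P(s2) at 0.2, and assigns the remaining probability to s3.

```python
import numpy as np

# Burger Prince payoff table (Models A, B, C vs. 80, 100, 120 customers/hour).
payoffs = np.array([
    [10000, 15000, 14000],   # Model A
    [ 8000, 18000, 12000],   # Model B
    [ 6000, 16000, 21000],   # Model C
])
models = ["Model A", "Model B", "Model C"]

# Sweep P(s1); P(s2) is fixed at 0.2 for illustration, the remainder goes to s3.
for p1 in (0.2, 0.4, 0.6, 0.8):
    probs = np.array([p1, 0.2, 0.8 - p1])
    best = models[int(np.argmax(payoffs @ probs))]
    print(f"P(s1) = {p1:.1f}: recommend {best}")
# The recommendation switches from Model C to Model A as P(s1) grows.
```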

Page 34: Decision analysis


Bayes’ Theorem and Posterior Probabilities

Knowledge of sample (survey) information can be used to revise the probability estimates for the states of nature.

Prior to obtaining this information, the probability estimates for the states of nature are called prior probabilities.

With knowledge of conditional probabilities for the outcomes or indicators of the sample or survey information, these prior probabilities can be revised by employing Bayes' Theorem.

The outcomes of this analysis are called posterior probabilities or branch probabilities for decision trees.

Page 35: Decision analysis


Computing Branch Probabilities

Branch (Posterior) Probabilities Calculation
• Step 1: For each state of nature, multiply the prior probability by its conditional probability for the indicator -- this gives the joint probabilities for the states and indicator.

Page 36: Decision analysis


Computing Branch Probabilities

Branch (Posterior) Probabilities Calculation
• Step 2: Sum these joint probabilities over all states -- this gives the marginal probability for the indicator.
• Step 3: For each state, divide its joint probability by the marginal probability for the indicator -- this gives the posterior probability distribution.
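A minimal Python sketch of Steps 1-3 (the function name is my own), demonstrated with the prior probabilities and the "favorable" survey conditionals that appear in the Burger Prince example later in the deck:

```python
def posterior_probabilities(priors, conditionals):
    # Step 1: joint probabilities P(I and s_j) = P(I | s_j) * P(s_j)
    joints = [p * c for p, c in zip(priors, conditionals)]
    # Step 2: marginal probability of the indicator, P(I)
    marginal = sum(joints)
    # Step 3: posteriors P(s_j | I) = joint / marginal
    return marginal, [j / marginal for j in joints]

priors = [0.4, 0.2, 0.4]           # P(80), P(100), P(120)
conditionals = [0.2, 0.5, 0.9]     # P(favorable | state)
p_fav, posteriors = posterior_probabilities(priors, conditionals)
print(p_fav)                                # 0.54
print([round(p, 3) for p in posteriors])    # [0.148, 0.185, 0.667]
```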

Page 37: Decision analysis


Expected Value of Sample Information

The expected value of sample information (EVSI) is the additional expected profit possible through knowledge of the sample or survey information.

Page 38: Decision analysis


Expected Value of Sample Information

EVSI Calculation
• Step 1: Determine the optimal decision and its expected return for the possible outcomes of the sample using the posterior probabilities for the states of nature.
• Step 2: Compute the expected value of these optimal returns.
• Step 3: Subtract the EV of the optimal decision obtained without using the sample information from the amount determined in Step 2.

Page 39: Decision analysis


Efficiency of Sample Information

Efficiency of sample information is the ratio of EVSI to EVPI.

As the EVPI provides an upper bound for the EVSI, efficiency is always a number between 0 and 1.

Page 40: Decision analysis


Sample Information

Burger Prince must decide whether or not to purchase a marketing survey from Stanton Marketing for $1,000. The results of the survey are either "favorable" or "unfavorable". The conditional probabilities are:

P(favorable | 80 customers per hour) = .2
P(favorable | 100 customers per hour) = .5
P(favorable | 120 customers per hour) = .9

Should Burger Prince have the survey performed by Stanton Marketing?

Page 41: Decision analysis


Influence Diagram

[Influence diagram: decision nodes "Market Survey" and "Restaurant Size"; chance nodes "Market Survey Results" and "Avg. Number of Customers Per Hour"; consequence node "Profit".]

Page 42: Decision analysis


Posterior Probabilities

Favorable survey result:

State    Prior    Conditional    Joint    Posterior
  80       .4          .2          .08       .148
 100       .2          .5          .10       .185
 120       .4          .9          .36       .667
Total                              .54      1.000

P(favorable) = .54

Page 43: Decision analysis


Posterior Probabilities

Unfavorable survey result:

State    Prior    Conditional    Joint    Posterior
  80       .4          .8          .32       .696
 100       .2          .5          .10       .217
 120       .4          .1          .04       .087
Total                              .46      1.000

P(unfavorable) = .46

Page 44: Decision analysis


Posterior Probabilities

Solution Spreadsheet

Market Research Favorable
State of Nature    Prior    Conditional    Joint    Posterior
s1 = 80              0.4        0.2         0.08      0.148
s2 = 100             0.2        0.5         0.10      0.185
s3 = 120             0.4        0.9         0.36      0.667
P(Favorable) =                              0.54

Market Research Unfavorable
State of Nature    Prior    Conditional    Joint    Posterior
s1 = 80              0.4        0.8         0.32      0.696
s2 = 100             0.2        0.5         0.10      0.217
s3 = 120             0.4        0.1         0.04      0.087
P(Unfavorable) =                            0.46

Page 45: Decision analysis


Decision Tree

Top Half (favorable survey result, I1, probability .54)

[Decision node 2 branches to d1, d2, and d3 (chance nodes 4, 5, and 6). Each chance node branches to s1 (.148), s2 (.185), and s3 (.667). Payoffs: d1: $10,000, $15,000, $14,000; d2: $8,000, $18,000, $12,000; d3: $6,000, $16,000, $21,000.]

Page 46: Decision analysis


Decision Tree

Bottom Half (unfavorable survey result, I2, probability .46)

[Decision node 3 branches to d1, d2, and d3 (chance nodes 7, 8, and 9). Each chance node branches to s1 (.696), s2 (.217), and s3 (.087). Payoffs: d1: $10,000, $15,000, $14,000; d2: $8,000, $18,000, $12,000; d3: $6,000, $16,000, $21,000.]

Page 47: Decision analysis


Decision Tree

Favorable survey result, I1 (.54):
EMV(d1) = .148(10,000) + .185(15,000) + .667(14,000) = $13,593
EMV(d2) = .148(8,000) + .185(18,000) + .667(12,000) = $12,518
EMV(d3) = .148(6,000) + .185(16,000) + .667(21,000) = $17,855   <- best; node 2 value = $17,855

Unfavorable survey result, I2 (.46):
EMV(d1) = .696(10,000) + .217(15,000) + .087(14,000) = $11,433   <- best; node 3 value = $11,433
EMV(d2) = .696(8,000) + .217(18,000) + .087(12,000) = $10,518
EMV(d3) = .696(6,000) + .217(16,000) + .087(21,000) = $9,475

Page 48: Decision analysis


Expected Value of Sample Information

If the outcome of the survey is "favorable", choose Model C. If it is "unfavorable", choose Model A.

EVSI = .54($17,855) + .46($11,433) - $14,000 = $900.88

Since this is less than the $1,000 cost of the survey, the survey should not be purchased.
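A short Python sketch of the EVSI and efficiency calculations for this example (values taken from the slides; variable names are my own):

```python
# EVSI for the Burger Prince survey decision.
p_favorable, p_unfavorable = 0.54, 0.46
ev_given_favorable = 17855        # best EMV if the survey is favorable (Model C)
ev_given_unfavorable = 11433      # best EMV if the survey is unfavorable (Model A)
ev_without_sample = 14000         # best EMV using only the prior probabilities (Model C)
evpi = 2000

evsi = (p_favorable * ev_given_favorable
        + p_unfavorable * ev_given_unfavorable
        - ev_without_sample)
print(round(evsi, 2))             # 900.88
print(round(evsi / evpi, 4))      # 0.4504 = efficiency of the sample information
```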

Page 49: Decision analysis


Efficiency of Sample Information

The efficiency of the survey:

EVSI/EVPI = ($900.88)/($2000) = .4504

Page 50: Decision analysis


Bayes’ Decision Rule:

Using the best available estimates of the probabilities of the respective states of nature (currently the prior probabilities), calculate the expected value of the payoff for each of the possible actions. Choose the action with the maximum expected payoff.

Page 51: Decision analysis


Bayes’ Theory

S_i: state of nature (i = 1, ..., n)
P(S_i): prior probability
I_j: professional information (experiment) (j = 1, ..., n)
P(I_j | S_i): conditional probability
P(I_j \cap S_i) = P(S_i \cap I_j): joint probability
P(S_i | I_j): posterior probability

P(S_i \mid I_j) = \frac{P(S_i \cap I_j)}{P(I_j)} = \frac{P(I_j \mid S_i)\, P(S_i)}{\sum_{i=1}^{n} P(I_j \mid S_i)\, P(S_i)}

Page 52: Decision analysis


Home Work

Problem 13-10

Problem 13-21

Due Date: Nov 11, 2008