
Markov Processes

Chapter 12 (12.1 – 12.5)

12.1 Basic Concepts of Markov Processes

1. A Markov process describes a system moving from one state to another under a certain probabilistic rule.

2. The process consists of a countable number of stages. Stages can correspond to:
– Fixed time periods (days, weeks, months)
– Random points in time, right after a change occurs.

3. At each stage the process can be assessed and determined to be in any one of a countable number of states.

4. The probability of moving from state "i" at stage k to state "j" at stage k + 1 is independent of how the process arrived at state "i".

That is, in a Markov process the past plays no role in determining how the process will move to a future state from a given current state.

Business Applications

• Determining the transition matrix for a restaurant, where probabilities for tomorrow's weather are assessed to help estimate daily profit.

• Studying the behavior of inventories for a computer store.

• Forecasting a policyholder's account status for an insurance company.

• Determining the long-run market share for a certain store.


State Classification

• Recurrent, Transient, and Absorbing States

– Recurrent state: a state that is certain to be revisited in the future.

– Transient state: a state that might not ever be revisited in the future.

– Absorbing state: a state that is never left after it is entered (this is a special case of a recurrent state).


Periodicity

– Periodicity occurs when the process exhibits a regular pattern of moving between states from one stage to the next.


Restrictions on Markov Processes in Business Applications

• Markov processes most common to business applications have the following restrictions:
– The Markov process does not show periodic behavior.
– The Markov process may have both recurrent and transient states; if it has both, all the recurrent states are absorbing states.
– State accessibility:
  • Each absorbing and transient state can eventually be reached from every transient state.
  • Each absorbing state can eventually be reached from every recurrent state that is not absorbing.
– The probability of moving from one state to another is the same for all stages (the transition probabilities are stationary).



12.2 Transition Matrices For Processes With No Absorbing States

• Transition probability pij – the probability that the process moves from state i at one stage to state j at the next stage.

• Transition Matrix organizes the transition probabilities in a matrix form as follows:

      [ p11  p12  p13  ...  p1n ]
      [ p21  p22  p23  ...  p2n ]
P =   [ p31  p32  p33  ...  p3n ]
      [  .    .    .   ...   .  ]
      [ pn1  pn2  pn3  ...  pnn ]

For example, p12 is the probability of moving from state 1 to state 2 in one transition, and pnn is the probability that the process remains in state n in one transition.
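Since each row of P lists the probabilities of all possible moves out of one state, every row must sum to 1. Below is a minimal sketch in Python/NumPy (not from the slides, which work in Excel) of storing a transition matrix and validating it; the 3×3 matrix used for concreteness is the fast-food matrix developed later in this chapter.

```python
import numpy as np

# Transition matrix: row i holds the probabilities of moving
# from state i to each possible state j at the next stage.
P = np.array([
    [0.70, 0.20, 0.10],   # p11, p12, p13
    [0.35, 0.50, 0.15],   # p21, p22, p23
    [0.25, 0.30, 0.45],   # p31, p32, p33
])

# Validity checks: every entry is a probability, every row sums to 1.
assert np.all((P >= 0) & (P <= 1))
assert np.allclose(P.sum(axis=1), 1.0)

print(P[0, 1])   # p12: probability of moving from state 1 to state 2 (0.2)
```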


Construction of Transition Matrices – Some Illustrations for the Non-Absorbing-State Case

Transition Matrix – FAST-FOOD RESTAURANT SELECTION

• Three fast-food restaurants serve the people of Sandpoint, Idaho: Rally Burger, Burger Barn, and Caesar's.

• It was found that a customer’s last visit solely influences the choice of fast-food restaurant at the next visit.

• Management at Caesar's would like to estimate its market share of the fast-food business in Sandpoint.



• From the data collected it was found that:

When the last visit was to...   ...the probabilities of next visiting are:
Rally Burger                    Rally Burger 0.70, Burger Barn 0.20, Caesar's 0.10
Burger Barn                     Rally Burger 0.35, Burger Barn 0.50, Caesar's 0.15
Caesar's                        Rally Burger 0.25, Burger Barn 0.30, Caesar's 0.45

• State definitions:
State 1: The customer chooses Rally Burger
State 2: The customer chooses Burger Barn
State 3: The customer chooses Caesar's

          1    2    3
     1 [ p11  p12  p13 ]
P =  2 [ p21  p22  p23 ]
     3 [ p31  p32  p33 ]


Transition Matrix – ROLLEY'S RENTALS

• Rolley’s Rentals rents bicycles for daily use, and estimates that its profit is a function of the weather conditions.

• The weather was classified into three states:
– Sunny
– Cloudy
– Rainy

• Mr. Rolley would like to estimate his daily expected profit.

• The transition matrix must be determined first.


• Meteorological studies indicate the following transition probabilities:

                        Tomorrow's weather
Today's weather     Sunny   Cloudy   Rainy
Sunny             [ 0.75    0.20     0.05 ]
Cloudy            [ 0.45    0.40     0.15 ]
Rainy             [ 0.35    0.45     0.20 ]

For example, p11 = 0.75 (sunny today, sunny tomorrow) and p23 = 0.15 (cloudy today, rainy tomorrow).


Transition Matrix – TRAFFIC PLANNING

• The 10 Freeway and the 60 Freeway run between the city of Ontario and downtown Los Angeles.

• The travel time on these routes is about the same when there is no traffic congestion.

• Traffic congestion, though, is quite frequent:
– 70% chance on the 10 Freeway on any given day.
– 80% chance on the 60 Freeway on any given day.

• The probability of choosing a particular freeway route depends on the individual's previous trip experience.


• To select an expansion plan for the two highways, the long-run percentages of drivers traveling from Ontario to LA on each freeway are needed.

• To find these percentages the following data are useful:

Last trip was     Road conditions     Next trip is      ...with
on Freeway...     experienced...      on Freeway...     probability
10                No congestion       10                0.9
10                Congestion          10                0.3
60                No congestion       60                0.8
60                Congestion          60                0.2



• The transition matrix, representing the process of route selection, is needed to calculate the long-run percentage of drivers from Ontario to LA who travel on each highway.

• State definitions:
State 1 – The 10 Freeway is selected and no major congestion is experienced.
State 2 – The 10 Freeway is selected and congestion is experienced.
State 3 – The 60 Freeway is selected and no major congestion is experienced.
State 4 – The 60 Freeway is selected and congestion is experienced.


If the last trip was on the 10 Freeway with no major congestion (State 1), there is a 90% chance of choosing the 10 Freeway again and a 10% chance of switching to the 60 Freeway. Congestion is then experienced with probability .70 on the 10 Freeway and .80 on the 60 Freeway, so:

p11 = (.90)(.30) = .27
p12 = (.90)(.70) = .63
p13 = (.10)(.20) = .02
p14 = (.10)(.80) = .08

Repeating this logic for every current state gives the full transition matrix:

                                     Next trip
                             10 Fwy             60 Fwy
Current trip             No cong.  Cong.    No cong.  Cong.
10 Fwy / no congestion  [  .27      .63       .02      .08 ]
10 Fwy / congestion     [  .09      .21       .14      .56 ]
60 Fwy / no congestion  [  .06      .14       .16      .64 ]
60 Fwy / congestion     [  .24      .56       .04      .16 ]
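The same row-by-row logic can be mechanized. The sketch below (an illustration, with names such as stay and cong chosen here rather than taken from the text) rebuilds the 4×4 matrix from its two ingredients: the probability of re-choosing a freeway given the last trip's experience, and each freeway's daily congestion probability.

```python
import numpy as np

cong = {10: 0.70, 60: 0.80}    # daily congestion probability per freeway

# P(next trip is on the SAME freeway | last freeway, congestion last time)
stay = {(10, False): 0.9, (10, True): 0.3,
        (60, False): 0.8, (60, True): 0.2}

# States 1-4: (freeway chosen, congestion experienced)
states = [(10, False), (10, True), (60, False), (60, True)]

P = np.zeros((4, 4))
for i, (fwy, congested) in enumerate(states):
    for j, (next_fwy, next_cong) in enumerate(states):
        # First choose the next freeway, then experience (or avoid) congestion on it.
        p_choose = stay[(fwy, congested)] if next_fwy == fwy else 1 - stay[(fwy, congested)]
        p_cong = cong[next_fwy] if next_cong else 1 - cong[next_fwy]
        P[i, j] = p_choose * p_cong

print(np.round(P, 2))   # first row: [0.27 0.63 0.02 0.08], matching p11..p14
```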


Transition Matrix – CRAFTMADE COMFORT BEDS

• Craftmade arranges for its salespeople to go out on two sales calls each day.

• An analysis shows that the likelihood of sales today depends on the last two days' sales.

• The sales manager wants to forecast next month’s sales, and therefore needs to build a transition matrix.



• Data:

Sales     Sales       Sales       ...with
Today     Yesterday   Tomorrow    probability
0         1, 2        0           0.40
                      1           0.50
                      2           0.10
0         0           0           0.20
                      1           0.65
                      2           0.15
1         0, 2        0           0.30
                      1           0.45
                      2           0.25
1         1           0           0.40
                      1           0.25
                      2           0.35
2         0, 1        0           0.50
                      1           0.40
                      2           0.10
2         2           0           0.60
                      1           0.35
                      2           0.05


• By defining the states as the number of beds sold each day over a two-day period, the Markovian assumptions are satisfied.

• Definition of states:
– State (i, j) = the salesperson sold i beds yesterday and j beds today (i = 0, 1, 2 and j = 0, 1, 2).
– Example: State (1,2) = the salesperson sold 1 bed yesterday and 2 beds today.



States (number of beds sold yesterday and today):
State 1: (0,0)   State 2: (0,1)   State 3: (0,2)
State 4: (1,0)   State 5: (1,1)   State 6: (1,2)
State 7: (2,0)   State 8: (2,1)   State 9: (2,2)

        (0,0) (0,1) (0,2) (1,0) (1,1) (1,2) (2,0) (2,1) (2,2)
(0,0) [ .20   .65   .15    0     0     0     0     0     0  ]
(0,1) [  0     0     0    .30   .45   .25    0     0     0  ]
(0,2) [  0     0     0     0     0     0    .50   .40   .10 ]
(1,0) [ .40   .50   .10    0     0     0     0     0     0  ]
(1,1) [  0     0     0    .40   .25   .35    0     0     0  ]
(1,2) [  0     0     0     0     0     0    .50   .40   .10 ]
(2,0) [ .40   .50   .10    0     0     0     0     0     0  ]
(2,1) [  0     0     0    .30   .45   .25    0     0     0  ]
(2,2) [  0     0     0     0     0     0    .60   .35   .05 ]

For example, if 0 beds were sold yesterday and 0 today (State 1), and 1 bed is sold tomorrow, the next state is (0,1) = State 2: the number of beds sold today and tomorrow. A sketch of this construction in code follows.
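In the sketch below (an illustration; the dist table merely re-keys the data table above by (today, yesterday)), each state (yesterday, today) sends its probability to the states (today, tomorrow).

```python
import numpy as np

# P(tomorrow's sales = t | today's sales, yesterday's sales), from the data table
dist = {
    (0, 1): (0.40, 0.50, 0.10), (0, 2): (0.40, 0.50, 0.10),
    (0, 0): (0.20, 0.65, 0.15),
    (1, 0): (0.30, 0.45, 0.25), (1, 2): (0.30, 0.45, 0.25),
    (1, 1): (0.40, 0.25, 0.35),
    (2, 0): (0.50, 0.40, 0.10), (2, 1): (0.50, 0.40, 0.10),
    (2, 2): (0.60, 0.35, 0.05),
}

# States 1-9 in the order (0,0), (0,1), ..., (2,2) = (yesterday, today)
states = [(y, t) for y in range(3) for t in range(3)]

P = np.zeros((9, 9))
for i, (yesterday, today) in enumerate(states):
    for tomorrow in range(3):
        j = states.index((today, tomorrow))          # next state is (today, tomorrow)
        P[i, j] = dist[(today, yesterday)][tomorrow]

assert np.allclose(P.sum(axis=1), 1.0)               # every row is a distribution
```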


12.3 Transition Matrices For Processes with Absorbing States

• Many Markov processes involve absorbing states.

• Once a process enters an absorbing state it never leaves it.

• The transition probability defining an absorbing state “i” is pii = 1.


Transition Matrix – DR. DALE BANDON, DDS

• Patients of Dr. Bandon, DDS, tend to wait until their insurance company pays its portion of the payment before they pay their portion.

• Based on the oldest unpaid charge, accounts are classified into the following categories:
– Paid up.
– Sent for collection (if more than 90 days overdue).
– Less than 45 days overdue.
– Between 45 and 90 days overdue.


• Dr. Bandon’s accountant wants to estimate the percentage of claims that will eventually be turned over to a collection agency.

• A Markov process is suggested to describe how the account transitions from one category to the next.

• A transition matrix of this process is required.




                                      Account status next month
Account status                 Paid    Sent to       Less than 45    Between 45 and
this month                     up      collection    days overdue    90 days overdue
Paid up                      [ 1        0             0               0   ]
Sent to collection           [ 0        1             0               0   ]
Less than 45 days overdue    [ .45      0            .30             .25  ]
Between 45-90 days overdue   [ .55     .05           .15             .25  ]

Once an account is paid up, it remains paid up forever (p11 = 1).

Note also that an account currently less than 45 days overdue is not turned over to the collection agency next month: its transition probability into "Sent to collection" is 0.


Suppose the "oldest" unpaid charge is 40 days old, and the second "oldest" is 5 days old.


If only the "oldest" charge is paid this month, then next month the account will be less than 45 days overdue (the remaining charge will then be only about 35 days old).


Transition Matrix – STACY'S DEPARTMENT STORES

• The personnel management at Stacy's wishes to estimate the average number of additional years an employee will work at Stacy's.

• The chance that an employee will stay with the company next year was shown to depend on whether he or she was promoted this year.

• A transition matrix describing the yearly changes in employee status is required.


                          Employee status next year
Employee status        Retired   Quit   Fired   Promoted   Not Promoted
this year
Retired              [ 1         0      0       0          0   ]
Quit                 [ 0         1      0       0          0   ]
Fired                [ 0         0      1       0          0   ]
Promoted             [ .07      .12    .03     .32        .46  ]
Not Promoted         [ .09      .17    .04     .17        .53  ]


Transition Matrix – GAMBLING IN LAS VEGAS

• Tom Turner can afford to spend $50 on chips to play roulette in Las Vegas. He will place a $10 bet on black on each play.

• Tom would like to go home having doubled his money. Thus, Tom will stop playing either when his fortune reaches $100, or when he loses all his money.

• Tom wishes to determine:
– His chance of reaching his goal of $100.
– The expected number of times he will play roulette.


Transition Matrix - GAMBLING

• Tom's fortune can be modeled as a Markov process where:
– Stages correspond to each play (a spin of the roulette wheel).
– States correspond to Tom's fortune after each play.

• There are 38 possible outcomes on a roulette wheel:
– 18 black
– 18 red
– 2 green

The probability that Tom will win $10 is 18/38 ≈ 0.47, and the probability that he will lose $10 is 0.53.


                              Stake at start of next play
Current stake   $0    $10   $20   $30   $40   $50   $60   $70   $80   $90   $100
$0            [ 1     0     0     0     0     0     0     0     0     0     0   ]
$10           [ .53   0     .47   0     0     0     0     0     0     0     0   ]
$20           [ 0     .53   0     .47   0     0     0     0     0     0     0   ]
$30           [ 0     0     .53   0     .47   0     0     0     0     0     0   ]
$40           [ 0     0     0     .53   0     .47   0     0     0     0     0   ]
$50           [ 0     0     0     0     .53   0     .47   0     0     0     0   ]
$60           [ 0     0     0     0     0     .53   0     .47   0     0     0   ]
$70           [ 0     0     0     0     0     0     .53   0     .47   0     0   ]
$80           [ 0     0     0     0     0     0     0     .53   0     .47   0   ]
$90           [ 0     0     0     0     0     0     0     0     .53   0     .47 ]
$100          [ 0     0     0     0     0     0     0     0     0     0     1   ]
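This is the classic gambler's-ruin chain, and Tom's first question can be answered numerically. The sketch below (an illustration using the slide's rounded 0.47/0.53 probabilities) builds the 11-state matrix and reads off the probability of ending at $100; with these rounded values it comes to roughly 0.35.

```python
import numpy as np

p_win, p_lose = 0.47, 0.53       # rounded win/lose probabilities per play
n = 11                           # states: $0, $10, ..., $100

P = np.zeros((n, n))
P[0, 0] = P[n - 1, n - 1] = 1.0  # $0 and $100 are absorbing
for i in range(1, n - 1):
    P[i, i + 1] = p_win          # win $10
    P[i, i - 1] = p_lose         # lose $10

start = np.zeros(n)
start[5] = 1.0                   # Tom starts with $50

# After many plays the process is (essentially) absorbed at $0 or $100.
dist = start @ np.linalg.matrix_power(P, 1000)
print(round(dist[-1], 4))        # ~0.354: Tom's chance of reaching $100
```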


12.4 Determining a State Vector

• The state probability πi(j) is the probability that the Markov process is in state i at stage j.

• The state vector is the set of all state probabilities pertaining to a certain stage:

π(j) = {π1(j), π2(j), π3(j), …, πn(j)}


FAST-FOOD RESTAURANT SELECTION - continued

• Jon Lee is currently eating at Rally Burger.

• What is the probability of Jon eating in each of the three restaurants in the future?


FAST-FOOD RESTAURANT – Finding πRally(3)

Currently (stage 1) Jon is eating at Rally Burger. At stage 2 he chooses Rally Burger (.70), Burger Barn (.20), or Caesar's (.10); from each of these, stage 3 branches again according to the transition probabilities.

What is the probability that Jon will eat at Rally Burger at stage 3 (at his third visit to a fast-food restaurant)?


SOLUTION

          1     2     3
     1 [ 0.70  0.20  0.10 ]
P =  2 [ 0.35  0.50  0.15 ]
     3 [ 0.25  0.30  0.45 ]

Three two-step paths end at Rally Burger at stage 3:

Rally Burger → Rally Burger → Rally Burger:   .70(.70) = .49
Rally Burger → Burger Barn → Rally Burger:    .20(.35) = .07
Rally Burger → Caesar's → Rally Burger:       .10(.25) = .025

πRally(3) = .49 + .07 + .025 = .585


FAST-FOOD RESTAURANT – State Vectors

• The state vectors for stages 1, 2, and 3 are:

Stage 1: π1(1) = 1, π2(1) = 0, π3(1) = 0;  π(1) = (1  0  0)
Stage 2: π1(2) = .70, π2(2) = .20, π3(2) = .10;  π(2) = (.70  .20  .10)
Stage 3: π1(3) = .585, π2(3) = .270, π3(3) = .145;  π(3) = (.585  .270  .145)


• Calculating the state vector for the next stage using matrix algebra is quite simple:

π(j + 1) = π(j)P

Example: π(3) = π(2)P

                       [ .70  .20  .10 ]
   = (.70  .20  .10)   [ .35  .50  .15 ]
                       [ .25  .30  .45 ]

   = ( .70(.70)+.20(.35)+.10(.25)   .70(.20)+.20(.50)+.10(.30)   .70(.10)+.20(.15)+.10(.45) )

   = ( .585  .270  .145 )

A code sketch of this update follows.
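A sketch of the same update in code (the slides themselves do this in Excel), reproducing π(2) and π(3) for Jon's visits:

```python
import numpy as np

P = np.array([[0.70, 0.20, 0.10],
              [0.35, 0.50, 0.15],
              [0.25, 0.30, 0.45]])

pi = np.array([1.0, 0.0, 0.0])      # stage 1: Jon is at Rally Burger
for stage in (2, 3):
    pi = pi @ P                     # pi(j + 1) = pi(j) P
    print(stage, np.round(pi, 3))   # stage 2: [0.7 0.2 0.1]; stage 3: [0.585 0.27 0.145]
```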


Steady State Probabilities

=$B3*F$2+$C3*F$3+$D3*F$4
Drag across to C4:D4; then drag B4:D4 to B22:D22.


Steady State Probabilities in Non-Absorbing Markov Chains

For each state, the state probability approaches a limiting value as the number of stages increases. This value is called the steady state probability.

Notice: for a customer who is currently visiting Rally Burger, the state probabilities 18 stages later are (.5111, .3111, .1778).


Yet for a customer who is currently visiting Caesar's, the state probabilities 20 stages later are the same: (.5111, .3111, .1778).


When no absorbing states are present, the limiting behavior of the state probabilities (the steady state probabilities) does not depend on the initial state probabilities, as the sketch below illustrates.
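A quick way to see this is to iterate π(j + 1) = π(j)P from two different starting states (a sketch, not from the slides):

```python
import numpy as np

P = np.array([[0.70, 0.20, 0.10],
              [0.35, 0.50, 0.15],
              [0.25, 0.30, 0.45]])

for start in (np.array([1.0, 0.0, 0.0]),    # currently at Rally Burger
              np.array([0.0, 0.0, 1.0])):   # currently at Caesar's
    pi = start
    for _ in range(20):                     # 20 stages is plenty here
        pi = pi @ P
    print(np.round(pi, 4))                  # both runs: [0.5111 0.3111 0.1778]
```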


State Probabilities for Absorbing Markov Chains – DR. DALE BANDON (continued)

• Dr. Bandon wishes to forecast the status of two receivable accounts over a 10-month period:
– an account currently less than 45 days overdue
– an account currently between 45 and 90 days overdue


• Recall the transition matrix and the state definitions from Section 12.3 (Paid up; Sent to collection; Less than 45 days overdue; Between 45 and 90 days overdue).




The current state of Pamela Tovar's account is "Less than 45 days overdue".

=$B3*G$2+$C3*G$3+$D3*G$4+$E3*G$5
Drag across to C4:E4; then drag B4:E4 to B22:E22.


The current state of Ellen Stoval's account is "Between 45 and 90 days overdue".

=$B3*G$2+$C3*G$3+$D3*G$4+$E3*G$5
Drag across to C4:E4; then drag B4:E4 to B22:E22.


(Chart: the Tovar and Stoval state probabilities plotted month by month.)

Both processes appear to converge over time, but not to the same values.


In the presence of absorbing states, the limiting behavior of the state probabilities does depend on the initial state, as the sketch below illustrates.
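Running the same iteration on Dr. Bandon's chain makes the contrast concrete (a sketch; the limiting values in the comments follow from the matrix, with all probability eventually absorbed in "Paid up" or "Sent to collection"):

```python
import numpy as np

P = np.array([[1.00, 0.00, 0.00, 0.00],    # Paid up (absorbing)
              [0.00, 1.00, 0.00, 0.00],    # Sent to collection (absorbing)
              [0.45, 0.00, 0.30, 0.25],    # Less than 45 days overdue
              [0.55, 0.05, 0.15, 0.25]])   # Between 45 and 90 days overdue

for start in (np.array([0.0, 0.0, 1.0, 0.0]),    # Tovar: < 45 days overdue
              np.array([0.0, 0.0, 0.0, 1.0])):   # Stoval: 45-90 days overdue
    pi = start
    for _ in range(100):
        pi = pi @ P
    print(np.round(pi, 4))
# Tovar  -> [0.9744 0.0256 0. 0.]    Stoval -> [0.9282 0.0718 0. 0.]
# Different limits: about 2.6% vs. 7.2% of such accounts go to collection.
```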


Using the Template Markov.xls

The transition matrix can be entered between B4 and M15.

Enter here the initial state probabilities


12.5 Determining Limiting Behavior for Markov Processes without Absorbing States

• Steady state probabilities
– Determining the long-run (steady state) behavior of a Markov process enables an economic analysis of many business systems.
– The limiting behavior of a Markov process can be described by the steady state probabilities.

Examples of Steady State Probabilities

• The long-run market share of the three restaurants in Sandpoint, Idaho.

• The proportion of days (in the long run) that are sunny, cloudy, or rainy in Lahaina (the Rolley's Rentals problem).

• The proportion of time that a driver on the 10 Freeway will experience no congestion (the traffic planning problem).

Steady State Probabilities

• Definitions
– The steady state probability of state i is πi.
– The steady state vector is π = {π1, π2, …, πn}.

• Calculating steady state probabilities
– Steady state behavior means that once the process reaches steady state, the state probabilities no longer change from stage to stage. Thus,

π = πP

Steady State Probabilities – Linear Equations

Written out, π = πP is the system of n equations

π1 = p11π1 + p21π2 + p31π3 + … + pn1πn
π2 = p12π1 + p22π2 + p32π3 + … + pn2πn
π3 = p13π1 + p23π2 + p33π3 + … + pn3πn
...
πn = p1nπ1 + p2nπ2 + p3nπ3 + … + pnnπn

One equation of this set of n equations depends on all the others and can be dropped. Instead, we add the probability condition equation:

π1 + π2 + π3 + … + πn = 1


• Illustration of the steady state probability equations for the fast-food problem:

      [ .70  .20  .10 ]
P =   [ .35  .50  .15 ]
      [ .25  .30  .45 ]

π1 = .70π1 + .35π2 + .25π3
π2 = .20π1 + .50π2 + .30π3
π3 = .10π1 + .15π2 + .45π3
π1 + π2 + π3 = 1


Solving the Linear Equations of the Steady State Probabilities

• Formulate a linear programming model with an arbitrary objective function:

Max or Min  π1 + π2 + π3
s.t.   .30π1 - .35π2 - .25π3 = 0
      -.20π1 + .50π2 - .30π3 = 0
       π1 + π2 + π3 = 1

The optimal solution: π1 = .5111, π2 = .3111, π3 = .1778. A direct linear-algebra alternative is sketched below.
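Equivalently, the equations can be solved directly with linear algebra instead of an LP (a sketch, not the book's method): rewrite the balance equations as (P^T - I)π = 0, replace one of them with the normalization π1 + π2 + π3 = 1, and solve the square system.

```python
import numpy as np

P = np.array([[0.70, 0.20, 0.10],
              [0.35, 0.50, 0.15],
              [0.25, 0.30, 0.45]])

n = P.shape[0]
A = P.T - np.eye(n)        # balance equations: (P^T - I) pi = 0
A[-1, :] = 1.0             # drop one equation, add: sum of pi equals 1
b = np.zeros(n)
b[-1] = 1.0

pi = np.linalg.solve(A, b)
print(np.round(pi, 4))     # [0.5111 0.3111 0.1778]
print(np.round(1 / pi, 2)) # mean recurrence times 1/pi_i (about 1.96, 3.21, 5.6),
                           # as defined at the end of this section
```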


Solving for the Steady State Probabilities – Excel Solver Template

=SUM(B2:D2)

=SUMPRODUCT($B$2:$D$2,B3:D3)
Drag to cells E4:E5.

Mean Recurrence Time

• Mean recurrence time is the average time (number of transitions) required for the process to return to a given state:

Mean recurrence time of state i = 1/πi

• Example: A customer who is currently eating at Rally Burger will return to Rally Burger every 1/.5111 ≈ 1.96 visits, on average.


Copyright John Wiley & Sons, Inc. All rights reserved. Reproduction or translation of this work beyond that permitted in Section 117 of the United States Copyright Act without the express written consent of the copyright owner is unlawful. Requests for further information should be addressed to the Permissions Department, John Wiley & Sons, Inc. Adopters of the textbook are granted permission to make back-up copies for their own use only, to make copies for distribution to students of the course the textbook is used in, and to modify this material to best suit their instructional needs. Under no circumstances can copies be made for resale. The Publisher assumes no responsibility for errors, omissions, or damages caused by the use of these programs or from the use of the information contained herein.