Chapter 3
A discrete-time Markov Chain consists of random variables Xn for n = 0, 1, 2, 3, …, where the possible values for each Xn are the integers 0, 1, 2, …, m; each of these possible values represents what is called a state. Typically, 0 represents what is called the initial state, and we have that X0 = 0.


The conditional probability Pr[Xn+1 = j | Xn = i] is called a transitional probability. When this conditional probability does not depend on n, the stochastic process defined by the Markov Chain is called homogeneous, the probability is denoted by pij, and the transitional probability matrix is then defined to be

$$P = \begin{pmatrix} p_{00} & p_{01} & \cdots & p_{0m} \\ p_{10} & p_{11} & \cdots & p_{1m} \\ \vdots & \vdots & \ddots & \vdots \\ p_{m0} & p_{m1} & \cdots & p_{mm} \end{pmatrix}$$

Observe that each row of the matrix must sum to 1, that is, for each i,
$$\sum_{j=0}^{m} p_{ij} = 1.$$

If pii = 1, then state i is called an absorbing state, and it is not possible to leave this state.

When the conditional probability depends on n, the stochastic process defined by the Markov Chain is called non-homogeneous.
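As a quick illustration of these definitions, here is a minimal Python sketch; the matrix values are hypothetical, chosen only for the example. It checks the row-sum property and flags absorbing states.

```python
import numpy as np

# Hypothetical homogeneous transition matrix for a 3-state chain (states 0, 1, 2).
P = np.array([
    [0.5, 0.3, 0.2],
    [0.0, 1.0, 0.0],   # p_11 = 1, so state 1 is absorbing
    [0.4, 0.4, 0.2],
])

# Each row must sum to 1.
assert np.allclose(P.sum(axis=1), 1.0)

# A state i is absorbing when p_ii = 1.
absorbing = [i for i in range(P.shape[0]) if np.isclose(P[i, i], 1.0)]
print("absorbing states:", absorbing)   # -> [1]
```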

1. Chapter 3 Exercises

Let States 0, 1, and 2 be defined respectively by three rooms, labeled 0, 1, and 2, and let Xn be the room which a particular person occupies at time n = 0, 1, 2, 3, …. When the person is in a room at time n, one of the paths leading from that room to another room, or possibly back to the same room, is selected and taken at random, which determines the room the person will be in at time n + 1.

(a) Decide whether the stochastic process defined by the Markov Chain is homogeneous or non-homogeneous, and say why.

The stochastic process is homogeneous, since paths from room to room are always the same, implying that the transitional probability of moving from one room to another room does not depend on n.

(b) Suppose the paths between rooms are as in the figure below.

[Figure: Rooms 0, 1, and 2 connected by paths, with the path probabilities indicated on the figure.]

(i) Find the transitional probability matrix.

$$P = \begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix}$$

(c) Suppose the paths between rooms are as in the figure below.

[Figure: Rooms 0, 1, and 2 connected by paths, with the path probabilities indicated on the figure.]

(i) Find the transitional probability matrix.

$$P = \begin{pmatrix} 0 & 2/5 & 3/5 \\ 2/3 & 0 & 1/3 \\ 3/4 & 1/4 & 0 \end{pmatrix}$$

(d) Suppose the paths between rooms are as in the figure below, where each arrow represents a locked door, and when a locked door is chosen the person does not take any path.

[Figure: Rooms 0, 1, and 2 connected by paths, some marked as locked doors.]

(i) Find the transitional probability matrix.

$$P = \begin{pmatrix} 1/5 & 2/5 & 2/5 \\ 0 & 1 & 0 \\ 2/4 & 1/4 & 1/4 \end{pmatrix}$$

(e) Suppose the paths between rooms are as in the figure below, where each arrow represents a locked door, and when a locked door is chosen the person must choose and take a different path. Find each item listed following the figure.

[Figure: Rooms 0, 1, and 2 connected by paths, some marked as locked doors.]

(i) Find the transitional probability matrix.

$$P = \begin{pmatrix} 0 & 2/4 & 2/4 \\ 0 & 1 & 0 \\ 2/3 & 1/3 & 0 \end{pmatrix}$$

We let $\pi_{i,n}$ denote the probability of being in State i at time n, and we denote the row vector of such probabilities for every state as
$$\pi_n = (\pi_{0,n},\ \pi_{1,n},\ \ldots,\ \pi_{m,n}),$$
which is called the state vector at time n. It must of course be true that
$$\sum_{i=0}^{m} \pi_{i,n} = 1.$$


(ii) Write a formula for each of the following: the probability of being in room 0 at time n = 1, the probability of being in room 1 at time n = 1, and the probability of being in room 2 at time n = 1.

$$\pi_{0,1} = \Pr[X_1 = 0] = \Pr[X_0 = 0 \cap X_1 = 0] + \Pr[X_0 = 1 \cap X_1 = 0] + \Pr[X_0 = 2 \cap X_1 = 0]$$
$$= \Pr[X_0 = 0]\Pr[X_1 = 0 \mid X_0 = 0] + \Pr[X_0 = 1]\Pr[X_1 = 0 \mid X_0 = 1] + \Pr[X_0 = 2]\Pr[X_1 = 0 \mid X_0 = 2]$$
$$= \pi_{0,0}\,p_{00} + \pi_{1,0}\,p_{10} + \pi_{2,0}\,p_{20} = (4/9)\pi_{0,0} + (2/9)\pi_{1,0} + (3/6)\pi_{2,0}$$

$$\pi_{1,1} = \Pr[X_1 = 1] = \pi_{0,0}\,p_{01} + \pi_{1,0}\,p_{11} + \pi_{2,0}\,p_{21} = (2/9)\pi_{0,0} + (6/9)\pi_{1,0} + (1/6)\pi_{2,0}$$

$$\pi_{2,1} = \Pr[X_1 = 2] = \pi_{0,0}\,p_{02} + \pi_{1,0}\,p_{12} + \pi_{2,0}\,p_{22} = (3/9)\pi_{0,0} + (1/9)\pi_{1,0} + (2/6)\pi_{2,0}$$

We may now write $\pi_1 = (\pi_{0,1}, \pi_{1,1}, \pi_{2,1}) = \pi_0 P$.
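A minimal numerical sketch of this update for the room chain of part (b), using NumPy; the initial state vector here is just an example:

```python
import numpy as np

# Transitional probability matrix from Exercise 1(b).
P = np.array([
    [4/9, 2/9, 3/9],
    [2/9, 6/9, 1/9],
    [3/6, 1/6, 2/6],
])

pi_0 = np.array([1.0, 0.0, 0.0])   # example: the person starts in room 0
pi_1 = pi_0 @ P                    # state vector at time 1
print(pi_1)                        # -> [0.444..., 0.222..., 0.333...]
```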

(iii) Write a formula for each of the following: the probability of being in room 0 at time n = 2, the probability of being in room 1 at time n = 2, and the probability of being in room 2 at time n = 2.

$$\pi_{0,2} = \Pr[X_2 = 0] = \pi_{0,1}\,p_{00} + \pi_{1,1}\,p_{10} + \pi_{2,1}\,p_{20} = (4/9)\pi_{0,1} + (2/9)\pi_{1,1} + (3/6)\pi_{2,1}$$
$$\pi_{1,2} = \Pr[X_2 = 1] = \pi_{0,1}\,p_{01} + \pi_{1,1}\,p_{11} + \pi_{2,1}\,p_{21} = (2/9)\pi_{0,1} + (6/9)\pi_{1,1} + (1/6)\pi_{2,1}$$
$$\pi_{2,2} = \Pr[X_2 = 2] = \pi_{0,1}\,p_{02} + \pi_{1,1}\,p_{12} + \pi_{2,1}\,p_{22} = (3/9)\pi_{0,1} + (1/9)\pi_{1,1} + (2/6)\pi_{2,1}$$

We may now write $\pi_2 = (\pi_{0,2}, \pi_{1,2}, \pi_{2,2}) = \pi_1 P = \pi_0 P\,P = \pi_0 P^2$.

(iv) Write a formula for each of the following: the probability of being in room 0 at time n, the probability of being in room 1 at time n, and the probability of being in room 2 at time n.

$$\pi_{0,n} = \Pr[X_n = 0] = \pi_{0,n-1}\,p_{00} + \pi_{1,n-1}\,p_{10} + \pi_{2,n-1}\,p_{20} = (4/9)\pi_{0,n-1} + (2/9)\pi_{1,n-1} + (3/6)\pi_{2,n-1}$$
$$\pi_{1,n} = \Pr[X_n = 1] = \pi_{0,n-1}\,p_{01} + \pi_{1,n-1}\,p_{11} + \pi_{2,n-1}\,p_{21} = (2/9)\pi_{0,n-1} + (6/9)\pi_{1,n-1} + (1/6)\pi_{2,n-1}$$
$$\pi_{2,n} = \Pr[X_n = 2] = \pi_{0,n-1}\,p_{02} + \pi_{1,n-1}\,p_{12} + \pi_{2,n-1}\,p_{22} = (3/9)\pi_{0,n-1} + (1/9)\pi_{1,n-1} + (2/6)\pi_{2,n-1}$$

We may now write $\pi_n = (\pi_{0,n}, \pi_{1,n}, \pi_{2,n}) = \pi_{n-1} P = \pi_0 P^n$.
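The n-step relation can be computed directly with a matrix power. A small sketch (n = 5 is an arbitrary example):

```python
import numpy as np

P = np.array([
    [4/9, 2/9, 3/9],
    [2/9, 6/9, 1/9],
    [3/6, 1/6, 2/6],
])

pi_0 = np.array([1.0, 0.0, 0.0])   # example initial state vector
n = 5                              # arbitrary example time
pi_n = pi_0 @ np.linalg.matrix_power(P, n)
print(pi_n, pi_n.sum())            # the entries still sum to 1
```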


From Exercise 1(b), we can see that $\pi_{n+r} = \pi_n P^r$.

(v) Suppose a person is in room 0 at time 0. Find the probability of being in room 0 at time n = 2, the probability of being in room 1 at time n = 2, and the probability of being in room 2 at time n = 2.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 0 at time 0, we must have $\pi_0 = (1, 0, 0)$.

$$\pi_2 = \pi_0 P^2 = (1, 0, 0)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix}^{2} = (4/9,\ 2/9,\ 3/9)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix} = (67/162,\ 49/162,\ 23/81)$$

(vi) Suppose a person is in room 1 at time 0. Find the probability of being in room 0 at time n = 2, the probability of being in room 1 at time n = 2, and the probability of being in room 2 at time n = 2.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 1 at time 0, we must have $\pi_0 = (0, 1, 0)$.

$$\pi_2 = \pi_0 P^2 = (0, 1, 0)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix}^{2} = (2/9,\ 6/9,\ 1/9)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix} = (49/162,\ 83/162,\ 5/27)$$

(vii) Suppose a person is in room 2 at time 0. Find the probability of being in room 0 at time n = 2, the probability of being in room 1 at time n = 2, and the probability of being in room 2 at time n = 2.

These probabilities are respectively the entries of the row vector $\pi_2 = \pi_0 P^2$. Since the person is in room 2 at time 0, we must have $\pi_0 = (0, 0, 1)$.

$$\pi_2 = \pi_0 P^2 = (0, 0, 1)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix}^{2} = (3/6,\ 1/6,\ 2/6)\begin{pmatrix} 4/9 & 2/9 & 3/9 \\ 2/9 & 6/9 & 1/9 \\ 3/6 & 1/6 & 2/6 \end{pmatrix} = (23/54,\ 5/18,\ 8/27)$$
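Since the three starting rooms correspond to the three rows of P², one short check (a sketch only, using the matrix from part (b) with exact fractions) reproduces all of parts (v)-(vii):

```python
from fractions import Fraction as F

# Transitional probability matrix from Exercise 1(b), with exact fractions.
P = [
    [F(4, 9), F(2, 9), F(3, 9)],
    [F(2, 9), F(6, 9), F(1, 9)],
    [F(3, 6), F(1, 6), F(2, 6)],
]

def mat_mult(A, B):
    """Multiply two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

P2 = mat_mult(P, P)
for start, row in enumerate(P2):
    print(f"start in room {start}:", [str(x) for x in row])
# start in room 0: ['67/162', '49/162', '23/81']
# start in room 1: ['49/162', '83/162', '5/27']
# start in room 2: ['23/54', '5/18', '8/27']
```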

Completing parts (c), (d), and (e) is for homework.

When the transitional probability Pr[Xn+1 = j | Xn = i] depends on n, the stochastic process defined by the Markov Chain is called non-homogeneous. We can define
$$p^{ij}_n = \Pr[X_{n+1} = j \mid X_n = i],$$
which is the probability of moving from State i to State j in discrete time interval #(n + 1) of the process, and we can denote the matrix whose entries are these probabilities by P(n).

2. Chapter 3 Exercises

Let States 0 and 1 be defined respectively by a lighter not igniting or igniting, and let Xn be 0 or 1 depending respectively on whether the lighter does not ignite or does ignite on try number n = 0, 1, 2, 3, …, with X0 = 1. Whenever the lighter is triggered at time n, if X(n−1) = 0 then the probabilities the lighter will not or will ignite are respectively 1 − 0.8^n and 0.8^n, but if X(n−1) = 1 then the probabilities the lighter will not or will ignite are respectively 1 − 0.9^n and 0.9^n.

(a) Decide whether the stochastic process defined by the Markov Chain is homogeneous or non-homogeneous, and say why.

The stochastic process is non-homogeneous, since the probability of the lighter igniting or not igniting depends on n.

(b) Find a formula for each matrix P(n) for n = 0, 1, 2, 3, ….

$$P(n) = \begin{pmatrix} 1 - 0.8^{\,n+1} & 0.8^{\,n+1} \\ 1 - 0.9^{\,n+1} & 0.9^{\,n+1} \end{pmatrix}$$

(c) Find each of the following:

$$\pi_0 = (0, 1)$$
$$\pi_1 = \pi_0 P(0) = (0, 1)\begin{pmatrix} 1 - 0.8 & 0.8 \\ 1 - 0.9 & 0.9 \end{pmatrix} = (0.1,\ 0.9)$$
$$\pi_2 = \pi_1 P(1) = (0.1,\ 0.9)\begin{pmatrix} 1 - 0.8^2 & 0.8^2 \\ 1 - 0.9^2 & 0.9^2 \end{pmatrix} = (0.207,\ 0.793)$$
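A small sketch of this non-homogeneous recursion; the P(n) formula is the one derived in part (b):

```python
import numpy as np

def P(n):
    """Transition matrix for the lighter chain over the interval from time n to n + 1."""
    return np.array([
        [1 - 0.8 ** (n + 1), 0.8 ** (n + 1)],
        [1 - 0.9 ** (n + 1), 0.9 ** (n + 1)],
    ])

pi = np.array([0.0, 1.0])          # pi_0: the lighter ignites at try 0
for n in range(2):                 # advance to pi_1 and then pi_2
    pi = pi @ P(n)
    print(n + 1, pi)
# 1 [0.1  0.9]
# 2 [0.207  0.793]
```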

3. Chapter 3 Exercises

Let States 0 and 1 be defined respectively by a battery providing power or not providing power, and let Xn be 0 or 1 depending respectively on whether the battery does or does not provide power on day number n = 0, 1, 2, 3, …, with X0 = 0. Whenever the battery is used at time n, if X(n−1) = 1 then the probabilities the battery will or will not provide power are respectively 0 and 1, but if X(n−1) = 0 then the probabilities the battery will or will not provide power are respectively 0.99^n and 1 − 0.99^n.

(a) Decide whether the stochastic process defined by the Markov Chain is homogeneous or non-homogeneous, and say why.

The stochastic process is non-homogeneous, since the probability of the battery providing or not providing power depends on n.

(b) Find a formula for each matrix P(n) for n = 0, 1, 2, 3, ….

$$P(n) = \begin{pmatrix} 0.99^{\,n+1} & 1 - 0.99^{\,n+1} \\ 0 & 1 \end{pmatrix}$$

(c) Find each of the following:

$$\pi_0 = (1, 0)$$
$$\pi_1 = \pi_0 P(0) = (1, 0)\begin{pmatrix} 0.99 & 1 - 0.99 \\ 0 & 1 \end{pmatrix} = (0.99,\ 0.01)$$
$$\pi_2 = \pi_1 P(1) = (0.99,\ 0.01)\begin{pmatrix} 0.99^2 & 1 - 0.99^2 \\ 0 & 1 \end{pmatrix} = (0.970299,\ 0.029701)$$

(d) Suppose that the battery costs $50, and the manufacturer of the battery pays a refund of 1/(n + 1) of the cost if the battery does not provide power on day number n = 1, 2, 3. Find the expected refund.

The expected refund is $(50/2)\,\pi_{1,1} + (50/3)\,\pi_{1,2} + (50/4)\,\pi_{1,3}$.

We have $\pi_{1,1}$ and $\pi_{1,2}$ from part (c), but we need to calculate $\pi_{1,3}$ as follows:

$$\pi_3 = \pi_2 P(2) = (0.970299,\ 0.029701)\begin{pmatrix} 0.99^3 & 1 - 0.99^3 \\ 0 & 1 \end{pmatrix} = (\text{don't care},\ 0.058520)$$

The expected refund is
$$(50/2)(0.01) + (50/3)(0.029701) + (50/4)(0.058520) \approx \$1.48.$$
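A sketch that reproduces this calculation under the matrix P(n) derived in part (b):

```python
import numpy as np

def P(n):
    """Transition matrix for the battery chain over the interval from time n to n + 1."""
    return np.array([
        [0.99 ** (n + 1), 1 - 0.99 ** (n + 1)],
        [0.0, 1.0],                 # state 1 (no power) is absorbing
    ])

pi = np.array([1.0, 0.0])           # pi_0: the battery provides power on day 0
refund = 0.0
for day in range(1, 4):             # days 1, 2, 3
    pi = pi @ P(day - 1)            # pi is now the state vector at time `day`
    refund += 50 / (day + 1) * pi[1]
    print(day, pi)
print("expected refund:", round(refund, 2))   # -> about 1.48
```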

(e) Suppose that the battery costs $50, and the manufacturer of the battery pays a refund of 1/(n + 1) of the cost if the battery does not provide power for the first time on day number n = 1, 2, 3. Find the expected refund.

The expected refund is
$$(50/2)\,p^{01}_0 + (50/3)\,p^{00}_0\,p^{01}_1 + (50/4)\,p^{00}_0\,p^{00}_1\,p^{01}_2.$$

Completing part (e) is for homework.

4. Read Text Exercise 3-4.

(a) Find the probability that the process is in State 0 or State 1 at each of times t = 0, 1, 2, 3, ….

Since the process begins in State 0 at time 0, the probability of being in State 0 or State 1 at time 0 is 1.

The probability that the process is in State 0 or State 1 at time 1 is $p^{00} + p^{01} = 0.6 + 0.3 = 0.9$.

The probability that the process is in State 0 or State 1 at time 2 is $p^{00}\,p^{00} + p^{00}\,p^{01} + p^{01}\,p^{11} = (0.6)(0.6) + (0.6)(0.3) + (0.3)(0) = 0.54$.

The probability that the process is in State 0 or State 1 at time 3 is $p^{00}\,p^{00}\,p^{01} = (0.6)(0.6)(0.3) = 0.108$.

The probability that the process is in State 0 or State 1 at any time greater than 3 is 0.
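A quick arithmetic check of the products above (a sketch only; the transition probabilities 0.6, 0.3, and 0 are taken from the worked expressions, not re-derived from Text Exercise 3-4):

```python
p00, p01, p11 = 0.6, 0.3, 0.0   # values used in the expressions above

times = {
    0: 1.0,
    1: p00 + p01,
    2: p00 * p00 + p00 * p01 + p01 * p11,
    3: p00 * p00 * p01,
}
for t, prob in times.items():
    print(t, round(prob, 3))   # -> 1.0, 0.9, 0.54, 0.108
```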

(b) Do part (a) of Text Exercise 3-4.

The expected payment is
$$(1)(1) + (1)(0.9) + \cdots$$

Completing parts (c), (d), and (e) is for homework.

When the transitional probability Pr[Xn+1 = j | Xn = i] depends on n, the stochastic process defined by the Markov Chain is called non-homogeneous, and, as before, we write
$$p^{ij}_n = \Pr[X_{n+1} = j \mid X_n = i],$$
which is the probability of moving from State i to State j in discrete time interval #(n + 1) of the process, with P(n) the matrix whose entries are these probabilities. More generally, we can define
$${}_{r}p^{ij}_n = \Pr[X_{n+r} = j \mid X_n = i],$$
which is the probability of moving from State i to State j from time n to time n + r, and these probabilities will be the entries in the matrix $P(n)\,P(n+1)\,P(n+2)\cdots P(n+r-1)$.

In actuarial mathematics applications, time n = 0 typically corresponds to age x for an entity (person). The previous notation is adapted by writing
$${}_{r}p^{ij}_x = \Pr[X_r = j \mid X_0 = i].$$
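A sketch of the r-step computation as a product of the one-step matrices (the battery chain's P(n) from Exercise 3 is reused here purely as an example):

```python
import numpy as np

def P(n):
    """One-step matrix for the battery chain of Exercise 3 (example only)."""
    return np.array([
        [0.99 ** (n + 1), 1 - 0.99 ** (n + 1)],
        [0.0, 1.0],
    ])

def r_step(n, r):
    """Matrix of probabilities of moving between states from time n to time n + r."""
    M = np.eye(2)
    for k in range(n, n + r):
        M = M @ P(k)
    return M

print(r_step(0, 3))   # entry (0, 1) is the probability of having failed by day 3
```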

When it is possible to move from one state to any other state, we say that all states communicate with each other; in the actuarial models to be considered here, this will never be the case. If there is a state for which the probability of leaving is zero, that state is called an absorbing state.

With the discrete-time process, observations Xn are made only for the discrete time intervals from 0 to 1, 1 to 2, etc. However, with a continuous-time process, observations Xt = X(t) can be made at any time t ≥ 0, and the probabilities ${}_{r}p^{ij}_x$ are determined by force of transition functions. For an entity with age x at time 0, we define the following force of transition function at time s (when the entity is age x + s):

$$\mu^{ij}_{x+s} = \text{force of transition from State } i \text{ to State } j = \lim_{h \to 0} \frac{{}_hp^{ij}_{x+s}}{h}$$

Note that if $\mu^{ij}_{x+s}$ is constant for all values of s, then it can be shown that ${}_tp^{ij}_{x+s}$ is the same for all values of s, implying that the process is homogeneous; otherwise, the process is non-homogeneous.

It must of course be true that
$${}_{t+s}p^{ij}_x = \sum_{k=0}^{m} {}_tp^{ik}_x \; {}_sp^{kj}_{x+t}.$$

We may now write
$${}_{t+h}p^{ij}_x - {}_tp^{ij}_x = \sum_{k=0}^{m} {}_tp^{ik}_x\,{}_hp^{kj}_{x+t} - {}_tp^{ij}_x = \sum_{k \neq j} {}_tp^{ik}_x\,{}_hp^{kj}_{x+t} + {}_tp^{ij}_x\,{}_hp^{jj}_{x+t} - {}_tp^{ij}_x = \sum_{k \neq j} {}_tp^{ik}_x\,{}_hp^{kj}_{x+t} - \left(1 - {}_hp^{jj}_{x+t}\right){}_tp^{ij}_x.$$

It will be convenient to define
$$\mu^{i}_{x+s} = \sum_{j=0}^{i-1} \mu^{ij}_{x+s} + \sum_{j=i+1}^{m} \mu^{ij}_{x+s} = \sum_{j \neq i} \mu^{ij}_{x+s}.$$

Dividing both sides by h, we may write
$$\frac{{}_{t+h}p^{ij}_x - {}_tp^{ij}_x}{h} = \sum_{k \neq j} {}_tp^{ik}_x\,\frac{{}_hp^{kj}_{x+t}}{h} - \frac{1 - {}_hp^{jj}_{x+t}}{h}\,{}_tp^{ij}_x,$$
where $1 - {}_hp^{jj}_{x+t} = \sum_{k \neq j} {}_hp^{jk}_{x+t}$.

Taking the limit of both sides as h → 0, we have Kolmogorov's Forward Equation:
$$\frac{d}{dt}\,{}_tp^{ij}_x = \sum_{k \neq j} {}_tp^{ik}_x\,\mu^{kj}_{x+t} - {}_tp^{ij}_x\,\mu^{j}_{x+t}.$$
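One way to work with this equation numerically is to integrate it forward with a simple Euler step; the sketch below does this for a hypothetical 3-state model with constant forces of transition (the matrix of forces is invented purely for illustration):

```python
import numpy as np

# Hypothetical constant forces of transition mu[i][j] for a 3-state model;
# state 2 is absorbing (all forces out of it are zero).
mu = np.array([
    [0.0, 0.05, 0.02],
    [0.03, 0.0, 0.04],
    [0.0, 0.0, 0.0],
])

def transition_probs(t, steps=10000):
    """Euler integration of Kolmogorov's Forward Equation; returns the matrix of t p^{ij}."""
    h = t / steps
    p = np.eye(3)                       # at time 0, p^{ij} is 1 if i = j and 0 otherwise
    out_force = mu.sum(axis=1)          # mu^j, the total force out of each state j
    for _ in range(steps):
        dp = p @ mu - p * out_force     # d/dt of each entry: sum_k p^{ik} mu^{kj} - p^{ij} mu^{j}
        p = p + h * dp
    return p

print(transition_probs(10.0))           # rows still sum to 1 (up to numerical error)
```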

5. Chapter 3 Exercises

Suppose X(t) represents a continuous-time Markov chain where from State i it is only possible to move to State j, which is an absorbing state. Let the random variable T be the time it takes to transition from State i to State j.

(a) Write a formula for the cumulative distribution function for T and the probability density function for T, both in terms of ${}_tp^{ij}_x$.

$$\Pr[T \le t] = \Pr[X(t) = j \mid X(0) = i] = {}_tp^{ij}_x$$

Consequently, the probability density function for the random variable T must be $\dfrac{d}{dt}\,{}_tp^{ij}_x$.

(b) Use part (a) to write a formula for ${}_tp^{ij}_x$ in terms of an integral.

$${}_tp^{ij}_x = \Pr[X(t) = j \mid X(0) = i] = \int_0^{t} \frac{d}{dr}\,{}_rp^{ij}_x \, dr.$$

(c) Use part (a) to write a formula for ${}_tp^{ii}_x$ in terms of an integral.

$${}_tp^{ii}_x = \Pr[X(t) = i \mid X(0) = i] = \Pr[T > t] = \int_t^{\infty} \frac{d}{dr}\,{}_rp^{ij}_x \, dr.$$

6. Chapter 3 Exercises

Let States 0 and 1 be defined respectively by a battery providing power or not providing power, and let X(t) be 0 or 1 depending respectively on whether the battery does or does not provide power at time t. For any time when the battery does not provide power, it must be true that the battery will not provide power at any future time; that is, if X(t0) = 1, then X(t) = 1 for all t > t0.

(a) Is either of the two states an absorbing state? Why or why not?

State 1 is an absorbing state, since once the battery does not provide power, the battery will not provide power at any future time, implying that the probability of leaving this state is zero.

(b) Use Kolmogorov's Forward Equation to write a system of equations involving the various probability functions, their derivatives, and the various force of transition functions.

First, we realize that we must have ${}_tp^{11}_x = 1$ and ${}_tp^{10}_x = 0$. Next, we realize that since ${}_tp^{10}_x = 0$ we must have $\mu^{10}_{x+t} = 0$.

$$\frac{d}{dt}\,{}_tp^{01}_x = {}_tp^{00}_x\,\mu^{01}_{x+t} - {}_tp^{01}_x\,\mu^{10}_{x+t} = {}_tp^{00}_x\,\mu^{01}_{x+t}$$
$$\frac{d}{dt}\,{}_tp^{00}_x = {}_tp^{01}_x\,\mu^{10}_{x+t} - {}_tp^{00}_x\,\mu^{01}_{x+t} = -\,{}_tp^{00}_x\,\mu^{01}_{x+t}$$

(c) Suppose $\mu^{01}_{x+t}$ is equal to a constant $\mu$ (i.e., the process is homogeneous). Find functions ${}_tp^{00}_x$ and ${}_tp^{01}_x$ which satisfy the differential equations in part (b).

By realizing that the derivative of ${}_tp^{00}_x = e^{-\mu t}$ is $-\mu e^{-\mu t}$, we find that this function satisfies the second equation in (b). The first equation in (b) can then be satisfied by letting ${}_tp^{01}_x = 1 - e^{-\mu t}$.

(d) Suppose the constant in part (c) is equal to 8. Use results from Exercise 5 to find each of the following:

$$\Pr[X(10) = 1 \mid X(0) = 0] = {}_{10}p^{01}_x = \int_0^{10} \frac{d}{dr}\,{}_rp^{01}_x\,dr = \int_0^{10} 8e^{-8r}\,dr = 1 - e^{-80}$$

$$\Pr[X(10) = 0 \mid X(0) = 0] = {}_{10}p^{00}_x = \int_{10}^{\infty} \frac{d}{dr}\,{}_rp^{01}_x\,dr = \int_{10}^{\infty} 8e^{-8r}\,dr = e^{-80}$$
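A quick numerical check of part (d), using only the closed forms just derived (the constant force 8 and horizon 10 come from the exercise):

```python
import math

mu = 8.0     # the constant force of transition from part (d)
t = 10.0

p01 = 1 - math.exp(-mu * t)   # Pr[X(10) = 1 | X(0) = 0] = 1 - e^(-80)
p00 = math.exp(-mu * t)       # Pr[X(10) = 0 | X(0) = 0] = e^(-80)

print(p01)   # 1.0 to machine precision
print(p00)   # about 1.8e-35, i.e., essentially zero
```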

7. Read Text Exercise 3-1.

(a) Do part (a) of Text Exercise 3-1.

$$p^{01}_{x+1} = 1 - p^{00}_{x+1}$$

(b) Do part (b) of Text Exercise 3-1.

$$p^{10}_{x+2} = 1 - p^{11}_{x+2}$$

(c) Do part (c) of Text Exercise 3-1, realizing that
$$\Pr[X_3 = 0 \mid X_1 = 0] = \Pr[X_2 = 0 \cap X_3 = 0 \mid X_1 = 0] + \Pr[X_2 = 1 \cap X_3 = 0 \mid X_1 = 0].$$

$${}_2p^{00}_{x+1} = p^{00}_{x+1}\,p^{00}_{x+2} + \cdots$$

Completing this exercise is for homework.

(d) Do part (d) of Text Exercise 3-1.

The stochastic process is non-homogeneous, since each probability depends on age.

(e) Find Pr[X3 = 1 | X1 = 1].

$${}_2p^{11}_{x+1} = p^{10}_{x+1}\,p^{01}_{x+2} + p^{11}_{x+1}\,p^{11}_{x+2} = [1 - 0.6 - 0.2/(1+1)][1 - 0.7 - 0.1/(2+1)] + [0.6 + 0.2/(1+1)][0.6 + 0.2/(2+1)] = 0.5466\ldots$$
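An arithmetic check of this value (a sketch; the bracketed quantities are taken directly from the worked expression above, not re-derived from Text Exercise 3-1):

```python
p10_x1 = 1 - 0.6 - 0.2 / (1 + 1)   # = 0.3
p01_x2 = 1 - 0.7 - 0.1 / (2 + 1)   # = 0.2666...
p11_x1 = 0.6 + 0.2 / (1 + 1)       # = 0.7
p11_x2 = 0.6 + 0.2 / (2 + 1)       # = 0.6666...

two_p11 = p10_x1 * p01_x2 + p11_x1 * p11_x2
print(two_p11)   # -> 0.54666...
```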

8. Read Text Exercise 3-2.

(a) Do part (a) of Text Exercise 3-2.

(b) Do part (b) of Text Exercise 3-2 by examining the probability of moving from endangered to thriving at time t as t increases.

(c) Do part (c) of Text Exercise 3-2.

First, consider the possible paths from State 1 to State 2:
1 → 2, 1 → 1 → 2, 1 → 1 → 1 → 2.

The probability is …

Completing this exercise is for homework.

9. Read Text Exercise 3-3.

(a) Do part (a) of Text Exercise 3-3.

The stochastic process is homogeneous, since each force of transition is constant.

(b) Do part (b) of Text Exercise 3-3 by noticing that after simplifying the Kolmogorov equation, you are looking for a function with a first derivative equal to a multiple of the original function.

First, we realize that $\mu^{10}_{x+t} = \mu^{20}_{x+t} = \mu^{30}_{x+t} = 0$. Consequently, we have
$$\frac{d}{dt}\,{}_tp^{00}_x = -\,{}_tp^{00}_x\left(\mu^{01}_{x+t} + \mu^{02}_{x+t} + \mu^{03}_{x+t}\right).$$

Completing this exercise is for homework.