
NAVAL POSTGRADUATE SCHOOL
Monterey, California

THESIS

SECURITY CLASSIFICATION OF THIS PAGE (When Data Entered)

REPORT DOCUMENTATION PAGE

4.  TITLE (and Subtitle): Sensitivity of Conditions for Lumping Finite Markov Chains
5.  TYPE OF REPORT & PERIOD COVERED: Master's Thesis; September 1984
7.  AUTHOR(s): Moon Taek Suh
9.  PERFORMING ORGANIZATION NAME AND ADDRESS: Naval Postgraduate School, Monterey, California 93943
11. CONTROLLING OFFICE NAME AND ADDRESS: Naval Postgraduate School, Monterey, California 93943
12. REPORT DATE: September 1984
15. SECURITY CLASS. (of this report): Unclassified
16. DISTRIBUTION STATEMENT (of this Report): Approved for public release; distribution unlimited.


Approved for public release; distribution unlimited.

Sensitivity of Conditions for Lumping
Finite Markov Chains

by

Moon Taek Suh
Major, Republic of Korea Army
B.S., Korea Military Academy, 1975

Submitted in partial fulfillment of the
requirements for the degree of

MASTER OF SCIENCE IN OPERATIONS RESEARCH

from the

NAVAL POSTGRADUATE SCHOOL
September 1984

Author:  Moon Taek Suh

Approved by:

         Department of Operations Research

         Dean of Information and Policy Sciences

ABSTRACT

Markov chains with large transition probability matrices occur in many applications such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property. Such a chain is said to be "lumpable" and the resulting lumped chain is a special case of more general functions of Markov chains.

There are several reasons why one might wish to lump. First, there may be analytical benefits, including relative simplicity of the reduced model and development of a new model which inherits known or assumed strong properties of the original model (the Markov property). Second, there may be statistical benefits, such as increased robustness of the smaller chain as well as improved estimates of transition probabilities. Finally, the identification of lumps may provide new insights about the process under investigation.

However, a problem that arises in connection with practical applications of Markov chain models is to determine whether the chain is lumpable. This is especially difficult when the matrix P = [pij] of transition probabilities is estimated from transition data. In this case, it is desirable to find bounds on Δ, the largest error |p̂ij − pij| in estimating pij, over all i and j.

This thesis examines the sensitivity of the lumping conditions based on P̂, the estimate of P. In general, it is found that the classical lumping conditions are extremely sensitive to the estimation error which can be expected to occur even with large data sets. Thus, these conditions may be of limited value in many actual applications.

TABLE OF CONTENTS

I.   INTRODUCTION

II.  THEORY OF LUMPABILITY
     A. CONDITIONS FOR LUMPING
     B. PARTITIONS OF A SET OF STATES
     C. AN EIGENVECTOR CONDITION FOR MARKOV CHAIN LUMPABILITY

III. BOUNDS ON THE LARGEST ERROR, Δ, IN P̂
     A. APPROACH USING CENTRAL LIMIT THEOREM
     B. APPROACH USING ORDER STATISTICS
     C. COMPARISON OF THE THREE EXPRESSIONS
     D. SENSITIVITY OF LUMPING CONDITIONS

IV.  SUMMARY AND CONCLUSIONS

LIST OF REFERENCES

INITIAL DISTRIBUTION LIST

LIST OF TABLES

1.  Partitions of a Set of N States

LIST OF FIGURES

3.1  Variation of Δ for Varying K.. and M
3.2  Variation of Δ by Changing Alpha (α)


I. INTRODUCTION

Markov chains with large transition probability matrices occur in many applications such as manpower models. Under certain conditions the state space of a stationary discrete parameter finite Markov chain may be partitioned into subsets, each of which may be treated as a single state of a smaller chain that retains the Markov property. Such a chain is said to be "lumpable" and the resulting lumped chain is a special case of more general functions of Markov chains.

Consider a Markov chain {Xt : t = 0,1,2,...} with finite state space S = {1,2,...,n}, stationary transition probability matrix P = {pij}, and a priori distribution of "initial states", p0 = (p01, p02, ..., p0n). Let S̃ denote a nontrivial partition of S into m < n "lumps", say S̃ = {L(1), L(2), ..., L(m)}. If {Xt} is lumpable with respect to S̃, denote by {X̃t} the lumped chain with state space S̃ and transition probability matrix P̃.

A well-known characterization [Ref. 2] is that {Xt} is lumpable to {X̃t} if and only if there exist matrices A and B such that

     BAPB = PB                                                     (1.1)

where B consists of m nonzero orthogonal n-dimensional column vectors whose components are zeros or ones, and A is B' with rows normalized to probability vectors (i.e., A = (B'B)⁻¹B'). The positions of the 1's in each column of B correspond to states in S that together form a lump in S̃. It follows that if BAPB = PB is satisfied, then P̃ = APB, as is shown in Chapter 2.
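As a concrete illustration of condition (1.1), the following Python sketch (added for illustration; it is not part of the original thesis, and the helper names are our own) builds B from a given partition, forms A = (B'B)⁻¹B', and checks BAPB = PB numerically for the 4-state chain used in Example 1 of Chapter 2.

```python
import numpy as np

def lumping_matrix(partition, n):
    """Build the n x m 0-1 matrix B whose columns indicate the lumps."""
    B = np.zeros((n, len(partition)))
    for j, lump in enumerate(partition):
        for i in lump:
            B[i, j] = 1.0
    return B

def is_lumpable(P, partition, tol=1e-10):
    """Check the classical condition BAPB = PB for the given partition."""
    n = P.shape[0]
    B = lumping_matrix(partition, n)
    A = np.linalg.inv(B.T @ B) @ B.T      # B' with rows normalized to probability vectors
    return np.allclose(B @ A @ P @ B, P @ B, atol=tol)

# The 4-state chain of Example 1, lumped according to {1}, {2,3}, {4} (0-based indices below)
P = np.array([[1/4, 1/16, 3/16, 1/2],
              [0,   1/12, 1/12, 5/6],
              [0,   1/12, 1/12, 5/6],
              [7/8, 1/32, 3/32, 0  ]])
print(is_lumpable(P, [[0], [1, 2], [3]]))      # True
```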

Many of the mathematical quantities associated with {Xt} can be transformed directly to corresponding quantities for {X̃t}, using the lumping matrix B. In Chapter 2, for example, we show that if an original Markov chain {Xt} is lumpable to {X̃t} and {X̃t} is further lumpable to {X̃′t}, then {Xt} is directly lumpable to {X̃′t}, and we give the lumping matrix for {X̃′t} in terms of the underlying two lumpings.

There are several reasons why one might wish to lump [Ref. 1]. First, there may be analytical benefits, including relative simplicity of the reduced model and development of a new model which inherits known or assumed strong properties of the original model (the Markov property). Second, there may be statistical benefits, such as increased robustness of the smaller chain as well as improved estimates of transition probabilities. Finally, the identification of lumps may provide new insights about the process under investigation.

However, a problem that arises in connection with practical applications of Markov chain models is to determine whether the chain is lumpable. For chains with large state spaces S, it is practically impossible to use an exhaustive search to determine whether lumpability conditions such as those given in equation (1.1) are met for some matrices B, because of the large number of ways of partitioning S, i.e., the large number of candidate B matrices. For example, if S has 10 elements, there are 115,975 partitions of S.

Another problem is to estimate the matrix P = [pij] of transition probabilities and to find bounds on Δ, the largest error |p̂ij − pij| over all i and j. We shall investigate the sensitivity of the lumping conditions in equation (1.1) for varying Δ. If {Xt} is lumpable with lumping matrix B, is condition (1.1) satisfied with P replaced by the estimate P̂?

This thesis will attempt to examine the sensitivity of the lumping conditions based on reasonable estimation errors Δ when P is not known and must be estimated by P̂. We describe these facts about lumpability using eigenvalues and eigenvectors, including the theorem mentioned by D. R. Barr and M. U. Thomas [Ref. 3]. We do not review elementary concepts of Markov chains here; the reader may wish to consult [Ref. 2] and [Ref. 4] for a review of basic facts and specific terminology such as lumpability, regular Markov chain, etc.


II. THEORY OF LUMPABILITY

This chapter will cover general facts about lumping, such as conditions for lumping, the number of partitions possible for any given size of state space S, and theorems associated with eigenvector conditions for Markov chain lumpability.

A. CONDITIONS FOR LUMPING

Consider a Markov chain {Xt : t = 0,1,2,...} with finite state space S = {1,2,...,n}, stationary transition probability matrix P = {pij}, and a priori distribution of "initial states", p0 = (p01, p02, ..., p0n). Let S̃ denote a nontrivial partition of S into m < n "lumps", that is, S̃ = {L(1), L(2), ..., L(m)}. If {Xt} is lumpable with respect to S̃, denote by {X̃t} the lumped chain with state space S̃ and transition probability matrix P̃.

We now show that if the condition (1.1) for lumpability with respect to the lumping matrix B,

     BAPB = PB                                                     (2.1)

is satisfied, then the lumped transition matrix P̃ is given by

     P̃ = APB                                                       (2.2)

Proof. The lumped transition probability p̃ij is the sum of pik over k ∈ L(j), where L(j) is the partition set corresponding to lumped state j and i is any element of L(i).

By the lumpability condition, this value is the same for any i ∈ L(i). But the product PB sums the columns of P in accordance with the partition subsets indicated by the columns of B. Hence, PB is an n x m matrix with rows repeated in accordance with the partition sets L(1), L(2), ..., L(m); the effect of pre-multiplying by A = (B'B)⁻¹B' is to "average" these common rows, yielding an m x m matrix P̃ without the repeated rows. But such "averages" are just the common rows being averaged. Hence, P̃ = APB is the m x m transition matrix of the lumped chain with state space {L(1), L(2), ..., L(m)}.

Example 1. Consider a transition probability matrix P with 4 states which can be partitioned into S̃ = {{1}, {2,3}, {4}} = {L(1), L(2), L(3)}. Let

          | 1/4  1/16  3/16  1/2 |
     P =  |  0   1/12  1/12  5/6 |
          |  0   1/12  1/12  5/6 |
          | 7/8  1/32  3/32   0  |

          | 1  0  0 |
     B =  | 0  1  0 |              | 1   0    0    0 |
          | 0  1  0 |    and  A =  | 0  0.5  0.5   0 |
          | 0  0  1 |              | 0   0    0    1 |

We know equation (2.1) is satisfied with the partitioning S̃ = {L(1), L(2), L(3)}. Thus, the lumped transition matrix is

                   | 0.25   0.25   0.5   |
     P̃ = APB =     |  0     0.167  0.833 |
                   | 0.875  0.125   0    |

Many of the mathematical quantities associated with {Xt} can be transformed directly to corresponding quantities for {X̃t}, using A and B of equation (2.1). For example, since AB is the m-dimensional identity matrix, it follows that for s a positive integer,

     P̃^s = (APB)^s = AP^sB                                         (2.3)

We now show that P̃^s = AP^sB:

     P̃^s = (APB)(APB) . . . (APB)
          = AP(BAPB)(APB) . . . (APB)
          = AP(PB)(APB) . . . (APB)
          = AP²(BAPB) . . . (APB)
          = AP²(PB) . . . (APB)
          . . .
          = AP^sB.

Moreover, P^s is itself lumpable with the same matrix B, since

     BAPB = PB
     PBAPB = P²B
     BAPBAPB = P²B
     BAP²B = P²B

and, continuing in this way, BAP^sB = P^sB; so P^s is lumpable with the same matrix B, and its lumped matrix is AP^sB = P̃^s.

This implies in turn that if {Xt} has steady state distribution π, then {X̃t} has steady state distribution π̃ = πB.

Theorem 1. The steady state distribution π̃ of the lumped chain {X̃t} is πB, where π = πP.

Proof. (πB)P̃ = (πB)(APB) = π(BAPB) = π(PB) = (πP)B = πB. Therefore, π̃ = πB.

Similarly, the a priori distribution of the initial state of the lumped chain corresponding to that of the original chain, p0, is given by p̃0 = p0B, since by equations (2.1) and (2.3),

     p0 P^s B = p0 P^(s-1) (PB)
              = p0 P^(s-1) (BAPB)
              = p0 P^(s-1) B P̃
              = . . .
              = p0 B P̃^s.

Note that p0 P^s B is the distribution of lumped states occupied by the lumped chain after s transitions. Since this equals p0 B P̃^s = p̃0 P̃^s, it follows that p̃0 = p0B.
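The transformation rules above are easy to verify numerically. The following minimal sketch (illustrative only, not from the thesis; the initial distribution p0 below is an arbitrary choice) reuses the chain of Example 1 and checks equation (2.3) together with the relation p0 P^s B = (p0 B) P̃^s.

```python
import numpy as np

P = np.array([[1/4, 1/16, 3/16, 1/2],
              [0,   1/12, 1/12, 5/6],
              [0,   1/12, 1/12, 5/6],
              [7/8, 1/32, 3/32, 0  ]])
B = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 1, 0],
              [0, 0, 1]], dtype=float)
A = np.linalg.inv(B.T @ B) @ B.T
P_t = A @ P @ B                                    # lumped matrix P-tilde

p0 = np.array([0.1, 0.2, 0.3, 0.4])                # an arbitrary initial distribution
s = 5
print(np.allclose(np.linalg.matrix_power(P_t, s),
                  A @ np.linalg.matrix_power(P, s) @ B))        # (2.3): P-tilde^s = A P^s B
print(np.allclose(p0 @ np.linalg.matrix_power(P, s) @ B,
                  (p0 @ B) @ np.linalg.matrix_power(P_t, s)))   # lumped distributions agree
```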

B. PARTITIONS OF A SET OF STATES

The matrix B consists of m nonzero orthogonal n-dimensional column vectors whose components are zeros and ones, and which determine a specific partition of S = {1,2,...,n}. Example 1 illustrates this, where the state space S = {1,2,3,4} is partitioned into S̃ = {{1}, {2,3}, {4}} = {L(1), L(2), L(3)}, and

          | 1  0  0 |
     B =  | 0  1  0 |
          | 0  1  0 |
          | 0  0  1 |

Permutations of these columns give a matrix which also lumps {Xt}. In order to see this, let B* be B with columns permuted in some order. Then B* = B·I*, where I* is the identity matrix with its columns permuted in the same order. Now if BAPB = PB, then

     B*A*PB* = B*(B*'B*)⁻¹B*'PB*
             = BI*(I*'B'BI*)⁻¹I*'B'PBI*
             = BI*I*⁻¹(B'B)⁻¹(I*')⁻¹I*'B'PBI*
             = B(B'B)⁻¹B'PBI*
             = BAPBI*
             = PBI*
             = PB*,

so it follows that {Xt} is also lumpable with respect to the matrix B*.

Now, how many candidate lumping matrices are there? This would be the number of partitions of S. [Ref. 5] gives a recursion relation for the number A(N) of ways of partitioning a set S = {1,2,...,N}:

              N-1
     A(N)  =   Σ   C(N-1, k) A(k)        (N ≥ 1, A(0) = 1)         (2.4)
              k=0

From this relation we find A(1) = 1, A(2) = 2, A(3) = 5, A(4) = 15, etc.
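A few lines of Python (added for illustration; not part of the original thesis) reproduce recursion (2.4) and the counts in Table 1.

```python
from functools import lru_cache
from math import comb          # requires Python 3.8+

@lru_cache(maxsize=None)
def bell(n):
    """Number A(n) of partitions of a set of n states, via recursion (2.4)."""
    if n == 0:
        return 1
    return sum(comb(n - 1, k) * bell(k) for k in range(n))

print([bell(n) for n in range(1, 11)])
# [1, 2, 5, 15, 52, 203, 877, 4140, 21147, 115975]
```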

The sizes of the entries in Table 1 show that it would be impossible to use a trial and error approach to finding lumping matrices B for lumping a chain with larger state spaces, say with 10 or more elements. Values of A(N) for larger N are shown in Table 1.

                            TABLE 1
                 Partitions of a Set of N States

     N    Partitions          N    Partitions
     5            52         20    5.172415    x 10^13
     6           203         30    8.467490145 x 10^23
     7           877         40    1.574505884 x 10^35
     8          4140         50    1.857242688 x 10^47
     9         21147         60    9.769393075 x 10^59
    10        115975         70    1.80750039  x 10^73

It is of interest to be able to systematically prescribe alternative lumpings by generating matrices B for a given transition matrix P, using some method other than trial and error. In the next section, we describe an approach to finding B matrices using the eigenvalues and eigenvectors of P.

C. AN EIGENVECTOR CONDITION FOR MARKOV CHAIN LUMPABILITY

Many problems in science and mathematics deal with a linear operator T : V → V, and it is of importance to determine those scalars λ for which the equation Tx = λx has nonzero solutions x. In this section we discuss this problem and its relationship with finding matrices B.

Theorem 2. The value 1 is always an eigenvalue for any Markov chain transition probability matrix.

Proof. Let P be any n x n transition probability matrix of {Xt}, x be a left eigenvector in Rⁿ, and λ be the corresponding eigenvalue of P. Then xP = xλ, which is equivalent to

     x(P − λI) = 0                                                 (2.5)

For λ to be an eigenvalue, there must be a nonzero solution x of equation (2.5). Equation (2.5) will have a nonzero solution if and only if

     det(P − λI) = 0 .                                             (2.6)

This is called the characteristic equation. To show that λ = 1 always satisfies equation (2.6), we need only show that the columns of the matrix in equation (2.6) are linearly dependent. Note that

             | p11 − 1    p12      . . .    p1n     |
     P − I = |  p21      p22 − 1   . . .    p2n     |              (2.7)
             |   .          .                .      |
             |  pn1       pn2      . . .   pnn − 1  |

Since Σj pij = 1 for Markov chains, it follows that the rows in equation (2.7) sum to zero, so the determinant in equation (2.6) is zero with λ = 1. It follows that λ = 1 is an eigenvalue of the Markov chain {Xt}. We'd next like to see properties of eigenvectors corresponding to the eigenvalue λ = 1.

Theorem 3. For any regular Markov chain, the components of the eigenvector corresponding to λ = 1 are proportional to the steady state distribution of {Xt}.

Proof. Let x be a left eigenvector of P and λ be the corresponding eigenvalue of P, so that xP = xλ, and assume Σ xi = 1. For λ = 1, xP = x. The steady state distribution of {Xt} is unique [Ref. 4]. Therefore, x must be the steady state distribution π, since Σ xi = 1.

The following example demonstrates Theorem 3.

Example 2. Let

          | 1/4  1/16  3/16  1/2 |
     P =  |  0   1/12  1/12  5/6 |
          |  0   1/12  1/12  5/6 |
          | 7/8  1/32  3/32   0  |

The left eigenvectors corresponding to the eigenvalues of P are displayed as column vectors below:

     Eigenvalues  :    1          0        -0.25     -0.333...

                     0.7367       0        10.5       0.8247
     Eigenvectors :  0.09209   -0.7201    -0.375     -0.03436
                     0.2236     0.7201    -4.125     -0.2405
                     0.6315       0       -6         -0.5498

Note that π = (π1, π2, π3, π4) = (0.4375, 0.0547, 0.1328, 0.375), where π1 = 0.7367/(0.7367 + 0.09209 + 0.2236 + 0.6315) = 0.4375, etc.
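Theorem 3 can be checked directly with a numerical eigendecomposition. The sketch below (an added illustration, not from the thesis) recomputes Example 2: the left eigenvector of P belonging to the eigenvalue 1, rescaled so its components sum to one, is the steady state distribution.

```python
import numpy as np

P = np.array([[1/4, 1/16, 3/16, 1/2],
              [0,   1/12, 1/12, 5/6],
              [0,   1/12, 1/12, 5/6],
              [7/8, 1/32, 3/32, 0  ]])

# Left eigenvectors of P are (right) eigenvectors of P transpose.
eigenvalues, eigenvectors = np.linalg.eig(P.T)
print(np.round(np.real(eigenvalues), 4))       # 1, 0, -0.25, -0.3333 (in some order)

# Normalize the eigenvector belonging to eigenvalue 1 so its entries sum to one.
k = np.argmin(np.abs(eigenvalues - 1.0))
pi = np.real(eigenvectors[:, k])
pi = pi / pi.sum()
print(np.round(pi, 4))                         # approx (0.4375, 0.0547, 0.1328, 0.375)
print(np.allclose(pi @ P, pi))                 # True: pi is the steady state
```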

Theorem 4. Eigenvectors corresponding to eigenvalues other than 1 are orthogonal to e = (1,1,...,1).

Proof. xe' = x(Pe') = (xP)e' = λxe'. Therefore, xe' must be zero for λ ≠ 1.

We are also interested in finding the relationship between the eigenvalues of P and those of the lumped transition probability matrix P̃, where P̃ = APB as described above.

Theorem 5. Suppose {Xt} with transition matrix P is lumpable to {X̃t} with transition matrix P̃. Then the eigenvalues of P̃ are eigenvalues of P.

Proof. Let Q(λ) = 0 be the (n-th degree) characteristic equation of P. By the Cayley-Hamilton theorem [Ref. 6],

     Q(P) = Q_n P^n + Q_(n-1) P^(n-1) + . . . + Q_1 P + Q_0 I = 0,

which together with equation (2.3) implies

     A Q(P) B = Q_n P̃^n + Q_(n-1) P̃^(n-1) + . . . + Q_1 P̃ + Q_0 I
              = Q(P̃) = 0.

Since P̃ satisfies P's characteristic equation, and since the eigenvalues of Q(P̃) are of the form Q(λ̃) where λ̃ is an eigenvalue of P̃, it follows that Q(λ̃) = 0. Thus all eigenvalues λ̃ of P̃ are also eigenvalues of P.

We next examine the eigenvectors of P and P̃, with the aim of identifying lumpings of {Xt} directly in terms of the eigenvectors of P. We have seen that p̃0 is obtained directly as p0B; a similar relationship holds with eigenvectors of P.

Theorem 6. Suppose x is a left eigenvector of P corresponding to eigenvalue λ, and suppose {Xt} is lumpable to a chain with transition matrix P̃ = APB. Then xB satisfies the equation (xB)P̃ = (xB)λ.

Proof. By equation (2.1), (xB)P̃ = xBAPB = xPB. But xP = xλ, and the result follows.

We note that xB is not necessarily an eigenvector of P̃ because it may be zero. In fact, it easily follows that xB = 0 if λ is not an eigenvalue of P̃. But xB may be null even if λ is an eigenvalue of P̃, in cases where λ is a repeated eigenvalue of P more times than of P̃.
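Theorems 5 and 6 can also be verified numerically. The following sketch (added for illustration; not part of the thesis) uses the chain of Examples 1 and 2 and its lumping matrix B.

```python
import numpy as np

P = np.array([[1/4, 1/16, 3/16, 1/2],
              [0,   1/12, 1/12, 5/6],
              [0,   1/12, 1/12, 5/6],
              [7/8, 1/32, 3/32, 0  ]])
B = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
A = np.linalg.inv(B.T @ B) @ B.T
P_t = A @ P @ B                                   # lumped matrix P-tilde

eig_P  = np.linalg.eigvals(P)
eig_Pt = np.linalg.eigvals(P_t)
# Theorem 5: every eigenvalue of the lumped matrix also appears among the eigenvalues of P.
print(all(np.min(np.abs(eig_P - lam)) < 1e-10 for lam in eig_Pt))

# Theorem 6: if x is a left eigenvector of P for eigenvalue lam, then (xB) P_t = lam (xB).
lams, X = np.linalg.eig(P.T)                      # columns of X are left eigenvectors of P
for lam, x in zip(lams, X.T):
    print(np.allclose(x @ B @ P_t, lam * (x @ B)))
```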

[Ref. 7] pointed out some other useful properties associated with eigenvalues and eigenvectors, such as: 1) if the matrix P is symmetric, then the eigenvalues are real and the eigenvectors are different for repeated eigenvalues; 2) if the matrix is not symmetric, then the eigenvectors may be the same for repeated eigenvalues.

Theorem 7. If {Xt} with transition matrix P is lumpable to {X̃t} with transition matrix P̃, and {X̃t} is lumpable to {X̃′t} with transition matrix P̃′, then {Xt} is directly lumpable to {X̃′t}, where {X̃′t} is the lumped chain of {X̃t}.

Proof. Let {Xt} be lumpable to {X̃t}, and {X̃t} be lumpable to {X̃′t}, by matrices B1 and B2, where B1 and B2 are lumping matrices and the dimension n x m of B1 is greater than that of B2. By equation (2.1), P̃ = A1PB1 and P̃′ = A2P̃B2. Thus,

     P̃′ = A2P̃B2 = A2(A1PB1)B2 = (A2A1)P(B1B2).

To see that B1B2 is a lumping matrix and A2A1 is of the required form, we need to show that (A2A1)(B1B2) is the identity matrix, as mentioned in Section A. But

     (A2A1)(B1B2) = A2(A1B1)B2 = A2 I B2 = A2B2 = I.

Also, note that B1B2 is B1 lumped by B2, so B1B2 has columns of the required form. Therefore, {Xt} is directly lumpable to {X̃′t}, by the lumping matrix B = B1B2.

Example 3. Consider a Markov chain with 5 states and transition probability matrix

          | 0.3  0.1  0.2  0.1  0.3 |
          | 0.1  0.3  0.1  0.3  0.2 |
     P =  | 0.5  0.1   0   0.1  0.3 |
          | 0.1  0.5  0.2  0.1  0.1 |
          | 0.5   0   0.1  0.2  0.2 |

First, consider S = {1,2,3,4,5}, which can be partitioned into S̃ = {{1}, {2,4}, {3,5}} = {L(1), L(2), L(3)}. The corresponding lumping matrices are

          | 1  0  0 |
          | 0  1  0 |               | 1   0    0    0    0  |
     B1 = | 0  0  1 |   and   A1 =  | 0  0.5   0   0.5   0  |
          | 0  1  0 |               | 0   0   0.5   0   0.5 |
          | 0  0  1 |

and the lumped transition probability matrix is

                     | 0.3  0.2  0.5 |
     P̃ = A1 P B1 =   | 0.1  0.6  0.3 |
                     | 0.5  0.2  0.3 |

Secondly, consider S̃ with 3 states, which can be partitioned into S̃′ = {{1,3}, {2}} = {L′(1), L′(2)}, with matrices

          | 1  0 |
     B2 = | 0  1 |   and   A2 = | 0.5  0  0.5 |
          | 1  0 |              |  0   1   0  |

The corresponding lumped transition matrix is

     P̃′ = A2 P̃ B2 = | 0.8  0.2 |
                    | 0.4  0.6 |

Finally, consider lumping the transition probability matrix P directly. For the partitioning S = {{1,3,5}, {2,4}} = {L″(1), L″(2)},

          | 1  0 |
          | 0  1 |              | 1/3   0   1/3   0   1/3 |
     B  = | 1  0 |   and   A =  |  0   1/2   0   1/2   0  |
          | 0  1 |
          | 1  0 |

and the directly lumped transition probability matrix is

     APB = | 0.8  0.2 |
           | 0.4  0.6 |

which agrees with P̃′ = (A2A1)P(B1B2).
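The composition in Theorem 7 is easy to check numerically. The sketch below (an added illustration, not part of the thesis) reruns Example 3 and verifies that B1·B2 lumps P directly to the same 2-state chain.

```python
import numpy as np

def lumping_matrices(partition, n):
    """Return (B, A) for a partition given as a list of 0-based index lists."""
    B = np.zeros((n, len(partition)))
    for j, lump in enumerate(partition):
        B[lump, j] = 1.0
    A = np.linalg.inv(B.T @ B) @ B.T
    return B, A

P = np.array([[0.3, 0.1, 0.2, 0.1, 0.3],
              [0.1, 0.3, 0.1, 0.3, 0.2],
              [0.5, 0.1, 0.0, 0.1, 0.3],
              [0.1, 0.5, 0.2, 0.1, 0.1],
              [0.5, 0.0, 0.1, 0.2, 0.2]])

B1, A1 = lumping_matrices([[0], [1, 3], [2, 4]], 5)   # lumps {1}, {2,4}, {3,5}
Pt = A1 @ P @ B1                                      # 3-state lumped chain
B2, A2 = lumping_matrices([[0, 2], [1]], 3)           # lumps {1,3}, {2} of the lumped states
Ptt = A2 @ Pt @ B2                                    # lump of the lump

B = B1 @ B2                                           # Theorem 7: direct lumping matrix
A = np.linalg.inv(B.T @ B) @ B.T
print(np.allclose(B @ A @ P @ B, P @ B))              # P is lumpable by B1 B2
print(np.allclose(A @ P @ B, Ptt))                    # same 2-state lumped matrix
print(np.round(Ptt, 3))                               # [[0.8, 0.2], [0.4, 0.6]]
```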

Theorem 7 shows that lumping is "transitive", in the following sense. Define two transition matrices P and Q to be equivalent, P ~ Q, if Q = APB for a lumping matrix B whose columns are those of the identity matrix in some permuted order (thus the chains {Xt} and {Yt} differ only in the labels associated with their states). Define a relation ≤ between transition matrices as follows: Q ≤ P if and only if Q = APB for some lumping matrix B. Then Theorem 7 shows that Q ≤ R and R ≤ P imply Q ≤ P. This relation "≤" is reflexive, since Q ≤ Q using the lumping matrix I (identity). Finally, "≤" is antisymmetric since Q ≤ P and P ≤ Q imply P ~ Q. Thus, the set of all transition probability matrices is partially ordered by the "lumping" partial order ≤.

III. BOUNDS ON THE LARGEST ERROR, Δ, IN P̂

In this chapter we consider three procedures to find bounds on Δ. First, we use the central limit theorem for given i and j. Secondly, we use a binomial approximation on the basis of the first procedure. Finally, we get the largest error Δ using the asymptotic extreme value distribution. These three approximations are only designed to give a rough idea of the relationships between Δ and the number M of elements in the state space, the total number of observed transitions K.., and the probability α.

A. APPROACH USING CENTRAL LIMIT THEOREM

We are interested in the sizes of the errors between the estimate P̂ and the unknown P, where P is the transition probability matrix of {Xt}. We assume the transition probability matrix P is of size M x M.

Let Kij be the number of observed transitions of {Xt} from state i to state j, and let Ki. be the number of observed transitions from state i. Similarly, K.j is the number of observed transitions into state j.

Let pij be an unknown transition probability from state i to state j and p̂ij be an estimate of pij based on Ki. observed transitions. Then the usual estimate p̂ij of pij is the ratio of Kij to Ki.. Now, as a rough approximation, imagine that Ki. is fixed, and that the number of transitions from state i to state j, Kij, is Binomial(Ki., pij). Then by the central limit theorem,

p̂ij = Kij/Ki. is approximately Normal(pij, pij·qij/Ki.), where qij = 1 − pij, since

     E[p̂ij] = E[Kij/Ki.] = Ki.·pij / Ki. = pij,  and

     Var[p̂ij] = Var[Kij/Ki.] = Var[Kij]/Ki.² = Ki.·pij·qij/Ki.² = pij·qij/Ki.          (3.1)

We want to find a bound Δ on the estimation error |p̂ij − pij| which occurs with probability at least α; that is, the largest Δ for which

     P[ |p̂ij − pij| ≥ Δ ] ≥ α .                                                        (3.2)

Now

     P[ |p̂ij − pij| ≥ Δ ] = P[ |p̂ij − pij| / sqrt(pij·qij/Ki.) ≥ Δ / sqrt(pij·qij/Ki.) ],

so if we let Z = (p̂ij − pij) / sqrt(pij·qij/Ki.), then

Z is approximately standard Normal. Rewrite equation (3.2) as

     P[ |Z| ≥ Δ / sqrt(pij·qij/Ki.) ] ≥ α ;     0 < α < 1.

Equation (3.2) is approximately

     2 [ 1 − N( Δ·sqrt(Ki.) / sqrt(pij·qij) ) ] ≥ α,

since the Normal distribution is symmetric. Solving for Δ, we have

     Δ ≤ N⁻¹(1 − α/2) · sqrt(pij·qij / Ki.),

where N⁻¹(1 − α/2) is the (1 − α/2)-th quantile of the standard Normal distribution. Suppose the steady state probability πi of state i is 1/M, based on the equally likely case, and suppose the worst case in which pij = qij = 1/2, so that sqrt(pij·qij) = 0.5. Then an approximate value for Δ is given by

     Δ ≈ N⁻¹(1 − α/2) (0.5) / sqrt(Ki.)
        = N⁻¹(1 − α/2) (0.5) / sqrt(K.. πi)
        = N⁻¹(1 − α/2) (0.5) / sqrt(K../M)
        = N⁻¹(1 − α/2) (0.5) sqrt(M/K..)                                               (3.3)

Equation (3.3) concerns the error |p̂ij − pij| for fixed i and j. We'd now like to find an error bound Δ over all i and j. That is, we wish to find the largest Δ for which

     P[ |p̂ij − pij| ≥ Δ for some i and j ] ≥ α,

which is roughly the same as

     P[ p̂ij − pij ≥ Δ for some i and j ] ≥ α .                                         (3.4)

We apply the binomial approximation in equation (3.4), so that

     P[ p̂ij − pij ≥ Δ for some i and j ]
        = 1 − P[ p̂ij − pij < Δ for all i and j ]
        = 1 − (1 − α')^(M²) ,                                                           (3.5)

where α' is the corresponding probability for a single pair (i,j). Let 1 − (1 − α')^(M²) = α for some 0 < α' < 1. Solving for α' gives

     α' = 1 − (1 − α)^(1/M²) .                                                          (3.6)

Substituting the value of α' in equation (3.6) for α in equation (3.3), we finally get the approximate bound Δ1 for all i and j:

     Δ1 ≈ N⁻¹(1 − α'/2) (0.5) sqrt(M/K..) .                                             (3.7)

Equation (3.7) gives an approximate expression for Δ1, using the binomial approximation.
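For readers who wish to evaluate these rough bounds, the following sketch (an added illustration, not part of the thesis; it assumes scipy is available for the Normal quantile N⁻¹) computes equations (3.3) and (3.7) for given K.., M and α.

```python
import numpy as np
from scipy.stats import norm

def delta_single(K_total, M, alpha):
    """Equation (3.3): rough bound for one fixed (i, j)."""
    return norm.ppf(1 - alpha / 2) * 0.5 * np.sqrt(M / K_total)

def delta_overall_clt(K_total, M, alpha):
    """Equation (3.7): rough bound over all i, j via the binomial approximation."""
    alpha_prime = 1 - (1 - alpha) ** (1 / M**2)
    return norm.ppf(1 - alpha_prime / 2) * 0.5 * np.sqrt(M / K_total)

# K.. = 5000 observed transitions, M = 20 states, alpha = 0.90
print(round(delta_single(5000, 20, 0.90), 3))        # about 0.05
print(round(delta_overall_clt(5000, 20, 0.90), 3))   # roughly 0.09
```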

B. APPROACH USING ORDER STATISTICS

Assume Z1, Z2, ..., ZM are independent continuous random variables, each with density function f_Z(z) and distribution function F_Z(z). Now let Z(1), Z(2), ..., Z(M) denote their ordered values, from smallest Z(1) to largest Z(M); these are called the order statistics of Z1, Z2, ..., ZM. We now consider the probability law for Z(M) [Ref. 8], the largest or maximum value.

The event (Z(M)_< z) occurs if and only if the event

(Z1 1 z, Z< z, .. . , ZM _ z] occurs, since if the largest

Z is smaller than z, all M of the random variables must be

* smaller than z, where z is any fixed real number. The

distribution function for Z(M) is

F (z) = PrZ 5 z]

= PCZ 1 _ z, Z Z :_ z, ... , Z1 <_ z]

= P-Z, _ z] P"Z < z] ... P ZM :_ z]

since Z ,Z,,...,ZM are assumed independent. But each of Z1 ,ZL

,.... 2M has the same distribution F (z). SoMM

F (z) = [F3 (z)]J

27

-I

Page 31: Emhhhhhhommmls - DTIC · 2014. 9. 27. · Ccnsider a Markcv chain LX:t = 0,1,2,...) with finite state space 3 = 1 stationary transiticn prcb- ability matrix 2 = jp.j), and a priori

The density function for Z then isd

f ) (Z) = dz)

d=-i [F (z)

= I [FZ(z) ] Mf(z)

where f (z) = az

Consider the limiting distribution function of the maximum Z(M) as M tends to infinity. [Ref. 9, 10] show this distribution is

     lim [F_Z(z)]^M = e^(−e^(−z))                                                       (3.8)
     M→∞

(with z suitably standardized) if Z1, Z2, ..., ZM is a random sample from a standard Normal population. We want to find a bound Δ on the largest of the M² errors between the estimates in P̂ and the unknown components of P. The random variables p̂ij − pij are very roughly Normal with mean 0 and variance (0.5)²(M/K..), which is derived from equation (3.1), for i,j = 1,2,...,M. Recall that K.. is the total number of transitions observed.

Let Xk = p̂ij − pij, where k = 1,2,...,M² indexes the M² differences. Then the random variable Xk is approximately equal to σZ, where σ = (0.5)·sqrt(M/K..) and Z has a standard Normal distribution. Let the random variable X(M²) be equal to max |p̂ij − pij| over all i and j. Then

     lim P[ X(M²) ≤ Δ ] = lim P[ max |p̂ij − pij| ≤ Δ ].

Now the Xk are approximately independent, so for large M,

     P[ X(M²) ≤ Δ ] ≈ P[X1 ≤ Δ] P[X2 ≤ Δ] . . . P[X_M² ≤ Δ]
                    = [F_X(Δ)]^(M²) .                                                   (3.9)

From equation (3.9) we derive an expression for Δ as follows. Let Δ be the largest value for which

     P[ X(M²) ≤ Δ ] ≤ 1 − α .

This is the complementary probability, because we wish to have P[ |p̂ij − pij| ≥ Δ for some i and j ] ≥ α, as in the previous section. The limiting distribution function of the maximum X(M²) is the same as

     lim [F_X(Δ)]^(M²) = lim [F_Z(Δ/σ)]^(M²) ,

where σ = (0.5)·sqrt(M/K..). Then, approximately,

     log(1 − α) ≈ −2M² [ 1 − N(Δ/σ) ] ,

and, using the standard Normal tail approximation,

     Δ/σ ≈ sqrt(2 log 2M²) .

Finally,

     Δ ≈ (0.5) sqrt(M/K..) · sqrt(2 log 2M²) .                                          (3.10)

Equation (3.10) is an approximate expression for Δ based on the asymptotic distribution of the extreme order statistic. We will compare the central limit theorem Δ's with those obtained with the extreme value distribution in the next section.
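The comparison made in the next section can be reproduced with a few lines of code. The sketch below (added for illustration, not part of the thesis; scipy is assumed for the Normal quantile) evaluates expressions (3.7) and (3.10) for K.. = 5000 and M = 20 and 30.

```python
import numpy as np
from scipy.stats import norm

def delta_order_stat(K_total, M):
    """Equation (3.10): extreme-value approximation to the largest error."""
    return 0.5 * np.sqrt(M / K_total) * np.sqrt(2 * np.log(2 * M**2))

def delta_overall_clt(K_total, M, alpha):
    """Equation (3.7), repeated here for comparison."""
    alpha_prime = 1 - (1 - alpha) ** (1 / M**2)
    return norm.ppf(1 - alpha_prime / 2) * 0.5 * np.sqrt(M / K_total)

for M in (20, 30):
    print(M,
          round(delta_overall_clt(5000, M, 0.90), 3),   # roughly 0.09 and 0.12
          round(delta_order_stat(5000, M), 3))          # roughly 0.12 and 0.15
```

Both expressions give errors on the order of 0.1 for K.. = 5000 and M = 20 or 30, which is the magnitude quoted in the comparison below.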

C. COMPARISON OF THE THREE EXPRESSIONS

The three expressions for Δ obtained using the central limit theorem and order statistics have been developed under approximations such as: 1) the steady state distribution of {Xt} is (1/M, ..., 1/M) (equally likely); 2) the variances of p̂ij − pij have (0.5)²(M/K..) as a maximum value (worst case); and 3) all transitions are independent. Information about {Xt} comes from the estimate P̂, because we don't have information about the unknown P. In view of the above approximations and computations, our expressions for Δ are very rough. However, they do provide some insight into the occurrences and sizes of estimation errors in P̂.

Figure 3.1 contains three graphs showing Δ as a function of K.. and M for fixed α = 0.90, based on the three expressions (3.3), (3.7) and (3.10). The first graph shows error bounds using the central limit theorem on p̂ij − pij for fixed i and j. The second graph is given by the same approach as the first graph, except that overall estimation errors are considered, for all i and j.

Figure 3.1  Variation of Δ for Varying K.. and M (α = 0.90). Graph 1: point estimation error using the C.L.T.; Graph 2: overall estimation error using the C.L.T.; Graph 3: overall estimation error using order statistics. Horizontal axes: size (M) of the transition matrix.

The third graph is based on the asymptotic distribution of the largest value of |p̂ij − pij| over all i and j. From Figure 3.1 we see that the largest estimation error depends very much on the number of transition observations and the matrix size, but not so much on the α value, as seen from Figure 3.2.

Figure 3.2  Variation of Δ by Changing Alpha (α), for K.. = 5000. Left panel: overall estimation error using the C.L.T.; right panel: overall estimation error using order statistics. Horizontal axes: size (M) of the transition matrix.

Graphs 2 and 3 in Figure 3.1 are very similar even though they use different approaches. They give an idea of how large likely values of Δ are for given K.. and M, in the "worst case".

If we consider a Markov chain {Xt} with M = 20 or 30 states, and we have observed K.. = 5000 transitions, then, roughly, it is likely (prob = 0.90) that at least one element of P̂ is in error by at least 0.1. In general, expressions (3.7) and (3.10) may be useful for Markov chains {Xt} with M = 20 or 30 states and large numbers of observed transitions.

D. SENSITIVITY OF LUMPING CONDITIONS

We have developed expressions for Δ using the central limit theorem and order statistics. We now want to examine the sensitivity of the lumping conditions applied to the estimate P̂ of P. If equation (2.1), which is a necessary condition for lumping a Markov chain with transition matrix P, is satisfied, then the lumped transition matrix is given by equation (2.2). However, even though P satisfies the lumping conditions, it is extremely unlikely that its estimate P̂ will also satisfy these conditions, as we shall now demonstrate.

In order to simulate the difference between P̂ and P, consider a matrix of errors R·Δ, where R is a random matrix with dimension the same as P, whose components are 1's, -1's, and 0's and where the sum of each row is zero. Now consider the lumpability of the simulated estimate P*, which is constructed by taking P plus the random matrix R times Δ, that is, P* = P + R·Δ.

To show the sensitivity of the lumping conditions, we assume the unknown P is lumpable with lumping matrix B, and consider the difference (BAP*B − P*B). If equation (2.1) is satisfied by P*, then all of these components must be zero.

Theorem 8. The difference (BAP*B − P*B) is a linear function of Δ.

Proof. Let R be the random matrix as defined above and let P* = P + R·Δ. Then (BAP*B − P*B) is given by

     BA(P + R·Δ)B − (P + R·Δ)B = (BAPB + BARB·Δ − PB − RB·Δ)
                               = (BAPB − PB) + (BARB − RB)·Δ
                               = (BARB − RB)·Δ
                               = C·Δ .

Therefore the difference BAP*B − P*B is linearly dependent on Δ, and P* is not lumpable unless BARB = RB (i.e., R is "lumpable"), which is not likely to occur.
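A minimal simulation along the lines of Theorem 8 (added here for illustration; it is not from the thesis, and R below is one fixed realization of such a random matrix rather than a freshly generated one) uses the lumpable chain of Example 1 and shows the departure from the lumping condition growing linearly in Δ.

```python
import numpy as np

P = np.array([[1/4, 1/16, 3/16, 1/2],
              [0,   1/12, 1/12, 5/6],
              [0,   1/12, 1/12, 5/6],
              [7/8, 1/32, 3/32, 0  ]])
B = np.array([[1, 0, 0], [0, 1, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
A = np.linalg.inv(B.T @ B) @ B.T

# One realization of a random matrix R of 1's, -1's and 0's with zero row sums
# (each row shifts a little probability mass from one column to another).
R = np.array([[ 1, -1,  0,  0],
              [ 0,  1, -1,  0],
              [ 0,  0,  1, -1],
              [-1,  0,  0,  1]], dtype=float)

for delta in (0.001, 0.01, 0.05):
    P_star = P + R * delta          # simulated estimate P*; entries stay nonnegative here
    deviation = np.max(np.abs(B @ A @ P_star @ B - P_star @ B))
    print(delta, round(deviation, 4))
# The deviation from the lumping condition grows linearly in delta, as Theorem 8 states.
```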

Since P̂ is likely to have elements differing appreciably from the corresponding elements in P (errors of size Δ), it can be seen that the lumpability conditions will not be satisfied (not even nearly so) by P̂, even though {Xt} is lumpable. We conclude that attempting to check the lumpability of the estimate P̂ when P is not known is not useful.


IV. SUMMARY AND CONCLUSIONS

We have given several theorems associated with eigenvalues and eigenvectors for lumpable Markov chains {Xt} with finite state spaces. We have derived rough, approximate mathematical expressions for the largest error made in estimating P by P̂ based on transition data.

Both expressions (3.7) and (3.10) are very similar, even though the estimated Δ's for the first expression are slightly less than those for the second expression. These expressions show that the largest estimation errors depend very much on the number of transition observations and on the matrix size, but not so much on the α value.

Since P̂ is likely to have elements differing appreciably from the corresponding elements in P, it is of interest to examine whether the equation BAPB = PB is likely to be nearly satisfied with P̂, i.e., will (BAP̂B − P̂B) be nearly zero? This is examined by simulation of "estimates" P* of P, using random perturbations of the elements of P of sizes Δ which are likely to occur as errors in P̂.

This shows that the classical lumping conditions are extremely sensitive to estimation errors which can be expected to occur even when a large number of transitions have been observed. Thus, the classical lumping conditions may be of limited value in many actual applications.

As further research, it is recommended that some constructive approach to finding matrices B for lumping a lumpable Markov chain {Xt} be developed, perhaps along the lines of the theorems mentioned in Chapter 2. It is hoped that the present study will be useful to those who might otherwise have endeavored to check the classical condition for lumpability of a Markov chain {Xt} when the transition matrix P has been estimated.

LIST OF REFERENCES

1.  D. R. Barr and M. U. Thomas, "An Approximation Test of Markov Chain Lumpability", J. Am. Statistical Association, vol. 72, pp. 175-179, 1977.

2.  J. G. Kemeny and J. L. Snell, Finite Markov Chains, Van Nostrand, Princeton, N.J., 1967.

3.  D. R. Barr and M. U. Thomas, "An Eigenvector Condition for Markov Chain Lumpability", Operations Research, vol. 25, no. 6, pp. 1028-1031, November 1977.

4.  S. M. Ross, Introduction to Probability Models, Academic Press, 2nd edition.

5.  Albert Nijenhuis and Herbert S. Wilf, Combinatorial Algorithms for Computers and Calculators, 2nd edition, pp. 93-98.

6.  Marvin Marcus and Henryk Minc, Introduction to Linear Algebra, Macmillan Company, pp. 163-165.

7.  W. E. Boyce and R. C. DiPrima, Elementary Differential Equations and Boundary Value Problems, John Wiley & Sons, 3rd edition, pp. 287-297.

8.  H. J. Larson, Introduction to Probability Theory and Statistical Inference, John Wiley & Sons, 3rd edition, 1982.

9.  A. E. Sarhan and B. G. Greenberg, Contributions to Order Statistics, John Wiley & Sons.

10. M. G. Kendall and Alan Stuart, The Advanced Theory of Statistics, Griffin, pp. 333-337.

11. Howard Anton, Elementary Linear Algebra, John Wiley & Sons, 3rd edition.

12. C. J. Burke and M. Rosenblatt, "A Markovian Function of a Markov Chain", Ann. Math. Statist., vol. 29, pp. 1112-1122, 1958.

13. Volker Abel, "Conditions for Lumping the Grades of a Hierarchical Manpower System", Operations Research, vol. 17, no. 4, pp. 379-383, November 1983.

14. Volker Abel, "Test on Lumpability for Markovian Manpower Models".

INITIAL DISTRIBUTION LIST

                                                         No. Copies

1.  Defense Technical Information Center                      2
    Cameron Station
    Alexandria, Virginia 22314

2.  Library, Code 0142                                         2
    Naval Postgraduate School
    Monterey, California 93943

3.  Department Chairman, Code 55                               2
    Department of Operations Research
    Naval Postgraduate School
    Monterey, CA 93943

4.  Professor D. R. Barr, Code 55Bn                            2
    Department of Operations Research
    Naval Postgraduate School
    Monterey, CA 93943

5.  Professor F. R. Richards, Code 55Rh
    Department of Operations Research
    Naval Postgraduate School
    Monterey, CA 93943

6.  Major Moon Taek Suh                                       10
    Postal Code 100-E1
    Kyungkido, Koyangkun
    Jidomyun Hangjunaeri 151
    Seoul, Korea

7.  Major Tae Nam Ahn
    Computer Center
    P.O. Box 77
    Gongneung Dong, Dobong Gu
    Seoul 130-09, Korea

8.  Library
    P.O. Box 77
    Gongneung Dong, Dobong Gu
    Seoul 130-09, Korea

9.  Major Hee Sun Song
    SMC 1359 NPGS
    Monterey, CA 93943
