
    ISSN 0306-6142
    ISBN 0 902246 43 7
    © L. Collins 1975

    AN INTRODUCTION TO MARKOV CHAIN ANALYSIS

    Lyndhurst Collins

    Printed in Great Britain by Headley Brothers Ltd, The Invicta Press, Ashford, Kent and London

    CONCEPTS AND TECHNIQUES IN MODERN GEOGRAPHY No. 1

    CATMOG (Concepts and Techniques in Modern Geography)

    1 An introduction to Markov chain analysis - L. Collins

    2 Distance decay in spatial interactions - P.J. Taylor

    3 Understanding canonical correlation analysis - D. Clark

    4 Some theoretical and applied aspects of spatial interaction shopping models - S. Openshaw

    5 An introduction to trend surface analysis - D. Unwin

    6 Classification in geography - R.J. Johnston

    7 An introduction to factor analytical techniques - J.B. Goddard & A. Kirby

    8 Principal components analysis - S. Daultrey

    9 Causal inferences from dichotomous variables - N. Davidson

    10 Introduction to the use of logit models in geography - N. Wrigley

    11 Linear programming: elementary geographical applications of the transportation problem - A. Hay

    12 An introduction to quadrat analysis - R.W. Thomas

    13 An introduction to time-geography - N.J. Thrift

    14 An introduction to graph theoretical methods in geography - K.J. Tinkler

    15 Linear regression in geography - R. Ferguson

    16 Probability surface mapping. An introduction with examples and Fortran programs - N. Wrigley

    17 Sampling methods for geographical research - C. Dixon & B. Leach

    18 Questionnaires and interviews in geographical research - C. Dixon & B. Leach

    19 Analysis of frequency distributions - V. Gardiner & G. Gardiner

    20 Analysis of covariance and comparison of regression lines - J. Silk

    21 An introduction to the use of simultaneous-equation regression analysisin geography - D. Todd

    22 Transfer function modelling: relationship between time series variables - Pong-wai Lai (in preparation)

    23 Stochastic processes in one-dimensional series: an introduction - K.S. Richards (in preparation)

    24 Linear programming: the Simplex method with geographical applications - James E. Killen (in preparation)

    Price each: 75p or $1.50 except Catmogs 16 & 19 which are £1.25 ($2.50)

    The series is produced by the Study Group in Quantitative Methods of the Institute of British Geographers. For details of membership of the Study Group, write to the Institute of British Geographers, 1 Kensington Gore, London SW7. The series is published by GEO ABSTRACTS, University of East Anglia, Norwich NR4 7TJ, to whom all other enquiries should be addressed.

    AN INTRODUCTION TO MARKOV CHAIN ANALYSIS

    by

    Lyndhurst Collins (University of Edinburgh)

    CONTENTS

    I INTRODUCTION Page

    (i) Geographic attraction of Markov models 3

    (ii) The concept of probability 3

    (iii) A simple Markov model 3

    (iv) Markov models as particular cases of stochastic process models 6

    II PROPERTIES OF REGULAR FINITE MARKOV CHAINS

    (i) Transition probabilities 7

    (ii) Markov theorems 8

    (iii) The limiting matrix 9

    (iv) The concept of equilibrium 10

    (v) The fundamental matrix 10

    (vi) The matrix of mean first passage times 11

    (vii) Matrix of standard deviations 12

    (viii) Central limit theorem for Markov chains 12

    III PROCEDURE FOR CONSTRUCTING A MARKOV MODEL

    (i) The Markov property 14

    (ii) Maximum likelihood ratio criterion test for the Markov property 15

    (iii) The first-order property 16

    (iv) Maximum likelihood ratio criterion test for the first-order property 16

    (v) The concept of stationarity 19

    (vi) Statistical test for homogeneity 19


    IV DERIVATION OF TRANSITION PROBABILITIES

    (i) Conceptual 22

    (ii) Statistical estimation from aggregate data 23

    (iii) Statistical estimation from individual observations 23

    V SOME GEOGRAPHICAL IMPLICATIONS OF MARKOV ASSUMPTIONS

    (i) The system of states 24

    (ii) The first-order assumption 24

    (iii) Assumption of stationarity 25

    (iv) Assumption of uniform probabilities 25

    VI GEOGRAPHICAL APPLICATIONS OF MARKOV MODELS

    (i) The existing literature 26

    (ii) Aspatial systems of states 26

    (iii) Spatial systems of states 27

    (iv) Markov chains as descriptive tools 28

    (v) Markov models as predictive mechanisms 28

    (vi) Modifications and refinements 31

    BIBLIOGRAPHY 34

    Acknowledgement

    We thank W.F. Lever for permission to reproduce tables of transition matrices from The intra-urban movement of manufacturing: a Markov approach, Transactions I.B.G., 1972.


    I. INTRODUCTION

    (i) Geographic attraction of Markov models

    Markov chain models are particularly useful to geographers concerned with problems of movement, both in terms of movement from one location to another and in terms of movement from one "state" to another. "State", in this context, may refer to the size class of a town, to income classes, to type of agricultural productivity, to land use, or to some other variable. Markov chain models are neat and elegant conceptual devices for describing and analysing the nature of changes generated by the movement of such variables; in some cases Markov models may also be used to forecast future changes. Markov chain models, therefore, are valuable both in studies of migration, in which the aim may be to assess the predominant direction or the rate of change, and in studies of the growth or development of, say, an urban system, in which the aim may be to determine which sorts of cities are tending to increase in size and which are tending to decline. These preliminary and basic comments concerning Markov chain models are best exemplified with reference to a specific example.

    (ii) The concept of probability

    In the real world we may attach, conceptually, a probability to the occurrence of every event. There is a probability, however remote, that an aeroplane will crash or an ocean liner will sink this year; there is a probability that a red car will win the next grand prix; there is a probability that at least two towns in Britain will double their size during the next decade; and there is a probability that during the next five years five food manufacturing factories will relocate from London to Birmingham. In most cases we do not know the value of these true underlying fixed probabilities, and for most courses of action we must assume or estimate them. Thus, in terms of aeroplanes or shipping, insurance companies will charge a rate which, based on their judgement and experience of past trends, will more than cover the cost incurred by probable air crashes or sea disasters. In either case the companies need to know not which particular plane or ship will be involved but only the "size" or probable value. However, if an insurance company's judgement is grossly wrong, the company will either go bankrupt, because it has not attached a sufficiently high probability to the occurrence of a possible disaster, or it will charge rates so high that its clients will seek insurance coverage elsewhere. It is in the interest of all insurance companies, therefore, to "estimate" the probabilities of disaster as accurately as possible. Likewise, so that they too can adopt the correct course of action, planners and administrators concerned with the present and future spatial organization of the landscape are interested in the probabilities of changes in town size or in the effects of changes in human migration.

    (iii) A simple Markov model

    Information relating to the observed probabilities of past trends, say over the last ten years, can be organized into a matrix which is the basic framework of a Markov model. To use Harvey's (1967) example, let us assume that between 1950 and 1960 the probabilities of movement between London city, its suburbs, and the surrounding country are described by the following matrix (Table 1).


    Table 1

                London  Suburbs  Country
    London        0.6      0.3      0.1
    Suburbs       0.2      0.5      0.3
    Country       0.4      0.1      0.5
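    As a quick computational check on these figures (their interpretation follows in the next paragraph), the matrix can be written out and verified in a few lines of Python. This is only an illustrative sketch and is not part of the original monograph: the variable names and the use of NumPy are our own, while the matrix entries and the 1950 population shares of 0.5, 0.3 and 0.2 simply restate the figures quoted in the text below.

        import numpy as np

        # Transition probabilities from Table 1: rows give the state in 1950,
        # columns the state in 1960 (London, Suburbs, Country).
        P = np.array([
            [0.6, 0.3, 0.1],   # movers out of London
            [0.2, 0.5, 0.3],   # movers out of the Suburbs
            [0.4, 0.1, 0.5],   # movers out of the Country
        ])

        # Each row of a transition matrix must sum to 1.0; the columns need not.
        assert np.allclose(P.sum(axis=1), 1.0)

        # Initial shares of the constant population in 1950:
        # 50 percent London, 30 percent Suburbs, 20 percent Country.
        p0 = np.array([0.5, 0.3, 0.2])

        # One application of the matrix redistributes the population over the
        # ten-year interval, giving the shares implied for 1960.
        p1 = p0 @ P
        print(p1)   # [0.44 0.32 0.24]

    Under these assumptions the implied 1960 shares are about 44 percent in London, 32 percent in the suburbs and 24 percent in the country; the row-sum check simply confirms that every row of Table 1 is a complete probability distribution.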

    The three locations in this matrix form the "states" of the model, and each element represents the value of the probability of moving from one state to any other state; in this context, therefore, we refer to the probabilities as transition probabilities, which in this example are assumed. We assume, therefore, that between 1950 and 1960, of all the people who were living in London in 1950, 60 percent or 0.6 were still in London in 1960, 30 percent (0.3) moved from London to the suburbs, and 10 percent moved from London to the country. Thus, each row of the matrix, unlike the columns, sums to 100 percent or 1.0. During the same period, of those people living in the suburbs in 1950, 0.2 of them had moved into London by 1960, and of those living in the country in 1950, 0.1 of them had moved into the suburbs. Thus, the matrix of transition probabilities, or transition matrix, describes the probability of movement from one state to any other state during a specified or discrete time interval (10 years). With further reference to Harvey's example, let us assume that between 1950 and 1960 there was no change in the total number of the population in the three states. We are concerned, therefore, only with the redistribution of a constant population. Let us further assume that in 1950, of the total population (10 million) in the three states, 50 percent were in London, 30 percent were in the suburbs, and 20 percent were in the country.

    This initial state of the system can be expressed in the form of a probability vector:

    p(0) = (0.5, 0.3, 0.2)

    The initial state vector p(0), therefore, refers to the state of the sy