GEOGG121: Methods – Introduction to Bayesian analysis
Transcript of mdisney/teaching/GEOGG121/bayes/nm_bayes.pdf

Page 1:

GEOGG121: Methods Introduction to Bayesian analysis

Dr. Mathias (Mat) Disney UCL Geography Office: 113, Pearson Building Tel: 7670 0592 Email: [email protected] www.geog.ucl.ac.uk/~mdisney

Page 2:

•  Intro to Bayes' Theorem
   –  Science and scientific thinking
   –  Probability & Bayes' Theorem – why is it important?
   –  Frequentists vs Bayesians
   –  Background, rationale
   –  Methods
   –  Advantages / disadvantages

•  Applications:
   –  Parameter estimation, uncertainty
   –  Practical – basic Bayesian estimation

Lecture outline

Page 3:

Reading and browsing

Bayesian methods, data analysis
•  Gauch, H. (2002) Scientific Method in Practice, CUP.
•  Sivia, D. S., with Skilling, J. (2008) Data Analysis, 2nd ed., OUP, Oxford.
•  Shih and Kochanski (2006) Bayes' Theorem teaching notes: a very nice short intro to Bayes' Theorem: http://kochanski.org/gpk/teaching/0401Oxford/Bayes.pdf

Computational
•  Press et al. (1992) Numerical Recipes in C, 2nd ed. – see http://apps.nrbook.com/c/index.html
•  Flake, W. G. (2000) Computational Beauty of Nature, MIT Press.
•  Gershenfeld, N. (2002) The Nature of Mathematical Modelling, CUP.
•  Wainwright, J. and Mulligan, M. (2004) (eds) Environmental Modelling: Finding Simplicity in Complexity, John Wiley and Sons.

Page 4:

Reading and browsing: papers, articles, links

P-values
•  Siegfried, T. (2010) "Odds are it's wrong", Science News, 107(7), http://www.sciencenews.org/view/feature/id/57091/title/Odds_Are,_Its_Wrong
•  Ioannidis, J. P. A. (2005) Why most published research findings are false, PLoS Medicine, 0101-0106.

Bayes
•  Hill, R. (2004) Multiple sudden infant deaths – coincidence or beyond coincidence, Pediatric and Perinatal Epidemiology, 18, 320-326 (http://www.cse.salford.ac.uk/staff/RHill/ppe_5601.pdf)
•  http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/
•  http://yudkowsky.net/rational/bayes
•  http://kochanski.org/gpk/teaching/0401Oxford/Bayes.pdf

Page 5:

•  Carry out experiments?
•  Collect observations?
•  Test hypotheses (models)?
•  Generate "understanding"?
•  Objective knowledge??
•  Induction? Deduction?

So how do we do science?

Page 6:

•  Deduction
   –  Inference, by reasoning, from general to particular
   –  E.g. Premises: i) every mammal has a heart; ii) every horse is a mammal.
   –  Conclusion: every horse has a heart.
   –  Valid if the truth of the premises guarantees the truth of the conclusion & invalid otherwise.
   –  Conclusion is either true or false

Induction and deduction

Page 7:

•  Induction
   –  Process of inferring general principles from observation of particular cases
   –  E.g. Premise: every horse that has ever been observed has a heart
   –  Conclusion: every horse has a heart.
   –  Conclusion goes beyond information present, even implicitly, in the premises
   –  Conclusions have a degree of strength (weak -> near certain).

Induction and deduction

Page 8:

Induction and deduction

•  Example from Gauch (2003: 219) which we will return to:
   –  Q1: Given a fair coin (P(H) = 0.5), what is the P that 100 tosses will produce 45 heads and 55 tails?
   –  Q2: Given that 100 tosses yield 45 heads and 55 tails, what is the P that it is a fair coin?
•  Q1 is deductive: definitive answer – probability
•  Q2 is inductive: no definitive answer – statistics
   –  Oh dear: this is what we usually get in science

Page 9:

•  Informally, the Bayesian Q is:
   –  "What is the probability (P) that a hypothesis (H) is true, given the data and any prior knowledge?"
   –  Weighs different hypotheses (models) in the light of the data
•  The frequentist Q is:
   –  "How reliable is an inference procedure, by virtue of not rejecting a true hypothesis or accepting a false hypothesis?"
   –  Weighs procedures (different sets of data) in the light of a hypothesis

Bayes: see Gauch (2003) ch 5

Page 10:

•  To Bayes, Laplace, Bernoulli…:
   –  P represents a 'degree-of-belief' or plausibility
   –  i.e. degree of truth, based on the evidence at hand
•  BUT this appears to be subjective, so P was redefined (Fisher, Neyman, Pearson etc.):
   –  P is the 'long-run relative frequency' with which an event occurs, given (infinite) repeated expts.
   –  We can measure frequencies, so P is now an objective tool for dealing with random phenomena
•  BUT we do NOT have infinite repeated expts…?

Probability? see S&S (2006) p9

Page 11:

•  The "chief rule involved in the process of learning from experience" (Jefferys, 1983)
•  Formally (below):
•  P(H|D) = Posterior, i.e. probability of hypothesis (model) H being true, given data D
•  P(D|H) = Likelihood, i.e. probability of data D being observed if H is true
•  P(H) = Prior, i.e. probability of the hypothesis being true before measurement of D

Bayes’ Theorem

P(H|D) ∝ P(D|H) × P(H)

Page 12:

•  Prior?
   –  What is known beyond the particular experiment at hand, which may be substantial or negligible
•  We all have priors: assumptions, experience, other pieces of evidence
•  The Bayes approach explicitly requires you to assign a probability to your prior (somehow)
•  Bayesian view
   –  probability as degree of belief rather than a frequency of occurrence (in the long run…)

Bayes: see Gauch (2003) ch 5

Page 13:

•  Importance? P(H|D) appears on the left of BT
•  i.e. BT solves the inverse (inductive) problem – the probability of a hypothesis given some data
•  This is how we do science in practice
•  We don't have access to infinite repetitions of expts (the 'long-run frequency' view)

Bayes’ Theorem

Page 14:

•  I is background (or conditioning) information, as there is 'no such thing as absolute probability' (see S & S p5)
•  P(rain today) will depend on clouds this morning, whether we saw the forecast etc. etc. – I is usually left out but….
•  Power of Bayes' Theorem
   –  Relates the quantity of interest, i.e. the P of H being true given D, to that which we might estimate in practice, i.e. the P of observing D given H is correct

Bayes’ Theorem

P(Hypoth.|Data, I) ∝ P(Data|Hypoth., I) × P(Hypoth.|I)

Page 15:

•  To go from ∝ to = we need to divide by P(D|I)
•  Where P(D|I) is known as the 'Evidence'
•  A normalisation constant which can be left out for parameter estimation, as it is independent of H
•  But it is required in model selection, e.g. where the amount of data may be critical

Bayes’ Theorem & marginalisation

P(H|D, I) = P(D|H, I) × P(H|I) / P(D|I)

Page 16:

•  Suppose a drug test is 99% accurate for true positives, and 99% accurate for true negatives, and that 0.5% of the population use the drug.
•  What is the probability that someone who tests positive is in fact a user, i.e. what is P(User|+ve)?
•  So (below)
•  P(D) on the bottom, the evidence, is the sum over all possible models (2 in this case) in the light of the data we observe

Bayes’ Theorem: example

http://kochanski.org/gpk/teaching/0401Oxford/Bayes.pdf http://en.wikipedia.org/wiki/Bayes'_theorem

P(User|+ve) = P(+ve|User) × P(User) / [ P(+ve|User) P(User) + P(+ve|Non-user) P(Non-user) ]

            = (0.99 × 0.005) / (0.99 × 0.005 + 0.01 × 0.995) ≈ 0.332

(first term in the denominator: true +ve; second term: false +ve)
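As a quick sanity check, the same calculation in a few lines of Python (the rates and prior are those quoted on the slide; this is an illustrative sketch, not part of the original practical):

    # Drug-test example: P(User|+ve) via Bayes' Theorem
    p_pos_given_user = 0.99       # true positive rate
    p_pos_given_nonuser = 0.01    # false positive rate (1 - specificity)
    p_user = 0.005                # prior: 0.5% of the population use the drug

    # Evidence P(+ve): sum over both hypotheses (User, Non-user)
    p_pos = p_pos_given_user * p_user + p_pos_given_nonuser * (1.0 - p_user)

    p_user_given_pos = p_pos_given_user * p_user / p_pos
    print(p_user_given_pos)       # ~0.332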

Page 17:

•  So, for a +ve test, P(User) is only 33%, i.e. there is a 67% chance they are NOT a user
•  This is NOT an effective test – why not?
•  Number of non-users is v. large compared to users (99.5% to 0.5%)
•  So false positives (0.01 × 0.995 = 0.995%) >> true positives (0.99 × 0.005 = 0.495%)
•  Twice the rate (67% to 33%)
•  So need to be very careful when considering large numbers / small results
•  See Sally Clark example at end….

Bayes’ Theorem: example

http://kochanski.org/gpk/teaching/0401Oxford/Bayes.pdf http://en.wikipedia.org/wiki/Bayes'_theorem

Page 18:

•  Laplace (1749-1827) estimated M_Saturn from orbital data
•  i.e. the posterior prob(M|{data},I), where I is background knowledge of orbital mechanics etc.
•  Shaded area under the posterior pdf shows the degree of belief that m1 ≤ M_Saturn < m2 (he was right to within < 0.7%)
•  How do we interpret this pdf in terms of frequencies?
   –  Some ensemble of universes all constant other than M_Saturn? A distribution of M_Saturn in repeated experiments?
   –  But the data consist of orbital periods, and these multiple expts. didn't happen

Eg Laplace and the mass of Saturn

[Figure: posterior pdf for M_Saturn – the peak gives the best estimate of M, the width gives the degree of certainty of M]

The posterior pdf expresses ALL our best understanding of the problem

Page 19:

•  H? HT? HTTTTHTHHTT?? What do we mean by fair?
•  Consider a range of contiguous propositions (hypotheses) about the range in which the coin bias-weighting H might lie
•  If H = 0, double tail; H = 1, double head; H = 0.5 is fair
•  E.g. 0.0 ≤ H1 < 0.01; 0.01 ≤ H2 < 0.02; 0.02 ≤ H3 < 0.03 etc.

Example: is this a fair coin?

Heads I win, tails you lose?

Page 20:

•  If we assign high P to a given H (or range of Hs), relative to all others, we are confident of our estimate of 'fairness'
•  If all H are equally likely, then we are ignorant
•  This is summarised by the conditional (posterior) pdf prob(H|{data},I)
•  So, we need the prior prob(H|I) – if we know nothing, let's use a flat (uniform) prior i.e.

Example: is this a fair coin?

prob(H|I) = 1 for 0 ≤ H ≤ 1;  0 otherwise

P(Hypoth.|Data, I) ∝ P(Data|Hypoth., I) × P(Hypoth.|I)

Page 21:

•  Now need the likelihood, i.e. prob({data}|H,I)
•  A measure of the chance of obtaining the {data} we have actually observed if the bias-weighting H were known
•  Assume that each toss is an independent event (part of I)
•  Then prob(R heads in N tosses) is given by the binomial theorem i.e.
   –  H is the chance of a head and there are R of them, so there must be N-R tails (chance 1-H).

Example: is this a fair coin?

prob({data}|H, I) ∝ H^R (1-H)^(N-R)

P(Hypoth.|Data, I) ∝ P(Data|Hypoth., I) × P(Hypoth.|I)
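As an illustration of putting the prior and likelihood together, a minimal Python sketch of the grid-based posterior, using hypothetical data of R = 45 heads in N = 100 tosses (echoing the Gauch question earlier) and the flat prior above:

    import numpy as np

    # Grid of bias-weighting hypotheses: H = 0 (double tail) ... H = 1 (double head)
    H = np.linspace(0, 1, 1001)

    N, R = 100, 45                            # hypothetical data: R heads in N tosses

    prior = np.ones_like(H)                   # flat prior: prob(H|I) = 1 on [0, 1]
    likelihood = H**R * (1 - H)**(N - R)      # binomial likelihood (constant factor dropped)
    posterior = likelihood * prior
    posterior /= np.trapz(posterior, H)       # normalise so the pdf integrates to 1

    print(H[np.argmax(posterior)])            # peak at R/N = 0.45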

Page 22:

•  How does prob(H|{data},I) evolve?

Example: is this a fair coin?

prob({data}|H, I) ∝ H^R (1-H)^(N-R)
prob(H|I) = 1 for 0 ≤ H ≤ 1;  0 otherwise

[Figure: posterior pdf prob(H|{data},I) evolving as data arrive: H, HT, TTT, TTTH, …]

Page 23:

•  How does prob(H|{data},I) evolve?

Gaussian prior: µ = 0.5, σ = 0.05

prob({data}|H, I) ∝ H^R (1-H)^(N-R)
prob(H|I) = 1 for 0 ≤ H ≤ 1;  0 otherwise

H0 (mean) not always at the peak, particularly when N is small

[Figure: evolution of the posterior pdf with successive tosses, flat vs Gaussian prior]

Page 24:

•  The posterior pdf summarises our knowledge, based on {data} and prior
   –  Note {data} in this case is actually np.random.binomial(N, p)
•  Weak prior shifted easily
•  Stronger Gaussian prior (rightly) requires a lot more data to be convinced
•  See S & S for other priors….
•  Bayes' Theorem encapsulates the learning process

Summary

prob(H|I) = 1 for 0 ≤ H ≤ 1;  0 otherwise            (uniform prior)

prob(H|I) ∝ exp( -(H - µ)² / (2σ²) )                  (Gaussian prior)

P(Hypoth.|Data, I) ∝ P(Data|Hypoth., I) × P(Hypoth.|I)
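A hedged sketch of the weak-vs-strong prior point above, with tosses simulated via np.random.binomial as mentioned on the slide (the seed, N and the true p are arbitrary choices for illustration):

    import numpy as np

    rng = np.random.default_rng(42)            # arbitrary seed
    H = np.linspace(0, 1, 1001)
    N, p_true = 50, 0.3
    R = rng.binomial(N, p_true)                # 'observed' number of heads

    likelihood = H**R * (1 - H)**(N - R)

    flat_prior = np.ones_like(H)                              # weak (uniform) prior
    gauss_prior = np.exp(-(H - 0.5)**2 / (2 * 0.05**2))       # strong prior, mu=0.5, sigma=0.05

    for prior in (flat_prior, gauss_prior):
        post = likelihood * prior
        post /= np.trapz(post, H)
        print(H[np.argmax(post)])              # the Gaussian prior pulls the peak towards 0.5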

Page 25:

•  It takes a lot of coin tosses to estimate H to within 0.2-0.3
•  If we toss 10 times and get 10 T, this might be strong evidence for bias
•  But if we toss 100 times and get 45 H, 55 T, the difference is still 10 BUT much more uncertain
•  Gaussian prior: although prob(H=0.5) ~ 250,000 × prob(H=0.25), 1000 tosses gets the posterior to within 0.02

Summary

P(Hypoth.|Data, I) ∝ P(Data|Hypoth., I) × P(Hypoth.|I)

Page 26:

•  Can we summarise the pdf prob(H|{data},I) concisely (mean, error)?
•  Best estimate Xo of a parameter X is given by the condition (below)
•  Also want a measure of reliability (spread of the pdf around Xo)
•  Use a Taylor series expansion
•  Use L = loge[prob(H|{data},I)] – it varies much more slowly with X
•  Expand about X-Xo = 0, so
•  First term is constant; the second, linear term in (X-Xo) is not important as we are expanding about the maximum. So, ignoring higher-order terms….

Reliability and uncertainty

dP/dX |Xo = 0    and    d²P/dX² |Xo < 0

f(x) = f(a) + f'(a)(x - a)/1! + f''(a)(x - a)²/2! + …

L = L(Xo) + (1/2) d²L/dX² |Xo (X - Xo)² + …

Page 27:

•  We find (below)
•  Where A is a normalisation constant. So what is this function??
•  It is the pdf of a Gaussian (normal) distribution i.e.
•  Where µ, σ are the maximum and width (sd)
•  Comparing, we see µ is at Xo and σ is as given below
•  So X = Xo ± σ

Reliability and uncertainty

prob(X|{data}, I) ≈ A exp[ (1/2) d²L/dX² |Xo (X - Xo)² ]

prob(x|µ, σ) = (1 / (σ√(2π))) exp[ -(x - µ)² / (2σ²) ]

σ = ( -d²L/dX² |Xo )^(-1/2)

http://en.wikipedia.org/wiki/File:Normal_Distribution_PDF.svg

Page 28:

•  From the coin example (below)
•  So (taking logs)
•  Therefore (setting dL/dH = 0)
•  So Ho = R/N, and then (from the second derivative)
•  Ho tends to a constant, therefore so does Ho(1-Ho), so σ ∝ 1/√N
•  So we can express the key properties of the pdf using Ho and σ
•  NB largest uncertainty (σmax) when Ho = 0.5, i.e. it is easier to identify a highly-biased coin than to be confident it is fair

Reliability and uncertainty

prob(H|{data}, I) ∝ H^R (1-H)^(N-R)

L = loge[ prob(H|{data}, I) ] = const + R loge(H) + (N-R) loge(1-H)

dL/dH |Ho = R/Ho - (N-R)/(1-Ho) = 0

d²L/dH² |Ho = -R/Ho² - (N-R)/(1-Ho)² = -N / [Ho(1-Ho)]

σ = √( Ho(1-Ho)/N )

EXERCISE: verify these expressions yourself
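One way to do the exercise is symbolically, e.g. with SymPy (a sketch only, not part of the course practical):

    import sympy as sp

    H, R, N = sp.symbols('H R N', positive=True)

    # Log posterior for the coin (flat prior), up to a constant
    L = R * sp.log(H) + (N - R) * sp.log(1 - H)

    # Best estimate Ho: solve dL/dH = 0
    Ho = sp.solve(sp.diff(L, H), H)[0]
    print(Ho)                                   # R/N

    # Width: sigma = sqrt(-1 / d2L/dH2) evaluated at Ho
    d2L = sp.diff(L, H, 2).subs(H, Ho)
    sigma = sp.simplify(sp.sqrt(-1 / d2L))
    print(sigma)                                # equivalent to sqrt(Ho*(1-Ho)/N)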

Page 29:

•  Asymmetric pdf? Ho is still the best estimate
•  But the true value is more likely to lie on one side of Ho than the other
•  So what does an 'error bar' mean then?
•  Confidence intervals (CI)
   –  shortest interval enclosing X% of the area under the pdf, say 95%
•  Assuming the posterior pdf is normalised (total area = 1) then we need X1, X2 such that (below)
•  The region X1 ≤ X < X2 is the shortest 95% CI
•  For a normalised pdf, the weighted average is given by (below)
•  Multimodal pdf?
   –  As the pdf gets more complex, single estimates of the mean are not relevant
   –  Just show the posterior pdf, then you can decide…..

Reliability and uncertainty

prob(X1 ≤ X < X2 | {data}, I) = ∫[X1,X2] prob(X|{data}, I) dX ≈ 0.95      (95% CI)

⟨X⟩ = ∫ X prob(X|{data}, I) dX
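A minimal numerical sketch of finding the shortest 95% CI from a gridded, normalised posterior (reusing the coin posterior with the hypothetical R = 45, N = 100 data; the grid size is an arbitrary choice):

    import numpy as np

    X = np.linspace(0, 1, 2001)
    post = X**45 * (1 - X)**55                 # coin posterior, flat prior (unnormalised)
    post /= np.trapz(post, X)

    dx = X[1] - X[0]
    cdf = np.cumsum(post) * dx                 # approximate cumulative distribution

    best = (np.inf, None, None)
    for i1 in range(len(X)):
        i2 = np.searchsorted(cdf, cdf[i1] + 0.95)   # first index enclosing ~95% of the area
        if i2 >= len(X):
            break
        width = X[i2] - X[i1]
        if width < best[0]:
            best = (width, X[i1], X[i2])

    print("shortest 95%% CI: [%.3f, %.3f]" % (best[1], best[2]))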

Page 30:

•  Example: signal in the presence of background noise
•  Very common problem: e.g. peak of lidar return from forest canopy? Presence of a star against a background? Transiting planet?

Parameter estimation: 2 params

σ = √( Ho(1-Ho)/N )

[Figure: Gaussian peak of amplitude A on a background B, centred at x0]

See p 35-60 in Sivia & Skilling

Page 31:

•  Data are e.g. photon counts in a particular channel, so expect the count in the kth channel, Nk, to be ∝ A(xk) + B(xk), where A, B are signal and background
•  Assume the peak is Gaussian (for now), width w, centred on xo, so the ideal datum Dk is then given by (below)
•  Where n0 is a constant (∝ integration time). Unlike Nk, Dk is not a whole no., so the actual datum is some integer close to Dk
•  The Poisson distribution is the pdf which represents this property i.e.

Gaussian peak + background

Dk = n0 [ A exp( -(xk - x0)² / (2w²) ) + B ]

Nk ∝ A(xk) + B(xk),   k = 1, …, N

prob(N|D) = D^N e^(-D) / N!

Page 32:

•  Poisson: prob. of N events occurring over some fixed time interval if events occur at a known rate, independently of the time of the previous event
•  If the expected number over a given interval is D, the prob. of exactly N events is

Aside: Poisson distribution

prob(N|D) = D^N e^(-D) / N!

Used in discrete counting experiments, particularly cases with a large number of possible outcomes, each of which is rare (law of rare events), e.g.
•  Nuclear decay
•  No. of calls arriving at a call centre per minute – a large number arriving BUT rare from the POV of the general population….

See practical page for poisson_plot.py
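For reference, a short illustrative sketch along the same lines (this is a stand-in, not the course's poisson_plot.py):

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import poisson

    # Poisson pmf prob(N|D) for a few expected counts D
    N = np.arange(0, 30)
    for D in (1, 5, 15):
        plt.plot(N, poisson.pmf(N, D), 'o-', label='D = %d' % D)

    plt.xlabel('N (observed count)')
    plt.ylabel('prob(N|D)')
    plt.legend()
    plt.show()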

Page 33:

•  So the likelihood for datum Nk is (below)
•  Where I includes the reln. between expected counts Dk and A, B, i.e. for our Gaussian model xo, w, no are given (as is xk).
•  IF the data are independent, then the likelihood over all M data is just the product of the probs of the individual measurements i.e.
•  As usual, we want the posterior pdf of A, B given {Nk}, I

Gaussian peak + background

prob(Nk|A, B, I) = Dk^Nk e^(-Dk) / Nk!

Dk = n0 [ A exp( -(xk - x0)² / (2w²) ) + B ]

prob({Nk}|A, B, I) = ∏(k=1 to M) prob(Nk|A, B, I)

prob(A, B|{Nk}, I) ∝ prob({Nk}|A, B, I) × prob(A, B|I)

Page 34:

•  Prior? Neither A nor B can be -ve, so the most naïve prior pdf is (below)
•  To calc the constant we need Amax, Bmax, but we may assume they are large enough not to cut off the posterior pdf, i.e. prob({Nk}|A,B,I) is effectively 0 by then
•  So, the log of the posterior is (below)
•  And, as before, we want the A, B that maximise L
•  Reliability is the width of the posterior about that point

Gaussian peak + background

prob(A, B|I) = const for A ≥ 0 and B ≥ 0;  0 otherwise

L = loge[ prob(A, B|{Nk}, I) ] = const + Σ(k=1 to M) [ Nk loge(Dk) - Dk ]

Page 35:

•  ‘Generate’ experimental data (see practical)
•  n0 chosen to give max expectation Dk = 100. Why are some Nk > 100?

Gaussian peak + background
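A hedged sketch of the whole procedure (generate Poisson data, then evaluate the log posterior L(A, B) on a grid). The bin positions, width w, x0 and the n0 scaling are illustrative assumptions; A = 1.09, B = 1.93 and the ~100-count peak follow the values quoted on the next slide:

    import numpy as np

    rng = np.random.default_rng(1)             # arbitrary seed

    x = np.linspace(-5, 5, 15)                 # 15 bin centres xk (assumed range)
    A_true, B_true, w, x0 = 1.09, 1.93, 1.5, 0.0
    n0 = 100.0 / (A_true + B_true)             # so the ideal peak count Dk ~ 100

    D = n0 * (A_true * np.exp(-(x - x0)**2 / (2 * w**2)) + B_true)
    N = rng.poisson(D)                         # 'generate' experimental data

    # Log posterior L(A, B) on a grid (flat prior for A, B >= 0; const terms dropped)
    A = np.linspace(0.01, 3, 300)
    B = np.linspace(0.01, 4, 300)
    AA, BB = np.meshgrid(A, B, indexing='ij')

    L = np.zeros_like(AA)
    for xk, Nk in zip(x, N):
        Dk = n0 * (AA * np.exp(-(xk - x0)**2 / (2 * w**2)) + BB)
        L += Nk * np.log(Dk) - Dk

    i, j = np.unravel_index(np.argmax(L), L.shape)
    print("max-L estimates: A = %.2f, B = %.2f" % (A[i], B[j]))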

Page 36:

Gaussian peak + background
•  Posterior PDF is now 2D
•  Max L: A = 1.11, B = 1.89 (actual: 1.09, 1.93)

Page 37:

•  Changing the experimental setup?
   –  E.g. reducing counts per bin (SNR), e.g. because of shorter integration time, lower signal threshold etc.

Gaussian peak + background

Same signal, but the data look much noisier – broader PDF
Truncated at 0 – the prior is important

Page 38:

•  Changing the experimental setup?
   –  Increasing the number of bins (same count rate, but spread out over twice the measurement range)

Gaussian peak + background

Much narrower posterior PDF BUT reduction mostly in B

Page 39:

•  More data, so why is the uncertainty in A, B not reduced equally?
   –  Data far from the origin only tell you about the background
   –  Conversely – restrict the range of x over which data are collected (fewer bins) and it is hard to distinguish A from B (signal from background)
   –  Skewed & high correlation between A, B

Gaussian peak + background

Page 40:

•  If we are only interested in A then, according to the marginalisation rule, integrate the joint posterior PDF wrt B i.e.
•  So (below)
•  See previous experimental cases…..

Marginal distribution

prob(X|I) = ∫(-∞,+∞) prob(X, Y|I) dY

prob(A|{Nk}, I) = ∫(0,∞) prob(A, B|{Nk}, I) dB

[Figure panels 1 & 2: marginal pdf of A for 15 bins, ~100 counts maximum vs 15 bins, ~10 counts maximum]
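Continuing the gridded sketch from the ‘Generate experimental data’ slide, the marginal pdf of A is then a single numerical integration of the joint posterior over B:

    # Joint posterior from the gridded log posterior L(A, B), then marginalise over B
    post = np.exp(L - L.max())                      # unnormalised, stable against overflow
    post /= np.trapz(np.trapz(post, B, axis=1), A)  # normalise the joint pdf

    marginal_A = np.trapz(post, B, axis=1)          # prob(A|{Nk}, I) = integral over B
    print("marginal best estimate of A: %.2f" % A[np.argmax(marginal_A)])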

Page 41:

•  Marginal vs conditional
•  Marginal pdf: takes into account prior ignorance of B
•  Conditional pdf: assumes we know B, e.g. via calibration
•  Least difference when measurements are made far from A (3)
•  Most difference when data are close to A (4)

Marginal distribution

prob(A|{Nk}, I)  vs  prob(A|{Nk}, B, I)

[Figure panels 3 & 4: 31 bins, ~100 counts maximum vs 7 bins, ~100 counts maximum]

Page 42:

Note: DO NOT PANIC

•  Slides from here are for completeness
•  I will NOT ask you about uncertainty in the 2-param or general cases from here on

Page 43:

•  The posterior L shows the reliability of the parameters & we want the optimal values
•  For parameters {Xj}, with posterior P = prob({Xj}|{data}, I)
•  Optimal {Xoj} is the solution of the set of simultaneous eqns (below), for i = 1, 2, …. Nparams
•  So for the log of P, i.e. L, and for 2 parameters we want (below)
•  where

Uncertainty

∂P/∂Xi |{Xoj} = 0

P = prob({Xj}|{data}, I)

L = loge[ prob({Xj}|{data}, I) ]

L = loge[ prob(X, Y|{data}, I) ]

∂L/∂X |(Xo,Yo) = ∂L/∂Y |(Xo,Yo) = 0

Sivia & Skilling (2006) Chapter 3, p 35-51

Page 44:

•  To estimate the reliability of the best estimate we want the spread of P about (Xo, Yo)
•  Do this using a Taylor expansion i.e. (below)
•  Or, equivalently, the summation form
•  So for the first three terms (to quadratic) we have
•  Ignore the (X-Xo) and (Y-Yo) terms as the expansion is about the maximum

Uncertainty

L = L(Xo, Yo) + (1/2) [ ∂²L/∂X² |(Xo,Yo) (X - Xo)² + ∂²L/∂Y² |(Xo,Yo) (Y - Yo)²
                        + 2 ∂²L/∂X∂Y |(Xo,Yo) (X - Xo)(Y - Yo) ] + …

http://en.wikipedia.org/wiki/Taylor_series

f(x) = f(a) + f'(a)(x - a)/1! + f''(a)(x - a)²/2! + f'''(a)(x - a)³/3! + ….

f(x) = Σ(n=0 to ∞) f^(n)(a)(x - a)^n / n!

Sivia & Skilling (2008) Chapter 3, p 35-51

Page 45:

•  So mainly concerned with the quadratic terms. Rephrase via matrices
•  For the quadratic term, Q (below)
•  Where A, B, C are the second derivatives of L at the maximum

Uncertainty

A = ∂²L/∂X² |(Xo,Yo),   B = ∂²L/∂Y² |(Xo,Yo),   C = ∂²L/∂X∂Y |(Xo,Yo)

Q = (X - Xo, Y - Yo) [ A  C ; C  B ] (X - Xo, Y - Yo)ᵀ

[Figure: elliptical contour Q = k in the X-Y plane, centred on (Xo, Yo), with principal directions e1, e2 and widths k1, k2]

•  Contour of Q in the X-Y plane, i.e. a line of constant L
•  Orientation and eccentricity determined by A, B, C
•  Directions e1 and e2 are the eigenvectors of the matrix of 2nd derivatives (A, B, C)

Sivia & Skilling (2008) Chapter 3, p 35-51

Page 46:

•  So the (x,y) components of e1 and e2 are given by solutions of (below)
•  Where the eigenvalues λ1 and λ2 are ∝ 1/k², & k1,2 are the widths of the ellipse along the principal directions
•  If (Xo, Yo) is a maximum then λ1 and λ2 < 0
•  So A < 0, B < 0 and AB > C²
•  So if C ≠ 0 then the ellipse is not aligned with the axes, so how do we estimate error bars on Xo, Yo?
•  We can get rid of parameters we don't want (Y for e.g.) by integrating i.e.

Uncertainty

[ A  C ; C  B ] (x, y)ᵀ = λ (x, y)ᵀ

prob(X|{data}, I) = ∫(-∞,+∞) prob(X, Y|{data}, I) dY

Sivia & Skilling (2008) Chapter 3, p 35-51

Page 47:

•  And then use Taylor again & (below)
•  So (see S&S p46 & Appendix)
•  And so the marginal distn. for X is just a Gaussian with best estimate (mean) Xo and uncertainty (SD) σX
•  So all fine and we can calculate uncertainty……right?

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

prob(X, Y|{data}, I) = exp(L) ≈ exp(Q/2)

prob(X|{data}, I) ∝ exp[ (1/2) ((AB - C²)/B) (X - Xo)² ]

σX = √( -B / (AB - C²) ),    σY = √( -A / (AB - C²) )

Page 48:

•  Note AB - C² is the determinant of the 2nd derivative matrix and is λ1 × λ2
•  So if λ1 or λ2 → 0 then AB - C² → 0 and σX and σY → ∞
•  Oh dear……
•  So consider the variance of the posterior
•  Where µ is the mean
•  For a 1D normal distribution this gives σ²
•  For the 2D case (X,Y) here (below)
•  Which we have from before. Same for Y so…..

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

[ A  C ; C  B ]

Var(X) = ⟨(X - µ)²⟩ = ∫ (X - µ)² prob(X|{data}, I) dX

⟨(X - µ)²⟩ = σ²

⟨(X - Xo)²⟩ = ∫∫ (X - Xo)² prob(X, Y|{data}, I) dX dY

Page 49:

•  Consider the covariance σ²XY (below)
•  Which describes the correlation between X and Y; if the estimate of X has little/no effect on the estimate of Y then σ²XY is negligible
•  And, using Taylor as before
•  So in matrix notation (below)
•  Where we remember that A, B, C are the 2nd derivatives of L at the maximum

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

σ²XY = ⟨(X - Xo)(Y - Yo)⟩ = ∫∫ (X - Xo)(Y - Yo) prob(X, Y|{data}, I) dX dY

σ²XY << σ²X σ²Y

σ²XY = C / (AB - C²)

[ σ²X   σ²XY ]  =  1/(AB - C²) [ -B   C ]  =  - [ A  C ]⁻¹
[ σ²XY  σ²Y  ]                 [  C  -A ]       [ C  B ]

A = ∂²L/∂X² |(Xo,Yo),   B = ∂²L/∂Y² |(Xo,Yo),   C = ∂²L/∂X∂Y |(Xo,Yo)
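As an illustration, the covariance matrix can be estimated numerically from the gridded log posterior in the earlier Gaussian-peak sketch, using finite differences at the maximum for the second derivatives (the A, B, C of the notation above; note these clash with the amplitude/background grids A and B used in that code):

    # Finite-difference estimates of the second derivatives of L at its grid maximum
    dA, dB = A[1] - A[0], B[1] - B[0]
    i, j = np.unravel_index(np.argmax(L), L.shape)

    d2L_dA2 = (L[i+1, j] - 2*L[i, j] + L[i-1, j]) / dA**2                            # 'A' above
    d2L_dB2 = (L[i, j+1] - 2*L[i, j] + L[i, j-1]) / dB**2                            # 'B' above
    d2L_dAdB = (L[i+1, j+1] - L[i+1, j-1] - L[i-1, j+1] + L[i-1, j-1]) / (4*dA*dB)   # 'C' above

    hessian = np.array([[d2L_dA2, d2L_dAdB],
                        [d2L_dAdB, d2L_dB2]])
    cov = -np.linalg.inv(hessian)              # covariance matrix = -(2nd derivative matrix)^-1
    sigma_A, sigma_B = np.sqrt(np.diag(cov))
    print("A = %.2f +/- %.2f, B = %.2f +/- %.2f" % (A[i], sigma_A, B[j], sigma_B))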

Page 50:

•  The covariance (or variance-covariance) matrix describes the covariance of the error terms
•  When C = 0, σ²XY = 0, there is no correlation, and e1 and e2 are aligned with the axes
•  If C increases (relative to A, B), the posterior pdf becomes more skewed and elliptical – rotated at an angle ± tan⁻¹(√(A/B))

Uncertainty

After Sivia & Skilling (2008) fig 3.7 p. 48

[Figure panels (after S&S fig 3.7): C = 0, X and Y uncorrelated; large -ve correlation; large +ve correlation]

Page 51:

•  As the correlation grows, if C = √(AB) then the contours become infinitely wide in one direction (except for prior bounds)
•  In this case σX and σY are v. large (i.e. very unreliable parameter estimates)
•  BUT large off-diagonals in the covariance matrix mean we can estimate linear combinations of the parameters
•  For -ve covariance, the posterior is wide in the direction Y = -mX, where m = √(A/B), but narrow perpendicular to this, along Y + mX = const
•  i.e. a lot of information about Y + mX but little about Y - X/m
•  For +ve correlation, most info. on Y - mX but not Y + X/m

Uncertainty

After Sivia & Skilling (2008) fig 3.7 p. 48

Page 52:

•  Seen the 2-param case, so what about generalising the Taylor quadratic approximation to M params?
•  Remember, we want {Xoj} to maximise L, the (log) posterior pdf
•  Rephrase in matrix form Xo, i.e. for i = 1, 2, …. M we want (below)
•  The extension of the Taylor expansion to M variables is (below)
•  So if X is an M x 1 column vector then, ignoring higher terms, the exponential of the posterior pdf is (below)

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

∂L/∂Xi |Xo = 0

L = L(Xo) + (1/2) Σ(i=1 to M) Σ(j=1 to M) ∂²L/∂Xi∂Xj |Xo (Xi - Xoi)(Xj - Xoj) + …

X = (X1, X2, …, XM)ᵀ

prob(X|{data}, I) ∝ exp[ (1/2) (X - Xo)ᵀ ∇∇L(Xo) (X - Xo) ]

Page 53:

•  Where ∇∇L is a symmetric M x M matrix of 2nd derivatives
•  And (X-Xo)ᵀ is the transpose of (X-Xo) (a row vector)
•  So this is the generalisation of Q from the 2D case
•  And the contour map from before is just a 2D slice through our now M-dimensional parameter space
•  The constant of proportionality is (below)

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

prob(X|{data}, I) ∝ exp[ (1/2) (X - Xo)ᵀ ∇∇L(Xo) (X - Xo) ]

∇∇L = [ ∂²L/∂X1∂X1   …   ∂²L/∂XM∂X1 ]
      [     ⋮         ⋱       ⋮      ]
      [ ∂²L/∂X1∂XM   …   ∂²L/∂XM∂XM ]

(cf. the 2D case: prob(X|{data}, I) ∝ exp[ (1/2) ((AB - C²)/B) (X - Xo)² ])

Constant of proportionality: (2π)^(-M/2) √( det(-∇∇L) )

Page 54:

•  So what are the implications of all of this??
•  The maximum of the M-parameter posterior PDF is at Xo & we know ∇L(Xo) = 0 and ∇∇L there
•  Compare to the 2D case & see that ∇∇L is analogous to -1/σ²
•  Can show that the generalised case for the covariance matrix σ² is (below)
•  Square root of the diagonals (i=j) gives the marginal error bars and the off-diagonals (i≠j) describe the correlations between parameters
•  So the covariance matrix contains most of the information describing the model fit AND the faith we have in the parameter estimates

Uncertainty

Sivia & Skilling (2008) Chapter 3, p 35-51

∇L(Xo) = 0,   ∇∇L(Xo)

[σ²]ij = ⟨(Xi - Xoi)(Xj - Xoj)⟩ = [ -(∇∇L)⁻¹ ]ij

Page 55:

•  Sivia & Skilling make the important point (p50) that the inverse of the diagonal elements of a matrix ≠ the diagonal of the inverse of the matrix
•  i.e. do NOT try to estimate the value / spread of one parameter in the M-dim case by holding all the others fixed at their optimal values

Uncertainty

After Sivia & Skilling (2008) p50.

[Figure (after S&S p50): posterior contours in the Xi-Xj plane, comparing the marginal width σii with the incorrect 'best fit' σii obtained by holding Xj fixed at Xoj]

•  Need to include marginalisation to get the correct magnitude for the uncertainty
•  Discussion of multimodal and asymmetric posterior PDFs, for which a Gaussian is not a good approx
•  S&S p51….

Page 56:

•  We have seen that we can express the condition for the best estimate of a set of M parameters {Xj} very compactly as ∇L(Xo) = 0
•  Where the jth element of ∇L is ∂L/∂Xj (log posterior pdf) evaluated at X = Xo
•  So this is a set of simultaneous equations which, IF they are linear, i.e. ∇L = HX + C
•  Then we can use linear algebra methods to solve, i.e. Xo = -H⁻¹C
•  This is the power (joy?) of linearity! Will see more on this later
•  Even if the system is not linear, we can often approximate it as linear over some limited domain to allow linear methods to be used
•  If not, then we have to use other (non-linear) methods…..

Summary

∇L(Xo) = 0,   [∇L]j = ∂L/∂Xj

∇L = HX + C   ⇒   Xo = -H⁻¹ C
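A minimal sketch of the linear case, with a hypothetical symmetric, negative-definite H and vector C (any such pair will do):

    import numpy as np

    # If grad L = H X + C is linear, the optimum solves H Xo = -C
    H = np.array([[-4.0,  1.0,  0.0],
                  [ 1.0, -3.0,  0.5],
                  [ 0.0,  0.5, -2.0]])        # hypothetical, symmetric, negative definite
    C = np.array([1.0, -2.0, 0.5])

    Xo = np.linalg.solve(H, -C)               # preferable to forming H^-1 explicitly
    cov = -np.linalg.inv(H)                   # covariance matrix = -(grad grad L)^-1
    print(Xo, np.sqrt(np.diag(cov)))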

Page 57:

•  Two-parameter e.g.: Gaussian peak + background
•  Solve via Bayes' T using a Taylor expansion (to quadratic)
•  Issues over experimental setup
   –  Integration time, number of bins, size etc.
   –  Impact on the posterior PDF
•  Can use linear methods to derive uncertainty estimates and explore correlation between parameters
•  Extend to the multi-dimensional case using the same method
•  Be careful when dealing with uncertainty
•  KEY: not always useful to look for summary statistics – if in doubt look at the posterior PDF – this gives the full description

Summary

Page 58:

END

Page 59:

Sally Clark case: Bayesian odds ratios
•  Two cot-deaths (SIDS), 1 year apart, aged 11 weeks and 8 weeks. Mother Sally Clark charged with double murder, tried and convicted in 1999
   –  Statistical evidence was misunderstood, "expert" testimony was wrong, and a fundamental logical fallacy was introduced
•  What happened?
•  We can use Bayes' Theorem to decide between 2 hypotheses
   –  H1 = Sally Clark committed double murder
   –  H2 = Two children DID die of SIDS

•  http://betterexplained.com/articles/an-intuitive-and-short-explanation-of-bayes-theorem/

•  http://yudkowsky.net/rational/bayes

Page 60:

The tragic case of Sally Clark

•  Data? We observe there are 2 dead children
•  We need to decide which of H1 or H2 is more plausible, given D (and prior expectations)
•  i.e. want the ratio P(H1|D) / P(H2|D), i.e. the odds of H1 being true compared to H2, GIVEN the data and prior

P(H1|D) / P(H2|D)  =  [ P(D|H1) / P(D|H2) ]  ×  [ P(H1) / P(H2) ]

(left-hand side: prob. of H1 vs H2 given data D; first term: likelihoods, i.e. prob. of getting data D IF H1 is true, or if H2 is true; second term: very important – the PRIOR probability, i.e. previous best guess)

Page 61:

The tragic case of Sally Clark
•  ERROR 1: events NOT independent
•  P(1 child dying of SIDS)? ~1:1300, but for an affluent, non-smoking family with mother > 26 yrs, ~1:8500.
•  Prof. Sir Roy Meadow (expert witness)
   –  P(2 deaths)? 1:8500 × 8500 ~ 1:73 million.
   –  This was KEY to her conviction & is demonstrably wrong
   –  ~650,000 births a year in the UK, so at 1:73M a double cot death is a 1 in 100 year event. BUT 1 or 2 occur every year – how come?? No one checked…
   –  NOT independent: P(2nd death | 1st death) is 5-10 × higher, i.e. 1:100 to 200, so P(H2) is actually ~ (1/1300) × (5/1300) ~ 1:300,000

Page 62:

The tragic case of Sally Clark

•  ERROR 2: "Prosecutor's Fallacy"
   –  1:300,000 is still VERY rare, so she's unlikely to be innocent, right??
•  Meadow's "Law": 'one cot death is a tragedy, two cot deaths is suspicious and, until the contrary is proved, three cot deaths is murder'
   –  WRONG: it is a fallacy to mistake the chance of a rare event for the chance that the defendant is innocent
•  In large samples, even rare events occur quite frequently – someone wins the lottery (1:14M) nearly every week
•  650,000 births a year, so expect 2-3 double cot deaths…..
•  AND we are ignoring the rarity of double murder (H1)

Page 63:

The tragic case of Sally Clark
•  ERROR 3: ignoring the odds of the alternative (also very rare)
   –  Single child murder is v. rare (~30 cases a year) BUT generally involves significant family/social problems, i.e. NOT like the Clarks.
   –  P(1 murder) ~ 30:650,000 i.e. 1:21,700
   –  Double murder is MUCH rarer, BUT P(2nd | 1st murder) is ~200 × more likely given the first, so P(H1|D) ~ (1/21,700 × 200/21,700) ~ 1:2.4M
•  So, two very rare events, but double murder ~10 × rarer than double SIDS
•  So P(H1|D) / P(H2|D)?
   –  P(murder) : P(cot death) ~ 1:10, i.e. 10 × more likely to be double SIDS
   –  Says nothing about guilt & innocence, just relative probability
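The arithmetic on these slides can be reproduced in a few lines (order-of-magnitude figures only, as in the lecture; the exact ratio depends on the rounding chosen):

    # Rough odds-ratio arithmetic for the Sally Clark case, using the slides' figures
    p_sids_1 = 1.0 / 1300               # single SIDS death (general population figure)
    p_sids_2_given_1 = 5.0 / 1300       # ~5-10x more likely given a first death
    p_double_sids = p_sids_1 * p_sids_2_given_1          # ~1 in 300,000

    p_murder_1 = 30.0 / 650000          # ~30 single child murders per ~650,000 births
    p_murder_2_given_1 = 200 * p_murder_1                # ~200x more likely given a first
    p_double_murder = p_murder_1 * p_murder_2_given_1    # ~1 in 2.4 million

    print("double SIDS   ~ 1 in %.0f" % (1 / p_double_sids))
    print("double murder ~ 1 in %.0f" % (1 / p_double_murder))
    print("prior odds murder : SIDS ~ 1 : %.0f" % (p_double_sids / p_double_murder))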

Page 64:

The tragic case of Sally Clark
•  Sally Clark was acquitted in 2003 after a 2nd appeal (but not on the statistical fallacies), after 3 yrs in prison; she died of alcohol poisoning in 2007
   –  Meadow's "Law" redux: triple murder v triple SIDS?
•  In fact, P(triple murder | 2 previous) : P(triple SIDS | 2 previous) ~ ((21700 × 123) × 10) / ((1300 × 228) × 50) = 1.8:1
•  So P(triple murder) > P(triple SIDS), but not by much
•  Meadow's 'Law' should be:
   –  'when three sudden deaths have occurred in the same family, statistics give no strong indication one way or the other as to whether the deaths are more or less likely to be SIDS than homicides'

From: Hill, R. (2004) Multiple sudden infant deaths – coincidence or beyond coincidence, Pediatric and Perinatal Epidemiology, 18, 320-326 (http://www.cse.salford.ac.uk/staff/RHill/ppe_5601.pdf)

Page 65:

•  Where prob(X|I) is the marginalisation equation
•  But if Y is a proposition, how can we integrate over it?

Bayes’ Theorem & marginalisation

prob(X|Y, I) = prob(Y|X, I) × prob(X|I) / prob(Y|I)

prob(X|I) = ∫(-∞,+∞) prob(X, Y|I) dY

•  Generally, using X for Hypothesis, and Y for Data

Page 66:

•  Suppose instead of Y and (not Y) we have a set of alternative possibilities: Y1, Y2, …. YM = {Yk}

•  E.g. M candidates for an election: Y1 = prob. candidate 1 will win, Y2 = prob. cand. 2 will win, etc.
•  Prob. that X is true, e.g. that unemployment will fall within 1 year, irrespective of who wins (Y), is

•  As long as

•  i.e. the various probabilities {Yk} are exhaustive and mutually exclusive, so that if one Yk is true, all others are false but one must be true

Bayes’ Theorem & marginalisation

prob(X|I) = Σ(k=1 to M) prob(X, Yk|I)

Σ(k=1 to M) prob(Yk|X, I) = 1

Page 67:

•  As M gets larger, we approach

•  E.g. we could consider an arbitrarily large number of propositions about the range in which my weight W_MD could lie
•  Choose contiguous intervals and a large enough range (M → ∞), and we will have a mutually exclusive, exhaustive set of possibilities
•  So Y represents the parameter of interest (W_MD in this case) and the integrand prob(X,Y|I) is now a distribution – a probability density function (pdf)
•  And the prob. that Y lies in a finite range between y1 and y2 (and X is also true) is

Bayes' Theorem & marginalisation

prob(X|I) = ∫(-∞,+∞) prob(X, Y|I) dY

pdf(X, Y=y|I) = lim(δy→0) [ prob(X, y ≤ Y < y+δy | I) / δy ]

prob(X, y1 ≤ Y < y2 | I) = ∫(y1,y2) pdf(X, Y|I) dY