Introduction to Bayesian Inference

Lectures 1 & 2: Fundamentals

Tom Loredo
Dept. of Astronomy, Cornell University

http://www.astro.cornell.edu/staff/loredo/bayes/

Lectures http://inference.astro.cornell.edu/INPE09/

INPE — 14 September 2009

1 / 69


Lectures 1 & 2: Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

2 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

3 / 69


Scientific Method

Science is more than a body of knowledge; it is a way of thinking. The method of science, as stodgy and grumpy as it may seem, is far more important than the findings of science.

— Carl Sagan

Scientists argue!

Argument ≡ Collection of statements comprising an act of reasoning from premises to a conclusion

A key goal of science: Explain or predict quantitative measurements (data!)

Data analysis: Constructing and appraising arguments that reason from data to interesting scientific conclusions (explanations, predictions)

4 / 69


The Role of Data

Data do not speak for themselves!

We don’t just tabulate data, we analyze data.

We gather data so they may speak for or against existing hypotheses, and guide the formation of new hypotheses.

A key role of data in science is to be among the premises in scientific arguments.

5 / 69


Data Analysis: Building & Appraising Arguments Using Data

[Diagram: interacting modes of data analysis]

• Data Reduction — Efficiently and accurately represent information
• Exploratory Data Analysis — Generate hypotheses; qualitative assessment
• Statistical Data Analysis — Inference, decision, design . . . ; quantify uncertainty in inferences

Statistical inference is but one of several interacting modes of analyzing data.

6 / 69


Bayesian Statistical Inference

• A different approach to all statistical inference problems (i.e., not just another method in the list: BLUE, maximum likelihood, χ² testing, ANOVA, survival analysis . . . )

• Foundation: Use probability theory to quantify the strength of arguments (i.e., a more abstract view than restricting PT to describe variability in repeated "random" experiments)

• Focuses on deriving consequences of modeling assumptions rather than devising and calibrating procedures

7 / 69


Frequentist vs. Bayesian Statements

“I find conclusion C based on data Dobs . . . ”

Frequentist assessment
"It was found with a procedure that's right 95% of the time over the set Dhyp that includes Dobs."
Probabilities are properties of procedures, not of particular results.

Bayesian assessment
"The strength of the chain of reasoning from Dobs to C is 0.95, on a scale where 1 = certainty."
Probabilities are properties of specific results. Long-run performance must be separately evaluated (and is typically good by frequentist criteria).

8 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

9 / 69


Estimating Parameters Via χ2

Collect data Dobs = {d_i, σ_i}, fit with a 2-parameter model via χ²:

  χ²(A, T) = ∑_{i=1}^{N} [d_i − f_i(A, T)]² / σ_i²

Two classes of variables

• Data (samples) d_i — Known, define the N-D sample space
• Parameters θ = (A, T) — Unknown, define the 2-D parameter space

[Figure: χ² contours in the (A, T) parameter plane]

"Best fit" θ̂ = arg min_θ χ²(θ)

Report uncertainties via χ² contours, but how do we quantify uncertainty vs. contour level?
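As an illustration, here is a minimal Python sketch of this setup; the exponential model f(x; A, T), the sample points, and the noise level are hypothetical stand-ins, not from the lecture:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(42)

# Hypothetical two-parameter model: amplitude A, timescale T.
def f(x, A, T):
    return A * np.exp(-x / T)

# Simulated data {d_i, sigma_i} standing in for D_obs.
x = np.linspace(0.5, 8.0, 20)
sigma = np.full_like(x, 0.1)
d = f(x, 2.0, 3.0) + rng.normal(0.0, sigma)

# chi^2(A, T) = sum_i [d_i - f_i(A, T)]^2 / sigma_i^2
def chi2(theta):
    A, T = theta
    return np.sum((d - f(x, A, T)) ** 2 / sigma ** 2)

# "Best fit": theta-hat = arg min chi^2(theta)
fit = minimize(chi2, x0=[1.0, 1.0], method="Nelder-Mead")
print("best-fit (A, T):", fit.x, " chi2_min:", fit.fun)
```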

10 / 69


Frequentist: Parametric Bootstrap

[Figure: parametric bootstrap schematic — from Dobs, find θ̂(Dobs); simulate datasets Dsim from it; refit each to get θ̂(Dsim); histogram the resulting ∆χ² values, f(∆χ²), and read off the 95% level]

11 / 69


Parametric Bootstrap Algorithm

Monte Carlo algorithm for finding approximate confidence regions:

1. Pretend the true parameters are θ* = θ̂(Dobs) (the "plug-in approximation")
2. Repeat N times:
   2.1 Simulate a dataset from p(Dsim|θ*) → χ²_Dsim(θ)
   2.2 Find the minimum-χ² estimate θ̂(Dsim)
   2.3 Calculate ∆χ² = χ²_Dsim(θ*) − χ²_Dsim(θ̂(Dsim))
3. Histogram the ∆χ² values to find coverage vs. ∆χ² (fraction of simulations with smaller ∆χ²)

The result is approximate even for N → ∞ because θ* ≠ θ_true (the plug-in approximation).
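A sketch of this algorithm in Python, reusing the hypothetical model from the earlier sketch (the number of simulations and all model details are illustrative):

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

x = np.linspace(0.5, 8.0, 20)
sigma = np.full_like(x, 0.1)

def f(x, A, T):                       # same hypothetical model as above
    return A * np.exp(-x / T)

def chi2(theta, d):
    A, T = theta
    return np.sum((d - f(x, A, T)) ** 2 / sigma ** 2)

def best_fit(d):
    return minimize(chi2, x0=[1.0, 1.0], args=(d,), method="Nelder-Mead").x

# Step 1: plug-in approximation, theta* = theta-hat(D_obs)
d_obs = f(x, 2.0, 3.0) + rng.normal(0.0, sigma)
theta_star = best_fit(d_obs)

# Step 2: simulate, refit, and record Delta-chi^2 for each simulated dataset
dchi2 = []
for _ in range(2000):
    d_sim = f(x, *theta_star) + rng.normal(0.0, sigma)
    theta_sim = best_fit(d_sim)
    dchi2.append(chi2(theta_star, d_sim) - chi2(theta_sim, d_sim))

# Step 3: coverage vs. Delta-chi^2, e.g. the level containing 95% of sims
print("95% Delta-chi^2 level:", np.quantile(dchi2, 0.95))
```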

12 / 69


Bayesian: Posterior Sampling Via MCMC

[Figure: posterior sampling schematic — given Dobs, a Markov chain started at an initial θ explores the e^{−χ²/2} contours; the sampled ∆χ² values give f(∆χ²), from which the 95% credible level is read off]

13 / 69


Posterior Sampling Algorithm

Monte Carlo algorithm for finding credible regions:

1. Create an RNG that can sample θ from p(θ|Dobs) ∝ e^{−χ²(θ)/2}
2. Draw N samples; record θ_i and q_i = χ²(θ_i)
3. Sort the samples by the q_i values
4. A credible region of probability P is the θ region spanned by the 100P% of samples with lowest q_i (i.e., highest posterior density)

Note that no dataset other than Dobs is ever considered.

The only approximation is the use of Monte Carlo.

These procedures produce different regions in general.
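A toy implementation, again with the hypothetical model from the sketches above; the Metropolis proposal width and chain length are arbitrary choices, and no burn-in or convergence checks are shown:

```python
import numpy as np

rng = np.random.default_rng(1)

x = np.linspace(0.5, 8.0, 20)
sigma = np.full_like(x, 0.1)

def f(x, A, T):                       # same hypothetical model as above
    return A * np.exp(-x / T)

d_obs = f(x, 2.0, 3.0) + rng.normal(0.0, sigma)

def chi2(theta):
    A, T = theta
    return np.sum((d_obs - f(x, A, T)) ** 2 / sigma ** 2)

# Step 1: a Metropolis sampler for p(theta|D_obs) ∝ exp(-chi^2/2), flat prior
theta = np.array([1.5, 2.5])
q = chi2(theta)
thetas, qs = [], []
for _ in range(20000):                # step 2: draw samples, record q_i
    prop = theta + rng.normal(0.0, 0.05, size=2)
    q_prop = chi2(prop)
    if rng.random() < np.exp(min(0.0, (q - q_prop) / 2.0)):
        theta, q = prop, q_prop
    thetas.append(theta.copy())
    qs.append(q)

thetas, qs = np.array(thetas), np.array(qs)

# Steps 3-4: sort by chi^2; the 95% credible region is spanned by the
# 95% of samples with lowest chi^2 (highest posterior density)
region = thetas[np.argsort(qs)[: int(0.95 * len(qs))]]
print("A in [%.3f, %.3f], T in [%.3f, %.3f]" %
      (region[:, 0].min(), region[:, 0].max(),
       region[:, 1].min(), region[:, 1].max()))
```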

14 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

15 / 69


Logic—Some Essentials

"Logic can be defined as the analysis and appraisal of arguments"

— Gensler, Intro to Logic

Build arguments with propositions and logical operators/connectives:

• Propositions: Statements that may be true or false

P : Universe can be modeled with ΛCDM

A : Ωtot ∈ [0.9, 1.1]

B : ΩΛ is not 0

B̄ : "not B," i.e., ΩΛ = 0

• Connectives:

A ∧ B : A and B are both true

A ∨ B : A or B is true, or both are

16 / 69


Arguments

Argument: Assertion that a hypothesized conclusion, H, follows from premises, P = A, B, C, . . . (take "," = "and")

Notation:

H|P : Premises P imply H

H may be deduced from P

H follows from P

H is true given that P is true

Arguments are (compound) propositions.

Central role of arguments → special terminology for true/false:

• A true argument is valid

• A false argument is invalid or fallacious

17 / 69


Valid vs. Sound Arguments

Content vs. form

• An argument is factually correct iff all of its premises are true (it has "good content").

• An argument is valid iff its conclusion follows from its premises (it has "good form").

• An argument is sound iff it is both factually correct and valid (it has good form and content).

Deductive logic and probability theory address validity.

We want to make sound arguments. There is no formal approach for addressing factual correctness → there is always a subjective element to an argument.

18 / 69


Factual Correctness

Passing the buck
Although logic can teach us something about validity and invalidity, it can teach us very little about factual correctness. The question of the truth or falsity of individual statements is primarily the subject matter of the sciences.

— Hardegree, Symbolic Logic

An open issue
To test the truth or falsehood of premisses is the task of science. . . . But as a matter of fact we are interested in, and must often depend upon, the correctness of arguments whose premisses are not known to be true.

— Copi, Introduction to Logic

19 / 69


Premises

• Facts — Things known to be true, e.g., observed data

• "Obvious" assumptions — Axioms, postulates, e.g., Euclid's first 4 postulates (line segment between 2 points; congruency of right angles . . . )

• "Reasonable" or "working" assumptions — E.g., Euclid's fifth postulate (parallel lines)

• Desperate presumption!

• Conclusions from other arguments

20 / 69


Deductive and Inductive Inference

Deduction—Syllogism as prototype
  Premise 1: A implies H
  Premise 2: A is true
  Deduction: ∴ H is true
  H|P is valid

Induction—Analogy as prototype
  Premise 1: A, B, C, D, E all share properties x, y, z
  Premise 2: F has properties x, y
  Induction: F has property z
  "F has z"|P is not strictly valid, but may still be rational (likely, plausible, probable); some such arguments are stronger than others

Boolean algebra (and/or/not over {0, 1}) quantifies deduction.

Bayesian probability theory (and/or/not over [0, 1]) generalizes this to quantify the strength of inductive arguments.

21 / 69


Deductive Logic

Assess arguments by decomposing them into parts via connectives,and assessing the parts

Validity of A ∧ B|P:

          A|P       Ā|P
  B|P     valid     invalid
  B̄|P     invalid   invalid

Validity of A ∨ B|P:

          A|P       Ā|P
  B|P     valid     valid
  B̄|P     valid     invalid

22 / 69


Representing Deduction With {0, 1} Algebra

V(H|P) ≡ Validity of argument H|P:

  V = 0 → Argument is invalid
  V = 1 → Argument is valid

Then deduction can be reduced to integer multiplication and addition over {0, 1} (as in a computer):

  V(A ∧ B|P) = V(A|P) V(B|P)

  V(A ∨ B|P) = V(A|P) + V(B|P) − V(A ∧ B|P)

  V(Ā|P) = 1 − V(A|P)
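A few lines of Python confirming that this {0, 1} algebra reproduces the truth tables above:

```python
# Validity algebra over {0, 1}: multiplication and addition recover
# the AND/OR/NOT truth tables.
def v_and(a, b): return a * b
def v_or(a, b):  return a + b - a * b
def v_not(a):    return 1 - a

for a in (1, 0):
    for b in (1, 0):
        print(f"V(A)={a} V(B)={b}:  "
              f"V(A∧B)={v_and(a, b)}  V(A∨B)={v_or(a, b)}  V(Ā)={v_not(a)}")
```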

23 / 69


Representing Induction With [0, 1] Algebra

P(H|P) ≡ strength of argument H|P:

  P = 0 → Argument is invalid; premises imply H̄
  P = 1 → Argument is valid
  P ∈ (0, 1) → Degree of deducibility

Mathematical model for induction:

'AND' (product rule): P(A ∧ B|P) = P(A|P) P(B|A ∧ P) = P(B|P) P(A|B ∧ P)

'OR' (sum rule): P(A ∨ B|P) = P(A|P) + P(B|P) − P(A ∧ B|P)

'NOT': P(Ā|P) = 1 − P(A|P)

24 / 69


The Product Rule

We simply promoted the V algebra to real numbers; the only thing changed is part of the product rule:

  V(A ∧ B|P) = V(A|P) V(B|P)

  P(A ∧ B|P) = P(A|P) P(B|A, P)

Suppose A implies B (i.e., B|A, P is valid). Then we don't expect P(A ∧ B|P) to differ from P(A|P).

In particular, P(A ∧ A|P) must equal P(A|P)!

Such qualitative reasoning satisfied early probabilists that the sum and product rules were worth considering as axioms for a theory of quantified induction.

25 / 69


Firm Foundations

Today many different lines of argument derive induction-as-probability from various simple and appealing requirements:

• Consistency with logic + internal consistency (Cox; Jaynes)

• “Coherence”/optimal betting (Ramsey; DeFinetti; Wald; Savage)

• Algorithmic information theory (Rissanen; Wallace & Freeman)

• Optimal information processing (Zellner)

• Avoiding problems with frequentist methods:

• Avoiding recognizable subsets (Cornfield)

• Avoiding stopping rule problems → likelihood principle (Birnbaum; Berger & Wolpert)

26 / 69


Interpreting Bayesian Probabilities

If we like there is no harm in saying that a probability expresses a degree of reasonable belief. . . . 'Degree of confirmation' has been used by Carnap, and possibly avoids some confusion. But whatever verbal expression we use to try to convey the primitive idea, this expression cannot amount to a definition. Essentially the notion can only be described by reference to instances where it is used. It is intended to express a kind of relation between data and consequence that habitually arises in science and in everyday life, and the reader should be able to recognize the relation from examples of the circumstances when it arises.

— Sir Harold Jeffreys, Scientific Inference

27 / 69


More On Interpretation

Physics uses words drawn from ordinary language—mass, weight, momentum, force, temperature, heat, etc.—but their technical meaning is more abstract than their colloquial meaning. We can map between the colloquial and abstract meanings associated with specific values by using specific instances as "calibrators."

A Thermal Analogy

Intuitive notion    Quantification     Calibration
Hot, cold           Temperature, T     Cold as ice = 273 K
                                       Boiling hot = 373 K
Uncertainty         Probability, P     Certainty = 0, 1
                                       p = 1/36: plausible as "snake's eyes"
                                       p = 1/1024: plausible as 10 heads

28 / 69


A Bit More On Interpretation

Bayesian
Probability quantifies uncertainty in an inductive inference. p(x) describes how probability is distributed over the possible values x might have taken in the single case before us:

[Figure: PDF p vs. x — p is distributed; x has a single, uncertain value]

Frequentist
Probabilities are always (limiting) rates/proportions/frequencies in an ensemble. p(x) describes variability, how the values of x are distributed among the cases in the ensemble:

[Figure: histogram of x over the ensemble — x is distributed]

29 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

30 / 69


Arguments Relating Hypotheses, Data, and Models

We seek to appraise scientific hypotheses in light of observed data and modeling assumptions.

Consider the data and modeling assumptions to be the premises of an argument with each of various hypotheses, Hi, as conclusions: Hi|Dobs, I. (I = "background information," everything deemed relevant besides the observed data)

P(Hi|Dobs, I) measures the degree to which (Dobs, I) allow one to deduce Hi. It provides an ordering among arguments for various Hi that share common premises.

Probability theory tells us how to analyze and appraise the argument, i.e., how to calculate P(Hi|Dobs, I) from simpler, hopefully more accessible probabilities.

31 / 69


The Bayesian Recipe

Assess hypotheses by calculating their probabilities p(Hi| . . .) conditional on known and/or presumed information using the rules of probability theory.

Probability Theory Axioms:

'OR' (sum rule): P(H1 ∨ H2|I) = P(H1|I) + P(H2|I) − P(H1, H2|I)

'AND' (product rule): P(H1, D|I) = P(H1|I) P(D|H1, I) = P(D|I) P(H1|D, I)

'NOT': P(H̄1|I) = 1 − P(H1|I)

32 / 69


Three Important Theorems

Bayes's Theorem (BT)

Consider P(Hi, Dobs|I) using the product rule:

  P(Hi, Dobs|I) = P(Hi|I) P(Dobs|Hi, I)
                = P(Dobs|I) P(Hi|Dobs, I)

Solve for the posterior probability:

  P(Hi|Dobs, I) = P(Hi|I) P(Dobs|Hi, I) / P(Dobs|I)

The theorem holds for any propositions, but for hypotheses & data the factors have names:

  posterior ∝ prior × likelihood

  normalization constant P(Dobs|I) = prior predictive

33 / 69


Law of Total Probability (LTP)

Consider exclusive, exhaustive {Bi} (I asserts one of them must be true):

  ∑_i P(A, Bi|I) = ∑_i P(Bi|A, I) P(A|I) = P(A|I)
                 = ∑_i P(Bi|I) P(A|Bi, I)

If we do not see how to get P(A|I) directly, we can find a set {Bi} and use it as a "basis"—extend the conversation:

  P(A|I) = ∑_i P(Bi|I) P(A|Bi, I)

If our problem already has {Bi} in it, we can use LTP to get P(A|I) from the joint probabilities—marginalization:

  P(A|I) = ∑_i P(A, Bi|I)

34 / 69


Example: Take A = Dobs, Bi = Hi; then

  P(Dobs|I) = ∑_i P(Dobs, Hi|I)
            = ∑_i P(Hi|I) P(Dobs|Hi, I)

prior predictive for Dobs = Average likelihood for Hi (a.k.a. marginal likelihood)

Normalization
For exclusive, exhaustive Hi,

  ∑_i P(Hi| · · · ) = 1
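A small numerical illustration of all three theorems on a discrete hypothesis space (the prior and likelihood numbers are invented for the example):

```python
import numpy as np

# Three exclusive, exhaustive hypotheses; all numbers are made up.
prior = np.array([0.5, 0.3, 0.2])        # P(H_i|I)
likelihood = np.array([0.1, 0.4, 0.7])   # P(D_obs|H_i, I)

# LTP: prior predictive = sum_i P(H_i|I) P(D_obs|H_i, I)
prior_predictive = np.sum(prior * likelihood)

# BT: posterior = prior x likelihood / prior predictive
posterior = prior * likelihood / prior_predictive

print("P(D_obs|I)      =", prior_predictive)
print("P(H_i|D_obs, I) =", posterior)
print("normalization   =", posterior.sum())   # sums to 1
```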

35 / 69


Well-Posed Problems

The rules express desired probabilities in terms of other probabilities.

To get a numerical value out, at some point we have to put numerical values in.

Direct probabilities are probabilities with numerical values determined directly by premises (via modeling assumptions, symmetry arguments, previous calculations, desperate presumption . . . ).

An inference problem is well posed only if all the needed probabilities are assignable based on the premises. We may need to add new assumptions as we see what needs to be assigned. We may not be entirely comfortable with what we need to assume! (Remember Euclid's fifth postulate!)

Should explore how results depend on uncomfortable assumptions ("robustness").

36 / 69


Recap: Bayesian inference is more than BT

Bayesian inference quantifies uncertainty by reporting probabilities for things we are uncertain of, given specified premises.

It uses all of probability theory, not just (or even primarily) Bayes's theorem.

The Rules in Plain English

• Ground rule: Specify premises that include everything relevant that you know or are willing to presume to be true (for the sake of the argument!).

• BT: To adjust your appraisal when new evidence becomes available, add the evidence to your initial premises.

• LTP: If the premises allow multiple arguments for a hypothesis, its appraisal must account for all of them.

37 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

38 / 69


Inference With Parametric Models

Models Mi (i = 1 to N), each with parameters θi, each imply a sampling dist'n (conditional predictive dist'n for possible data):

  p(D|θi, Mi)

The θi dependence when we fix attention on the observed data is the likelihood function:

  Li(θi) ≡ p(Dobs|θi, Mi)

We may be uncertain about i (model uncertainty) or θi (parameter uncertainty).

Henceforth we will only consider the actually observed data, so we drop the cumbersome subscript: D = Dobs.

39 / 69


Three Classes of Problems

Parameter Estimation
  Premise = choice of model (pick a specific i)
  → What can we say about θi?

Model Assessment
  • Model comparison: Premise = {Mi} → What can we say about i?
  • Model adequacy/GoF: Premise = M1 ∨ "all" alternatives → Is M1 adequate?

Model Averaging
  Models share some common params: θi = {φ, ηi}
  → What can we say about φ w/o committing to one model? (Examples: systematic error, prediction)

40 / 69


Parameter Estimation

Problem statement
  I = Model M with parameters θ (+ any add'l info)
  Hi = statements about θ; e.g., "θ ∈ [2.5, 3.5]," or "θ > 0"

Probability for any such statement can be found using a probability density function (PDF) for θ:

  P(θ ∈ [θ, θ + dθ]| · · · ) = f(θ) dθ = p(θ| · · · ) dθ

Posterior probability density:

  p(θ|D, M) = p(θ|M) L(θ) / ∫ dθ p(θ|M) L(θ)

41 / 69


Summaries of posterior

• "Best fit" values:
  • Mode, θ̂, maximizes p(θ|D, M)
  • Posterior mean, ⟨θ⟩ = ∫ dθ θ p(θ|D, M)

• Uncertainties:
  • Credible region ∆ of probability C:
      C = P(θ ∈ ∆|D, M) = ∫_∆ dθ p(θ|D, M)
    A Highest Posterior Density (HPD) region has p(θ|D, M) higher inside than outside
  • Posterior standard deviation, variance, covariances

• Marginal distributions:
  • Interesting parameters φ, nuisance parameters η
  • Marginal dist'n for φ: p(φ|D, M) = ∫ dη p(φ, η|D, M)

42 / 69


Nuisance Parameters and Marginalization

To model most data, we need to introduce parameters besides those of ultimate interest: nuisance parameters.

Example
We have data from measuring a rate r = s + b that is a sum of an interesting signal s and a background b.

We have additional data just about b.

What do the data tell us about s?

43 / 69


Marginal posterior distribution

  p(s|D, M) = ∫ db p(s, b|D, M)
            ∝ p(s|M) ∫ db p(b|s) L(s, b)
            ≡ p(s|M) Lm(s)

with Lm(s) the marginal likelihood for s. For a broad prior,

  Lm(s) ≈ p(b̂_s|s) L(s, b̂_s) δb_s

where b̂_s is the best b given s, and δb_s is the b uncertainty given s.

The profile likelihood Lp(s) ≡ L(s, b̂_s) gets weighted by a parameter space volume factor.

E.g., Gaussians: ŝ = r̂ − b̂, σ_s² = σ_r² + σ_b²

Background subtraction is a special case of background marginalization.

44 / 69


Model Comparison

Problem statement
  I = (M1 ∨ M2 ∨ . . .) — Specify a set of models.
  Hi = Mi — Hypothesis chooses a model.

Posterior probability for a model:

  p(Mi|D, I) = p(Mi|I) p(D|Mi, I) / p(D|I)
             ∝ p(Mi|I) L(Mi)

But L(Mi) = p(D|Mi) = ∫ dθi p(θi|Mi) p(D|θi, Mi).

Likelihood for model = Average likelihood for its parameters:

  L(Mi) = ⟨L(θi)⟩

Varied terminology: Prior predictive = Average likelihood = Global likelihood = Marginal likelihood = (Weight of) Evidence for model

45 / 69


Odds and Bayes factors

A ratio of probabilities for two propositions using the same premises is called the odds favoring one over the other:

  Oij ≡ p(Mi|D, I) / p(Mj|D, I)
      = [p(Mi|I) / p(Mj|I)] × [p(D|Mi, I) / p(D|Mj, I)]

The data-dependent part is called the Bayes factor:

  Bij ≡ p(D|Mi, I) / p(D|Mj, I)

It is a likelihood ratio; the BF terminology is usually reserved for cases when the likelihoods are marginal/average likelihoods.

46 / 69


An Automatic Occam’s Razor

Predictive probabilities can favor simpler models:

  p(D|Mi) = ∫ dθi p(θi|M) L(θi)

[Figure: P(D|H) vs. D — a simple H concentrates its predictive probability, a complicated H spreads it thin; Dobs marked]

47 / 69


The Occam Factor

[Figure: prior p(θ), of width ∆θ, and likelihood L(θ), of width δθ, vs. θ]

  p(D|Mi) = ∫ dθi p(θi|M) L(θi) ≈ p(θ̂i|M) L(θ̂i) δθi

          ≈ L(θ̂i) δθi/∆θi

          = Maximum Likelihood × Occam Factor

Models with more parameters often make the data more probable — for the best fit.

The Occam factor penalizes models for "wasted" volume of parameter space.

It quantifies the intuition that models shouldn't require fine-tuning.
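A numerical sketch of this approximation, assuming a Gaussian likelihood of width δθ under a flat prior of width ∆θ (all numbers invented):

```python
import numpy as np

# Flat prior of width Delta; Gaussian likelihood of width delta << Delta.
theta = np.linspace(0.0, 10.0, 100001)
Delta = theta[-1] - theta[0]
theta_hat, delta, L_max = 4.0, 0.2, 1.0   # made-up peak location/width/height

L = L_max * np.exp(-0.5 * ((theta - theta_hat) / delta) ** 2)
prior = np.full_like(theta, 1.0 / Delta)  # p(theta|M) = 1/Delta

# p(D|M) = integral dtheta p(theta|M) L(theta), by simple quadrature
dtheta = theta[1] - theta[0]
marginal = np.sum(prior * L) * dtheta

print("p(D|M)                =", marginal)
print("L_max x Occam factor  =", L_max * np.sqrt(2 * np.pi) * delta / Delta)
```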

48 / 69


Model Averaging

Problem statement
  I = (M1 ∨ M2 ∨ . . .) — Specify a set of models
  Models all share a set of "interesting" parameters, φ
  Each has a different set of nuisance parameters ηi (or different prior info about them)
  Hi = statements about φ

Model averaging
Calculate the posterior PDF for φ:

  p(φ|D, I) = ∑_i p(Mi|D, I) p(φ|D, Mi)
            ∝ ∑_i L(Mi) ∫ dηi p(φ, ηi|D, Mi)

The model choice is a (discrete) nuisance parameter here.

49 / 69


Theme: Parameter Space Volume

Bayesian calculations sum/integrate over parameter/hypothesis space!

(Frequentist calculations average over sample space & typically optimize over parameter space.)

• Marginalization weights the profile likelihood by a volume factor for the nuisance parameters.

• Model likelihoods have Occam factors resulting from parameter space volume factors.

Many virtues of Bayesian methods can be attributed to this accounting for the "size" of parameter space. This idea does not arise naturally in frequentist statistics (but it can be added "by hand").

50 / 69


Roles of the Prior

Prior has two roles

• Incorporate any relevant prior information
• Convert likelihood from "intensity" to "measure" → accounts for size of hypothesis space

Physical analogy

  Heat:        Q = ∫ dV c_v(r) T(r)
  Probability: P ∝ ∫ dθ p(θ|I) L(θ)

Maximum likelihood focuses on the "hottest" hypotheses. Bayes focuses on the hypotheses with the most "heat."

A high-T region may contain little heat if its c_v is low or if its volume is small. A high-L region may contain little probability if its prior is low or if its volume is small.

51 / 69


Recap of Key Ideas

• Probability as generalized logic for appraising arguments

• Three theorems: BT, LTP, Normalization

• Calculations characterized by parameter space integrals:
  • Credible regions, posterior expectations
  • Marginalization over nuisance parameters
  • Occam's razor via marginal likelihoods
• Never integrate/average over hypothetical data

52 / 69


Bayesian Fundamentals

1 The big picture

2 A flavor of Bayes: χ2 confidence/credible regions

3 Foundations: Logic & probability theory

4 Probability theory for data analysis: Three theorems

5 Inference with parametric models
   Parameter Estimation
   Model Uncertainty

6 Simple examples
   Binary Outcomes
   Normal Distribution
   Poisson Distribution

53 / 69


Binary Outcomes: Parameter Estimation

M = Existence of two outcomes, S and F; for each case or trial, the probability for S is α; for F it is (1 − α)

Hi = Statements about α, the probability for success on the next trial → seek p(α|D, M)

D = Sequence of results from N observed trials:

  FFSSSSFSSSFS (n = 8 successes in N = 12 trials)

Likelihood:

  p(D|α, M) = p(failure|α, M) × p(failure|α, M) × p(success|α, M) × · · ·
            = α^n (1 − α)^(N−n)
            = L(α)

54 / 69


Prior
Starting with no information about α beyond its definition, use as an "uninformative" prior p(α|M) = 1. Justifications:

• Intuition: Don't prefer any α interval to any other of the same size
• Bayes's justification: "Ignorance" means that before doing the N trials, we have no preference for how many will be successes:

  P(n successes|M) = 1 / (N + 1)  →  p(α|M) = 1

Consider this a convention—an assumption added to M to make the problem well posed.

55 / 69


Prior Predictive

  p(D|M) = ∫ dα α^n (1 − α)^(N−n)
         = B(n + 1, N − n + 1) = n!(N − n)! / (N + 1)!

A Beta integral: B(a, b) ≡ ∫ dx x^(a−1) (1 − x)^(b−1) = Γ(a)Γ(b) / Γ(a + b).

56 / 69


Posterior

  p(α|D, M) = (N + 1)! / (n!(N − n)!) α^n (1 − α)^(N−n)

A Beta distribution. Summaries:

• Best-fit: α̂ = n/N = 2/3; ⟨α⟩ = (n + 1)/(N + 2) ≈ 0.64
• Uncertainty: σ_α = √[(n + 1)(N − n + 1) / ((N + 2)²(N + 3))] ≈ 0.12
• Find credible regions numerically, or with the incomplete beta function

Note that the posterior depends on the data only through n, not the N binary numbers describing the sequence. n is a (minimal) sufficient statistic.
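These summaries can be reproduced with scipy.stats.beta for the n = 8, N = 12 data; the equal-tailed 95% region shown is one convenient choice, not the HPD region:

```python
from scipy import stats

n, N = 8, 12
post = stats.beta(n + 1, N - n + 1)    # flat-prior posterior for alpha

print("mode :", n / N)                 # 2/3
print("mean :", post.mean())           # (n+1)/(N+2) ≈ 0.64
print("sigma:", post.std())            # ≈ 0.12
# Equal-tailed 95% credible region via the inverse incomplete beta function
print("95% credible region:", post.ppf([0.025, 0.975]))
```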

57 / 69


58 / 69


Binary Outcomes: Model Comparison — Equal Probabilities?

  M1: α = 1/2
  M2: α ∈ [0, 1] with flat prior

Maximum Likelihoods:

  M1: p(D|M1) = 1/2^N = 2.44 × 10⁻⁴

  M2: L(α̂) = (2/3)^n (1/3)^(N−n) = 4.82 × 10⁻⁴

  p(D|M1) / p(D|α̂, M2) = 0.51

Maximum likelihoods favor M2 (unequal outcome probabilities fit the data better).

59 / 69


Bayes Factor (ratio of model likelihoods):

  p(D|M1) = 1/2^N   and   p(D|M2) = n!(N − n)! / (N + 1)!

  → B12 ≡ p(D|M1) / p(D|M2) = (N + 1)! / (n!(N − n)! 2^N) = 1.57

The Bayes factor (odds) favors M1 (equiprobable).

Note that for n = 6, B12 = 2.93; for this small amount of data, we can never be very sure the results are equiprobable.

If n = 0, B12 ≈ 1/315; if n = 2, B12 ≈ 1/4.8; for extreme data, 12 flips can be enough to lead us to strongly suspect the outcomes have different probabilities.

(Frequentist significance tests can reject the null for any sample size.)
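A quick check of these Bayes factors in Python, directly from the two marginal likelihoods above:

```python
from math import comb

N, n = 12, 8
pD_M1 = 0.5 ** N                        # p(D|M1) = 1/2^N
pD_M2 = 1.0 / ((N + 1) * comb(N, n))    # p(D|M2) = n!(N-n)!/(N+1)!
print("B12 =", pD_M1 / pD_M2)           # ≈ 1.57, mildly favoring M1

for n_alt in (6, 2, 0):                 # the other cases quoted above
    B12 = 0.5 ** N * (N + 1) * comb(N, n_alt)
    print(f"n = {n_alt:2d}: B12 = {B12:.3g}")
```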

60 / 69


Binary Outcomes: Binomial Distribution

Suppose D = n (the number of heads in N trials), rather than the actual sequence. What is p(α|n, M)?

Likelihood
Let S = a sequence of flips with n heads.

  p(n|α, M) = ∑_S p(S|α, M) p(n|S, α, M)
            = α^n (1 − α)^(N−n) C_{n,N}   [p(n|S, α, M) = 1 when S has n successes]

C_{n,N} = number of sequences of length N with n heads.

  → p(n|α, M) = N! / (n!(N − n)!) α^n (1 − α)^(N−n)

The binomial distribution for n given α, N.

61 / 69


Posterior

  p(α|n, M) = [N! / (n!(N − n)!)] α^n (1 − α)^(N−n) / p(n|M)

  p(n|M) = N! / (n!(N − n)!) ∫ dα α^n (1 − α)^(N−n) = 1 / (N + 1)

  → p(α|n, M) = (N + 1)! / (n!(N − n)!) α^n (1 − α)^(N−n)

Same result as when the data specified the actual sequence.

62 / 69


Another Variation: Negative Binomial

Suppose D = N, the number of trials it took to obtain a predefined number of successes, n = 8. What is p(α|N, M)?

Likelihood
p(N|α, M) is the probability for n − 1 successes in N − 1 trials, times the probability that the final trial is a success:

  p(N|α, M) = (N − 1)! / ((n − 1)!(N − n)!) α^(n−1) (1 − α)^(N−n) × α

            = (N − 1)! / ((n − 1)!(N − n)!) α^n (1 − α)^(N−n)

The negative binomial distribution for N given α, n.

63 / 69


Posterior

  p(α|D, M) = C′_{n,N} α^n (1 − α)^(N−n) / p(D|M)

  p(D|M) = C′_{n,N} ∫ dα α^n (1 − α)^(N−n)

  → p(α|D, M) = (N + 1)! / (n!(N − n)!) α^n (1 − α)^(N−n)

Same result as the other cases.

64 / 69


Final Variation: “Meteorological Stopping”

Suppose D = (N, n), the number of samples and number of successes in an observing run whose total number was determined by the weather at the telescope. What is p(α|D, M′)?

(M′ adds info about the weather to M.)

Likelihood
p(D|α, M′) is the binomial distribution times the probability that the weather allowed N samples, W(N):

  p(D|α, M′) = W(N) N! / (n!(N − n)!) α^n (1 − α)^(N−n)

Let C_{n,N} = W(N) (N choose n). We get the same result as before!
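A grid-based check that the three stopping rules give the same posterior for α; the weather factor W(N) = 0.37 is an arbitrary made-up constant, since any α-independent factor cancels on normalization:

```python
import numpy as np
from math import comb

# alpha grid; kernel alpha^n (1-alpha)^(N-n) with n = 8, N = 12
alpha = np.linspace(1e-6, 1 - 1e-6, 100001)
kernel = alpha ** 8 * (1 - alpha) ** 4

def posterior(likelihood):
    return likelihood / np.sum(likelihood)        # normalize on the grid

binom = posterior(comb(12, 8) * kernel)           # binomial stopping
negbin = posterior(comb(11, 7) * kernel)          # negative binomial stopping
weather = posterior(0.37 * comb(12, 8) * kernel)  # W(N) = 0.37, made up

print(np.allclose(binom, negbin), np.allclose(binom, weather))  # True True
```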

65 / 69


Likelihood Principle

To define L(Hi) = p(Dobs|Hi, I), we must contemplate what other data we might have obtained. But the "real" sample space may be determined by many complicated, seemingly irrelevant factors; it may not be well specified at all. Should this concern us?

Likelihood principle: The results of inferences depend only on how p(Dobs|Hi, I) varies w.r.t. hypotheses. We can ignore aspects of the observing/sampling procedure that do not affect this dependence.

This is a sensible property that frequentist methods do not share. Frequentist probabilities are "long run" rates of performance, and depend on details of the sample space that are irrelevant in a Bayesian calculation.

66 / 69


Goodness-of-fit Violates the Likelihood Principle

Theory (H0)
The number of "A" stars in a cluster should be 0.1 of the total.

Observations
5 A stars found out of 96 total stars observed.

Theorist's analysis
Calculate χ² using nA = 9.6 and nX = 86.4.
The significance level is p(> χ²|H0) = 0.12 (or 0.07 using the more rigorous binomial tail area). The theory is accepted.

67 / 69


Observer's analysis
The actual observing plan was to keep observing until 5 A stars were seen!
The "random" quantity is Ntot, not nA; it should follow the negative binomial dist'n. Expect Ntot = 50 ± 21.
p(> χ²|H0) = 0.03. The theory is rejected.

Telescope technician's analysis
A storm was coming in, so the observations would have ended whether 5 A stars had been seen or not. The proper ensemble should take into account p(storm) . . .

Bayesian analysis
The Bayes factor is the same for binomial or negative binomial likelihoods, and slightly favors H0. Include p(storm) if you want—it will drop out!
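A sketch reproducing the two tail areas with scipy.stats and exhibiting the α-independent constants by which the two likelihoods differ (exact p-values depend on the tail convention; the slides quote ≈ 0.07 and 0.03):

```python
from math import comb
from scipy import stats

nA, Ntot, p0 = 5, 96, 0.1

# Theorist: binomial tail area, P(n_A <= 5 | N = 96, p0)
p_binom = stats.binom.cdf(nA, Ntot, p0)            # ≈ 0.07

# Observer: negative binomial tail, P(N_tot >= 96 | n_A = 5, p0);
# scipy's nbinom counts failures before the nA-th success
p_negbin = stats.nbinom.sf(Ntot - nA - 1, nA, p0)  # ≈ 0.03

print(f"binomial tail: {p_binom:.3f}  negative binomial tail: {p_negbin:.3f}")

# The two likelihoods differ only by alpha-independent constants,
# so posteriors and Bayes factors are unchanged by the stopping rule:
print("binomial constant:         ", comb(Ntot, nA))
print("negative binomial constant:", comb(Ntot - 1, nA - 1))
```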

68 / 69


Continued in Lecture 2. . .

69 / 69