Transcript of: Discussion of Fearnhead and Prangle, RSS, Dec. 14, 2011
Discussion seminar, CREST, Nov. 3, 2011

Page 1

Semi-automatic ABC: A Discussion

Christian P. Robert
Université Paris-Dauphine, IUF, & CREST
http://xianblog.wordpress.com

November 2, 2011
LaTeX code borrowed from arXiv:1004.1112v2

Page 2

Approximate Bayesian computation (recap)

Summary statistic selection

Page 3

Regular Bayesian computation issues

When faced with a non-standard posterior distribution

π(θ|y) ∝ π(θ)L(θ|y)

the standard solution is to use simulation (Monte Carlo) to produce a sample

θ1, . . . , θT

from π(θ|y) (or approximately by Markov chain Monte Carlo methods)

[Robert & Casella, 2004]

Page 4

Intractable likelihoods

Cases when the likelihood function f(y|θ) is unavailable and when the completion step

f(y|θ) = ∫Z f(y, z|θ) dz

is impossible or too costly because of the dimension of z

© MCMC cannot be implemented!


Page 6

The ABC method

Bayesian setting: target is π(θ)f(x|θ)

When likelihood f(x|θ) not in closed form, likelihood-free rejection technique:

ABC algorithm

For an observation y ∼ f(y|θ), under the prior π(θ), keep jointly simulating

θ′ ∼ π(θ) , z ∼ f(z|θ′) ,

until the auxiliary variable z is equal to the observed value, z = y.

[Rubin, 1984; Tavaré et al., 1997]
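To make the rejection principle concrete, here is a minimal sketch in Python for a toy discrete model (Poisson likelihood, exponential prior, both my choices, not from the slides), where exact matching z = y has positive probability:

```python
# Minimal sketch of the exact-match ABC sampler for a toy discrete model:
# theta ~ Exp(1), y | theta ~ Poisson(theta). Toy choices for illustration.
import numpy as np

rng = np.random.default_rng(0)
y_obs = 3  # observed count

def abc_exact(n_samples):
    """Keep theta' ~ pi whenever z ~ f(.|theta') matches y_obs exactly."""
    accepted = []
    while len(accepted) < n_samples:
        theta = rng.exponential(1.0)   # theta' ~ pi(theta)
        z = rng.poisson(theta)         # z ~ f(z|theta')
        if z == y_obs:                 # accept iff z = y
            accepted.append(theta)
    return np.array(accepted)

sample = abc_exact(1_000)              # exact draws from pi(theta | y_obs)
print(sample.mean())                   # compare with Gamma(4, 2) mean = 2
```

In this conjugate toy case the posterior is available (Gamma(4, 2)), which is what makes the sanity check in the last comment possible.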


Page 9

A as approximative

When y is a continuous random variable, equality z = y is replaced with a tolerance condition,

ρ(y, z) ≤ ε

where ρ is a distance

Output distributed from

π(θ) Pθ{ρ(y, z) < ε} ∝ π(θ | ρ(y, z) < ε)


Page 11

ABC algorithm

Algorithm 1 Likelihood-free rejection sampler

for i = 1 to N do
  repeat
    generate θ′ from the prior distribution π(·)
    generate z from the likelihood f(·|θ′)
  until ρ{η(z), η(y)} ≤ ε
  set θi = θ′
end for

where η(y) defines a (generally insufficient) statistic
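Algorithm 1 transcribes directly; a sketch for an assumed Gaussian toy model, with η, ρ, and ε as placeholder choices rather than anything prescribed by the slides:

```python
# Sketch of Algorithm 1 (likelihood-free rejection sampler).
# The model, the statistic eta, the distance rho and the tolerance eps
# are placeholder choices for a Gaussian toy example.
import numpy as np

rng = np.random.default_rng(1)
y_obs = rng.normal(0.5, 1.0, size=100)            # stand-in observed data

prior_rvs = lambda: rng.normal(0.0, 10.0)                  # pi(.)
likelihood_rvs = lambda th: rng.normal(th, 1.0, size=100)  # f(.|theta)
eta = lambda x: np.array([x.mean(), x.std()])     # (generally insufficient)
rho = lambda a, b: np.linalg.norm(a - b)          # distance on summaries
eps = 0.2

def abc_reject(n):
    thetas = np.empty(n)
    for i in range(n):                 # for i = 1 to N do
        while True:                    # repeat
            theta = prior_rvs()        #   generate theta' from pi(.)
            z = likelihood_rvs(theta)  #   generate z from f(.|theta')
            if rho(eta(z), eta(y_obs)) <= eps:
                break                  # until rho{eta(z), eta(y)} <= eps
        thetas[i] = theta              # set theta_i = theta'
    return thetas
```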

Page 12

Output

The likelihood-free algorithm samples from the marginal in z of:

πε(θ, z|y) = π(θ) f(z|θ) IAε,y(z) / ∫Aε,y×Θ π(θ) f(z|θ) dz dθ ,

where Aε,y = {z ∈ D : ρ(η(z), η(y)) < ε}.

The idea behind ABC is that the summary statistics coupled with a small tolerance should provide a good approximation of the posterior distribution:

πε(θ|y) = ∫ πε(θ, z|y) dz ≈ π(θ|η(y)) .

[Not guaranteed!]


Page 14

Approximate Bayesian computation (recap)

Summary statistic selection
- F&P's setting
- Noisy ABC
- Optimal summary statistic

Page 15

F&P's ABC

Use of a summary statistic S(·), an importance proposal g(·), a kernel K(·) ≤ 1 and a bandwidth h > 0 such that

(θ,ysim) ∼ g(θ)f(ysim|θ)

is accepted with probability (hence the bound)

K[{S(ysim)− sobs}/h]

[or is it K[{S(ysim) − sobs}]/h, cf. (2)? typo] and the corresponding importance weight defined by

π(θ)/g(θ)
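As a sketch of the accept/weight mechanics (the toy Gaussian model, the Gaussian kernel and all constants below are my assumptions, not F&P's choices):

```python
# Sketch of F&P's scheme: draw (theta, y_sim) ~ g(theta) f(y_sim|theta),
# accept with probability K[{S(y_sim) - s_obs}/h] (K <= 1 makes this a
# valid probability), and carry the importance weight pi(theta)/g(theta).
# Gaussian toy model, Gaussian kernel and all constants are assumptions.
import numpy as np

rng = np.random.default_rng(2)
s_obs, h = 0.4, 0.05

K = lambda u: np.exp(-0.5 * u ** 2)            # kernel bounded by 1
pi_pdf = lambda t: np.exp(-0.5 * t ** 2) / np.sqrt(2 * np.pi)  # prior N(0,1)
g_rvs = lambda: rng.normal(0.0, 2.0)                           # proposal g
g_pdf = lambda t: np.exp(-t ** 2 / 8.0) / np.sqrt(8 * np.pi)   # its density
S = lambda y: y.mean()                         # summary statistic

draws, weights = [], []
for _ in range(20_000):
    theta = g_rvs()
    y_sim = rng.normal(theta, 1.0, size=50)
    if rng.uniform() < K((S(y_sim) - s_obs) / h):   # accept w.p. K[...]
        draws.append(theta)
        weights.append(pi_pdf(theta) / g_pdf(theta))

post_mean = np.average(draws, weights=weights)      # weighted ABC estimate
```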

Page 16

Errors, errors, and errors

Three levels of approximation

- π(θ|yobs) by π(θ|sobs) [loss of information]

- π(θ|sobs) by

  πABC(θ|sobs) = ∫ π(s) K[{s − sobs}/h] π(θ|s) ds / ∫ π(s) K[{s − sobs}/h] ds

  [noisy observations]

- πABC(θ|sobs) by importance Monte Carlo based on N simulations, represented by var(a(θ)|sobs)/Nacc [expected number of acceptances]

Page 17

Average acceptance asymptotics

For the average acceptance probability/approximate likelihood

p(θ|sobs) = ∫ f(ysim|θ) K[{S(ysim) − sobs}/h] dysim ,

overall acceptance probability

p(sobs) = ∫ p(θ|sobs) π(θ) dθ = π(sobs) h^d + o(h^d)

[Lemma 1]
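Lemma 1's rate can be eyeballed numerically: with a uniform kernel, the overall acceptance probability should scale like h^d. A sketch with a two-dimensional summary (the Gaussian toy model and all constants are my choices):

```python
# Numerical sanity check of the h^d rate in Lemma 1, with a uniform kernel
# K(u) = 1{||u|| <= 1} and a two-dimensional summary (d = 2).
# The Gaussian toy model and all constants are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
s_obs = np.array([0.0, 1.0])

def acceptance_rate(h, n=200_000):
    theta = rng.normal(0.0, 1.0, size=n)                   # theta ~ pi
    y = rng.normal(theta[:, None], 1.0, size=(n, 30))      # y_sim ~ f(.|theta)
    s = np.stack([y.mean(axis=1), y.std(axis=1)], axis=1)  # S(y_sim)
    return np.mean(np.linalg.norm(s - s_obs, axis=1) <= h)

for h in (0.4, 0.2, 0.1):
    # halving h should divide the acceptance rate by about 2^d = 4
    print(h, acceptance_rate(h))
```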

Page 18

Optimal importance proposal

Best choice of importance proposal in terms of effective sample size

g*(θ|sobs) ∝ π(θ) p(θ|sobs)^{1/2}

[Not particularly useful in practice]

Page 19

Calibration of h

“This result gives insight into how S(·) and h affect the Monte Carlo error. To minimize Monte Carlo error, we need h^d to be not too small. Thus ideally we want S(·) to be a low dimensional summary of the data that is sufficiently informative about θ that π(θ|sobs) is close, in some sense, to π(θ|yobs)” (p. 5)

- Constraint on h only addresses one term in the approximation error and acceptance probability

- h large prevents π(θ|sobs) from being close to πABC(θ|sobs)

- d small prevents π(θ|sobs) from being close to π(θ|yobs)

Page 20

Calibrated ABC

Definition

For 0 < q < 1 and subset A, fix [one specific? / all?] event Eq(A) with PrABC(θ ∈ Eq(A)|sobs) = q. Then ABC is calibrated if

Pr(θ ∈ A|Eq(A)) = q

Why calibrated and not exact?

Page 21

Calibrated ABC

Theorem

Noisy ABC, where

sobs = S(yobs) + hε , ε ∼ K(·)

is calibrated

[Wilkinson, 2008]

No condition on h
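Operationally, noisy ABC changes a single line relative to Algorithm 1: jitter the observed summary once before running the sampler. A sketch (the Gaussian kernel and toy summary are my choices):

```python
# Sketch of the noisy-ABC twist: replace S(y_obs) by S(y_obs) + h*eps,
# eps ~ K(.), then run the standard rejection sampler against the jittered
# summary. Gaussian kernel and the toy summary are assumptions.
import numpy as np

rng = np.random.default_rng(4)
h = 0.05
S = lambda y: np.array([y.mean(), y.std()])

y_obs = rng.normal(0.5, 1.0, size=100)
eps = rng.normal(size=2)              # eps ~ K(.), here a Gaussian kernel
s_obs = S(y_obs) + h * eps            # the calibrated (noisy) target summary
# ... then proceed exactly as in the Algorithm 1 sketch, matching s_obs ...
```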

Page 22

Calibrated ABC

Theorem

For noisy ABC, the expected noisy-ABC log-likelihood,

E{log[p(θ|sobs)]} = ∫∫ log[p(θ|S(yobs) + hε)] π(yobs|θ0) K(ε) dyobs dε

has its maximum at θ = θ0.

[Last line of proof contains a typo]

True for any choice of summary statistic?
[Imposes at least identifiability...]

Relevant in asymptotia and not for the data

Page 23

Calibrated ABC

Corollary

For noisy ABC, the ABC posterior converges to a point mass on the true parameter value as m → ∞.

For standard ABC, not always the case (unless h goes to zero).

Strength of regularity conditions (c1) and (c2) in Bernardo & Smith, 1994?

[constraints on posterior]

Some condition on the summary statistic?

Page 24

Loss motivated statistic

Under a quadratic loss function,

Theorem

(i) The minimal posterior error E[L(θ, θ̂)|yobs] occurs when θ̂ = E(θ|yobs) (!)

(ii) When h → 0, EABC(θ|sobs) converges to E(θ|yobs)

(iii) If S(yobs) = E[θ|yobs] then for θ̂ = EABC[θ|sobs]

E[L(θ, θ̂)|yobs] = trace(AΣ) + h^2 ∫ x^T A x K(x) dx + o(h^2) .
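The notation is left implicit on the slide; in F&P's setup, as far as I recall it, A is a given positive semi-definite matrix defining the quadratic loss and Σ is the posterior covariance matrix:

```latex
L(\theta, \hat\theta) = (\theta - \hat\theta)^{\mathrm{T}} A\, (\theta - \hat\theta),
\qquad
\Sigma = \operatorname{Var}(\theta \mid y_{\mathrm{obs}}).
```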

Measure-theoretic difficulties?
The dependence of sobs on h makes me uncomfortable
Relevant for the choice of K?

Page 25

Optimal summary statistic

“We take a different approach, and weaken the requirement for πABC to be a good approximation to π(θ|yobs). We argue for πABC to be a good approximation solely in terms of the accuracy of certain estimates of the parameters.” (p. 5)

From this result, F&P derive their choice of summary statistic,

S(y) = E(θ|y)

[almost sufficient: EABC[θ|S(yobs)] = E[θ|yobs]] and suggest

h = O(N^{−1/(2+d)}) and h = O(N^{−1/(4+d)})

as optimal bandwidths for noisy and standard ABC.


Page 27

Caveat

Since E(θ|yobs) is most often unavailable, F&P suggest

(i) use a pilot run of ABC to determine a region of non-negligible posterior mass;

(ii) simulate sets of parameter values and data;

(iii) use the simulated sets of parameter values and data to estimate the summary statistic; and

(iv) run ABC with this choice of summary statistic.

Page 28

Approximating the summary statistic

As in Beaumont et al. (2002) and Blum and François (2010), F&P use a linear regression to approximate E(θ|yobs), one parameter component at a time:

θi = β0^(i) + β^(i) f(y) + εi

with the regression fitted on the simulated (θ, y) pairs.
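Putting steps (i)-(iv) and the regression together, a condensed sketch (toy Gaussian model; the pilot stage is collapsed into a fixed box, and the regressors in f are my choices):

```python
# Condensed sketch of the semi-automatic pipeline, steps (i)-(iv), with the
# regression construction of S(y) ~ E(theta|y). Toy Gaussian model; the
# pilot ABC stage is collapsed into a fixed box [lo, hi] for brevity.
import numpy as np

rng = np.random.default_rng(5)
y_obs = rng.normal(0.5, 1.0, size=100)
f = lambda x: np.array([x.mean(), x.std(), np.abs(x).mean()])  # regressors

# (i) pilot ABC run, here summarised by a region of non-negligible mass
lo, hi = -1.0, 2.0

# (ii) simulate parameter values and data inside the pilot region
thetas = rng.uniform(lo, hi, size=5_000)
sims = np.array([f(rng.normal(t, 1.0, size=100)) for t in thetas])

# (iii) estimate E(theta|y) by least squares: theta ~ beta0 + beta . f(y)
X = np.column_stack([np.ones(len(sims)), sims])
beta, *_ = np.linalg.lstsq(X, thetas, rcond=None)
S = lambda x: beta[0] + f(x) @ beta[1:]        # fitted summary statistic

# (iv) run standard ABC with the fitted (scalar) summary
s_obs, eps, kept = S(y_obs), 0.05, []
while len(kept) < 1_000:
    t = rng.uniform(lo, hi)
    if abs(S(rng.normal(t, 1.0, size=100)) - s_obs) <= eps:
        kept.append(t)
```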

Page 29

Applications

The paper’s second half covers:

- g-and-k distribution

- stochastic kinetic biochemical networks

- Lotka-Volterra model

- Ricker map ecological model

- M/G/1 queue

- tuberculosis bacteria genotype data

Page 30

Questions

- dependence on h and S(·) in the early stage

- reduction of Bayesian inference to point estimation

- approximation error in step (iii) not accounted for

- not parameterisation invariant

- practice shows that proper approximation to genuine posterior distributions stems from using a (much) larger number of summary statistics than the dimension of the parameter

- the validity of the approximation to the optimal summary statistic depends on the quality of the pilot run

- important inferential issues like model choice are not covered by this approach