Sequential Monte Carlo: An Introduction

Arnaud Doucet
Departments of Statistics & Computer Science
University of British Columbia


Generic Problem

Consider a sequence of probability distributions $\{\pi_n\}_{n \geq 1}$ defined on a sequence of (measurable) spaces $\{(E_n, \mathcal{F}_n)\}_{n \geq 1}$, where $E_1 = E$, $\mathcal{F}_1 = \mathcal{F}$ and $E_n = E_{n-1} \times E$, $\mathcal{F}_n = \mathcal{F}_{n-1} \times \mathcal{F}$.

Each distribution $\pi_n(dx_{1:n}) = \pi_n(x_{1:n})\, dx_{1:n}$ is known up to a normalizing constant, i.e.

$$\pi_n(x_{1:n}) = \frac{\gamma_n(x_{1:n})}{Z_n}.$$

We want to estimate expectations of test functions $\varphi_n : E_n \to \mathbb{R}$,

$$\mathbb{E}_{\pi_n}(\varphi_n) = \int \varphi_n(x_{1:n})\, \pi_n(dx_{1:n}),$$

and/or the normalizing constants $Z_n$.

We want to do this sequentially; i.e. first $\pi_1$ and/or $Z_1$ at time 1, then $\pi_2$ and/or $Z_2$ at time 2, and so on.


We could use standard MCMC to sample from $\{\pi_n\}_{n \geq 1}$, but it is slow and it does not provide an estimate of $\{Z_n\}_{n \geq 1}$.

SMC is a non-iterative class of algorithms, an alternative to MCMC.

Key idea: if $\pi_{n-1}$ does not differ too much from $\pi_n$, then we should be able to reuse our estimate of $\pi_{n-1}$ to approximate $\pi_n$.


Applications

Optimal estimation in non-linear non-Gaussian dynamic models.

Bayesian inference for complex statistical models.

Global optimization.

Counting problems.

Rare event simulation.


State-Space Models

$\{X_n\}_{n \geq 1}$ is a latent/hidden Markov process with

$$X_1 \sim \mu(\cdot) \quad \text{and} \quad X_n \mid (X_{n-1} = x) \sim f(\cdot \mid x).$$

$\{Y_n\}_{n \geq 1}$ is an observation process such that the observations are conditionally independent given $\{X_n\}_{n \geq 1}$ and

$$Y_n \mid (X_n = x) \sim g(\cdot \mid x).$$

This is a very wide class of statistical models, also known as hidden Markov models, with thousands of applications.


Examples

Linear Gaussian state-space model:

$$X_1 \sim \mathcal{N}(m_1, \Sigma_1), \quad X_n = A X_{n-1} + B V_n, \quad Y_n = C X_n + D W_n,$$

where $V_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \Sigma_v)$ and $W_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \Sigma_w)$.

Stochastic volatility model:

$$X_1 \sim \mathcal{N}\!\left(0, \frac{\sigma^2}{1 - \alpha^2}\right), \quad X_n = \alpha X_{n-1} + V_n, \quad Y_n = \beta \exp(X_n / 2)\, W_n,$$

where $|\alpha| < 1$, $V_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, \sigma^2)$ and $W_n \overset{\text{i.i.d.}}{\sim} \mathcal{N}(0, 1)$.
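To make the generative structure concrete, here is a minimal simulation sketch of the stochastic volatility model; the parameter values and function names are illustrative assumptions, not values from the lecture.

```python
# Minimal simulation sketch of the stochastic volatility model above.
# The parameter values (alpha, sigma, beta) are illustrative assumptions.
import numpy as np

def simulate_sv(T, alpha=0.91, sigma=1.0, beta=0.5, rng=None):
    """Simulate (x_{1:T}, y_{1:T}) from the stochastic volatility model."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.empty(T)
    # X_1 ~ N(0, sigma^2 / (1 - alpha^2)), the stationary distribution.
    x[0] = rng.normal(0.0, sigma / np.sqrt(1.0 - alpha**2))
    for n in range(1, T):
        x[n] = alpha * x[n - 1] + rng.normal(0.0, sigma)       # X_n = alpha X_{n-1} + V_n
    y = beta * np.exp(x / 2.0) * rng.normal(0.0, 1.0, size=T)  # Y_n = beta exp(X_n/2) W_n
    return x, y

x, y = simulate_sv(T=500, rng=np.random.default_rng(0))
```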


Inference in State-Space Models

At time $n$, we have access to the observations $y_{1:n}$ and are interested in computing

$$p(x_{1:n} \mid y_{1:n}) = \frac{p(x_{1:n}, y_{1:n})}{p(y_{1:n})}$$

and the (marginal) likelihood $p(y_{1:n})$, where

$$p(x_{1:n}, y_{1:n}) = \mu(x_1) \prod_{k=2}^{n} f(x_k \mid x_{k-1}) \prod_{k=1}^{n} g(y_k \mid x_k),$$

$$p(y_{1:n}) = \int \cdots \int p(x_{1:n}, y_{1:n})\, dx_{1:n}.$$

In our SMC framework,

$$\pi_n(x_{1:n}) = p(x_{1:n} \mid y_{1:n}), \quad \gamma_n(x_{1:n}) = p(x_{1:n}, y_{1:n}), \quad Z_n = p(y_{1:n}).$$


The Kalman Filter

For linear Gaussian models, all posteriors are Gaussian and we can compute the likelihood exactly.

The marginal distributions $\{p(x_n \mid y_{1:n})\}_{n \geq 1}$ and $\{p(y_n \mid y_{1:n-1})\}_{n \geq 1}$ can be computed through the celebrated Kalman filter.

To obtain an estimate of the joint distribution, we have

$$p(x_{1:n} \mid y_{1:n}) = p(x_n \mid y_{1:n}) \prod_{k=1}^{n-1} p(x_k \mid y_{1:n}, x_{k+1}) = p(x_n \mid y_{1:n}) \prod_{k=1}^{n-1} p(x_k \mid y_{1:k}, x_{k+1}),$$

where the second equality uses the Markov property (given $x_{k+1}$, $x_k$ is independent of $y_{k+1:n}$) and

$$p(x_k \mid y_{1:k}, x_{k+1}) = \frac{f(x_{k+1} \mid x_k)\, p(x_k \mid y_{1:k})}{p(x_{k+1} \mid y_{1:k})}.$$
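As an illustration of these recursions, here is a minimal sketch of the Kalman filter in the scalar case; the scalar restriction and the parameter names are simplifying assumptions.

```python
# A minimal sketch of the Kalman filter for the scalar linear Gaussian model
#   X_1 ~ N(m1, P1),  X_n = a X_{n-1} + V_n,  Y_n = c X_n + W_n,
# with V_n ~ N(0, q) and W_n ~ N(0, r). It returns the filtering moments of
# p(x_n | y_{1:n}) = N(m_n, P_n) and the log-likelihood log p(y_{1:n}).
import numpy as np

def kalman_filter(y, a, c, q, r, m1, P1):
    m, P = m1, P1
    means, variances, loglik = [], [], 0.0
    for n, yn in enumerate(y):
        if n > 0:                        # predict: p(x_n | y_{1:n-1})
            m, P = a * m, a * a * P + q
        S = c * c * P + r                # variance of p(y_n | y_{1:n-1})
        loglik += -0.5 * (np.log(2 * np.pi * S) + (yn - c * m) ** 2 / S)
        K = P * c / S                    # Kalman gain
        m, P = m + K * (yn - c * m), (1 - K * c) * P   # update: p(x_n | y_{1:n})
        means.append(m); variances.append(P)
    return np.array(means), np.array(variances), loglik
```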


Nonlinear Non-Gaussian Models

For nonlinear non-Gaussian models, there is no closed-form expression for these posteriors or for the likelihood.

Standard approaches rely on functional approximations: the EKF, the UKF, Gaussian quadrature, mixtures of Gaussians.

These functional approximations can be seriously unreliable and are not widely applicable.


Quantum Monte Carlo

Finding the largest eigenvalue and eigenmeasure of a positive operator

Let $K : E \times E \to \mathbb{R}^+$ be a positive kernel.

Find the largest eigenvalue $\lambda$ ($\lambda > 0$) and the associated eigenmeasure $\mu$ (with $\int \mu(dx) = 1$) of $K$:

$$\int \mu(x)\, K(y \mid x)\, dx = \lambda\, \mu(y).$$

Basic Idea: the good old power method.


Power method: let $A$ be a $p \times p$ matrix with $p$ linearly independent eigenvectors $\{V_i\}$ associated to eigenvalues $\{\lambda_i\}$ such that $|\lambda_1| > |\lambda_2| > \cdots > |\lambda_p|$. Writing

$$U_1 = \sum_{i=1}^{p} \alpha_i V_i, \quad \ldots, \quad U_n = A^{n-1} U_1 = \sum_{i=1}^{p} \alpha_i \lambda_i^{n-1} V_i,$$

we have

$$\frac{U_n}{\lambda_1^{n-1}} = \alpha_1 V_1 + \sum_{i=2}^{p} \alpha_i \left(\frac{\lambda_i}{\lambda_1}\right)^{n-1} V_i \to \alpha_1 V_1 \quad \text{and} \quad \frac{U_n^{\mathsf{T}} Y}{U_{n-1}^{\mathsf{T}} Y} \to \lambda_1$$

for a fixed test vector $Y$ (provided $\alpha_1 V_1^{\mathsf{T}} Y \neq 0$).
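A minimal numerical sketch of this convergence; the matrix (an arbitrary symmetric positive definite one, so that the eigenvalues are real) and the test vector are illustrative assumptions.

```python
# A minimal sketch of the power method: iterating U_n = A U_{n-1} and
# normalizing recovers the dominant eigenpair; the ratio (U_n^T Y)/(U_{n-1}^T Y)
# converges to lambda_1.
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(5, 5))
A = A @ A.T + 5 * np.eye(5)              # symmetric positive definite: real eigenvalues
Y = rng.normal(size=5)                   # fixed test vector

U = rng.normal(size=5)
for _ in range(200):
    U_next = A @ U
    ratio = (U_next @ Y) / (U @ Y)       # -> lambda_1
    U = U_next / np.linalg.norm(U_next)  # normalize to avoid overflow

print(ratio, np.linalg.eigvalsh(A).max())  # the two numbers should agree
```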


Consider the following artificial sequence of distributions defined through

$$\gamma_n(x_{1:n}) = \nu(x_1) \prod_{k=2}^{n} K(x_k \mid x_{k-1}).$$

As $n$ increases, we have

$$\gamma_n(x_n) = \int \gamma_n(x_{1:n})\, dx_{1:n-1} \propto \lambda^{n-1} \mu(x_n),$$

and

$$\pi_n(x_n) \to \mu(x_n) \quad \text{and} \quad \frac{Z_{n+1}}{Z_n} \to \lambda.$$

SMC methods are widely used to solve this problem.


Self-Avoiding Random Walk (SAW)

A 2D self-avoiding walk (SAW): a polymer of size $n$ is characterized by a sequence $x_{1:n}$ on a finite lattice such that $x_i \neq x_j$ for $i \neq j$.

One is interested in the uniform distribution

$$\pi_n(x_{1:n}) = Z_n^{-1}\, \mathbf{1}_{D_n}(x_{1:n}),$$

where

$$D_n = \left\{x_{1:n} \in E^n : x_{k+1} \text{ is a lattice neighbour of } x_k \text{ and } x_k \neq x_i \text{ for } k \neq i\right\}, \quad Z_n = \operatorname{card}(D_n).$$

SMC allows us to simulate from the uniform distribution over SAWs of length $n$ and to compute their number.
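A minimal sketch of the classical sequential importance sampling (Rosenbluth-style) scheme for this problem: each walk is grown one site at a time, choosing uniformly among the unvisited lattice neighbours, and the product of the numbers of free neighbours is an unbiased estimate of $Z_n$. Resampling, which full SMC would add, is deliberately omitted here.

```python
# Sequential importance sampling for 2D SAWs (Rosenbluth scheme).
import random

def rosenbluth_saw(n, rng=random.Random(0)):
    """Grow one SAW with n sites; return its weight (0 if the walk got stuck)."""
    path = {(0, 0)}
    x, y, w = 0, 0, 1.0
    for _ in range(n - 1):
        free = [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
                if (x + dx, y + dy) not in path]
        if not free:
            return 0.0                   # walk trapped: zero weight
        w *= len(free)                   # importance weight update
        x, y = rng.choice(free)
        path.add((x, y))
    return w

N, n = 100_000, 10
Z_hat = sum(rosenbluth_saw(n) for _ in range(N)) / N
print(Z_hat)   # estimates the number of SAWs with n sites (n - 1 steps)
```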


Particle Motion in Random Medium

A Markovian particle $\{X_n\}_{n \geq 1}$ evolves in a random medium:

$$X_1 \sim \mu(\cdot), \quad X_{n+1} \mid (X_n = x) \sim f(\cdot \mid x).$$

At time $n$, its probability of being killed is $1 - g(X_n)$, where $0 \leq g(x) \leq 1$ for any $x \in E$.

One wants to approximate $\Pr(T > n)$, where $T$ is the random time at which the particle is killed.


One has

$$\Pr(T > n) = \mathbb{E}_{\mu}\left[\text{prob. of not being killed by time } n \text{ given } X_{1:n}\right] = \int \cdots \int \mu(x_1) \prod_{k=2}^{n} f(x_k \mid x_{k-1}) \underbrace{\prod_{k=1}^{n} g(x_k)}_{\text{probability of surviving to time } n} dx_{1:n}.$$

Consider

$$\gamma_n(x_{1:n}) = \mu(x_1) \prod_{k=2}^{n} f(x_k \mid x_{k-1}) \prod_{k=1}^{n} g(x_k), \quad \pi_n(x_{1:n}) = \frac{\gamma_n(x_{1:n})}{Z_n}, \quad \text{where } Z_n = \Pr(T > n).$$

SMC methods can compute $Z_n$, the probability of not being killed by time $n$, and approximate the distribution of the paths that have survived to time $n$.
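A minimal SMC sketch for this survival probability: propagate $N$ particles through the Markov kernel, weight each by its survival probability $g(x)$, resample, and accumulate the average weights, whose product estimates $Z_n$. The Gaussian random walk kernel and the choice $g(x) = \exp(-x^2/2)$ are illustrative assumptions.

```python
# SMC estimate of Z_n = Pr(T > n) for the killed-particle problem.
import numpy as np

def survival_probability(n, N=10_000, rng=np.random.default_rng(0)):
    g = lambda x: np.exp(-0.5 * x**2)            # survive w.p. g(x) in [0, 1]
    x = rng.normal(size=N)                       # X_1 ~ mu = N(0, 1)
    log_Z = 0.0
    for _ in range(n):
        w = g(x)
        log_Z += np.log(w.mean())                # Z_n = product of average weights
        idx = rng.choice(N, size=N, p=w / w.sum())   # multinomial resampling
        x = x[idx] + rng.normal(size=N)          # propagate: f(. | x) = N(x, 1)
    return np.exp(log_Z)

print(survival_probability(n=10))
```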


Generic Sequence of Target Distributions

Consider the case where all the target distributions $\{\pi_n\}_{n \geq 1}$ are defined on $E_n = E$.

Examples

$\pi_n = \pi$ (e.g. Bayesian inference, rare events, etc.)

$\pi_n(x) \propto [\pi(x)]^{\gamma_n}$ where $\gamma_n \to \infty$ (global optimization)

$\pi_n(x) = p(x \mid y_{1:n})$ (sequential Bayesian estimation)

Standard SMC does not apply directly to this problem, as it requires $E_n = E^n$.


Consider a new sequence of artificial distributions $\{\tilde{\pi}_n\}_{n \geq 1}$ defined on $E_n = E^n$ such that

$$\int \tilde{\pi}_n(x_{1:n-1}, x_n)\, dx_{1:n-1} = \pi_n(x_n)$$

and apply standard SMC.

Example: $\tilde{\pi}_n(x_{1:n-1}, x_n) = \pi_n(x_n)\, \tilde{\pi}_n(x_{1:n-1} \mid x_n)$, where $\tilde{\pi}_n(x_{1:n-1} \mid x_n)$ is any conditional distribution on $E^{n-1}$.

How to design $\tilde{\pi}_n$ optimally will be discussed later.


The Need for Monte Carlo Methods

Except in trivial cases, one can compute neither $\int \varphi_n(x_{1:n})\, \pi_n(dx_{1:n})$ nor $Z_n$.

Deterministic numerical integration methods are typically inefficient in high-dimensional spaces.

Monte Carlo methods: simple and flexible.

Using Monte Carlo, it is very easy to make your intuition "rigorous".


Monte Carlo Methods

For the time being, just concentrate on estimating

$$\mathbb{E}_{\pi}[\varphi] = \int \varphi(x)\, \pi(dx),$$

where

$$\pi(x) = \frac{\gamma(x)}{Z},$$

with $\gamma$ known pointwise and $Z = \int \gamma(x)\, dx$ unknown.

Draw a large number of samples $X^{(i)} \overset{\text{i.i.d.}}{\sim} \pi$ and build the empirical measure

$$\hat{\pi}(dx) = \frac{1}{N} \sum_{i=1}^{N} \delta_{X^{(i)}}(dx).$$


Marginalization is straightforward: if $x = (x_1, \ldots, x_k)$,

$$\hat{\pi}(dx_p) = \int \hat{\pi}(dx_{1:p-1}, dx_p, dx_{p+1:k}) = \frac{1}{N} \sum_{i=1}^{N} \delta_{X_p^{(i)}}(dx_p).$$

Integration is straightforward: the Monte Carlo estimate of $\mathbb{E}_{\pi}(\varphi)$ is

$$\mathbb{E}_{\hat{\pi}}(\varphi) = \int \varphi(x)\, \hat{\pi}(dx) = \frac{1}{N} \sum_{i=1}^{N} \varphi\!\left(X^{(i)}\right).$$

Samples automatically concentrate in regions of high probability mass, whatever the dimension of the space; e.g. $E = \mathbb{R}^{10^6}$.
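A minimal sketch of both operations on a toy target (an assumed 2D Gaussian); with the empirical measure, marginalization and integration reduce to array slicing and averaging.

```python
# Plain Monte Carlo: the empirical measure of i.i.d. samples gives both
# marginals and integrals for free. The target and phi are illustrative.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
X = rng.multivariate_normal(mean=[0.0, 1.0], cov=[[1.0, 0.5], [0.5, 2.0]], size=N)

# Marginalization: keep the p-th coordinate of every sample.
x2_samples = X[:, 1]                     # draws from pi(dx_2)
print(x2_samples.mean())                 # ~= 1.0

# Integration: E_pihat(phi) = (1/N) sum phi(X^(i)).
phi = lambda x: x[:, 0] ** 2
print(phi(X).mean())                     # ~= E[X_1^2] = 1.0
```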


Basic results

$$\mathbb{E}\left[\mathbb{E}_{\hat{\pi}}(\varphi)\right] = \mathbb{E}_{\pi}(\varphi) \quad \text{(unbiased)}, \qquad \mathbb{V}\left[\mathbb{E}_{\hat{\pi}}(\varphi)\right] = \frac{1}{N}\, \mathbb{E}_{\pi}\!\left[\left(\varphi - \mathbb{E}_{\pi}(\varphi)\right)^2\right].$$

The rate of convergence to zero is INDEPENDENT of the space $E$! It breaks the curse of dimensionality... sometimes.

Central limit theorem:

$$\sqrt{N}\left(\mathbb{E}_{\hat{\pi}}(\varphi) - \mathbb{E}_{\pi}(\varphi)\right) \Rightarrow \mathcal{N}\!\left(0,\; \mathbb{E}_{\pi}\!\left[\left(\varphi - \mathbb{E}_{\pi}(\varphi)\right)^2\right]\right).$$

Problem: how do you obtain samples from an arbitrary high-dimensional distribution?

Answer: there is no general answer; typically an approximation is required.
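A quick empirical check of the dimension-free $1/\sqrt{N}$ rate, under an assumed Gaussian target with $\varphi(x) = x_1$; the dimension $d$ never enters the error of the estimator.

```python
# The standard deviation of the plain MC estimator shrinks like 1/sqrt(N),
# whatever the dimension d of the space. Target and phi are illustrative.
import numpy as np

rng = np.random.default_rng(0)
d = 50                                   # dimension of E = R^d; try any d
for N in (100, 1_000, 10_000):
    # 200 independent replications of the estimator (1/N) sum_i X_1^(i)
    reps = np.array([rng.normal(size=(N, d))[:, 0].mean() for _ in range(200)])
    print(f"N={N:6d}  std of estimator = {reps.std():.4f}  vs 1/sqrt(N) = {N ** -0.5:.4f}")
```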


Standard Monte Carlo Methods

Sampling from standard distributions (Gaussian, Gamma, Poisson, ...) can be done exactly (see articles by Germans) using the inversion method, accept/reject, etc.

Sampling approximately from non-standard, high-dimensional distributions is typically done by Markov chain Monte Carlo (e.g. Metropolis-Hastings).

Basic (bright) idea: build an ergodic Markov chain whose stationary distribution is the distribution of interest; i.e.

$$\int \pi(x)\, K(y \mid x)\, dx = \pi(y).$$

This is an iterative algorithm to sample from one distribution, not adapted to our problems.

Alternative (not that bright) idea: importance sampling $\Rightarrow$ non-iterative, and can be understood in one minute.
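A minimal random-walk Metropolis-Hastings sketch; the bimodal unnormalized target and the proposal step size are illustrative assumptions. Note that only $\gamma$, not $Z$, is needed.

```python
# Random-walk Metropolis-Hastings: the chain's stationary distribution is pi,
# known only through the unnormalized density gamma (here via log_gamma).
import numpy as np

def metropolis_hastings(log_gamma, x0, n_iters, step=1.0, rng=np.random.default_rng(0)):
    x, lg = x0, log_gamma(x0)
    chain = np.empty(n_iters)
    for i in range(n_iters):
        y = x + step * rng.normal()            # symmetric random-walk proposal
        lg_y = log_gamma(y)
        if np.log(rng.uniform()) < lg_y - lg:  # accept w.p. min(1, gamma(y)/gamma(x))
            x, lg = y, lg_y
        chain[i] = x
    return chain

log_gamma = lambda x: np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)
chain = metropolis_hastings(log_gamma, x0=0.0, n_iters=50_000)
print(chain[10_000:].mean())   # ~= 0 by symmetry of the two modes
```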


Importance Sampling

Importance Sampling (IS) identity. For any distribution $q$ such that $\pi(x) > 0 \Rightarrow q(x) > 0$,

$$\pi(x) = \frac{w(x)\, q(x)}{\int w(x)\, q(x)\, dx}, \quad \text{where } w(x) = \frac{\gamma(x)}{q(x)}.$$

$q$ is called the importance distribution and $w$ the importance weight.

$q$ can be chosen arbitrarily; in particular, easy to sample from:

$$X^{(i)} \overset{\text{i.i.d.}}{\sim} q(\cdot) \;\Rightarrow\; \hat{q}(dx) = \frac{1}{N} \sum_{i=1}^{N} \delta_{X^{(i)}}(dx).$$


Plugging this expression into the IS identity:

$$\hat{\pi}(dx) = \frac{w(x)\, \hat{q}(dx)}{\int w(x)\, \hat{q}(dx)} = \frac{N^{-1} \sum_{i=1}^{N} w\!\left(X^{(i)}\right) \delta_{X^{(i)}}(dx)}{N^{-1} \sum_{i=1}^{N} w\!\left(X^{(i)}\right)} = \sum_{i=1}^{N} W^{(i)}\, \delta_{X^{(i)}}(dx),$$

where

$$W^{(i)} \propto w\!\left(X^{(i)}\right) \quad \text{and} \quad \sum_{i=1}^{N} W^{(i)} = 1.$$

$\pi(x)$ is now approximated by a weighted sum of delta-masses $\Rightarrow$ the weights compensate for the discrepancy between $\pi$ and $q$.


Now we can approximate $\mathbb{E}_{\pi}[\varphi]$ by

$$\mathbb{E}_{\hat{\pi}}[\varphi] = \int \varphi(x)\, \hat{\pi}(dx) = \sum_{i=1}^{N} W^{(i)}\, \varphi\!\left(X^{(i)}\right).$$

Statistics for $N \gg 1$:

$$\mathbb{E}\left[\mathbb{E}_{\hat{\pi}}[\varphi]\right] = \mathbb{E}_{\pi}[\varphi] - \underbrace{N^{-1}\, \mathbb{E}_{\pi}\!\left[W(X)\left(\varphi(X) - \mathbb{E}_{\pi}[\varphi]\right)\right]}_{\text{negligible bias}},$$

$$\mathbb{V}\left[\mathbb{E}_{\hat{\pi}}[\varphi]\right] = N^{-1}\, \mathbb{E}_{\pi}\!\left[W(X)\left(\varphi(X) - \mathbb{E}_{\pi}[\varphi]\right)^2\right].$$

Estimate of the normalizing constant:

$$\hat{Z} = \int \frac{\gamma(x)}{q(x)}\, \hat{q}(dx) = \frac{1}{N} \sum_{i=1}^{N} \frac{\gamma\!\left(X^{(i)}\right)}{q\!\left(X^{(i)}\right)},$$

with

$$\mathbb{E}\big[\hat{Z}\big] = Z, \qquad \mathbb{V}\big[\hat{Z}\big] = N^{-1}\, \mathbb{E}_q\!\left[\left(\frac{\gamma(X)}{q(X)} - Z\right)^2\right].$$
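A minimal numerical check of both estimators; the unnormalized Gaussian target and the Gaussian proposal are illustrative assumptions.

```python
# Importance sampling: self-normalized weights for E_pi[phi] and the
# unbiased normalizing-constant estimate Z-hat.
import numpy as np

rng = np.random.default_rng(0)
N = 100_000
# Target: gamma(x) = exp(-x^2/2), so Z = sqrt(2*pi) and pi = N(0, 1).
gamma = lambda x: np.exp(-0.5 * x**2)
# Proposal q = N(0, 2^2), heavier-tailed than pi so weights stay bounded.
X = rng.normal(0.0, 2.0, size=N)
q = lambda x: np.exp(-0.5 * (x / 2.0) ** 2) / (2.0 * np.sqrt(2 * np.pi))

w = gamma(X) / q(X)                  # unnormalized importance weights
W = w / w.sum()                      # self-normalized weights W^(i)
phi = lambda x: x**2

print((W * phi(X)).sum())            # ~= E_pi[phi] = 1
print(w.mean(), np.sqrt(2 * np.pi))  # Z-hat vs true Z
```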


For a given $\varphi$, the importance distribution minimizing $\mathbb{V}[\mathbb{E}_{\hat{\pi}}[\varphi]]$ is

$$q_{\text{opt}}(x) = \frac{|\varphi(x) - \mathbb{E}_{\pi}[\varphi]|\, \pi(x)}{\int |\varphi(x) - \mathbb{E}_{\pi}[\varphi]|\, \pi(x)\, dx}.$$

This is useless, as sampling from $q_{\text{opt}}$ is as complex as solving the original problem.

In the applications we are interested in, there is typically no specific $\varphi$ of interest.

Practical recommendations

Select $q$ as close to $\pi$ as possible.

Ensure that the weights are bounded:

$$w(x) = \frac{\pi(x)}{q(x)} < \infty.$$

IS methods are typically used for problems of limited dimension, say $E = \mathbb{R}^{25}$; for more complex problems, MCMC methods are favoured.
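A small illustration of the boundedness recommendation; the specific proposal scales (1.5 and 0.7) are illustrative assumptions. An over-dispersed proposal keeps the weights bounded, while an under-dispersed one gives weights that blow up in the tails (here the weight variance is in fact infinite).

```python
# Bounded vs unbounded importance weights for a standard Gaussian target.
import numpy as np

rng = np.random.default_rng(0)
gamma = lambda x: np.exp(-0.5 * x**2)          # unnormalized N(0, 1) target

def weight_stats(scale, N=200_000):
    X = rng.normal(0.0, scale, size=N)         # proposal q = N(0, scale^2)
    q = np.exp(-0.5 * (X / scale) ** 2) / (scale * np.sqrt(2 * np.pi))
    w = gamma(X) / q
    return w.std() / w.mean()                  # relative weight spread

print(weight_stats(1.5))   # bounded weights: modest spread
print(weight_stats(0.7))   # unbounded weights: huge, unstable spread
```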
