
Page 1:

Approximating the Likelihood for the Hyper-parameters in Gaussian Process Regression

Advisor: Professor Radford Neal

Chunyi Wang
Department of Statistics, University of Toronto

Graduate Student Research Day, April 28th, 2011

Page 2:

Gaussian Process Regression: Model

We observe $n$ training cases $(x_1, y_1), \ldots, (x_n, y_n)$, where $x_i$ is a vector of inputs of length $p$ and $y_i$ is the corresponding scalar response, which we assume is a function of the inputs plus some noise:

$$y_i = f(x_i) + \varepsilon_i, \qquad \varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2)$$

In a Gaussian Process Regression model, the prior mean of the function $f$ is 0, and the covariance of the response is

$$\mathrm{Cov}(y_i, y_j) = k(x_i, x_j) + \sigma^2 \delta_{ij}$$

Page 3:

Gaussian Process Regression: Covariance Function

Any covariance function that leads to non-negative definite covariance matrices is allowed, such as the squared exponential:

$$k(x_i, x_j) = \eta^2 \exp\left(-\beta^2 \|x_i - x_j\|^2\right)$$

$\eta$ and $\beta$ are unknown hyper-parameters that are estimated from the data.
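As a concrete illustration (ours, not part of the slides), a minimal Python/NumPy sketch of building the covariance matrix of the responses for one-dimensional inputs under this covariance function; the helper name `sq_exp_cov` is our own:

```python
import numpy as np

def sq_exp_cov(x, eta, beta, sigma):
    """Covariance matrix of the responses for 1-D inputs x:
    Cov(y_i, y_j) = eta^2 exp(-beta^2 (x_i - x_j)^2) + sigma^2 delta_ij."""
    d2 = (x[:, None] - x[None, :]) ** 2          # pairwise squared distances
    return eta**2 * np.exp(-beta**2 * d2) + sigma**2 * np.eye(len(x))
```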

Illustration of GP data with different hyper-parameter values:

[Figure: three panels of simulated GP data ($x$ vs. $y$), with $\eta=5, \beta=0.5, \sigma=0.5$; $\eta=5, \beta=2, \sigma=0.5$; and $\eta=0.5, \beta=2, \sigma=0.5$.]
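Data like these panels can be simulated by one draw from a multivariate normal with the covariance matrix above; a short sketch (ours), reusing the `sq_exp_cov` helper defined earlier:

```python
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(0, 5, 100))                  # inputs on [0, 5] as in the panels
C = sq_exp_cov(x, eta=5.0, beta=0.5, sigma=0.5)      # first panel's hyper-parameters
y = rng.multivariate_normal(np.zeros(len(x)), C)     # one draw of the responses
```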

Page 4:

Gaussian Process Regression: Prediction

We wish to predict the response $y^*$ for a test case $x^*$ based on the training cases. The predictive distribution for the response $y^*$ is Gaussian:

$$E[y^* \mid y] = k^T C^{-1} y$$
$$\mathrm{Var}[y^* \mid y] = v - k^T C^{-1} k$$

where $C$ is the covariance matrix for the training responses, $k$ is the vector of covariances between $y^*$ and each of the $y_i$, and $v$ is the prior variance of $y^*$ [i.e. $\mathrm{Cov}(y^*, y^*)$].

To do this in the Bayesian framework, we obtain a random sample from the posterior density for the hyper-parameters $\theta$:

$$\pi(\theta \mid y) \propto (2\pi)^{-n/2} \det(C)^{-1/2} \exp\left(-\tfrac{1}{2}\, y^T C^{-1} y\right) \pi(\theta)$$

where π(θ) is the prior for θ.
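A minimal sketch (ours, not the slides') of the prediction formulas for 1-D inputs, reusing `sq_exp_cov` and solving with a Cholesky factor rather than forming $C^{-1}$ explicitly:

```python
def gp_predict(x_train, y_train, x_test, eta, beta, sigma):
    """Predictive mean E[y*|y] = k^T C^{-1} y and variance v - k^T C^{-1} k."""
    C = sq_exp_cov(x_train, eta, beta, sigma)
    k = eta**2 * np.exp(-beta**2 * (x_test[:, None] - x_train[None, :])**2)
    L = np.linalg.cholesky(C)                                  # C = L L^T
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y_train))  # C^{-1} y
    mean = k @ alpha
    v = eta**2 + sigma**2                     # prior variance of y*
    w = np.linalg.solve(L, k.T)               # column sums of w^2 give k^T C^{-1} k
    var = v - np.sum(w**2, axis=0)
    return mean, var
```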

Page 5:

Complexity for the GP Regression Model

The posterior density is

$$\pi(\theta \mid y) \propto (2\pi)^{-n/2} \det(C)^{-1/2} \exp\left(-\tfrac{1}{2}\, y^T C^{-1} y\right) \pi(\theta)$$

The time needed to perform each of the following major computations is (asymptotically, up to an implementation-specific constant factor):

$C$: $pn^2$
$\det(C)$: $n^3$
$C^{-1}$: $n^3$
$y^T C^{-1} y$: $n^2$

In practice we compute $C$ ($pn^2$) and the Cholesky decomposition of $C$ ($n^3$); then we can cheaply obtain $\det(C)$ and $y^T C^{-1} y$.
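A sketch of this Cholesky-based evaluation (our code; `gp_log_lik` is a name we chose), where one $O(n^3)$ factorization yields both the determinant and the quadratic form:

```python
def gp_log_lik(y, C):
    """log[(2*pi)^{-n/2} det(C)^{-1/2} exp(-0.5 y^T C^{-1} y)] via one Cholesky."""
    n = len(y)
    L = np.linalg.cholesky(C)                 # O(n^3)
    half_logdet = np.sum(np.log(np.diag(L)))  # since log det(C) = 2 sum_i log L_ii
    z = np.linalg.solve(L, y)                 # y^T C^{-1} y = ||z||^2, O(n^2)
    return -0.5 * n * np.log(2 * np.pi) - half_logdet - 0.5 * z @ z
```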

Page 6:

Markov Chain Monte Carlo Methods

We construct an ergodic Markov chain with transition $T(x' \mid x)$ which leaves the target distribution $\pi(x)$ invariant, i.e.

$$\int \pi(x)\, T(x' \mid x)\, dx = \pi(x')$$

Metropolis algorithm: propose to move from $x$ to $x^*$ (according to a proposal distribution $S(x^* \mid x)$), and accept the proposal with probability $\min[1, \pi(x^*)/\pi(x)]$. This satisfies the detailed balance condition

$$\pi(x)\, T(x' \mid x) = \pi(x')\, T(x \mid x')$$

and thus the chain (called reversible) will leave the target distribution $\pi$ invariant.
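A minimal random-walk Metropolis sketch (ours), assuming a symmetric Gaussian proposal and a log target density:

```python
def metropolis(log_pi, x0, n_steps, step_size=0.5, rng=None):
    """Random-walk Metropolis: x* ~ N(x, step_size^2 I), a symmetric proposal,
    accepted with probability min[1, pi(x*)/pi(x)]."""
    if rng is None:
        rng = np.random.default_rng()
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    lp = log_pi(x)
    samples = []
    for _ in range(n_steps):
        x_star = x + step_size * rng.standard_normal(x.shape)
        lp_star = log_pi(x_star)
        if np.log(rng.uniform()) < lp_star - lp:   # accept w.p. min[1, pi*/pi]
            x, lp = x_star, lp_star
        samples.append(x.copy())
    return np.array(samples)
```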

Page 7:

MCMC with Temporary Mapping

We can combine three stochastic mappings $\hat{T}$, $\bar{T}$, and $\check{T}$ to form the transition $T(x' \mid x)$, as follows:

$$x \xrightarrow{\;\hat{T}\;} y \xrightarrow{\;\bar{T}\;} y' \xrightarrow{\;\check{T}\;} x'$$

where $x \in \mathcal{X}$, the original sample space, and $y \in \mathcal{Y}$, a temporary space. To leave the target distribution $\pi$ invariant, these mappings have to satisfy, for some density $\rho$ on $\mathcal{Y}$,

$$\int \pi(x)\, \hat{T}(y \mid x)\, dx = \rho(y)$$
$$\int \rho(y)\, \bar{T}(y' \mid y)\, dy = \rho(y')$$
$$\int \rho(y')\, \check{T}(x' \mid y')\, dy' = \pi(x')$$

Page 8:

Mapping to a Discretizing Chain

Suppose we have a Markov chain which leaves a distribution $\pi^*$ invariant. We can map to a space of realizations of such a chain. The current state $x$ is mapped to a chain with one time step (whose value is $x$) 'marked'.

[Figure: the state $x$ in the original space $\mathcal{X}$ is mapped by $\hat{T}$ to a marked point $y$ on a realization of the chain in $\mathcal{Y}$.]

We don't actually compute everything beforehand, but simulate new states (and save them for future re-use) when needed.

Page 9:

Mapping to a Discretizing Chain - Continued

We then attempt to "move" the marker along the chain to another state (whose value is $x'$), with acceptance probability

$$\min\left[1,\; \frac{\pi(x')/\pi^*(x')}{\pi(x)/\pi^*(x)}\right]$$

We can do multiple such updates in this space before mapping back to the original space.

[Figure: the marker moves along the chain in $\mathcal{Y}$ from the state with value $x$ to the state with value $x'$, which $\check{T}$ then maps back to $\mathcal{X}$.]

(Solid line segments are the updates that are actually simulated, while the dashed segments are not.)
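The acceptance test itself is cheap once $\pi^*$ is cheap to evaluate; a sketch (ours), working with log densities:

```python
def accept_marker_move(log_pi, log_pi_star, x, x_new, rng):
    """Accept moving the marker from x to x_new with probability
    min[1, (pi(x_new)/pi*(x_new)) / (pi(x)/pi*(x))]."""
    log_ratio = (log_pi(x_new) - log_pi_star(x_new)) - (log_pi(x) - log_pi_star(x))
    return np.log(rng.uniform()) < log_ratio
```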

Page 10:

Approximation: Dimension Reduction

There are two main classes of approximation methods. One class is based on reducing the dimension of the data.

- Subset of data (SoD): $\pi^*$ is the "posterior" given only a subset (of $m$ observations) of $(x_1, y_1), \ldots, (x_n, y_n)$; see the sketch after this list. Need time proportional to $pm^2$ to compute $C^*$, and $m^3$ to invert $C^*$.

- Linear combination of responses: let $\tilde{y} = Ay$, where $A$ is an $m \times n$ matrix of rank $m$. $\tilde{y}$ is also Gaussian, with lower dimension. $\pi^*$ is the posterior based on the covariance matrix for $\tilde{y}$, $\tilde{C} = ACA^T$, of rank $m$.

- Others: SoR (Subset of Regressors), Bayesian Committee Machine, etc.
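A sketch of forming $\pi^*$ by SoD (ours), reusing `sq_exp_cov` and `gp_log_lik` from earlier; the parameterization $\theta = (\log\eta^2, \log\beta^2, \log\sigma^2)$ is an assumption on our part, chosen to match the priors in the example later in the talk:

```python
def log_post_sod(theta, x, y, m, log_prior):
    """pi*: log posterior using only the first m of the n training cases."""
    eta, beta, sigma = np.exp(np.asarray(theta) / 2)   # back-transform from logs
    C_sub = sq_exp_cov(x[:m], eta, beta, sigma)        # O(p m^2) to build
    return gp_log_lik(y[:m], C_sub) + log_prior(theta) # O(m^3) Cholesky inside
```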

Page 11:

Approximation: Diagonal Plus Low Rank

The other class is based on approximating the covariance matrix $C$ by the sum of a diagonal matrix and a matrix of low rank.

$C$ is usually of the form $\sigma^2 I + C_0$, where $C_0$ is non-negative definite. If $C_0$ can be approximated by some lower-rank matrix $\tilde{C}_0$, then with the matrix inversion lemma and the matrix determinant lemma:

$$(D + UWV^T)^{-1} = D^{-1} - D^{-1}U(W^{-1} + V^T D^{-1} U)^{-1} V^T D^{-1}$$
$$\det(D + UWV^T) = \det(W^{-1} + V^T D^{-1} U) \det(W) \det(D)$$

the computation can be reduced. Thus we can approximate the likelihood by substituting $C$ with $\tilde{C} = \tilde{C}_0 + \sigma^2 I$ in the posterior:

$$(2\pi)^{-n/2} \det(\tilde{C})^{-1/2} \exp\left(-\tfrac{1}{2}\, y^T \tilde{C}^{-1} y\right)$$
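A sketch (ours) of evaluating this approximate log likelihood when $\tilde{C} = \mathrm{diag}(d) + UWU^T$ with $U$ an $n \times m$ matrix, so the cost drops from $O(n^3)$ to $O(nm^2)$:

```python
def log_lik_low_rank(y, d, U, W):
    """Log likelihood with C~ = diag(d) + U W U^T, using the matrix
    inversion and determinant lemmas instead of an n x n factorization."""
    Dinv_y = y / d                                   # D^{-1} y
    Dinv_U = U / d[:, None]                          # D^{-1} U
    M = np.linalg.inv(W) + U.T @ Dinv_U              # W^{-1} + U^T D^{-1} U  (m x m)
    quad = y @ Dinv_y - (Dinv_y @ U) @ np.linalg.solve(M, U.T @ Dinv_y)
    _, logdet_M = np.linalg.slogdet(M)
    _, logdet_W = np.linalg.slogdet(W)
    logdet = logdet_M + logdet_W + np.sum(np.log(d)) # determinant lemma
    n = len(y)
    return -0.5 * (n * np.log(2 * np.pi) + logdet + quad)
```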

Page 12:

Approximation: Diagonal Plus Low Rank - Continued

- Eigen-method: $\tilde{C} = \sigma^2 I + B\Lambda_m B^T$, where $\Lambda_m$ is the diagonal matrix with eigenvalues $\lambda_1 \ge \lambda_2 \ge \cdots \ge \lambda_m$ of $C_0$ on its diagonal, and $B$ is an $n \times m$ matrix whose columns are the corresponding orthonormal eigenvectors. Need to compute $C_0$ ($pn^2$) and its first $m$ eigenvalues and eigenvectors ($mn^2$, with a large constant factor).

- Nyström methods: $\tilde{C} = \sigma^2 I + C_0^{(n,m)} [C_0^{(m,m)}]^{-1} C_0^{(m,n)}$, where $C_0^{(n,m)}$ is an $n \times m$ matrix whose $m$ columns are $m$ randomly selected columns of $C_0$, and $C_0^{(m,m)}$ is the corresponding $m \times m$ submatrix. Need to compute $C_0^{(n,m)}$ ($pmn$), then find the Cholesky decomposition of an $m \times m$ matrix ($m^3$); a sketch follows below.
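A sketch (ours) of the Nyström construction, returning factors that plug into `log_lik_low_rank` above with $d = \sigma^2 \mathbf{1}$. For simplicity it slices a precomputed $C_0$; in practice one would compute only the $m$ selected columns:

```python
def nystrom_factors(C0, m, rng):
    """Pick m random columns J of C0 and return U = C0[:, J] and
    W = C0[J, J]^{-1}, so that C0 is approximated by U W U^T."""
    J = rng.choice(C0.shape[0], size=m, replace=False)
    U = C0[:, J]                           # n x m block of selected columns
    W = np.linalg.inv(C0[np.ix_(J, J)])    # inverse of the m x m submatrix
    return U, W
```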

Page 13:

Example: Use SoD to form the π∗

We generate a synthetic dataset as follows:

$$y = 3\sin(x^2) + 2\sin(1.5x + 1) + \varepsilon$$

where $x \sim \mathrm{Unif}(0, 3)$ and $\varepsilon \sim N(0, 0.5^2)$. We generated 500 observations as the training set, and another 300 for the testing set.

We use a squared exponential covariance function:

$$10^2 + \eta^2 \exp\left(-\frac{(x - x')^2}{\beta^2}\right) + \delta_{x,x'}\,\sigma^2$$

and the priors are

$$\log \eta^2 \sim N(3, 3^2), \qquad \log \beta^2 \sim N(2, 3^2), \qquad \log \sigma^2 \sim N(0, 3^2)$$

[Figure: scatterplot of the training set, $x$ vs. $y$.]
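A sketch (ours) of generating this synthetic dataset:

```python
rng = np.random.default_rng(1)

def make_data(n, rng):
    """y = 3 sin(x^2) + 2 sin(1.5x + 1) + eps, with eps ~ N(0, 0.5^2)."""
    x = rng.uniform(0, 3, n)
    y = 3 * np.sin(x**2) + 2 * np.sin(1.5 * x + 1) + 0.5 * rng.standard_normal(n)
    return x, y

x_train, y_train = make_data(500, rng)   # training set
x_test,  y_test  = make_data(300, rng)   # testing set
```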

Page 14:

Example: Use SoD to form the π∗ - Continued

The first 50 observations are used as the subset to form $\pi^*$ for the MCMC with "mapping", and the results are compared to a Metropolis MCMC. The sample ACFs are adjusted so that they reflect the same number of evaluations of $\pi(x)$.

[Figure: left, predictions ($x$ vs. $y$) for the testing cases under Metropolis and under mapping; right, sample autocorrelation vs. lag for the two methods: "Sample ACF − Metropolis" (lags 0 to 100) and "Sample ACF − Mapping with SoD" (lags 0 to 30).]

Page 15:

References

1. Neal, R. M. (2006) Constructing Efficient MCMC Methods Using Temporary Mapping and Caching, talk at Columbia University, December 2006.

2. Neal, R. M. (1998) Regression and Classification Using Gaussian Process Priors, Bayesian Statistics 6, pp. 475-501, Oxford University Press.

3. Neal, R. M. (2008) Approximate Gaussian Process Regression Using Matrix Approximations and Linear Response Combinations, Tech. Report (Draft), Dept. of Statistics, University of Toronto.

4. Quinonero-Candela, J., Rasmussen, C. E. and Williams, C. K. I. (2007) Approximation Methods for Gaussian Process Regression, Tech. Report MSR-TR-2007-124, Microsoft Research.

5. Rasmussen, C. E. and Williams, C. K. I. (2006) Gaussian Processes for Machine Learning, The MIT Press.