
10-708: Probabilistic Graphical Models, Spring 2016

12 : Variational Inference I

Lecturer: Eric P. Xing Scribes: Jing Chen, Yulan Huang, Yu-Fang Chang

1 Kalman Filtering

In the last lecture, we introduced Kalman filtering as a recursive procedure to update the belief state. Each iteration has two steps: a Predict Step and an Update Step. In the Predict Step, we compute the latent state distribution $p(X_{t+1}|y_{1:t})$ from the prior belief $p(X_t|y_{1:t})$ and the dynamic model $p(X_{t+1}|X_t)$. This step is also called the time update. In the Update Step, we compute the new belief over the latent state, $p(X_{t+1}|y_{1:t+1})$, from the prediction $p(X_{t+1}|y_{1:t})$ and the observation $y_{t+1}$, using the observation model $p(y_{t+1}|X_{t+1})$. This step is also called the measurement update since it uses the measured information $y_{t+1}$. The reason this works is that under a joint multivariate Gaussian distribution, conditional and marginal distributions can be computed easily. Since all distributions are Gaussian, their linear combinations are also Gaussian; hence the mean and covariance, which are easy to compute in this case, fully describe each distribution.

1.1 Derivation

Our goal is to compute $p(x_{t+1}|y_{1:t+1})$. We first use the dynamic model to find the parameters of the distribution $p(X_{t+1}|y_{1:t})$. The dynamic model defines $x_{t+1}$ as

$$x_{t+1} = A x_t + G w_t$$

where $w_t$ is noise with zero mean and covariance matrix $Q$. We can then predict the expectation $\hat{x}_{t+1|t}$ and covariance $P_{t+1|t}$ of the distribution $p(X_{t+1}|y_{1:t})$ as follows:

$$\hat{x}_{t+1|t} = E[x_{t+1}|y_{1:t}] = E[A x_t + G w_t \mid y_{1:t}] = A\hat{x}_{t|t} + 0 = A\hat{x}_{t|t}$$

$$\begin{aligned}
P_{t+1|t} &= E[(x_{t+1} - \hat{x}_{t+1|t})(x_{t+1} - \hat{x}_{t+1|t})^T \mid y_{1:t}] \\
&= E[(A x_t + G w_t - \hat{x}_{t+1|t})(A x_t + G w_t - \hat{x}_{t+1|t})^T \mid y_{1:t}] \\
&= E[(A x_t + G w_t - A\hat{x}_{t|t})(A x_t + G w_t - A\hat{x}_{t|t})^T \mid y_{1:t}] \\
&= E[(A x_t - A\hat{x}_{t|t})(A x_t - A\hat{x}_{t|t})^T + G w_t (A x_t - A\hat{x}_{t|t})^T + (A x_t - A\hat{x}_{t|t})(G w_t)^T + G w_t (G w_t)^T] \\
&= A P_{t|t} A^T + 0 + 0 + G Q G^T \\
&= A P_{t|t} A^T + G Q G^T
\end{aligned}$$

For the observation model, we have

$$y_t = C x_t + v_t, \qquad v_t \sim \mathcal{N}(0, R)$$


We can then derive the mean and variance of the observation and state variables:

$$E[Y_{t+1}|y_{1:t}] = E[C x_{t+1} + v_{t+1} \mid y_{1:t}] = C\hat{x}_{t+1|t}$$

$$\begin{aligned}
Var(y_{t+1}|y_{1:t}) &= E[(Y_{t+1} - \hat{y}_{t+1|t})(Y_{t+1} - \hat{y}_{t+1|t})^T \mid y_{1:t}] \\
&= E[(C X_{t+1} + v_{t+1} - C\hat{x}_{t+1|t})(C X_{t+1} + v_{t+1} - C\hat{x}_{t+1|t})^T \mid y_{1:t}] \\
&= C P_{t+1|t} C^T + R
\end{aligned}$$

$$\begin{aligned}
Cov(y_{t+1}, x_{t+1}|y_{1:t}) &= E[(Y_{t+1} - \hat{y}_{t+1|t})(X_{t+1} - \hat{x}_{t+1|t})^T \mid y_{1:t}] \\
&= E[(C X_{t+1} + v_{t+1} - C\hat{x}_{t+1|t})(X_{t+1} - \hat{x}_{t+1|t})^T \mid y_{1:t}] \\
&= C P_{t+1|t}
\end{aligned}$$

$$Cov(x_{t+1}, y_{t+1}|y_{1:t}) = Cov(y_{t+1}, x_{t+1}|y_{1:t})^T = P_{t+1|t} C^T$$

Next, we combine the above results to get the joint distribution $p(X_{t+1}, Y_{t+1}|y_{1:t}) \sim \mathcal{N}(m_{t+1}, V_{t+1})$ with

$$m_{t+1} = \begin{bmatrix} \hat{x}_{t+1|t} \\ C\hat{x}_{t+1|t} \end{bmatrix}, \qquad V_{t+1} = \begin{bmatrix} P_{t+1|t} & P_{t+1|t} C^T \\ C P_{t+1|t} & C P_{t+1|t} C^T + R \end{bmatrix}$$

We can see that several of these quantities reuse $P_{t+1|t}$, so we introduce the Kalman gain matrix $K$ to avoid repeating those computations:

$$K = Cov(x_{t+1}, y_{t+1}|y_{1:t})\, Var(y_{t+1}|y_{1:t})^{-1} = P_{t+1|t} C^T (C P_{t+1|t} C^T + R)^{-1}$$

Since $K$ does not require a new observation, in other words it is independent of the data, it can be precomputed. For the measurement update, by the formula for the conditional Gaussian distribution, we have

$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + Cov(x_{t+1}, y_{t+1}|y_{1:t})\, Var(y_{t+1}|y_{1:t})^{-1}(y_{t+1} - \hat{y}_{t+1|t}) = \hat{x}_{t+1|t} + K(y_{t+1} - C\hat{x}_{t+1|t})$$

$$P_{t+1|t+1} = P_{t+1|t} - K C P_{t+1|t}$$

and we are done with the derivation.
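The derivation above amounts to a single filtering step. A minimal NumPy sketch of it follows; the function name `kalman_step` and the way the matrices are passed in are our own choices, not from the notes:

```python
import numpy as np

def kalman_step(x_hat, P, y, A, G, C, Q, R):
    """One predict + measurement update step of the Kalman filter.

    x_hat, P: posterior mean and covariance of x_t given y_{1:t}.
    y: the new observation y_{t+1}.
    Returns the posterior mean and covariance of x_{t+1} given y_{1:t+1}.
    """
    # Predict (time update): x_{t+1} = A x_t + G w_t, w_t ~ N(0, Q)
    x_pred = A @ x_hat
    P_pred = A @ P @ A.T + G @ Q @ G.T

    # Kalman gain K = P_{t+1|t} C^T (C P_{t+1|t} C^T + R)^{-1}
    S = C @ P_pred @ C.T + R                 # innovation covariance
    K = P_pred @ C.T @ np.linalg.inv(S)

    # Measurement update, driven by the innovation y - C x_pred.
    x_new = x_pred + K @ (y - C @ x_pred)
    P_new = P_pred - K @ C @ P_pred
    return x_new, P_new
```

In the one-dimensional case with $A = G = C = 1$ this reduces to the scalar formulas of the example in the next subsection.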

1.2 Example

Now we look at a simple example of the Kalman filter. We consider noisy observations of a 1D particle moving randomly, so $A = G = C = 1$:

$$x_t = x_{t-1} + w, \qquad w \sim \mathcal{N}(0, \sigma_x)$$
$$z_t = x_t + v, \qquad v \sim \mathcal{N}(0, \sigma_z)$$

Plugging into the Kalman filter equations gives the new mean and variance:

$$P_{t+1|t} = A P_{t|t} A^T + G Q G^T = \sigma_t + \sigma_x$$

$$\hat{x}_{t+1|t} = A\hat{x}_{t|t} = \hat{x}_{t|t}$$

$$K = P_{t+1|t} C^T (C P_{t+1|t} C^T + R)^{-1} = \frac{\sigma_t + \sigma_x}{\sigma_t + \sigma_x + \sigma_z}$$

$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1}(z_{t+1} - C\hat{x}_{t+1|t}) = \frac{(\sigma_t + \sigma_x)\, z_{t+1} + \sigma_z\, \hat{x}_{t|t}}{\sigma_t + \sigma_x + \sigma_z}$$

$$P_{t+1|t+1} = P_{t+1|t} - K C P_{t+1|t} = \frac{(\sigma_t + \sigma_x)\,\sigma_z}{\sigma_t + \sigma_x + \sigma_z}$$


We can see that, with this setup, the noise does not shift the mean of the prediction since it is zero-centered, but the variance does change. Starting from $p(x_0)$ and applying the transition model, we can predict the distribution at the next timestep. The result is a Gaussian with a wider variance: the point moves due to the noise, which increases our uncertainty about its position. Once we receive a new observation $z_{t+1}$, the mean is shifted toward the measurement and the variance is reduced, since the observation decreases our uncertainty; this induces a new belief over the position of the point. The intuition is that once we have an observation, we update the mean as follows:

$$\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1}(z_{t+1} - C\hat{x}_{t+1|t}) = \frac{(\sigma_t + \sigma_x)\, z_{t+1} + \sigma_z\, \hat{x}_{t|t}}{\sigma_t + \sigma_x + \sigma_z}$$

where the term $(z_{t+1} - C\hat{x}_{t+1|t})$ is called the innovation.
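As a sanity check on the scalar recursions, the sketch below simulates the 1D random walk and runs the filter; the posterior variance $P$ converges to the fixed point of the variance update. The noise variances, step count, and seed are illustrative choices, not values from the lecture:

```python
import numpy as np

# Scalar Kalman filter for the 1D random walk example above.
sigma_x, sigma_z = 0.5, 1.0        # process / observation noise variances
rng = np.random.default_rng(0)

x = 0.0                            # true latent position
x_hat, P = 0.0, 1.0                # initial belief

for _ in range(200):
    x = x + rng.normal(0.0, np.sqrt(sigma_x))     # random walk step
    z = x + rng.normal(0.0, np.sqrt(sigma_z))     # noisy observation
    P_pred = P + sigma_x                          # predict: variance widens
    K = P_pred / (P_pred + sigma_z)               # Kalman gain
    x_hat = x_hat + K * (z - x_hat)               # correct with the innovation
    P = P_pred - K * P_pred                       # posterior variance shrinks

# P converges to the fixed point P* of P -> (P + sigma_x) * sigma_z / (P + sigma_x + sigma_z)
```

The steady-state variance is the positive root of $P^2 + \sigma_x P - \sigma_x \sigma_z = 0$, independent of the initial belief.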

2 Background Review

2.1 Inference Problem

In this lecture, we look deeply into variational inference in graphical models. Given a graphical model $G$ and a corresponding distribution $P$ defined over a set of variables (nodes) $V$, the general inference problem involves the computation of:

• the likelihood of observed data

• the marginal distribution $p(x_A)$ over a particular subset of nodes $A \subset V$

• the conditional distribution $p(x_A \mid x_B)$ for disjoint subsets $A$ and $B$

• a mode of the density $\hat{x} = \arg\max_{x \in \mathcal{X}^m} p(x)$

Previous lectures have covered several exact inference algorithms: brute-force enumeration, variable elimination, sum-product, and the junction tree algorithm. More specifically, while brute-force enumeration sums over all variables except the query nodes, variable elimination exploits the graphical structure by taking an ordered summation over variables. Both algorithms treat individual computations independently, which is to some degree wasteful. In contrast, message passing approaches such as sum-product and belief propagation are more efficient because they share intermediate terms.

However, the above approaches typically apply to trees and can hardly converge on non-tree structures, in other words, loopy graphs. One exception is the junction tree algorithm, which provides a way to convert an arbitrary loopy graph into a clique tree and then perform exact inference on the clique tree by passing messages. Nevertheless, few people actually use this approach: although the junction tree is locally and globally consistent, it is too expensive, with computational cost exponential in the size of the largest clique.

To solve the inference problem on loopy graphs, we introduce approximate inference approaches, focusing on loopy belief propagation in this lecture.

2.2 Review of Belief Propagation

We first give a brief review of Belief Propagation (BP), a classical message passing algorithm. Figure 1a demonstrates the message update rule in BP, where node $i$ passes a message to a neighboring node $j$ once it has received messages from all of its other neighbors. The message update rule can be formulated as:

$$m_{i\to j}(x_j) \propto \sum_{x_i} \psi_{ij}(x_i, x_j)\, \psi_i(x_i) \prod_{k \in N(i)\setminus j} m_{k\to i}(x_i)$$

where $\psi_{ij}(x_i, x_j)$ encodes the compatibilities (interactions) and $\psi_i(x_i)$ the external evidence.

Further, the marginal probability of each node in the graph can then be computed in the procedure that Figure 1b shows, which can be formulated as:

$$b_i(x_i) \propto \psi_i(x_i) \prod_{k \in N(i)} m_{k\to i}(x_i)$$

In particular, it is worth noting that BP on trees always converges to the exact marginals, as the junction tree algorithm reveals.

(a) BP message passing procedure

(b) BP node marginal

Figure 1: BP Message-update Rules

Moreover, we can generalize BP to factor graphs, where square nodes denote factors and circle nodes denote variables as in Figure 2b. Similarly, we can compute the marginal probabilities for variable $i$ and factor $a$ as Figure 2 shows:

$$b_i(x_i) \propto f_i(x_i) \prod_{a \in N(i)} m_{a\to i}(x_i)$$

$$b_a(X_a) \propto f_a(X_a) \prod_{i \in N(a)} m_{i\to a}(x_i)$$

where we call $b_i(x_i)$ "beliefs" and $m_{a\to i}$ "messages".

Therefore, messages are passed in two directions: (1) from variable $i$ to factor $a$; (2) from factor $a$ to variable $i$, written as:

$$m_{i\to a}(x_i) = \prod_{c \in N(i)\setminus a} m_{c\to i}(x_i)$$

$$m_{a\to i}(x_i) = \sum_{X_a \setminus x_i} f_a(X_a) \prod_{j \in N(a)\setminus i} m_{j\to a}(x_j)$$
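These two message types can be exercised on a tiny tree-shaped factor graph, where sum-product recovers the exact marginals. The factor tables below are made-up numbers for illustration:

```python
import numpy as np

# Tree-shaped factor graph f1 - x1 - f12 - x2 - f2 with binary variables.
f1  = np.array([0.7, 0.3])            # unary factor on x1
f2  = np.array([0.4, 0.6])            # unary factor on x2
f12 = np.array([[1.0, 0.5],
                [0.5, 2.0]])          # pairwise factor f12(x1, x2)

# Variable-to-factor messages: product of messages from the other factors.
m_x1_to_f12 = f1                      # x1's only other neighbor is f1
m_x2_to_f12 = f2

# Factor-to-variable messages: sum out the factor's other variables.
m_f12_to_x1 = f12 @ m_x2_to_f12       # sums over x2
m_f12_to_x2 = f12.T @ m_x1_to_f12     # sums over x1

# Beliefs: b_i(x_i) is proportional to the product of incoming messages.
b1 = f1 * m_f12_to_x1; b1 = b1 / b1.sum()
b2 = f2 * m_f12_to_x2; b2 = b2 / b2.sum()

# On a tree, the beliefs equal the exact marginals of the joint.
joint = f1[:, None] * f2[None, :] * f12
joint = joint / joint.sum()
```

Since this graph is a tree, `b1` and `b2` match the brute-force marginals of `joint` exactly, illustrating the claim above that BP is exact on trees.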


Figure 2: BP on Factor Graph

3 Loopy Belief Propagation

Now we consider inference on an arbitrary graph. As mentioned, most of the previously discussed algorithms apply only to tree-structured graphs, and even those that can handle arbitrary (loopy) graphs suffer from expensive computational cost. It is also hard to prove the correctness of these algorithms, e.g., the junction tree algorithm.

Therefore, the Loopy Belief Propagation (LBP) algorithm was proposed. LBP can be viewed as a fixed-point iterative procedure that tries to minimize the Bethe free energy $F_{Bethe}$. More specifically, LBP starts with a random initialization of messages and beliefs, and iterates the following steps until convergence:

$$b_i(x_i) \propto \prod_{a \in N(i)} m_{a\to i}(x_i)$$

$$b_a(X_a) \propto f_a(X_a) \prod_{i \in N(a)} m_{i\to a}(x_i)$$

$$m^{new}_{i\to a}(x_i) = \prod_{c \in N(i)\setminus a} m_{c\to i}(x_i)$$

$$m^{new}_{a\to i}(x_i) = \sum_{X_a \setminus x_i} f_a(X_a) \prod_{j \in N(a)\setminus i} m_{j\to a}(x_j)$$

However, it is not clear whether this procedure converges to a correct solution. In the late 90's there was much research devoted to the theory behind this algorithm. Murphy et al. (1999) showed empirically that a good approximation is still achievable if we:

• stop after a fixed number of iterations

• stop when there is no significant change in the beliefs

• note that if the solution is not oscillatory but converges, it is usually a good approximation
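A minimal LBP loop on a loopy graph, stopping when the messages no longer change, might look as follows. The graph (a pairwise 3-cycle), the potentials, and the convergence threshold are all illustrative assumptions; on this weakly coupled loop the converged beliefs land close to the exact marginals:

```python
import numpy as np
from itertools import product

# Pairwise MRF on a 3-cycle with binary variables; the compatibility table
# psi and the evidence vectors phi are made-up, weakly coupled numbers.
edges = [(0, 1), (1, 2), (2, 0)]
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
psi = np.array([[1.2, 1.0],
                [1.0, 1.2]])                       # same on every edge
phi = [np.array([0.6, 0.4]), np.ones(2), np.ones(2)]

# Directed messages m[(i, j)] from node i to node j, initialised uniformly.
m = {(i, j): np.ones(2) for i, j in edges + [(j, i) for i, j in edges]}

for _ in range(100):
    delta = 0.0
    for (i, j) in list(m):
        # m_{i->j}(x_j) = sum_{x_i} psi(x_i,x_j) phi_i(x_i) prod_{k!=j} m_{k->i}(x_i)
        incoming = phi[i].copy()
        for k in neighbors[i]:
            if k != j:
                incoming = incoming * m[(k, i)]
        new = psi.T @ incoming
        new = new / new.sum()
        delta = max(delta, np.abs(new - m[(i, j)]).max())
        m[(i, j)] = new
    if delta < 1e-10:          # stop: no significant change in messages
        break

# Beliefs: b_i(x_i) proportional to phi_i(x_i) prod_{k in N(i)} m_{k->i}(x_i)
beliefs = []
for i in range(3):
    b = phi[i].copy()
    for k in neighbors[i]:
        b = b * m[(k, i)]
    beliefs.append(b / b.sum())

# Exact marginals by brute-force enumeration, for comparison.
exact = np.zeros((2, 2, 2))
for x in product(range(2), repeat=3):
    val = phi[0][x[0]] * phi[1][x[1]] * phi[2][x[2]]
    for i, j in edges:
        val *= psi[x[i], x[j]]
    exact[x] = val
exact = exact / exact.sum()
```

Because the loop is short and the coupling weak, the messages converge quickly and the beliefs are a good approximation here, in line with Murphy et al.'s empirical observations; on strongly coupled loops LBP may oscillate instead.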

But is the good performance of LBP just a lucky hack? In the following sections, we try to understand the theoretical characteristics of this algorithm and show that LBP leads to a near-optimal approximation of the actual distribution.


3.1 Approximating a Probability Distribution

Let us denote the actual distribution of an arbitrary graph $G$ as $P$:

$$P(X) = \frac{1}{Z} \prod_{f_a \in F} f_a(X_a)$$

Then we wish to find a distribution $Q$ such that $Q$ is a "good" approximation to $P$. To achieve this, we first recall the definition of the KL-divergence:

$$KL(Q_1\|Q_2) = \sum_X Q_1(X) \log\frac{Q_1(X)}{Q_2(X)}$$

satisfying:

$$KL(Q_1\|Q_2) \ge 0$$

$$KL(Q_1\|Q_2) = 0 \iff Q_1 = Q_2$$

$$KL(Q_1\|Q_2) \ne KL(Q_2\|Q_1)$$
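These properties, including the asymmetry, are easy to verify numerically on a small discrete example (the two distributions below are arbitrary):

```python
import numpy as np

def kl(q1, q2):
    """KL(Q1 || Q2) for discrete distributions given as probability vectors."""
    q1, q2 = np.asarray(q1, dtype=float), np.asarray(q2, dtype=float)
    return float(np.sum(q1 * np.log(q1 / q2)))

# Two arbitrary distributions over a binary variable.
p = [0.5, 0.5]
q = [0.9, 0.1]
```

Here `kl(p, q)` and `kl(q, p)` are both positive but unequal, and `kl(p, p)` is zero.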

Therefore, our goal of finding an optimal approximation can be converted into minimizing the KL-divergence between $P$ and $Q$. However, computing $KL(P\|Q)$ requires inference on $P$, while $KL(Q\|P)$ does not. Thus we adopt $KL(Q\|P)$:

$$\begin{aligned}
KL(Q\|P) &= \sum_X Q(X) \log\frac{Q(X)}{P(X)} \\
&= \sum_X Q(X) \log Q(X) - \sum_X Q(X) \log P(X) \\
&= -H_Q(X) - E_Q\Big[\log\Big(\frac{1}{Z}\prod_{f_a \in F} f_a(X_a)\Big)\Big] \\
&= -H_Q(X) - \sum_{f_a \in F} E_Q[\log f_a(X_a)] + \log Z
\end{aligned}$$

Therefore we can formulate the objective as:

$$KL(Q\|P) = -H_Q(X) - \sum_{f_a \in F} E_Q[\log f_a(X_a)] + \log Z = F(P, Q) + \log Z$$

where $H_Q$ denotes the entropy of the distribution $Q$, and $F(P, Q)$ is called the "free energy":

$$F(P, Q) = -H_Q(X) - \sum_{f_a \in F} E_Q[\log f_a(X_a)]$$

In particular, $F(P, P) = -\log Z$ and $F(P, Q) \ge F(P, P)$. More specifically, while $\sum_{f_a \in F} E_Q[\log f_a(X_a)]$ can be computed from the marginals over each $f_a$, the entropy $H_Q = -\sum_X Q(X)\log Q(X)$ is hard since it requires a summation over all possible values of $X$. This makes the computation of $F$ hard. One possible solution is to approximate $F(P, Q)$ with some $\hat{F}(P, Q)$ that is easy to compute.
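The identity $F(P, P) = -\log Z$ can be checked numerically on a tiny factor product (the factor tables below are made up for illustration):

```python
import numpy as np

# Two binary variables with a pairwise factor f_a and a unary factor f_b.
fa = np.array([[1.5, 0.5],
               [1.0, 2.0]])              # f_a(x1, x2)
fb = np.array([2.0, 1.0])                # f_b(x1)

unnorm = fb[:, None] * fa                # product of all factors
Z = unnorm.sum()
P = unnorm / Z                           # P(X) = (1/Z) * prod_a f_a(X_a)

# F(P, Q) = -H_Q - sum_a E_Q[log f_a], evaluated at Q = P.
H_P = -np.sum(P * np.log(P))
E_log = np.sum(P * np.log(fa)) + np.sum(P.sum(axis=1) * np.log(fb))
F_PP = -H_P - E_log                      # should equal -log Z
```

This also makes the bound concrete: for any other $Q$, $F(P, Q)$ exceeds $-\log Z$ by exactly $KL(Q\|P) \ge 0$.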


Figure 3: A tree graph

3.2 Deriving the Bethe Approximation

As shown in Figure 3, for a tree-structured graph the joint probability can be written as

$$b(x) = \prod_a b_a(x_a) \prod_i b_i(x_i)^{1-d_i}$$

where $a$ ranges over all edges in the graph, $i$ ranges over all nodes, and $d_i$ is the degree of vertex $i$. We can then calculate the entropy as well as the free energy:

$$H_{tree} = -\sum_a \sum_{x_a} b_a(x_a) \ln b_a(x_a) + \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i)$$

$$F_{tree} = \sum_a \sum_{x_a} b_a(x_a) \ln\frac{b_a(x_a)}{f_a(x_a)} + \sum_i (1 - d_i) \sum_{x_i} b_i(x_i) \ln b_i(x_i)$$

Since the entropy and free energy only involve summations over edges and vertices, they are easy to compute.
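For a concrete check, on a small chain (a tree) the formula $H_{tree}$ computed from edge and node marginals matches the exact entropy of the joint. The factor values below are illustrative:

```python
import numpy as np
from itertools import product

# Chain x1 - x2 - x3 (a tree) with binary variables; edge factors are
# arbitrary positive tables.
f12 = np.array([[2.0, 1.0], [1.0, 3.0]])
f23 = np.array([[1.0, 2.0], [2.0, 1.0]])

# Exact joint and its entropy.
p = np.zeros((2, 2, 2))
for x1, x2, x3 in product(range(2), repeat=3):
    p[x1, x2, x3] = f12[x1, x2] * f23[x2, x3]
p = p / p.sum()
H_exact = -np.sum(p * np.log(p))

# Edge and node marginals; degrees are d1 = d3 = 1, d2 = 2.
b12 = p.sum(axis=2)
b23 = p.sum(axis=0)
b2 = p.sum(axis=(0, 2))

# H_tree = -sum_edges sum b_a ln b_a + sum_i (d_i - 1) sum b_i ln b_i
# (only x2 contributes to the second sum, since d1 - 1 = d3 - 1 = 0).
H_tree = (-np.sum(b12 * np.log(b12))
          - np.sum(b23 * np.log(b23))
          + np.sum(b2 * np.log(b2)))
```

The agreement reflects the chain factorization $p(x) = b_{12}\, b_{23} / b_2$, which is exactly the tree decomposition $b(x) = \prod_a b_a \prod_i b_i^{1-d_i}$ for this graph.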

Figure 4: An arbitrary graph

However, for an arbitrary graph like Figure 4, the entropy and free energy are hard to write down. Therefore, we use the Bethe approximation, which has exactly the same formulas as the free energy for a tree-structured graph:

$$H_{Bethe} = -\sum_a \sum_{x_a} b_a(x_a) \ln b_a(x_a) + \sum_i (d_i - 1) \sum_{x_i} b_i(x_i) \ln b_i(x_i)$$

$$F_{Bethe} = \sum_a \sum_{x_a} b_a(x_a) \ln\frac{b_a(x_a)}{f_a(x_a)} + \sum_i (1 - d_i) \sum_{x_i} b_i(x_i) \ln b_i(x_i)$$

As we can see, the advantage of the Bethe approximation is that it is easy to compute, since the entropy term only involves sums over pairwise and single variables. However, the approximation $\hat{F}(P, Q)$ may or may not be closely related to the true $F(P, Q)$: there is no guarantee that it is greater than, equal to, or less than the true $F(P, Q)$.

We want to find the beliefs $b_a(x_a)$ and $b_i(x_i)$ that minimize the KL-divergence while still satisfying local consistency. For the discrete case, the local consistency constraints are:

$$\forall i: \quad \sum_{x_i} b_i(x_i) = 1$$

$$\forall a,\ i \in N(a),\ x_i: \quad \sum_{X_a \setminus x_i} b_a(X_a) = b_i(x_i)$$

where $N(a)$ is the set of variables neighboring factor $a$. Thus, using Lagrange multipliers, the objective function becomes:

$$L = F_{Bethe} + \sum_i \lambda_i \Big[1 - \sum_{x_i} b_i(x_i)\Big] + \sum_a \sum_{i \in N(a)} \sum_{x_i} \lambda_{ai}(x_i)\Big[b_i(x_i) - \sum_{X_a \setminus x_i} b_a(X_a)\Big]$$

We can find the stationary points by setting the derivatives to zero:

$$\frac{\partial L}{\partial b_i(x_i)} = 0 \;\Rightarrow\; b_i(x_i) \propto \exp\Big(\frac{1}{d_i - 1} \sum_{a \in N(i)} \lambda_{ai}(x_i)\Big)$$

$$\frac{\partial L}{\partial b_a(x_a)} = 0 \;\Rightarrow\; b_a(x_a) \propto \exp\Big(\log f_a(X_a) + \sum_{i \in N(a)} \lambda_{ai}(x_i)\Big)$$

3.3 Bethe Energy Minimization using Belief Propagation

By setting the derivatives of the objective function to zero, the previous section gives the update formulas for $b_i(x_i)$ and $b_a(X_a)$ in a factor graph. For the variables in the factor graph, with $\lambda_{ai}(x_i) = \log m_{i\to a}(x_i) = \log \prod_{b \in N(i)\setminus a} m_{b\to i}(x_i)$, we have:

$$b_i(x_i) \propto f_i(x_i) \prod_{a \in N(i)} m_{a\to i}(x_i)$$

For the factors, we have:

$$b_a(X_a) \propto f_a(X_a) \prod_{i \in N(a)} \prod_{c \in N(i)\setminus a} m_{c\to i}(x_i)$$

Using $b_{a\to i}(x_i) = \sum_{X_a \setminus x_i} b_a(X_a)$, we get

$$m_{a\to i}(x_i) = \sum_{X_a \setminus x_i} f_a(X_a) \prod_{j \in N(a)\setminus i} \prod_{b \in N(j)\setminus a} m_{b\to j}(x_j)$$

As we can see, this belief propagation update has the sum-product form and is easy to compute.

4 Theory Behind Loopy Belief Propagation

For a distribution $p(X|\theta)$ associated with a complex graph, computing the marginal (or conditional) probability of arbitrary random variables is intractable. Variational methods instead optimize over an easier (tractable) distribution $q$, seeking

$$q^* = \arg\min_{q \in S} F_{Bethe}(p, q)$$

However, optimizing over $q(X)$ explicitly (in particular, its entropy $H_q$) is still difficult, so we relax the optimization problem to an approximate objective, the Bethe free energy of the beliefs $F(b)$, together with a looser feasible set defined by the local constraints. Loopy belief propagation can then be viewed as a fixed-point iteration that tries to optimize $F(b)$. Loopy belief propagation often does not converge to the correct solution, although empirically it often performs well.

5 Generalized Belief Propagation

Instead of considering single nodes as in normal loopy belief propagation, Generalized Belief Propagation considers regions of the graph. This enables us to use a more accurate $H_q$ for the approximation and achieve better results. Instead of the Bethe free energy, generalized belief propagation uses the Gibbs free energy, which is more general and achieves better results at the cost of increased computational complexity.

More precisely, Generalized Belief Propagation defines the belief in a region as the product of the local information (the factors in the region), the messages from parent regions, and the messages into descendant regions from parents that are not descendants. Moreover, the message-update rules are obtained by enforcing the marginalization constraints.

Figure 5: Example: Generalized Belief Propagation

Figure 5 shows an example of this region construction: the graph contains four regions, each of which contains four nodes.

When we compute beliefs, we integrate them hierarchically, which increases the computational cost. As shown in Figure 6, when we want to compute the belief for a particular region, we consider nodes that share regions and calculate their energies from their parent regions. As we can see, more regions make the approximation more accurate, but also increase the computational cost.

Figure 6: Example: Hierarchically compute belief in Generalized Belief Propagation
