Lecture 18 Expectation Maximization


Transcript of Lecture 18 Expectation Maximization

Page 1: Lecture 18 Expectation Maximization

Lecture 18: Expectation Maximization

Machine Learning

Page 2: Lecture 18 Expectation Maximization

Last Time

• Expectation Maximization
• Gaussian Mixture Models

Page 3: Lecture 18 Expectation Maximization

Term Project

• Projects may use existing machine learning software
– weka, libsvm, liblinear, mallet, crf++, etc.

• But must experiment with
– Type of data
– Feature Representations
– A variety of training styles – amount of data, classifiers
– Evaluation

Page 4: Lecture 18 Expectation Maximization

Gaussian Mixture Model

• Mixture Models
– How can we combine many probability density functions to fit a more complicated distribution? (The standard GMM density is given below.)
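For reference (not shown in the transcript as extracted), the standard Gaussian mixture density combines K Gaussian components with non-negative mixing weights that sum to one:

```latex
p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k),
\qquad \pi_k \ge 0, \quad \sum_{k=1}^{K} \pi_k = 1 .
```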

Page 5: Lecture 18 Expectation Maximization

Gaussian Mixture Model

• Fitting Multimodal Data

• Clustering

Page 6: Lecture 18 Expectation Maximization

Gaussian Mixture Model

• Expectation Maximization.

• E-step
– Assign points.

• M-step
– Re-estimate model parameters. (A code sketch of both steps follows.)
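Not the lecture's code: a minimal NumPy sketch of one E-step / M-step iteration for a GMM, matching the two steps above. The function and variable names (em_step, weights, mus, covs) are illustrative assumptions, not the lecture's notation.

```python
# A sketch of one EM iteration for a Gaussian Mixture Model (illustrative).
import numpy as np

def gaussian_pdf(X, mu, cov):
    """Multivariate normal density evaluated at each row of X."""
    d = X.shape[1]
    diff = X - mu
    inv_cov = np.linalg.inv(cov)
    norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(cov))
    exponent = -0.5 * np.sum(diff @ inv_cov * diff, axis=1)
    return np.exp(exponent) / norm

def em_step(X, weights, mus, covs):
    """One E-step / M-step update; returns the re-estimated parameters."""
    n, _ = X.shape
    K = len(weights)

    # E-step: soft-assign points (responsibility of component k for point i).
    resp = np.column_stack(
        [weights[k] * gaussian_pdf(X, mus[k], covs[k]) for k in range(K)]
    )
    resp /= resp.sum(axis=1, keepdims=True)

    # M-step: re-estimate model parameters from the soft assignments.
    Nk = resp.sum(axis=0)            # effective number of points per component
    new_weights = Nk / n
    new_mus = (resp.T @ X) / Nk[:, None]
    new_covs = np.stack([
        (resp[:, k, None] * (X - new_mus[k])).T @ (X - new_mus[k]) / Nk[k]
        for k in range(K)
    ])
    return new_weights, new_mus, new_covs
```

Iterating em_step from an initial guess until the parameters (or the likelihood) stop changing is exactly the loop that the later slides generalize.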

Page 7: Lecture 18 Expectation Maximization

Today

• EM Proof
– Jensen’s Inequality

• Clustering sequential data
– EM over HMMs

Page 8: Lecture 18 Expectation Maximization

Gaussian Mixture Models

Page 9: Lecture 18 Expectation Maximization

How can we be sure GMM/EM works?

• We’ve already seen that there are multiple clustering solutions for the same data.
– Non-convex optimization problem

• Can we prove that we’re approaching some maximum, even if many exist?

Page 10: Lecture 18 Expectation Maximization

Bound maximization

• Since we can’t optimize the GMM parameters directly, maybe we can find the maximum of a lower bound.

• Technically: optimize a concave lower bound of the initial non-convex function (concave, since we are maximizing).

Page 11: Lecture 18 Expectation Maximization

EM as a bound maximization problem

• Need to define a function Q(x,Θ) such that
– Q(x,Θ) ≤ l(x,Θ) for all x,Θ
– Q(x,Θ) = l(x,Θ) at a single point
– Q(x,Θ) is concave
(Why these conditions are enough is spelled out below.)
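Not stated explicitly in the transcript, but this is why the three conditions matter: if Q touches l at the current parameters Θt and lower-bounds it everywhere, then maximizing Q cannot decrease l:

```latex
\Theta^{t+1} = \arg\max_{\Theta} Q(x,\Theta)
\quad\Longrightarrow\quad
l(x,\Theta^{t+1}) \;\ge\; Q(x,\Theta^{t+1}) \;\ge\; Q(x,\Theta^{t}) \;=\; l(x,\Theta^{t}).
```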

Page 12: Lecture 18 Expectation Maximization

EM as bound maximization

• Claim:
– For the GMM likelihood, the objective maximized by the GMM MLE update is a concave lower bound.

Page 13: Lecture 18 Expectation Maximization

EM Correctness Proof

• Prove that l(x,Θ) ≥ Q(x,Θ)
– Likelihood function
– Introduce hidden variable (mixtures in GMM)
– A fixed value of θt
– Jensen’s Inequality (coming soon…)
(The derivation these annotations describe is reconstructed below.)
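The slide's equations are not in the transcript; the following is a reconstruction of the standard derivation that the four annotations above describe, with z the hidden mixture assignment and Θt the current, fixed parameter estimate:

```latex
\begin{aligned}
l(x,\Theta) &= \log p(x \mid \Theta)
  && \text{likelihood function} \\
&= \log \sum_{z} p(x, z \mid \Theta)
  && \text{introduce hidden variable } z \\
&= \log \sum_{z} p(z \mid x, \Theta^{t})\,\frac{p(x, z \mid \Theta)}{p(z \mid x, \Theta^{t})}
  && \text{a fixed value of } \Theta^{t} \\
&\ge \sum_{z} p(z \mid x, \Theta^{t})\,\log \frac{p(x, z \mid \Theta)}{p(z \mid x, \Theta^{t})}
  && \text{Jensen's inequality (log is concave)} \\
&=: Q(x,\Theta)
\end{aligned}
```

Equality holds at Θ = Θt, which is exactly the "touching" condition from the previous slide.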

Page 14: Lecture 18 Expectation Maximization

EM Correctness Proof

GMM Maximum Likelihood Estimation
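The equations for this slide are missing from the transcript; for reference, maximizing Q for a GMM yields the standard closed-form updates, where γik are the E-step responsibilities:

```latex
\gamma_{ik} = \frac{\pi_k\,\mathcal{N}(x_i \mid \mu_k, \Sigma_k)}
                   {\sum_{j} \pi_j\,\mathcal{N}(x_i \mid \mu_j, \Sigma_j)},
\qquad N_k = \sum_{i=1}^{n} \gamma_{ik},
```

```latex
\mu_k^{\text{new}} = \frac{1}{N_k}\sum_i \gamma_{ik}\,x_i,\qquad
\Sigma_k^{\text{new}} = \frac{1}{N_k}\sum_i \gamma_{ik}\,
  (x_i - \mu_k^{\text{new}})(x_i - \mu_k^{\text{new}})^{\top},\qquad
\pi_k^{\text{new}} = \frac{N_k}{n}.
```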

Page 15: Lecture 18 Expectation Maximization

The missing link: Jensen’s Inequality

• If f is concave (concave down): (the inequality is reconstructed below)

• Incredibly important tool for dealing with mixture models.

• In EM we apply it with f(x) = log(x).
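The formulas referenced on this slide are not in the transcript; for a concave f and weights λi ≥ 0 with Σi λi = 1, Jensen's inequality reads:

```latex
f\!\left(\sum_i \lambda_i\,x_i\right) \;\ge\; \sum_i \lambda_i\,f(x_i),
\qquad\text{so for } f(x) = \log(x):\qquad
\log \sum_i \lambda_i\,x_i \;\ge\; \sum_i \lambda_i \log x_i .
```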

Page 16: Lecture 18 Expectation Maximization

Generalizing EM from GMM

• Notice that the EM optimization proof never relied on the exact form of the GMM.

• Only the introduction of a hidden variable, z, was required.

• Thus, we can generalize the form of EM to broader types of latent variable models.

Page 17: Lecture 18 Expectation Maximization

General form of EM

• Given a joint distribution over observed and latent variables:

• Want to maximize:

1. Initialize parameters
2. E-Step: Evaluate:

3. M-Step: Re-estimate parameters (based on the expectation of the complete-data log likelihood)

4. Check for convergence of parameters or likelihood
(The standard formulas for these steps are given below.)
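The formulas are missing from the transcript; a standard statement of the general EM procedure, with X the observed data, Z the latent variables, and Θ the parameters, is:

```latex
\text{Goal: maximize } \; p(X \mid \Theta) = \sum_{Z} p(X, Z \mid \Theta).
\qquad \text{E-step: evaluate } \; p(Z \mid X, \Theta^{\text{old}}).
```

```latex
\text{M-step: } \; \Theta^{\text{new}} = \arg\max_{\Theta} \; Q(\Theta, \Theta^{\text{old}}),
\quad \text{where } \;
Q(\Theta, \Theta^{\text{old}}) = \sum_{Z} p(Z \mid X, \Theta^{\text{old}})\,\ln p(X, Z \mid \Theta).
```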

Page 18: Lecture 18 Expectation Maximization

Applying EM to Graphical Models

• Now we have a general form for learning the parameters of latent variable models.
– Take a guess
– Expectation: evaluate the likelihood
– Maximization: re-estimate the parameters
– Check for convergence

Page 19: Lecture 18 Expectation Maximization

Clustering over sequential data

• HMMs

• What if you believe the data is sequential, but you can’t observe the state? (EM for this case is sketched below.)
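Not spelled out in the transcript: applying EM to an HMM is the Baum–Welch algorithm. With hidden state sequence z1…zT and observations x1…xT, the joint distribution factorizes as below, and the E-step quantities are computed with the forward–backward algorithm:

```latex
p(X, Z \mid \Theta) = p(z_1)\prod_{t=2}^{T} p(z_t \mid z_{t-1})\,
                      \prod_{t=1}^{T} p(x_t \mid z_t),
```

```latex
\gamma_t(k) = p(z_t = k \mid X, \Theta^{\text{old}}), \qquad
\xi_t(j, k) = p(z_{t-1} = j,\, z_t = k \mid X, \Theta^{\text{old}}).
```

The M-step then re-estimates the initial-state, transition, and emission parameters from these expected counts.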

Page 20: Lecture 18 Expectation Maximization

Training latent variables in Graphical Models

• Now consider a general Graphical Model with latent variables.

Page 21: Lecture 18 Expectation Maximization

EM on Latent Variable Models

• Guess
– Easy, just assign random values to the parameters.

• E-Step: Evaluate likelihood.
– We can use JTA to evaluate the likelihood.
– And marginalize expected parameter values.

• M-Step: Re-estimate parameters.
– Based on the form of the model, generate new expected parameters (CPTs or parameters of continuous distributions). (A sketch of the CPT update follows.)

• Depending on the topology this can be slow.
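A hypothetical sketch (not from the lecture) of the M-step for one discrete node in a graphical model: given expected counts accumulated from E-step marginals (e.g., computed with JTA), the new CPT is just a row-wise normalization. The function name and array layout are assumptions for illustration.

```python
# Illustrative M-step for a single discrete node with discrete parents.
import numpy as np

def reestimate_cpt(expected_counts: np.ndarray) -> np.ndarray:
    """Turn a (parent-configurations x child-values) table of expected
    counts, gathered from E-step marginals, into a CPT by normalizing
    each row so it sums to 1."""
    totals = expected_counts.sum(axis=1, keepdims=True)
    return expected_counts / totals

# Example: a binary child with two binary parents (4 parent configurations).
counts = np.array([[3.2, 0.8],
                   [1.1, 2.9],
                   [0.5, 3.5],
                   [2.0, 2.0]])
print(reestimate_cpt(counts))   # each row is p(child | parent configuration)
```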

Page 22: Lecture 18 Expectation Maximization

Break