Crash Course on Machine Learning Part VI
Transcript of Crash Course on Machine Learning Part VI
Crash Course on Machine Learning Part VI
Several slides from Derek Hoiem, Ben Taskar, Christopher Bishop, Lise Getoor
Graphical Models
• Representations
  – Bayes Nets
  – Markov Random Fields
  – Factor Graphs
• Inference
  – Exact
    • Variable Elimination
    • BP for trees
  – Approximate inference
    • Sampling
    • Loopy BP
    • Optimization
• Learning (today)
Summary of inference methods
• Discrete
  – Chain (online): BP = forwards; Boyen-Koller (ADF), beam search
  – Low treewidth: VarElim, Jtree, recursive conditioning
  – High treewidth: Loopy BP, mean field, structured variational, EP, graphcuts; Gibbs
• Gaussian
  – Chain (online): BP = Kalman filter
  – Low treewidth: Jtree = sparse linear algebra
  – High treewidth: Loopy BP; Gibbs
• Other
  – Chain (online): EKF, UKF, moment matching (ADF); particle filter
  – Low treewidth: EP, EM, VB, NBP, Gibbs
  – High treewidth: EP, variational EM, VB, NBP, Gibbs
(In the original slide the entries are colour-coded as exact, deterministic approximation, or stochastic approximation.)
Abbreviations: BP = belief propagation, EP = expectation propagation, ADF = assumed density filtering, EKF = extended Kalman filter, UKF = unscented Kalman filter, VarElim = variable elimination, Jtree = junction tree, EM = expectation maximization, VB = variational Bayes, NBP = non-parametric BP.
Slide credit: Kevin Murphy
Variable Elimination
[Figure: the classic burglary network over B, E, A, J, M, used to illustrate the order in which variables are summed out.]
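As a concrete illustration of variable elimination, here is a minimal Python sketch on the burglary network from the slide. The factor representation and all CPT values below are invented for illustration, not taken from the slides.

```python
from itertools import product

class Factor:
    """A factor over named binary variables, stored as {assignment tuple: value}."""
    def __init__(self, variables, table):
        self.vars = tuple(variables)
        self.table = dict(table)   # keys are tuples of 0/1 aligned with self.vars

def multiply(f, g):
    """Pointwise product of two factors over the union of their scopes."""
    out_vars = f.vars + tuple(v for v in g.vars if v not in f.vars)
    out = {}
    for assign in product((0, 1), repeat=len(out_vars)):
        val = dict(zip(out_vars, assign))
        out[assign] = (f.table[tuple(val[v] for v in f.vars)]
                       * g.table[tuple(val[v] for v in g.vars)])
    return Factor(out_vars, out)

def sum_out(f, var):
    """Eliminate `var` from factor f by summing it out."""
    keep = tuple(v for v in f.vars if v != var)
    out = {}
    for assign, value in f.table.items():
        key = tuple(a for v, a in zip(f.vars, assign) if v != var)
        out[key] = out.get(key, 0.0) + value
    return Factor(keep, out)

def eliminate(factors, order):
    """Sum out the variables in `order`; the remaining variables form the query."""
    for var in order:
        related = [f for f in factors if var in f.vars]
        rest = [f for f in factors if var not in f.vars]
        prod = related[0]
        for f in related[1:]:
            prod = multiply(prod, f)
        factors = rest + [sum_out(prod, var)]
    result = factors[0]
    for f in factors[1:]:
        result = multiply(result, f)
    z = sum(result.table.values())
    return Factor(result.vars, {k: v / z for k, v in result.table.items()})

# Illustrative (made-up) CPTs: P(B), P(E), and P(A | B, E).
P_B = Factor(("B",), {(1,): 0.01, (0,): 0.99})
P_E = Factor(("E",), {(1,): 0.02, (0,): 0.98})
p_alarm = {(1, 1): 0.95, (1, 0): 0.94, (0, 1): 0.29, (0, 0): 0.001}
P_A = Factor(("B", "E", "A"),
             {(b, e, a): p if a else 1 - p
              for (b, e), p in p_alarm.items() for a in (0, 1)})

# Marginal P(A): eliminate B and E (the J and M factors would be handled the same way).
print(eliminate([P_B, P_E, P_A], order=["B", "E"]).table)
```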
The Sum-Product Algorithm
Objective:
i. to obtain an efficient, exact inference algorithm for finding marginals;
ii. in situations where several marginals are required, to allow computations to be shared efficiently.
Key idea: Distributive Law
The Sum-Product Algorithm
• Initialization
To compute local marginals:
• Pick an arbitrary node as root
• Compute and propagate messages from the leaf nodes to the root, storing received messages at every node.
• Compute and propagate messages from the root to the leaf nodes, storing received messages at every node.
• Compute the product of received messages at each node for which the marginal is required, and normalize if necessary.
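Below is a small sketch of sum-product message passing specialised to a chain, the simplest tree. The two sweeps correspond to the leaves-to-root and root-to-leaves passes described above; the potentials are made up for illustration.

```python
# Sum-product on a chain x1 - x2 - ... - xn: a forward and a backward pass
# of messages give the exact marginal of every node.
import numpy as np

def chain_marginals(unary, pairwise):
    """unary: list of n arrays (K,); pairwise: list of n-1 arrays (K, K)."""
    n = len(unary)
    fwd = [np.ones(len(u)) for u in unary]   # messages arriving from the left
    bwd = [np.ones(len(u)) for u in unary]   # messages arriving from the right
    for i in range(1, n):
        fwd[i] = pairwise[i - 1].T @ (unary[i - 1] * fwd[i - 1])
    for i in range(n - 2, -1, -1):
        bwd[i] = pairwise[i] @ (unary[i + 1] * bwd[i + 1])
    marginals = []
    for i in range(n):
        b = unary[i] * fwd[i] * bwd[i]       # local factor times incoming messages
        marginals.append(b / b.sum())
    return marginals

# Three binary nodes with a pairwise potential that favours agreement.
unary = [np.array([0.7, 0.3]), np.array([0.5, 0.5]), np.array([0.2, 0.8])]
pairwise = [np.array([[0.9, 0.1], [0.1, 0.9]])] * 2
for i, m in enumerate(chain_marginals(unary, pairwise)):
    print(f"P(x{i+1}) = {m}")
```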
Loopy Belief Propagation
• Sum-Product applied to general graphs (with cycles).
• Initial unit messages are passed across all links, after which messages are passed around until convergence (not guaranteed!).
• Approximate but tractable for large graphs.
• Sometimes works well, sometimes not at all.
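The same update can be iterated on graphs with loops. The following sketch passes initially uniform messages around a three-node cycle (the smallest loopy graph) until they stop changing; the graph and potentials are invented for illustration, and convergence is not guaranteed in general.

```python
# A small loopy belief-propagation sketch for a pairwise MRF given as an edge list.
import numpy as np

def loopy_bp(unary, edges, iters=50, tol=1e-6):
    """unary: {node: (K,) array}; edges: {(i, j): (K, K) array} with i < j."""
    msgs = {}
    for (i, j) in edges:                       # one message per directed edge
        K = len(unary[i])
        msgs[(i, j)] = np.ones(K) / K
        msgs[(j, i)] = np.ones(K) / K
    neighbours = {n: [] for n in unary}
    for (i, j) in edges:
        neighbours[i].append(j)
        neighbours[j].append(i)
    for _ in range(iters):
        delta = 0.0
        for (i, j), psi in edges.items():
            for src, dst, pot in ((i, j, psi), (j, i, psi.T)):
                # product of the local factor and messages from all neighbours except dst
                prod = unary[src].copy()
                for k in neighbours[src]:
                    if k != dst:
                        prod *= msgs[(k, src)]
                new = pot.T @ prod
                new /= new.sum()
                delta = max(delta, np.abs(new - msgs[(src, dst)]).max())
                msgs[(src, dst)] = new
        if delta < tol:
            break
    beliefs = {}
    for n in unary:
        b = unary[n].copy()
        for k in neighbours[n]:
            b *= msgs[(k, n)]
        beliefs[n] = b / b.sum()
    return beliefs

# A 3-node loop: the smallest graph where loopy BP is no longer exact.
unary = {0: np.array([0.6, 0.4]), 1: np.array([0.5, 0.5]), 2: np.array([0.3, 0.7])}
attract = np.array([[0.8, 0.2], [0.2, 0.8]])
edges = {(0, 1): attract, (1, 2): attract, (0, 2): attract}
print(loopy_bp(unary, edges))
```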
Learning Bayesian networks
[Figure: data and prior information are fed to an Inducer, which outputs a Bayesian network over E, R, B, A, C together with its CPTs, e.g. P(A | E,B) with entries such as .9/.1, .7/.3, .8/.2, .99/.01.]
Known Structure -- Complete Data
Sample data over (E, B, A): <Y,N,N>, <Y,Y,Y>, <N,N,Y>, <N,Y,Y>, ..., <N,Y,Y>
[Figure: the Inducer takes the fixed structure over E, B, A with unknown CPT entries ("?") plus the complete data, and fills in the CPT P(A | E,B), e.g. with entries .9/.1, .7/.3, .8/.2, .99/.01.]
• Network structure is specified
  – Inducer needs to estimate parameters
• Data does not contain missing values
Unknown Structure -- Complete Data
Sample data over (E, B, A): <Y,N,N>, <Y,Y,Y>, <N,N,Y>, <N,Y,Y>, ..., <N,Y,Y>
[Figure: the Inducer takes a network with unknown arcs and unknown CPT entries ("?") plus the complete data, and outputs both the structure over E, B, A and the CPT P(A | E,B).]
• Network structure is not specified
  – Inducer needs to select arcs & estimate parameters
• Data does not contain missing values
Known Structure -- Incomplete Data
Sample data over (E, B, A): <Y,N,N>, <Y,?,Y>, <N,N,Y>, <N,Y,?>, ..., <?,Y,Y>
[Figure: the Inducer takes the fixed structure over E, B, A with unknown CPT entries ("?") plus the incomplete data, and estimates the CPT P(A | E,B).]
• Network structure is specified
• Data contains missing values
  – We consider assignments to missing values
Known Structure / Complete Data
• Given a network structure G
  – And a choice of parametric family for P(Xi | Pai)
• Learn parameters for the network
Goal:
• Construct a network that is "closest" to the distribution that generated the data
Learning Parameters for a Bayesian Network
Network: E -> A <- B, A -> C
• Training data has the form:
  D = { <E[1], B[1], A[1], C[1]>, ..., <E[M], B[M], A[M], C[M]> }
Learning Parameters for a Bayesian Network
Network: E -> A <- B, A -> C
• Since we assume i.i.d. samples, the likelihood function is
  L(\Theta : D) = \prod_m P(E[m], B[m], A[m], C[m] : \Theta)
Learning Parameters for a Bayesian Network
Network: E -> A <- B, A -> C
• By definition of the network, each sample factors into a product of the four CPTs:
  L(\Theta : D) = \prod_m P(E[m] : \Theta) \, P(B[m] : \Theta) \, P(A[m] \mid B[m], E[m] : \Theta) \, P(C[m] \mid A[m] : \Theta)
Learning Parameters for a Bayesian Network
Network: E -> A <- B, A -> C
• Rewriting terms, we get
  L(\Theta : D) = \prod_m P(E[m], B[m], A[m], C[m] : \Theta)
               = \prod_m P(E[m] : \Theta) \cdot \prod_m P(B[m] : \Theta) \cdot \prod_m P(A[m] \mid B[m], E[m] : \Theta) \cdot \prod_m P(C[m] \mid A[m] : \Theta)
General Bayesian Networks
Generalizing for any Bayesian network:
  L(\Theta : D) = \prod_m P(x_1[m], \ldots, x_n[m] : \Theta)                (i.i.d. samples)
               = \prod_m \prod_i P(x_i[m] \mid Pa_i[m] : \Theta_i)          (network factorization)
               = \prod_i \prod_m P(x_i[m] \mid Pa_i[m] : \Theta_i)
               = \prod_i L_i(\Theta_i : D)
• The likelihood decomposes according to the structure of the network.
General Bayesian Networks (Cont.)
Decomposition => independent estimation problems
If the parameters for each family are not related, then they can be estimated independently of each other, as in the sketch below.
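A minimal sketch of this decomposition: the toy structure, data, and CPTs below are invented, but the log-likelihood is computed exactly as a sum of one term per family (node plus its parents), so each CPT can be scored, and estimated, on its own.

```python
import numpy as np

# Data: columns E, B, A (binary); rows are i.i.d. samples.
data = np.array([[1, 0, 1],
                 [0, 0, 0],
                 [0, 1, 1],
                 [1, 1, 1],
                 [0, 0, 0]])
columns = {"E": 0, "B": 1, "A": 2}
parents = {"E": [], "B": [], "A": ["E", "B"]}   # structure E -> A <- B

def family_log_likelihood(node, cpt):
    """Sum over samples of log P(node | parents), where cpt maps a tuple of
    parent values to a distribution over the node's values."""
    total = 0.0
    for row in data:
        pa = tuple(row[columns[p]] for p in parents[node])
        total += np.log(cpt[pa][row[columns[node]]])
    return total

# With (made-up) CPTs, the full log-likelihood is just the sum of family terms.
cpts = {"E": {(): np.array([0.6, 0.4])},
        "B": {(): np.array([0.6, 0.4])},
        "A": {(e, b): np.array([0.2, 0.8]) if (e or b) else np.array([0.9, 0.1])
              for e in (0, 1) for b in (0, 1)}}
log_lik = sum(family_log_likelihood(x, cpts[x]) for x in parents)
print(log_lik)
```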
From Binomial to Multinomial
• For example, suppose X can take the values 1, 2, ..., K
• We want to learn the parameters \theta_1, \theta_2, \ldots, \theta_K
Sufficient statistics:
• N_1, N_2, \ldots, N_K - the number of times each outcome is observed
Likelihood function:
  L(\theta : D) = \prod_{k=1}^{K} \theta_k^{N_k}
MLE:
  \hat{\theta}_k = N_k / N
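A one-line illustration of this multinomial MLE, with made-up counts:

```python
import numpy as np

counts = np.array([3, 5, 2])        # N_1, N_2, N_3 observed in the data
theta_mle = counts / counts.sum()   # hat(theta)_k = N_k / N
print(theta_mle)                    # [0.3 0.5 0.2]
```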
Likelihood for Multinomial Networks
• When we assume that P(Xi | Pai) is multinomial, we get further decomposition:
  L_i(\Theta_i : D) = \prod_m P(x_i[m] \mid pa_i[m] : \Theta_i)
                    = \prod_{pa_i} \prod_{m : Pa_i[m] = pa_i} P(x_i[m] \mid pa_i : \Theta_i)
                    = \prod_{pa_i} \prod_{x_i} P(x_i \mid pa_i : \Theta_i)^{N(x_i, pa_i)}
                    = \prod_{pa_i} \prod_{x_i} \theta_{x_i \mid pa_i}^{N(x_i, pa_i)}
Likelihood for Multinomial Networks
• When we assume that P(Xi | Pai) is multinomial, we get further decomposition:
  L(\Theta_{x_i \mid pa_i} : D) = \prod_{pa_i} \prod_{x_i} \theta_{x_i \mid pa_i}^{N(x_i, pa_i)}
• For each value pa_i of the parents of Xi we get an independent multinomial problem
• The MLE is
  \hat{\theta}_{x_i \mid pa_i} = N(x_i, pa_i) / N(pa_i)
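The same computation per family, sketched on toy samples: for every parent configuration we simply normalize the counts N(xi, pai) by N(pai).

```python
from collections import Counter, defaultdict

# Samples of (E, B, A); we estimate the CPT P(A | E, B) by maximum likelihood.
samples = [(1, 0, 1), (0, 0, 0), (0, 1, 1), (1, 1, 1), (0, 0, 0), (0, 1, 0)]
joint = Counter(((e, b), a) for e, b, a in samples)   # N(x_i, pa_i)
parent = Counter((e, b) for e, b, _ in samples)       # N(pa_i)

cpt = defaultdict(dict)
for (pa, a), n in joint.items():
    cpt[pa][a] = n / parent[pa]                       # hat(theta)_{a|pa}
print(dict(cpt))   # e.g. cpt[(0, 1)] = {1: 0.5, 0: 0.5}
```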
Bayesian Approach: Dirichlet Priors
• Recall that the likelihood function is
  L(\theta : D) = \prod_{k=1}^{K} \theta_k^{N_k}
• A Dirichlet prior with hyperparameters \alpha_1, \ldots, \alpha_K is defined as
  P(\theta) \propto \prod_{k=1}^{K} \theta_k^{\alpha_k - 1}   for legal \theta_1, \ldots, \theta_K
• Then the posterior has the same form, with hyperparameters \alpha_1 + N_1, \ldots, \alpha_K + N_K:
  P(\theta \mid D) \propto P(\theta) P(D \mid \theta) \propto \prod_{k=1}^{K} \theta_k^{\alpha_k - 1} \prod_{k=1}^{K} \theta_k^{N_k} = \prod_{k=1}^{K} \theta_k^{\alpha_k + N_k - 1}
Dirichlet Priors (cont.)
• We can compute the prediction on a new event in closed form:
• If P(\theta) is Dirichlet with hyperparameters \alpha_1, \ldots, \alpha_K, then
  P(X[1] = k) = \int \theta_k P(\theta) \, d\theta = \alpha_k / \sum_{\ell} \alpha_\ell
• Since the posterior is also Dirichlet, we get
  P(X[M+1] = k \mid D) = \int \theta_k P(\theta \mid D) \, d\theta = (\alpha_k + N_k) / \sum_{\ell} (\alpha_\ell + N_\ell)
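A small sketch of this closed-form prediction, with made-up hyperparameters and counts:

```python
import numpy as np

alpha = np.array([1.0, 1.0, 1.0])   # Dirichlet hyperparameters (uniform prior)
counts = np.array([3, 5, 2])        # N_k observed in the data
predictive = (alpha + counts) / (alpha.sum() + counts.sum())
print(predictive)                   # a smoothed version of the MLE [0.3, 0.5, 0.2]
```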
Bayesian Prediction (cont.)
• Given these observations, we can compute the posterior for each multinomial \theta_{X_i \mid pa_i} independently
  – The posterior is Dirichlet with parameters \alpha(X_i = 1 \mid pa_i) + N(X_i = 1 \mid pa_i), \ldots, \alpha(X_i = k \mid pa_i) + N(X_i = k \mid pa_i)
• The predictive distribution is then represented by the parameters
  \tilde{\theta}_{x_i \mid pa_i} = \frac{\alpha(x_i, pa_i) + N(x_i, pa_i)}{\alpha(pa_i) + N(pa_i)}
Learning Parameters: Summary
• Estimation relies on sufficient statistics
  – For multinomials these are of the form N(xi, pai)
  – Parameter estimation:
    MLE:                  \hat{\theta}_{x_i \mid pa_i} = \frac{N(x_i, pa_i)}{N(pa_i)}
    Bayesian (Dirichlet): \tilde{\theta}_{x_i \mid pa_i} = \frac{\alpha(x_i, pa_i) + N(x_i, pa_i)}{\alpha(pa_i) + N(pa_i)}
• Bayesian methods also require a choice of priors
• Both MLE and Bayesian estimates are asymptotically equivalent and consistent
• Both can be implemented in an online manner by accumulating sufficient statistics
Learning Structure from Complete Data
Benefits of Learning Structure
• Efficient learning -- more accurate models with less data
  – Compare: P(A) and P(B) vs. joint P(A,B)
• Discover structural properties of the domain
  – Ordering of events
  – Relevance
• Identifying independencies => faster inference
• Predict effect of actions
  – Involves learning causal relationships among variables
Why Struggle for Accurate Structure?
[Figure: three versions of the Earthquake / Burglary / Alarm Set / Sound network: the true structure, one with an added arc, and one with a missing arc.]
Adding an arc:
• Increases the number of parameters to be fitted
• Wrong assumptions about causality and domain structure
Missing an arc:
• Cannot be compensated for by accurate fitting of parameters
• Also misses causality and domain structure
Approaches to Learning Structure
• Constraint based
  – Perform tests of conditional independence
  – Search for a network that is consistent with the observed dependencies and independencies
• Pros & Cons
  + Intuitive, follows closely the construction of BNs
  + Separates structure learning from the form of the independence tests
  – Sensitive to errors in individual tests
Approaches to Learning Structure
• Score based
  – Define a score that evaluates how well the (in)dependencies in a structure match the observations
  – Search for a structure that maximizes the score
• Pros & Cons
  + Statistically motivated
  + Can make compromises
  + Takes the structure of conditional probabilities into account
  – Computationally hard
Likelihood Score for Structures
First cut approach:
– Use the likelihood function
  L(G, \Theta_G : D) = \prod_m P(x_1[m], \ldots, x_n[m] : G, \Theta_G)
                     = \prod_m \prod_i P(x_i[m] \mid Pa_i^G[m] : G, \Theta_{i,G})
• Since we know how to maximize the parameters, from now on we assume
  L(G : D) = \max_{\Theta_G} L(G, \Theta_G : D)
Likelihood Score for Structure (cont.)
Rearranging terms:
  \log L(G : D) = M \sum_i I(X_i ; Pa_i^G) - M \sum_i H(X_i)
where
• H(X) is the entropy of X
• I(X;Y) is the mutual information between X and Y
  – I(X;Y) measures how much "information" each variable provides about the other
  – I(X;Y) >= 0
  – I(X;Y) = 0 iff X and Y are independent
  – I(X;Y) = H(X) iff X is totally predictable given Y
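A sketch of this score computed from empirical entropies and mutual information, on toy data and a toy structure. All quantities below are plug-in (empirical) estimates in nats, and the data and structure are invented for illustration.

```python
import numpy as np
from collections import Counter

samples = [(1, 0, 1), (0, 0, 0), (0, 1, 1), (1, 1, 1), (0, 0, 0), (0, 1, 0)]
M = len(samples)

def entropy(cols):
    """Empirical entropy (in nats) of the joint distribution over `cols`."""
    counts = Counter(tuple(s[c] for c in cols) for s in samples)
    p = np.array(list(counts.values())) / M
    return -(p * np.log(p)).sum()

def mutual_information(child, parents):
    """Empirical I(child ; parents); zero when the parent set is empty."""
    if not parents:
        return 0.0
    return entropy([child]) + entropy(parents) - entropy([child] + parents)

structure = {0: [], 1: [], 2: [0, 1]}     # columns (E, B, A), structure E -> A <- B
score = M * sum(mutual_information(x, pa) - entropy([x]) for x, pa in structure.items())
print(score)   # equals the maximised log-likelihood of this structure (in nats)
```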
Likelihood Score for Structure (cont.)
Good news:
• Intuitive explanation of the likelihood score:
  – The larger the dependency of each variable on its parents, the higher the score
• Likelihood as a compromise among dependencies, based on their strength
Likelihood Score for Structure (cont.)
Bad news:
• Adding arcs always helps
  – I(X;Y) <= I(X;{Y,Z})
  – Maximal score is attained by fully connected networks
  – Such networks can overfit the data --- the parameters capture the noise in the data
Avoiding Overfitting
"Classic" issue in learning. Approaches:
• Restricting the hypothesis space
  – Limits the overfitting capability of the learner
  – Example: restrict # of parents or # of parameters
• Minimum description length
  – Description length measures complexity
  – Prefer models that compactly describe the training data
• Bayesian methods
  – Average over all possible parameter values
  – Use prior knowledge
Bayesian Inference
• Bayesian reasoning --- compute the expectation over the unknown G
  P(x[M+1] \mid D) = \sum_G P(x[M+1] \mid G, D) \, P(G \mid D)
• Assumption: the Gs are mutually exclusive and exhaustive
• We know how to compute P(x[M+1] | G, D)
  – Same as prediction with a fixed structure
• How do we compute P(G | D)?
Posterior Score
Using Bayes' rule:
  P(G \mid D) = \frac{P(D \mid G) \, P(G)}{P(D)}
where
• P(D | G) is the marginal likelihood
• P(G) is the prior over structures
• P(D) is the probability of the data; it is the same for all structures G, so it can be ignored when comparing structures
Marginal Likelihood
• By introduction of variables, we have that
  P(D \mid G) = \int P(D \mid G, \Theta) \, P(\Theta \mid G) \, d\Theta
  where P(D | G, \Theta) is the likelihood and P(\Theta | G) is the prior over parameters
• This integral measures sensitivity to the choice of parameters
Marginal Likelihood for General Network
The marginal likelihood has the form:
  P(D \mid G) = \prod_i \prod_{pa_i^G} \frac{\Gamma(\alpha(pa_i^G))}{\Gamma(\alpha(pa_i^G) + N(pa_i^G))} \prod_{x_i} \frac{\Gamma(\alpha(x_i, pa_i^G) + N(x_i, pa_i^G))}{\Gamma(\alpha(x_i, pa_i^G))}
where
• N(..) are the counts from the data
• \alpha(..) are the hyperparameters for each family given G
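A sketch of one family's contribution to this product (one node under one parent configuration), using log-Gamma functions for numerical stability. The counts and hyperparameters are illustrative, and scipy is assumed to be available; the full score sums this term over all families and parent configurations.

```python
import numpy as np
from scipy.special import gammaln

def family_log_marginal(alpha, counts):
    """log of the multinomial likelihood integrated against a Dirichlet(alpha) prior:
    log Gamma(sum alpha) - log Gamma(sum alpha + sum N)
      + sum_k [log Gamma(alpha_k + N_k) - log Gamma(alpha_k)]."""
    alpha, counts = np.asarray(alpha, float), np.asarray(counts, float)
    return (gammaln(alpha.sum()) - gammaln(alpha.sum() + counts.sum())
            + np.sum(gammaln(alpha + counts) - gammaln(alpha)))

# One parent configuration of some X_i with 3 values, with a uniform Dirichlet prior.
print(family_log_marginal(alpha=[1.0, 1.0, 1.0], counts=[3, 5, 2]))
```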
Dirichlet Marginal Likelihood
For the sequence of values of Xi observed when Xi's parents have a particular value, the marginal likelihood is a single Dirichlet integral (one factor of the product on the previous slide).

Bayesian Score
Theorem: If the prior P(\Theta \mid G) is "well-behaved", then asymptotically
  \log P(D \mid G) = \log L(G : D) - \frac{\log M}{2} \dim(G) + O(1)
where dim(G) is the number of independent parameters in G.
Asymptotic Behavior: Consequences
• The Bayesian score is consistent
  – As M -> infinity, the "true" structure G* maximizes the score (almost surely)
  – For sufficiently large M, the maximal scoring structures are equivalent to G*
• Observed data eventually overrides prior information
  – Assuming that the prior assigns positive probability to all cases
Asymptotic Behavior
• This score can also be justified by the Minimal Description Length (MDL) principle
• This equation explicitly shows the tradeoff between
  – Fitness to data --- the likelihood term
  – Penalty for complexity --- the regularization term
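A minimal sketch of this BIC-style score with illustrative numbers; in practice the likelihood term and the parameter count would come from the fitted network.

```python
import numpy as np

def bic_score(max_log_lik, n_params, n_samples):
    """Fitness-to-data term minus a complexity penalty that grows with dim(G)."""
    return max_log_lik - 0.5 * np.log(n_samples) * n_params

print(bic_score(max_log_lik=-412.7, n_params=9, n_samples=500))
```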
Priors
• Over network structure
  – Minor role
  – Uniform
• Over parameters
  – A separate prior for each structure is not feasible
  – Fixed Dirichlet distributions
  – BDe prior
BDe Score
Possible solution: the BDe prior
• Represent the prior using two elements M0, B0
  – M0 - the equivalent sample size
  – B0 - a network representing the prior probability of events

BDe Score
Intuition: M0 prior examples distributed by B0
• Set \alpha(x_i, pa_i^G) = M0 \cdot P(x_i, pa_i^G \mid B0)
  – Compute P(x_i, pa_i^G | B0) using standard inference procedures
• Such priors have desirable theoretical properties
  – Equivalent networks are assigned the same score
Scores -- Summary
• Likelihood, MDL, and (log) BDe scores all have the decomposable form
  Score(G : D) = \sum_i Score(X_i \mid Pa_i^G : D)
• BDe requires assessing a prior network. It can naturally incorporate prior knowledge and previous experience
• BDe is consistent and asymptotically equivalent (up to a constant) to MDL
• All are score-equivalent
  – If G is equivalent to G', then Score(G) = Score(G')
Optimization Problem
Input:
– Training data
– Scoring function (including priors, if needed)
– Set of possible structures
  • Including prior knowledge about structure
Output:
– A network (or networks) that maximizes the score
Key Property:
– Decomposability: the score of a network is a sum of terms.
Heuristic Search
• Hard problem, with tractable solutions only for
  – Tree structures
  – Known orderings
• Otherwise, resort to heuristic search
• Define a search space:
  – nodes are possible structures
  – edges denote adjacency of structures
• Traverse this space looking for high-scoring structures
• Search techniques:
  – Greedy hill-climbing
  – Best-first search
  – Simulated annealing
  – ...
Heuristic Search (cont.)
• Typical operations on the current network over S, C, E, D:
  – Add an arc (e.g. C -> D)
  – Delete an arc (e.g. C -> E)
  – Reverse an arc (e.g. C -> E becomes E -> C)
[Figure: a four-node network over S, C, E, D shown before and after each operation.]
Exploiting Decomposability in Local Search
• Caching: to update the score after a local change, we only need to re-score the families that were changed in the last move
[Figure: the S, C, E, D network after each candidate move, with only the affected families re-scored.]
Greedy Hill-Climbing
• Simplest heuristic local search (see the sketch after this slide)
  – Start with a given network
    • empty network
    • best tree
    • a random network
  – At each iteration
    • Evaluate all possible changes
    • Apply the change that leads to the best improvement in score
    • Reiterate
  – Stop when no modification improves the score
• Each step requires evaluating approximately n new changes
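A compact hill-climbing sketch along these lines. The scoring function here is a toy stand-in (in practice it would be a decomposable score such as BIC or BDe), and the acyclicity check and move generation are deliberately naive.

```python
from itertools import permutations

def is_acyclic(nodes, edges):
    """True iff the directed graph (set of (parent, child) pairs) has no cycle."""
    children = {n: {t for s, t in edges if s == n} for n in nodes}
    def reachable(a, seen):
        for c in children[a]:
            if c not in seen:
                seen.add(c)
                reachable(c, seen)
        return seen
    return all(s not in reachable(t, set()) for s, t in edges)

def hill_climb(nodes, score, max_iters=100):
    edges = set()                                  # start from the empty network
    best = score(nodes, edges)
    for _ in range(max_iters):
        candidates = []
        for s, t in permutations(nodes, 2):        # single-edge add / delete / reverse
            if (s, t) in edges:
                candidates.append(edges - {(s, t)})
                candidates.append(edges - {(s, t)} | {(t, s)})
            else:
                candidates.append(edges | {(s, t)})
        legal = [e for e in candidates if is_acyclic(nodes, e)]
        top = max(legal, key=lambda e: score(nodes, e))
        if score(nodes, top) <= best:
            return edges, best                     # local maximum (or plateau)
        edges, best = top, score(nodes, top)
    return edges, best

# A toy decomposable score: each possible arc has a fixed gain and every arc
# pays a complexity penalty (a stand-in for a real BIC/BDe family score).
gain = {("E", "A"): 3.0, ("B", "A"): 2.5, ("A", "E"): 1.0}
def toy_score(nodes, edges):
    return sum(gain.get(e, 0.0) for e in edges) - 0.8 * len(edges)

print(hill_climb(["E", "B", "A"], toy_score))      # finds E -> A and B -> A
```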
Greedy Hill-Climbing: Possible Pitfalls
• Greedy hill-climbing can get stuck in:
  – Local maxima:
    • All one-edge changes reduce the score
  – Plateaus:
    • Some one-edge changes leave the score unchanged
    • Happens because equivalent networks receive the same score and are neighbors in the search space
• Both occur during structure search
• Standard heuristics can escape both
  – Random restarts
  – TABU search
Search: Summary
• Discrete optimization problem
• In general, NP-hard
  – Need to resort to heuristic search
  – In practice, search is relatively fast (~100 vars in ~10 min), thanks to:
    • Decomposability
    • Sufficient statistics
• In some cases, we can reduce the search problem to an easy optimization problem
  – Example: learning trees
Learning in Graphical Models
Structure Observability Method
Known Full ML or MAP estimation
Known Partial EM algorithm
Unknown Full Model selection or model averaging
Unknown Partial EM + model sel. or model aver.
Graphical Models Summary
• Representations
  – Graphs are a cool way to put constraints on distributions, so that you can say lots of stuff without even looking at the numbers!
• Inference
  – Graphical models let you compute all kinds of different probabilities efficiently
• Learning
  – You can even learn them auto-magically!
Graphical Models
• Pros and cons
  – Very powerful if the dependency structure is sparse and known
  – Modular (especially Bayesian networks)
  – Flexible representation (but not as flexible as "feature passing")
  – Many inference methods
  – Recent developments in learning Markov network parameters, but still tricky