
Sequence Modelling with Features: Linear-Chain Conditional Random Fields

COMP-599

Oct 3, 2016

Outline

Sequence modelling: review and miscellaneous notes

Hidden Markov models: shortcomings

Generative vs. discriminative models

Linear-chain CRFs

• Inference and learning algorithms with linear-chain CRFs


Hidden Markov Model

The graph specifies how the joint probability decomposes:

$P(\mathbf{O}, \mathbf{Q}) = P(Q_1) \prod_{t=1}^{T-1} P(Q_{t+1} \mid Q_t) \prod_{t=1}^{T} P(O_t \mid Q_t)$


[Figure: the HMM as a graphical model, with a chain of hidden states $Q_1, \dots, Q_5$, each emitting an observation $O_1, \dots, O_5$.]

$P(Q_1)$: initial state probability

$P(Q_{t+1} \mid Q_t)$: state transition probabilities

$P(O_t \mid Q_t)$: emission probabilities
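As a concrete illustration of this factorization, here is a minimal numpy sketch (mine, not from the lecture) that multiplies out the three kinds of probabilities for a toy model; the names pi, A, and B for the parameter tables are my own.

```python
import numpy as np

def hmm_joint_prob(pi, A, B, states, obs):
    """P(O, Q) = P(Q_1) * prod_{t=1}^{T-1} P(Q_{t+1} | Q_t) * prod_{t=1}^{T} P(O_t | Q_t).

    pi: (N,) initial state probabilities
    A:  (N, N) transitions, A[i, j] = P(Q_{t+1} = j | Q_t = i)
    B:  (N, V) emissions, B[j, o] = P(O_t = o | Q_t = j)
    states, obs: integer-encoded sequences of equal length T
    """
    p = pi[states[0]] * B[states[0], obs[0]]
    for t in range(1, len(states)):
        p *= A[states[t - 1], states[t]] * B[states[t], obs[t]]
    return p

# Toy example: 2 hidden states, 3 observation symbols
pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(hmm_joint_prob(pi, A, B, states=[0, 0, 1], obs=[0, 1, 2]))
```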

EM and Baum-Welch

EM finds a local optimum in $P(\mathbf{O} \mid \theta)$.

You can show that after each step of EM:

$P(\mathbf{O} \mid \theta_{k+1}) \ge P(\mathbf{O} \mid \theta_k)$

However, this is not necessarily a global optimum.


Dealing with Local Optima

Random restarts

• Train multiple models with different random initializations

• Model selection on a development set to pick the best one

Biased initialization

• Bias your initialization using some external source of knowledge (e.g., external corpus counts, a clustering procedure, or expert knowledge about the domain)

• Further training will hopefully improve results


Caveats

Baum-Welch with no labelled data generally gives poor results, at least for linguistic structure (~40% accuracy, according to Johnson (2007)).

Semi-supervised learning: combine small amounts of labelled data with a larger corpus of unlabelled data.


In Practice

Per-token (i.e., per-word) accuracy results on the WSJ corpus:

Most frequent tag baseline: ~90–94%
HMMs (Brants, 2000): 96.5%
Stanford tagger (Manning, 2011): 97.32%


Other Sequence Modelling Tasks

Chunking (a.k.a. shallow parsing)

• Find syntactic chunks in the sentence; not hierarchical

[NP The chicken] [V crossed] [NP the road] [P across] [NP the lake].

Named-Entity Recognition (NER)

• Identify elements in text that correspond to some high-level categories (e.g., PERSON, ORGANIZATION, LOCATION)

[ORG McGill University] is located in [LOC Montreal, Canada].

• Problem: need to detect spans of multiple words


First Attempt

Simply label words by their category

ORG ORG ORG - ORG - - - LOC

McGill, UQAM, UdeM, and Concordia are located in Montreal.

What is the problem with this scheme?


IOB Tagging

Label whether a word is inside, outside, or at the beginning of a span as well.

For n categories, 2n+1 total tags.

B-ORG I-ORG O O O B-LOC I-LOC

McGill University is located in Montreal, Canada

B-ORG B-ORG B-ORG O B-ORG O O O B-LOC

McGill, UQAM, UdeM, and Concordia are located in Montreal.
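As a small illustration, here is a sketch (mine, not from the slides) that converts labelled spans into IOB tags; the span format (start, end, category) and the function name are assumptions for illustration.

```python
def spans_to_iob(tokens, spans):
    """Convert (start, end, category) spans (end exclusive) into IOB tags:
    B-CAT for the first token of a span, I-CAT for the rest, O otherwise."""
    tags = ["O"] * len(tokens)
    for start, end, cat in spans:
        tags[start] = "B-" + cat
        for i in range(start + 1, end):
            tags[i] = "I-" + cat
    return tags

tokens = ["McGill", "University", "is", "located", "in", "Montreal,", "Canada"]
print(spans_to_iob(tokens, [(0, 2, "ORG"), (5, 7, "LOC")]))
# ['B-ORG', 'I-ORG', 'O', 'O', 'O', 'B-LOC', 'I-LOC']
```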



Sequence Modelling with Linguistic Features

Shortcomings of Standard HMMs

How do we add more features to HMMs?

Might be useful for POS tagging:

• Word position within the sentence (1st, 2nd, last…)

• Capitalization

• Word prefixes and suffixes (-tion, -ed, -ly, -er, re-, de-)

• Features that depend on more than the current word or the previous words


Possible to Do with HMMs

Add more emissions at each timestep

Clunky

Is there a better way to do this?


[Figure: a single state $Q_1$ emitting four observations $O_{1,1}, O_{1,2}, O_{1,3}, O_{1,4}$: word identity, capitalization, prefix feature, word position.]

Discriminative Models

HMMs are generative models

• Build a model of the joint probability distribution $P(\mathbf{O}, \mathbf{Q})$

• Let's rename the variables: $X$ for the observation sequence, $Y$ for the state sequence

• Generative models specify $P(X, Y; \theta_{gen})$

If we are only interested in POS tagging, we can instead train a discriminative model

• The model specifies $P(Y \mid X; \theta_{disc})$

• This is now a task-specific model for sequence labelling; we cannot use it to generate new samples of word and POS sequences


Generative or Discriminative?

Naive Bayes

$P(y \mid \vec{x}) = P(y) \prod_i P(x_i \mid y) \,/\, P(\vec{x})$

Logistic regression

$P(y \mid \vec{x}) = \frac{1}{Z} e^{a_1 x_1 + a_2 x_2 + \dots + a_n x_n + b}$


Discriminative Sequence Model

The parallel to an HMM in the discriminative case: linear-chain conditional random fields (linear-chain CRFs) (Lafferty et al., 2001)

𝑃 π‘Œ 𝑋 =1

𝑍 𝑋exp

𝑑

π‘˜

πœƒπ‘˜π‘“π‘˜(𝑦𝑑 , π‘¦π‘‘βˆ’1, π‘₯𝑑)

Z(X) is a normalization constant:

$Z(X) = \sum_{\mathbf{y}} \exp \left( \sum_t \sum_k \theta_k f_k(y_t, y_{t-1}, x_t) \right)$


(In both formulas, the inner sum is over all features $k$ and the outer sum is over all time-steps $t$; in $Z(X)$, the outermost sum is over all possible sequences of hidden states $\mathbf{y}$.)
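To make the definition concrete, here is a minimal sketch (mine, not from the lecture) that computes the unnormalized score $\exp(\sum_t \sum_k \theta_k f_k(y_t, y_{t-1}, x_t))$ for one tag sequence and normalizes it by brute force over all tag sequences; the feature functions are plain Python callables, and passing None as the previous tag at the first position is my own convention. In practice $Z(X)$ is computed with the forward algorithm shown later.

```python
import itertools
import math

def crf_score(theta, features, x, y):
    """exp of sum over time-steps t and features k of theta_k * f_k(y_t, y_{t-1}, x_t).
    At t = 1 the previous tag is passed as None."""
    total = 0.0
    for t in range(len(x)):
        y_prev = y[t - 1] if t > 0 else None
        total += sum(th * f(y[t], y_prev, x[t]) for th, f in zip(theta, features))
    return math.exp(total)

def crf_prob(theta, features, x, y, tagset):
    """P(Y | X): normalize the score by summing over all tag sequences (brute force)."""
    Z = sum(crf_score(theta, features, x, list(yy))
            for yy in itertools.product(tagset, repeat=len(x)))
    return crf_score(theta, features, x, y) / Z

# Toy example with two hypothetical indicator features
features = [
    lambda y_t, y_prev, x_t: 1.0 if (y_t == "DT" and x_t == "the") else 0.0,
    lambda y_t, y_prev, x_t: 1.0 if (y_prev == "DT" and y_t == "NN") else 0.0,
]
theta = [2.0, 1.5]
print(crf_prob(theta, features, ["the", "dog"], ["DT", "NN"], tagset=["DT", "NN"]))
```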

Intuition

Standard HMM: a product of probabilities; these probabilities are defined over the identity of the states and words

• Transition from state DT to NN: $P(y_{t+1} = NN \mid y_t = DT)$

• Emit word the from state DT: $P(x_t = the \mid y_t = DT)$

Linear-chain CRF: replace the products by numbers that are NOT probabilities, but linear combinations of weights and feature values.


Features in CRFs

Standard HMM probabilities as CRF features:

• Transition from state DT to state NN: $f_{DT \to NN}(y_t, y_{t-1}, x_t) = \mathbf{1}(y_{t-1} = DT)\,\mathbf{1}(y_t = NN)$

• Emit the from state DT: $f_{DT \to the}(y_t, y_{t-1}, x_t) = \mathbf{1}(y_t = DT)\,\mathbf{1}(x_t = the)$

• Initial state is DT: $f_{DT}(y_1, x_1) = \mathbf{1}(y_1 = DT)$

Indicator function: $\mathbf{1}(condition) = 1$ if $condition$ is true, $0$ otherwise.
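Written as code, these HMM-style indicator features are just small predicate functions (a sketch of mine; the function names and the use of None for "no previous tag" at the first position are assumptions). They can be plugged into a scorer like the one sketched above.

```python
def f_DT_to_NN(y_t, y_prev, x_t):
    """Transition feature: previous tag is DT and current tag is NN."""
    return 1.0 if (y_prev == "DT" and y_t == "NN") else 0.0

def f_DT_emits_the(y_t, y_prev, x_t):
    """Emission feature: current tag is DT and current word is 'the'."""
    return 1.0 if (y_t == "DT" and x_t == "the") else 0.0

def f_init_DT(y_t, y_prev, x_t):
    """Initial-state feature: the first tag is DT (y_prev is None at t = 1)."""
    return 1.0 if (y_prev is None and y_t == "DT") else 0.0

print(f_DT_emits_the("DT", None, "the"))  # 1.0
print(f_init_DT("DT", None, "the"))       # 1.0
print(f_DT_to_NN("NN", "DT", "dog"))      # 1.0
```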


Features in CRFs

Additional features that may be useful:

• Word is capitalized: $f_{cap}(y_t, y_{t-1}, x_t) = \mathbf{1}(y_t = \,?\,)\,\mathbf{1}(x_t \text{ is capitalized})$

• Word ends in -ed: $f_{-ed}(y_t, y_{t-1}, x_t) = \mathbf{1}(y_t = \,?\,)\,\mathbf{1}(x_t \text{ ends with } ed)$

• Exercise: propose more features


Inference with LC-CRFs

Dynamic programming still works: modify the forward and the Viterbi algorithms to work with the weight-feature products.


Forward algorithm: the HMM computes $P(X \mid \theta)$; the LC-CRF computes $Z(X)$

Viterbi algorithm: the HMM computes $\arg\max_Y P(X, Y \mid \theta)$; the LC-CRF computes $\arg\max_Y P(Y \mid X, \theta)$

Forward Algorithm for HMMs

Create a trellis $\alpha_i(t)$ for $i = 1 \dots N$, $t = 1 \dots T$

$\alpha_j(1) = \pi_j b_j(O_1)$ for $j = 1 \dots N$

for $t = 2 \dots T$:
  for $j = 1 \dots N$:
    $\alpha_j(t) = \sum_{i=1}^{N} \alpha_i(t-1)\, a_{ij}\, b_j(O_t)$

$P(\mathbf{O} \mid \theta) = \sum_{j=1}^{N} \alpha_j(T)$

Runtime: $O(N^2 T)$
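A minimal numpy sketch of this recurrence (mine, not from the slides; the parameter tables pi, A, and B are named by me):

```python
import numpy as np

def hmm_forward(pi, A, B, obs):
    """Forward algorithm for an HMM: returns P(O | theta) in O(N^2 T) time.

    pi: (N,) initial probabilities; A: (N, N) transitions, A[i, j] = P(j | i);
    B: (N, V) emissions; obs: integer-encoded observation sequence of length T.
    """
    N, T = len(pi), len(obs)
    alpha = np.zeros((N, T))
    alpha[:, 0] = pi * B[:, obs[0]]                   # alpha_j(1) = pi_j * b_j(O_1)
    for t in range(1, T):
        # alpha_j(t) = sum_i alpha_i(t-1) * a_ij * b_j(O_t)
        alpha[:, t] = (alpha[:, t - 1] @ A) * B[:, obs[t]]
    return alpha[:, -1].sum()                         # P(O | theta) = sum_j alpha_j(T)

pi = np.array([0.6, 0.4])
A = np.array([[0.7, 0.3], [0.4, 0.6]])
B = np.array([[0.5, 0.4, 0.1], [0.1, 0.3, 0.6]])
print(hmm_forward(pi, A, B, obs=[0, 1, 2]))
```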


Forward Algorithm for LC-CRFs

Create a trellis $\alpha_i(t)$ for $i = 1 \dots N$, $t = 1 \dots T$

$\alpha_j(1) = \exp \left( \sum_k \theta_k^{init} f_k^{init}(y_1 = j, x_1) \right)$ for $j = 1 \dots N$

for $t = 2 \dots T$:
  for $j = 1 \dots N$:
    $\alpha_j(t) = \sum_{i=1}^{N} \alpha_i(t-1) \exp \left( \sum_k \theta_k f_k(y_t = j, y_{t-1} = i, x_t) \right)$

$Z(X) = \sum_{j=1}^{N} \alpha_j(T)$

Runtime: $O(N^2 T)$

Having $Z(X)$ allows us to compute $P(Y \mid X)$


Transition and emission probabilities are replaced by exponentials of weighted sums of features.
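A sketch of the same recurrence in numpy (mine, not from the lecture). It packages the exponentiated weighted feature sums into potential tables psi0[j] (first position) and psi[t, i, j] (transition into position t); those table names, the use of a single weight vector for both initial and transition features, and treating the previous tag as None at the first position are my own conventions.

```python
import numpy as np

def crf_potentials(theta, features, x, tags):
    """psi0[j] = exp(sum_k theta_k f_k(y_1 = tags[j], None, x_1)) and
    psi[t, i, j] = exp(sum_k theta_k f_k(y_t = tags[j], y_{t-1} = tags[i], x_t))."""
    N, T = len(tags), len(x)
    psi0 = np.array([np.exp(sum(th * f(tags[j], None, x[0])
                                for th, f in zip(theta, features)))
                     for j in range(N)])
    psi = np.ones((T, N, N))
    for t in range(1, T):
        for i in range(N):
            for j in range(N):
                s = sum(th * f(tags[j], tags[i], x[t]) for th, f in zip(theta, features))
                psi[t, i, j] = np.exp(s)
    return psi0, psi

def crf_forward(psi0, psi):
    """Forward algorithm for a linear-chain CRF: returns Z(X) in O(N^2 T) time."""
    T, N, _ = psi.shape
    alpha = np.zeros((N, T))
    alpha[:, 0] = psi0                          # alpha_j(1)
    for t in range(1, T):
        alpha[:, t] = alpha[:, t - 1] @ psi[t]  # alpha_j(t) = sum_i alpha_i(t-1) psi[t,i,j]
    return alpha[:, -1].sum()                   # Z(X)

# Tiny example with one hypothetical feature: capitalized word gets tag "NE"
features = [lambda y_t, y_prev, x_t: 1.0 if (y_t == "NE" and x_t[0].isupper()) else 0.0]
psi0, psi = crf_potentials([1.0], features, ["Montreal", "is"], tags=["NE", "O"])
print(crf_forward(psi0, psi))   # Z(X)
```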

Viterbi Algorithm for HMMs

Create a trellis $\delta_i(t)$ for $i = 1 \dots N$, $t = 1 \dots T$

$\delta_j(1) = \pi_j b_j(O_1)$ for $j = 1 \dots N$

for $t = 2 \dots T$:
  for $j = 1 \dots N$:
    $\delta_j(t) = \max_i \delta_i(t-1)\, a_{ij}\, b_j(O_t)$

Take $\max_i \delta_i(T)$

Runtime: $O(N^2 T)$


Viterbi Algorithm for LC-CRFs

Create a trellis $\delta_i(t)$ for $i = 1 \dots N$, $t = 1 \dots T$

$\delta_j(1) = \exp \left( \sum_k \theta_k^{init} f_k^{init}(y_1 = j, x_1) \right)$ for $j = 1 \dots N$

for $t = 2 \dots T$:
  for $j = 1 \dots N$:
    $\delta_j(t) = \max_i \delta_i(t-1) \exp \left( \sum_k \theta_k f_k(y_t = j, y_{t-1} = i, x_t) \right)$

Take $\max_i \delta_i(T)$

Runtime: $O(N^2 T)$

Remember that we need backpointers.
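A numpy sketch of Viterbi decoding with backpointers (mine, not from the lecture), reusing the same kind of potential tables psi0 and psi as in the forward-algorithm sketch above; the toy potential values below are arbitrary.

```python
import numpy as np

def crf_viterbi(psi0, psi, tags):
    """Viterbi decoding for a linear-chain CRF.
    psi0[j]: potential for the first position; psi[t, i, j]: potential for
    moving from tag i to tag j at position t. Backpointers recover the path."""
    T, N, _ = psi.shape
    delta = np.zeros((N, T))
    backptr = np.zeros((N, T), dtype=int)
    delta[:, 0] = psi0
    for t in range(1, T):
        # delta_j(t) = max_i delta_i(t-1) * psi[t, i, j]; remember the best i
        scores = delta[:, t - 1, None] * psi[t]   # scores[i, j]
        backptr[:, t] = scores.argmax(axis=0)
        delta[:, t] = scores.max(axis=0)
    # Follow the backpointers from the best final tag
    best = [int(delta[:, -1].argmax())]
    for t in range(T - 1, 0, -1):
        best.append(int(backptr[best[-1], t]))
    return [tags[j] for j in reversed(best)]

# Toy potentials for T = 3 positions and N = 2 tags (arbitrary illustrative values)
psi0 = np.array([2.0, 1.0])
psi = np.ones((3, 2, 2))
psi[1] = [[3.0, 1.0], [1.0, 2.0]]
psi[2] = [[1.0, 4.0], [2.0, 1.0]]
print(crf_viterbi(psi0, psi, tags=["DT", "NN"]))  # best tag sequence
```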


Training LC-CRFs

Unlike for HMMs, there is no analytic MLE solution.

Use an iterative method to improve the data likelihood:

Gradient descent

A version of Newton's method to find where the gradient is 0


Convexity

Fortunately, $l(\theta)$ is a concave function (equivalently, its negation is a convex function). That means that gradient ascent will find the global maximum of $l(\theta)$ (equivalently, gradient descent will find the global minimum of $-l(\theta)$).


Gradient Ascent

Walk in the direction of the gradient to maximize $l(\theta)$

• a.k.a. gradient descent on a loss function

$\theta^{new} = \theta^{old} + \gamma \nabla l(\theta)$

$\gamma$ is a learning rate that specifies how large a step to take.

There are more sophisticated ways to do this update:

• Conjugate gradient

• L-BFGS (approximately uses second-derivative information)


Gradient Descent Summary

Descent vs. ascent: by convention, think about the problem as a minimization problem, i.e., minimize the negative log likelihood.

Initialize $\theta = (\theta_1, \theta_2, \dots, \theta_k)$ randomly

Do for a while:

  Compute $\nabla l(\theta)$, which will require dynamic programming (i.e., the forward algorithm)

  $\theta \leftarrow \theta - \gamma \nabla l(\theta)$


Gradient of Log-Likelihood

Find the gradient of the log likelihood of the training corpus:

$l(\theta) = \log \prod_i P(Y^{(i)} \mid X^{(i)})$


Interpretation of Gradient

For each feature $f_k$, the partial derivative $\partial l / \partial \theta_k$ is the difference between:

$\sum_i \sum_t f_k(y_t^{(i)}, y_{t-1}^{(i)}, x_t^{(i)})$

the empirical distribution of feature $f_k$ in the training corpus,

and:

$\sum_i \sum_t \sum_{y, y'} f_k(y, y', x_t^{(i)})\, P(y, y' \mid X^{(i)})$

the expected distribution of $f_k$ as predicted by the current model.


Interpretation of Gradient

When the corpus likelihood is maximized, the gradient is zero, so the difference is zero.

Intuitively, this means that finding the parameter estimates by gradient descent is equivalent to telling our model to predict the features so that they occur with the same distribution as in the gold standard.


Regularization

To avoid overfitting, we can encourage the weights to be close to zero.

Add a penalty term to the corpus log likelihood:

$l^*(\theta) = \log \prod_i P(Y^{(i)} \mid X^{(i)}) - \sum_k \frac{\theta_k^2}{2\sigma^2}$

$\sigma$ controls the amount of regularization.

This results in an extra term in the gradient:

$-\frac{\theta_k}{\sigma^2}$


Stochastic Gradient Descent

In the standard version of the algorithm, the gradient is computed over the entire training corpus.

• The weights are updated only once per iteration through the training corpus.

Alternative: calculate the gradient over a small mini-batch of the training corpus and update the weights immediately.

• Many weight updates per iteration through the training corpus

• Usually results in much faster convergence to the final solution, without loss in performance


Stochastic Gradient Descent

Initialize $\theta = (\theta_1, \theta_2, \dots, \theta_k)$ randomly

Do for a while:

  Randomize the order of the samples in the training corpus

  For each mini-batch in the training corpus:

    Compute $\nabla^* l(\theta)$ over this mini-batch

    $\theta \leftarrow \theta - \gamma \nabla^* l(\theta)$
