Transcript of NLP new words

Page 1

CS60057 Speech & Natural Language Processing

Autumn 2007

Lecture 5

2 August 2007

Page 2

WORDS: The Building Blocks of Language

Page 3

Language can be divided up into pieces of varying sizes, ranging from morphemes to paragraphs.

Words -- the most fundamental level for NLP.

Page 4

Tokens, Types and Texts

The process of segmenting a string of characters into words is known as tokenization.

>>> sentence = "This is the time -- and this is the record of the time."
>>> words = sentence.split()
>>> len(words)
13

Compile a list of the unique vocabulary items in a string by using set() to eliminate duplicates

>>> len(set(words))
10

A word token is an individual occurrence of a word in a concrete context. A word type is what we're talking about when we say that the three occurrences of the in sentence are "the same word."

Page 5

>>> set(words)
set(['and', 'this', 'record', 'This', 'of', 'is', '--', 'time.', 'time', 'the'])

Extracting text from files:

>>> f = open('corpus.txt', 'rU')
>>> f.read()
'Hello World!\nThis is a test file.\n'

We can also read a file one line at a time using the for loop construct:

>>> f = open('corpus.txt', 'rU')
>>> for line in f:
...     print line[:-1]
Hello World!
This is a test file.

Here we use the slice [:-1] to remove the newline character at the end of the input line.

Page 6

Extracting text from the Web

>>> from urllib import urlopen
>>> page = urlopen("http://news.bbc.co.uk/").read()
>>> print page[:60]
<!doctype html public "-//W3C//DTD HTML 4.0 Transitional//EN"

Web pages are usually in HTML format. To extract the text, we need to strip out the HTML markup, i.e. remove all material enclosed in angle brackets. Let's digress briefly to consider how to carry out this task using regular expressions. Our first attempt might look as follows:

>>> import re
>>> line = '<title>BBC NEWS | News Front Page</title>'
>>> new = re.sub(r'<.*>', '', line)
>>> new
''

Page 7

What has happened here?

1. The wildcard '.' matches any character other than '\n', so it will match '>' and '<'.

2. The '*' operator is "greedy": it matches as many characters as it can. In the above example, '.*' will return not the shortest match, namely 'title', but the longest match, 'title>BBC NEWS | News Front Page</title'. To get the shortest match we have to use the '*?' operator. We will also normalise whitespace, replacing any sequence of one or more spaces, tabs or newlines (these are all matched by '\s+') with a single space character:

>>> page = re.sub('<.*?>', '', page)
>>> page = re.sub('\s+', ' ', page)
>>> print page[:60]
BBC NEWS | News Front Page News Sport Weather World Service
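A minimal, self-contained illustration of the greedy vs. non-greedy difference (a sketch in modern Python 3, standard library re only; the title line is the one from the slide above):

import re

line = '<title>BBC NEWS | News Front Page</title>'

# greedy: '.*' swallows everything between the first '<' and the last '>'
print(re.sub(r'<.*>', '', line))    # -> '' (empty string)

# non-greedy: '.*?' stops at the first '>', removing only the tags
print(re.sub(r'<.*?>', '', line))   # -> 'BBC NEWS | News Front Page'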

Page 8

Extracting text from NLTK Corpora

NLTK is distributed with several corpora and corpus samples, many of which are supported by the corpus package.

>>> corpus.gutenberg.items
['austen-emma', 'austen-persuasion', 'austen-sense', 'bible-kjv', 'blake-poems', 'blake-songs', 'chesterton-ball', 'chesterton-brown', 'chesterton-thursday', 'milton-paradise', 'shakespeare-caesar', 'shakespeare-hamlet', 'shakespeare-macbeth', 'whitman-leaves']

Next we iterate over the text content to find the number of word tokens:

>>> count = 0
>>> for word in corpus.gutenberg.read('whitman-leaves'):
...     count += 1
>>> print count
154873
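The corpus API above is the 2007-era NLTK; in current NLTK releases the same count looks roughly like this (a sketch, assuming the gutenberg corpus has been downloaded; the exact count may differ from the slide's 154873 because tokenization has changed across versions):

import nltk
nltk.download('gutenberg')          # fetch the corpus once
from nltk.corpus import gutenberg

print(len(gutenberg.words('whitman-leaves.txt')))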

Page 9

Brown Corpus

The Brown Corpus was the first million-word, part-of-speech tagged electronic corpus of English, created in 1961 at Brown University. Each of the sections a through r represents a different genre.

>>> corpus.brown.items
['a', 'b', 'c', 'd', 'e', 'f', 'g', 'h', 'j', 'k', 'l', 'm', 'n', 'p', 'r']
>>> corpus.brown.documents['a']
'press: reportage'

We can extract individual sentences (as lists of words) from the corpus using the read() function. Here we will specify section a, and indicate that only words (and not part-of-speech tags) should be produced.

>>> a = corpus.brown.tokenized('a')
>>> a[0]
['The', 'Fulton', 'County', 'Grand', 'Jury', 'said', 'Friday', 'an', 'investigation', 'of', "Atlanta's", 'recent', 'primary', 'election', 'produced', '``', 'no', 'evidence', "''", 'that', 'any', 'irregularities', 'took', 'place', '.']

Page 10

Page 11

Corpus Linguistics

1. Text-corpora: Brown corpus. One million words, tagged, representative of American English.

2. Text-corpora: Project Gutenberg. 17,000 uncopyrighted literary texts (Tom Sawyer, etc.)

3. Text-corpora: OMIM. Comprehensive list of medical conditions.

4. Word frequencies.

5. Zipf's First Law.

Page 12

What’s a word?

I have a can opener; but I can't open these cans. How many words?

Word form: the inflected form as it appears in the text. can and cans are different word forms.

Lemma: a set of lexical forms having the same stem, same POS and same meaning. can and cans share the same lemma.

Word token: an occurrence of a word. I have a can opener; but I can't open these cans. has 11 word tokens (not counting punctuation).

Word type: a distinct word of the vocabulary. I have a can opener; but I can't open these cans. has 10 word types (not counting punctuation), since I occurs twice.
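A quick way to check those counts (a sketch in modern Python; the crude regex tokenizer, which keeps can't together and drops punctuation, is an assumption for illustration, not part of the slides):

import re

s = "I have a can opener; but I can't open these cans."
tokens = re.findall(r"[A-Za-z']+", s)   # crude word tokenizer
print(len(tokens))        # 11 word tokens
print(len(set(tokens)))   # 10 word types ("I" occurs twice)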

Page 13

Another example

Mark Twain's Tom Sawyer: 71,370 word tokens, 8,018 word types, token/type ratio = 8.9 (an indication of text complexity).

The complete works of Shakespeare: 884,647 word tokens, 29,066 word types, token/type ratio = 30.4.

Page 14

Some Useful Empirical Observations

A small number of events occur with high frequency.
A large number of events occur with low frequency.
You can quickly collect statistics on the high frequency events.
You might have to wait an arbitrarily long time to get valid statistics on low frequency events.
Some of the zeroes in the table are really zeros. But others are simply low frequency events you haven't seen yet. How to address?

Page 15

Common words in Tom Sawyer

but words in NL have an uneven distribution…

Page 16

Text properties (formalized): Sample word frequency data

Page 17

Frequency of frequencies: most words are rare.

3,993 (50%) of word types appear only once; they are called hapax legomena (Greek for "read only once").

But common words are very common: 100 words account for 51% of all tokens (of all text).

Page 18

Zipf’s Law

1. Count the frequency of each word type in a large corpus.
2. List the word types in order of their frequency.

Let:
f = frequency of a word type
r = its rank in the list

Zipf's Law says: f ∝ 1/r

In other words, there exists a constant k such that: f × r = k

The 50th most common word should occur with 3 times the frequency of the 150th most common word.
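A small empirical check of f × r ≈ k (a sketch; 'tomsawyer.txt' is a hypothetical local plain-text file, and whitespace splitting is a deliberately crude tokenizer):

from collections import Counter

words = open('tomsawyer.txt').read().lower().split()
freqs = sorted(Counter(words).values(), reverse=True)

for r in (1, 10, 100, 1000):
    f = freqs[r - 1]
    print(r, f, r * f)   # by Zipf's Law, r * f should stay roughly constant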

Page 19

Zipf’s Law

If the probability of the word of rank r is pr and N is the total number of word occurrences:

pr = f / N ≈ A / r, for A ≈ 0.1, a constant roughly independent of the corpus.

Page 20

Zipf curve

Page 21

Predicting Occurrence Frequencies

By Zipf, a word appearing n times has rank rn = AN/n.

If several words may occur n times, assume rank rn applies to the last of these. Therefore, rn words occur n or more times and rn+1 words occur n+1 or more times. So, the number of words appearing exactly n times is:

In = rn - rn+1 = AN/n - AN/(n+1) = AN / (n(n+1))

Since the last-ranked word occurs once, the number of distinct words is D = r1 = AN, so the fraction of words with frequency n is:

In / D = 1 / (n(n+1))

The fraction of words appearing only once (n = 1) is therefore 1/2.

Page 22

Explanations for Zipf’s Law

- Zipf’s explanation was his “principle of least effort.” Balance between speaker’s desire for a small vocabulary and hearer’s desire for a large one.

Page 23

Zipf’s First Law

1. f ∝ 1/r, where f = word frequency and r = word-frequency rank (m below denotes the number of meanings per word).
2. There exists a k such that f × r = k.
3. Alternatively, log f = log k - log r.
4. Verified on English literature, the Johns Hopkins Autopsy Resource, German, and Chinese.
5. The most famous of Zipf's Laws.

Page 24

Zipf’s Second Law

1. Meanings: m ∝ √f.
2. There exists a k such that k × f = m².
3. Corollary: m ∝ 1/√r.

Page 25

Zipf’s Third Law

1. Frequency ∝ 1/wordlength.
2. There exists a k such that f × wordlength = k.
3. Many other minor laws were stated.

Page 26

Zipf’s Law Impact on Language Analysis

Good News: Stopwords account for a large fraction of the text, so eliminating them greatly reduces the vocabulary size of a text.

Bad News: For most words, gathering sufficient data for meaningful statistical analysis (e.g. correlation analysis for query expansion) is difficult, since they are extremely rare.

Page 27

Vocabulary Growth

How does the size of the overall vocabulary (number of unique words) grow with the size of the corpus?

This determines how the size of the inverted index will scale with the size of the corpus.

Vocabulary not really upper-bounded due to proper names, typos, etc.

Page 28

Heaps’ Law

If V is the size of the vocabulary and n is the length of the corpus in words, then:

V = K · n^β, with constants K and 0 < β < 1

Typical constants: K ≈ 10-100 and β ≈ 0.4-0.6 (approximately square-root growth).
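A rough way to estimate β and K from a tokenized corpus (a sketch; 'corpus.txt' is a hypothetical plain-text file, and comparing just two prefixes stands in for a proper log-log regression):

import math

words = open('corpus.txt').read().lower().split()   # hypothetical tokenized corpus

def vocab_size(tokens):
    return len(set(tokens))

# compare vocabulary growth at a quarter of the corpus and at the whole corpus
n1, n2 = len(words) // 4, len(words)
v1, v2 = vocab_size(words[:n1]), vocab_size(words)

beta = math.log(v2 / v1) / math.log(n2 / n1)   # slope on a log-log plot
K = v2 / (n2 ** beta)
print(beta, K)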

Page 29

Heaps’ Law Data

Page 30

Word counts are interesting...

As an indication of a text's style
As an indication of a text's author

But, because most words appear very infrequently, it is hard to predict much about the behavior of words (if they do not occur often in a corpus) --> Zipf's Law

Page 31

Zipf's Law on Tom Sawyer

k ≈ 8000-9000, except for:
the 3 most frequent words
words of frequency ≈ 100

Page 32

Plot of Zipf's Law on chap. 1-3 of Tom Sawyer (numbers differ from pp. 25-26): f × r = k

[Plot: frequency (y-axis, 0-350) vs. rank (x-axis, 0-2000).]

Page 33

Plot of Zipf's Law (con't) on chap. 1-3 of Tom Sawyer:

f × r = k  ==>  log(f × r) = log(k)  ==>  log(f) + log(r) = log(k)

[Plot: log(freq) vs. log(rank); on log-log axes the Zipf relation appears as a straight line of slope -1.]

Page 34

Zipf’s Law, so what?

There are:
a few very common words
a medium number of medium-frequency words
a large number of infrequent words

Principle of Least Effort: a tradeoff between the speaker's and hearer's effort.
The speaker communicates with a small vocabulary of common words (less effort).
The hearer disambiguates messages through a large vocabulary of rare words (less effort).

Significance of Zipf's Law for us:
For most words, our data about their use will be very sparse.
Only for a few words will we have a lot of examples.

Page 35

N-Grams and Corpus Linguistics

Page 36

A bad language model

N-grams & Language Modeling

Page 37

A bad language model

Page 38

A bad language model

Herman is reprinted with permission from LaughingStock Licensing Inc., Ottawa, Canada. All rights reserved.

Page 39

What’s a Language Model

A language model is a probability distribution over word sequences:

P("And nothing but the truth") ≈ 0.001

P("And nuts sing on the roof") ≈ 0

Page 40

What’s a language model for?

Speech recognition
Handwriting recognition
Spelling correction
Optical character recognition
Machine translation

(and anyone doing statistical modeling)

Page 41

Next Word Prediction

From a NY Times story...
Stocks ...
Stocks plunged this ….
Stocks plunged this morning, despite a cut in interest rates
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall ...
Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began

Page 42

Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last …

Stocks plunged this morning, despite a cut in interest rates by the Federal Reserve, as Wall Street began trading for the first time since last Tuesday's terrorist attacks.

Page 43

Human Word Prediction

Clearly, at least some of us have the ability to predict future words in an utterance.

How?
Domain knowledge
Syntactic knowledge
Lexical knowledge

Page 44

Claim

A useful part of the knowledge needed to allow Word Prediction can be captured using simple statistical techniques

In particular, we'll rely on the notion of the probability of a sequence (a phrase, a sentence)

Page 45

Applications

Why do we want to predict a word, given some preceding words?

Rank the likelihood of sequences containing various alternative hypotheses, e.g. for ASR:
Theatre owners say popcorn/unicorn sales have doubled...

Assess the likelihood/goodness of a sentence, e.g. for text generation or machine translation:
The doctor recommended a cat scan.
El doctor recomendó una exploración del gato.

Page 46

Simple N-Grams

Assume a language has V word types in its lexicon. How likely is word x to follow word y?

Simplest model of word probability: 1/V

Alternative 1: estimate the likelihood of x occurring in new text based on its general frequency of occurrence estimated from a corpus (unigram probability). popcorn is more likely to occur than unicorn.

Alternative 2: condition the likelihood of x occurring on the context of previous words (bigrams, trigrams, …). mythical unicorn is more likely than mythical popcorn.

Page 47

N-grams

A simple model of language.
Computes a probability for observed input: the probability is the likelihood of the observation being generated by the same source as the training data.
Such a model is often called a language model.

Page 48

Computing the Probability of a Word Sequence

P(w1, …, wn) = P(w1) · P(w2|w1) · P(w3|w1,w2) · … · P(wn|w1, …, wn-1)

P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)

The longer the sequence, the less likely we are to find it in a training corpus:

P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)

Solution: approximate using n-grams.

Page 49

Bigram Model

Approximate P(wn | w1 … wn-1) by P(wn | wn-1):

P(unicorn | the mythical) ≈ P(unicorn | mythical)

Markov assumption: the probability of a word depends only on the probability of a limited history.

Generalization: the probability of a word depends only on the probability of the n previous words:
trigrams, 4-grams, …
the higher n is, the more data needed to train
backoff models

Page 50

Using N-Grams

For N-gram models:

P(wn-1, wn) = P(wn | wn-1) P(wn-1)

By the Chain Rule we can decompose a joint probability, e.g. P(w1, w2, w3):

P(w1, w2, ..., wn) = P(w1|w2, w3, ..., wn) P(w2|w3, ..., wn) … P(wn-1|wn) P(wn)

For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams:

P(the, mythical, unicorn) = P(unicorn|mythical) P(mythical|the) P(the|<start>)

In general, the N-gram approximation is:

P(wn | w1 … wn-1) ≈ P(wn | wn-N+1 … wn-1)

so for bigrams:

P(w1 … wn) ≈ Π(k=1..n) P(wk | wk-1)
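A direct translation of the bigram product into code (a sketch; p_bigram is an assumed estimator such as the MLE counting function sketched two slides below, and every bigram in the sentence is assumed to have been seen, since log(0) is undefined -- exactly the sparsity problem that smoothing addresses later):

import math

def sentence_logprob(words, p_bigram):
    # log P(w1 ... wn) = sum of log P(wk | wk-1), starting from <start>
    seq = ["<start>"] + words
    return sum(math.log(p_bigram(prev, w)) for prev, w in zip(seq, seq[1:]))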

Page 51

The n-gram Approximation

Assume each word depends only on the previous (n-1) words (n words total).

For example, for trigrams (3-grams):

P("the" | "… whole truth and nothing but") ≈ P("the" | "nothing but")

P("truth" | "… whole truth and nothing but the") ≈ P("truth" | "but the")

Page 52

n-grams, continued

How do we find probabilities?

Get real text, and start counting!

P("the" | "nothing but") ≈ C("nothing but the") / C("nothing but")
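That counting recipe in a few lines of Python (a sketch; the tiny training string is made up for illustration):

from collections import Counter

tokens = "the whole truth and nothing but the truth".split()
bigrams = Counter(zip(tokens, tokens[1:]))
unigrams = Counter(tokens)

def p_bigram(w_prev, w):
    # MLE: C(w_prev w) / C(w_prev)
    return bigrams[(w_prev, w)] / unigrams[w_prev]

print(p_bigram("but", "the"))   # 1.0 in this tiny corpus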

Page 53

Unigram probabilities (1-gram): http://www.wordcount.org/main.php
Most likely to transition to "the", least likely to transition to "conquistador".

Bigram probabilities (2-gram):
Given "the" as the last word, more likely to go to "conquistador" than to "the" again.

Page 54

N-grams for Language Generation

C. E. Shannon, "A mathematical theory of communication," Bell System Technical Journal, vol. 27, pp. 379-423 and 623-656, July and October, 1948.

Unigram (Shannon's example 5): here words are chosen independently but with their appropriate frequencies.

REPRESENTING AND SPEEDILY IS AN GOOD APT OR COME CAN DIFFERENT NATURAL HERE HE THE A IN CAME THE TO OF TO EXPERT GRAY COME TO FURNISHES THE LINE MESSAGE HAD BE THESE.

Bigram (Shannon's example 6): second-order word approximation. The word transition probabilities are correct, but no further structure is included.

THE HEAD AND IN FRONTAL ATTACK ON AN ENGLISH WRITER THAT THE CHARACTER OF THIS POINT IS THEREFORE ANOTHER METHOD FOR THE LETTERS THAT THE TIME OF WHO EVER TOLD THE PROBLEM FOR AN UNEXPECTED.
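A minimal bigram generator in the spirit of Shannon's second-order word approximation (a sketch; 'corpus.txt' is a hypothetical training file, and keeping one list entry per observed successor makes random.choice equivalent to sampling from MLE bigram probabilities):

import random
from collections import defaultdict

def generate(tokens, length=20):
    nxt = defaultdict(list)
    for a, b in zip(tokens, tokens[1:]):
        nxt[a].append(b)          # duplicates kept, so choice() follows the counts
    w = random.choice(tokens)
    out = [w]
    for _ in range(length - 1):
        w = random.choice(nxt[w]) if nxt[w] else random.choice(tokens)
        out.append(w)
    return " ".join(out)

tokens = open('corpus.txt').read().split()
print(generate(tokens))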

Page 55

N-Gram Models of Language

Use the previous N-1 words in a sequence to predict the next word.

Language Model (LM): unigrams, bigrams, trigrams, …

How do we train these models? Very large corpora.

Page 56

Counting Words in Corpora

What is a word?
e.g., are cat and cats the same word?
September and Sept?
zero and oh?
Is _ a word? * ? '(' ?
How many words are there in don't? Gonna?
In Japanese and Chinese text -- how do we identify a word?

Page 57

Terminology

Sentence: unit of written language.
Utterance: unit of spoken language.
Word form: the inflected form that appears in the corpus.
Lemma: an abstract form, shared by word forms having the same stem, part of speech, and word sense.
Types: number of distinct words in a corpus (vocabulary size).
Tokens: total number of words.

Page 58

Corpora

Corpora are online collections of text and speech:
Brown Corpus
Wall Street Journal
AP news
Hansards
DARPA/NIST text/speech corpora (Call Home, ATIS, Switchboard, Broadcast News, TDT, Communicator)
TRAINS, Radio News

Page 59

Simple N-Grams

Assume a language has V word types in its lexicon. How likely is word x to follow word y?

Simplest model of word probability: 1/V

Alternative 1: estimate the likelihood of x occurring in new text based on its general frequency of occurrence estimated from a corpus (unigram probability). popcorn is more likely to occur than unicorn.

Alternative 2: condition the likelihood of x occurring on the context of previous words (bigrams, trigrams, …). mythical unicorn is more likely than mythical popcorn.

Page 60

Computing the Probability of a Word Sequence

Compute the product of component conditional probabilities?

P(the mythical unicorn) = P(the) P(mythical|the) P(unicorn|the mythical)

The longer the sequence, the less likely we are to find it in a training corpus:

P(Most biologists and folklore specialists believe that in fact the mythical unicorn horns derived from the narwhal)

Solution: approximate using n-grams.

Page 61

Bigram Model

Approximate P(wn | w1 … wn-1) by P(wn | wn-1):

P(unicorn | the mythical) ≈ P(unicorn | mythical)

Markov assumption: the probability of a word depends only on the probability of a limited history.

Generalization: the probability of a word depends only on the probability of the n previous words:
trigrams, 4-grams, …
the higher n is, the more data needed to train
backoff models

Page 62

Using N-Grams

For N-gram models:

P(wn-1, wn) = P(wn | wn-1) P(wn-1)

By the Chain Rule we can decompose a joint probability, e.g. P(w1, w2, w3):

P(w1, w2, ..., wn) = P(w1|w2, w3, ..., wn) P(w2|w3, ..., wn) … P(wn-1|wn) P(wn)

For bigrams, then, the probability of a sequence is just the product of the conditional probabilities of its bigrams:

P(the, mythical, unicorn) = P(unicorn|mythical) P(mythical|the) P(the|<start>)

In general, the N-gram approximation is:

P(wn | w1 … wn-1) ≈ P(wn | wn-N+1 … wn-1)

so for bigrams:

P(w1 … wn) ≈ Π(k=1..n) P(wk | wk-1)

Page 63

Training and Testing

N-Gram probabilities come from a training corpus:
An overly narrow corpus: probabilities don't generalize.
An overly general corpus: probabilities don't reflect the task or domain.

A separate test corpus is used to evaluate the model, typically using standard metrics:
held-out test set; development test set
cross validation
results tested for statistical significance

Page 64

A Simple Example

P(I want to eat Chinese food) = P(I | <start>) P(want | I) P(to | want) P(eat | to) P(Chinese | eat) P(food | Chinese)

Page 65

A Bigram Grammar Fragment from BERP

Eat on        .16    Eat Thai      .03
Eat some      .06    Eat breakfast .03
Eat lunch     .06    Eat in        .02
Eat dinner    .05    Eat Chinese   .02
Eat at        .04    Eat Mexican   .02
Eat a         .04    Eat tomorrow  .01
Eat Indian    .04    Eat dessert   .007
Eat today     .03    Eat British   .001

Page 66

<start> I     .25    I want    .32
<start> I'd   .06    I would   .29
<start> Tell  .04    I don't   .08
<start> I'm   .02    I have    .04

Want to       .65    To eat    .26
Want a        .05    To have   .14
Want some     .04    To spend  .09
Want Thai     .01    To be     .02

British food        .60
British restaurant  .15
British cuisine     .01
British lunch       .01

Page 67

P(I want to eat British food) = P(I|<start>) P(want|I) P(to|want) P(eat|to) P(British|eat) P(food|British) = .25 × .32 × .65 × .26 × .001 × .60 ≈ .0000081

vs. P(I want to eat Chinese food) = .25 × .32 × .65 × .26 × .02 × .56 ≈ .00015

Probabilities seem to capture ``syntactic'' facts and ``world knowledge'':
eat is often followed by an NP
British food is not too popular

N-gram models can be trained by counting and normalization.

Page 68

BERP Bigram Counts

          I     Want   To    Eat   Chinese  Food  Lunch
I         8     1087   0     13    0        0     0
Want      3     0      786   0     6        8     6
To        3     0      10    860   3        0     12
Eat       0     0      2     0     19       2     52
Chinese   2     0      0     0     0        120   1
Food      19    0      17    0     0        0     0
Lunch     4     0      0     0     0        1     0

Page 69

BERP Bigram Probabilities

Normalization: divide each row's counts by the appropriate unigram counts for wn-1:

I      Want   To     Eat   Chinese  Food   Lunch
3437   1215   3256   938   213      1506   459

Computing the bigram probability of "I I":

C(I, I) / C(I)  -->  P(I|I) = 8 / 3437 = .0023

Maximum Likelihood Estimation (MLE): relative frequency, e.g.

P(wn | wn-1) = freq(wn-1, wn) / freq(wn-1)
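The same normalization in code, using a few of the BERP numbers above (a sketch; only a handful of the counts are filled in):

unigram = {"I": 3437, "want": 1215, "to": 3256, "eat": 938,
           "Chinese": 213, "food": 1506, "lunch": 459}
bigram = {("I", "I"): 8, ("I", "want"): 1087, ("want", "to"): 786,
          ("to", "eat"): 860, ("eat", "Chinese"): 19, ("Chinese", "food"): 120}

def p(w_prev, w):
    # MLE bigram probability: C(w_prev w) / C(w_prev)
    return bigram.get((w_prev, w), 0) / unigram[w_prev]

print(round(p("I", "I"), 4))   # 0.0023, matching the slide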

Page 70

What do we learn about the language?

What's being captured with...
P(want | I) = .32
P(to | want) = .65
P(eat | to) = .26
P(food | Chinese) = .56
P(lunch | eat) = .055

What about...
P(I | I) = .0023
P(I | want) = .0025
P(I | food) = .013

Page 71

P(I | I) = .0023 -- "I I I I want"
P(I | want) = .0025 -- "I want I want"
P(I | food) = .013 -- "the kind of food I want is ..."

Page 72

Approximating Shakespeare

As we increase the value of N, the accuracy of the n-gram model increases, since the choice of next word becomes increasingly constrained.

Generating sentences with random unigrams...
Every enter now severally so, let
Hill he late speaks; or! a more to leg less first you enter

With bigrams...
What means, sir. I confess she? then all sorts, he is trim, captain.
Why dost stand forth thy canopy, forsooth; he is this palpable hit the King Henry.

Page 73

Trigrams...
Sweet prince, Falstaff shall die.
This shall forbid it should be branded, if renown made it empty.

Quadrigrams...
What! I will go seek the traitor Gloucester.
Will you not tell me who I am?

Page 74

There are 884,647 tokens, with 29,066 word form types, in the roughly one-million-word Shakespeare corpus.

Shakespeare produced 300,000 bigram types out of 844 million possible bigrams: 99.96% of the possible bigrams were never seen (they have zero entries in the table).

Quadrigrams are worse: what comes out looks like Shakespeare because it is Shakespeare.
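The sparsity arithmetic, spelled out (the counts are the ones on this slide):

types = 29066
possible = types ** 2          # 844,832,356 possible bigrams
seen = 300000
print(1 - seen / possible)     # ≈ 0.9996, i.e. 99.96% never seen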

Page 75

N-Gram Training Sensitivity

If we repeated the Shakespeare experiment but trained our n-grams on a Wall Street Journal corpus, what would we get?

This has major implications for corpus selection or design

Page 76

Some Useful Empirical Observations

A small number of events occur with high frequency.
A large number of events occur with low frequency.
You can quickly collect statistics on the high frequency events.
You might have to wait an arbitrarily long time to get valid statistics on low frequency events.
Some of the zeroes in the table are really zeros. But others are simply low frequency events you haven't seen yet. How to address?

Page 77

Smoothing Techniques

Every n-gram training matrix is sparse, even for very large corpora (Zipf's law).

Solution: estimate the likelihood of unseen n-grams.

Problem: how do you adjust the rest of the corpus to accommodate these 'phantom' n-grams?

Page 78

Smoothing Techniques

Every n-gram training matrix is sparse, even for very large corpora (Zipf's law).

Solution: estimate the likelihood of unseen n-grams.

Problem: how do you adjust the rest of the corpus to accommodate these 'phantom' n-grams?

Page 79

Add-one Smoothing

For unigrams:
Add 1 to every word (type) count.
Normalize by N (tokens) / (N (tokens) + V (types)).
The smoothed count (adjusted for additions to N) is:

ci* = (ci + 1) · N / (N + V)

Normalize by N to get the new unigram probability:

pi* = (ci + 1) / (N + V)

For bigrams:
Add 1 to every bigram: c(wn-1 wn) + 1
Increment the unigram count by the vocabulary size: c(wn-1) + V

Page 80

Discount: the ratio of new counts to old (e.g. add-one smoothing changes the BERP bigram count c(to|want) from 786 to 331, a discount dc = .42, and p(to|want) from .65 to .28).

But this changes counts drastically: too much weight is given to unseen n-grams. In practice, unsmoothed bigrams often work better!
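Add-one smoothing as a function (a sketch; the vocabulary size V = 1616 is the BERP figure used in Jurafsky & Martin, an assumption not stated on these slides):

def add_one_bigram(w_prev, w, bigram, unigram, V):
    # p* = (c(w_prev w) + 1) / (c(w_prev) + V)
    return (bigram.get((w_prev, w), 0) + 1) / (unigram.get(w_prev, 0) + V)

# BERP numbers from the slides: c(want to) = 786, c(want) = 1215
p = add_one_bigram("want", "to", {("want", "to"): 786}, {"want": 1215}, 1616)
print(round(p, 2))   # 0.28, down from the unsmoothed .65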

Page 81

Witten-Bell Discounting

A zero n-gram is just an n-gram you haven't seen yet… but every n-gram in the corpus was unseen once… so:

How many times did we see an n-gram for the first time? Once for each n-gram type (T).

Estimate the total probability of unseen bigrams as:

T / (N + T)

View the training corpus as a series of events, one for each token (N) and one for each new type (T).
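The reserved probability mass in code (a sketch over a token list; the tiny example string is made up):

def witten_bell_unseen_mass(tokens):
    # T observed types, N tokens; mass reserved for unseen events = T / (N + T)
    T, N = len(set(tokens)), len(tokens)
    return T / (N + T)

print(witten_bell_unseen_mass("the whole truth and nothing but the truth".split()))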

Page 82

We can divide the probability mass equally among unseen bigrams… or we can condition the probability of an unseen bigram on the first word of the bigram.

Discount values for Witten-Bell are much more reasonable than Add-One

Page 83

Good-Turing Discounting

Re-estimate the amount of probability mass for zero (or low count) n-grams by looking at n-grams with higher counts. The adjusted count is:

c* = (c + 1) · N(c+1) / N(c)

where N(c) is the number of n-gram types seen c times. E.g., N0's adjusted count is a function of the count of n-grams that occur once, N1.

Assumes word bigrams follow a binomial distribution. We know the number of unseen bigrams (V×V - seen).
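Adjusted counts from a frequency-of-frequencies table (a sketch; Nc maps a count c to the number of n-gram types with that count, and the table values are hypothetical):

def good_turing(c, Nc):
    # c* = (c + 1) * N(c+1) / N(c)
    return (c + 1) * Nc.get(c + 1, 0) / Nc[c]

Nc = {1: 100, 2: 40, 3: 20}   # hypothetical frequency of frequencies
print(good_turing(1, Nc))     # 1* = 2 * 40 / 100 = 0.8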

Page 84

Backoff methods (e.g. Katz '87)

For e.g. a trigram model, compute unigram, bigram and trigram probabilities.

In use: where the trigram is unavailable, back off to the bigram if available; otherwise use the unigram probability.

E.g. "an omnivorous unicorn"
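The backoff decision as code (a simplified sketch: real Katz backoff also multiplies the lower-order probability by a discounting weight so the distribution still sums to one):

def backoff_prob(w1, w2, w3, p3, p2, p1):
    # use the trigram if seen, else the bigram, else the unigram
    if (w1, w2, w3) in p3:
        return p3[(w1, w2, w3)]
    if (w2, w3) in p2:
        return p2[(w2, w3)]
    return p1.get(w3, 0.0)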

Page 85

Summary

N-gram probabilities can be used to estimate the likelihood:
of a word occurring in a context (N-1 preceding words)
of a sentence occurring at all

Smoothing techniques deal with problems of unseen words in a corpus.