
Page 1

Language Models for Information Retrieval

References:
1. W. B. Croft and J. Lafferty (Editors). Language Modeling for Information Retrieval. July 2003.
2. T. Hofmann. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Machine Learning, January-February 2001.
3. Christopher D. Manning, Prabhakar Raghavan and Hinrich Schütze. Introduction to Information Retrieval. Cambridge University Press, 2008. (Chapter 12)
4. D. A. Grossman and O. Frieder. Information Retrieval: Algorithms and Heuristics. Springer, 2004. (Chapter 2)

Berlin Chen
Department of Computer Science & Information Engineering

National Taiwan Normal University

Page 2

Taxonomy of Classic IR Models

[Figure: taxonomy of classic IR models]

User task:
  - Retrieval (ad hoc, filtering):
      Classic Models: Boolean, Vector, Probabilistic
        • Set Theoretic: Fuzzy, Extended Boolean
        • Algebraic: Generalized Vector, Neural Networks
        • Probabilistic: Inference Network, Belief Network, Language Model, Probabilistic LSI, Topical Mixture Model
      Structured Models: Non-Overlapping Lists, Proximal Nodes
  - Browsing: Flat, Structure Guided, Hypertext

Page 3

Statistical Language Models (1/2)

• A probabilistic mechanism for “generating” a piece of text
  – Defines a distribution over all possible word sequences

  W = w_1 w_2 \ldots w_N,   P(W) = ?

• What is an LM used for?
  – Speech recognition
  – Spelling correction
  – Handwriting recognition
  – Optical character recognition
  – Machine translation
  – Document classification and routing
  – Information retrieval …

Page 4

Statistical Language Models (2/2)

• (Statistical) language models (LMs) have been widely used for speech recognition and language (machine) translation for more than twenty years

• However, their use for information retrieval started only in 1998 [Ponte and Croft, SIGIR 1998]

Page 5

Query Likelihood Language Models

• Documents are ranked based on Bayes (decision) rule:

  P(D | Q) = \frac{P(Q | D) P(D)}{P(Q)}

  – P(Q) is the same for all documents, and can be ignored
  – P(D) might have to do with authority, length, genre, etc.
    • There is no general way to estimate it
    • It can be treated as uniform across all documents

• Documents can therefore be ranked based on P(Q | D), also denoted P(Q | M_D), where the document D is treated as a model M_D

  – The user has a prototype (ideal) document in mind, and generates a query based on words that appear in this document
  – A document is treated as a model to predict (generate) the query
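
In code, this ranking rule reduces to sorting documents by query likelihood. A minimal sketch (`query_likelihood` is a placeholder for any of the estimators developed on the following slides):

```python
def rank_documents(query, doc_models, query_likelihood):
    """Rank documents by P(Q|M_D); P(D) is treated as uniform and
    P(Q) is constant across documents, so both drop out."""
    scores = {d: query_likelihood(query, m) for d, m in doc_models.items()}
    return sorted(scores, key=scores.get, reverse=True)
```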

Page 6

Schematic Depiction

[Figure: schematic of query-likelihood retrieval. Each document D1, D2, D3, … in the document collection induces a document model MD1, MD2, MD3, …; a query Q is scored against each model, e.g. P(Q | MD1), P(Q | MD3)]

Page 7

n-grams

• Multiplication (chain) rule
  – Decompose the probability of a sequence of events into the probability of each successive event conditioned on earlier events

• n-gram assumption
  – Unigram
    • Each word occurs independently of the other words
    • The so-called “bag-of-words” model

– Bigram

– Most language-modeling work in IR has used unigram language models

• IR does not directly depend on the structure of sentences

Chain rule: P(w_1 w_2 \cdots w_N) = P(w_1) P(w_2 | w_1) P(w_3 | w_1 w_2) \cdots P(w_N | w_1 w_2 \cdots w_{N-1})

Unigram:    P(w_1 w_2 \cdots w_N) = P(w_1) P(w_2) P(w_3) \cdots P(w_N)

Bigram:     P(w_1 w_2 \cdots w_N) = P(w_1) P(w_2 | w_1) P(w_3 | w_2) \cdots P(w_N | w_{N-1})
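
To make the factorizations concrete, a minimal sketch (the toy corpus and helper names are made up; unsmoothed MLE counts, so unseen events get probability zero):

```python
from collections import Counter

def train(corpus_tokens):
    """Collect unigram and bigram counts from a token list."""
    unigrams = Counter(corpus_tokens)
    bigrams = Counter(zip(corpus_tokens, corpus_tokens[1:]))
    return unigrams, bigrams, len(corpus_tokens)

def unigram_prob(seq, unigrams, total):
    """P(w1..wN) = prod_i P(wi), with P(wi) = count(wi)/total."""
    p = 1.0
    for w in seq:
        p *= unigrams[w] / total
    return p

def bigram_prob(seq, unigrams, bigrams, total):
    """P(w1..wN) = P(w1) * prod_{i>=2} P(wi|w_{i-1}).
    Assumes every word in seq was seen in training."""
    p = unigrams[seq[0]] / total
    for prev, w in zip(seq, seq[1:]):
        p *= bigrams[(prev, w)] / unigrams[prev]
    return p

corpus = "the cat sat on the mat the cat ran".split()
uni, bi, n = train(corpus)
print(unigram_prob(["the", "cat"], uni, n))     # P(the) P(cat)
print(bigram_prob(["the", "cat"], uni, bi, n))  # P(the) P(cat|the)
```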

Page 8

Unigram Model (1/4)

• The likelihood of a query given a document

– Words are conditionally independent of each other given the document

– How to estimate the probability of a (query) word given the document ?

• Assume that words follow a multinomial distribution given the document

Q = w_1 w_2 \ldots w_N,   D: document

P(Q | M_D) = P(w_1 | M_D) P(w_2 | M_D) \cdots P(w_N | M_D) = \prod_{i=1}^{N} P(w_i | M_D)

Multinomial form (permutation is considered here):

P(C(w_1), \ldots, C(w_V) | M_D) = \frac{\big( \sum_{i=1}^{V} C(w_i) \big)!}{\prod_{i=1}^{V} C(w_i)!} \prod_{i=1}^{V} P(w_i | M_D)^{C(w_i)}

where C(w_i): the number of times a word w_i occurs, and V: the vocabulary size
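
To make the two forms concrete, a small sketch (illustrative only: `p_doc` is a hard-coded toy model, not an estimate from a real document) that evaluates both the ordered product and the multinomial form:

```python
from math import factorial, prod
from collections import Counter

p_doc = {"a": 0.5, "b": 0.3, "c": 0.2}   # toy P(w | M_D)

def ordered_likelihood(query):
    """P(Q|M_D) = prod_i P(w_i|M_D): one particular word order."""
    return prod(p_doc[w] for w in query)

def multinomial_likelihood(query):
    """Multinomial form: sums over all permutations of the counts."""
    counts = Counter(query)
    coeff = factorial(sum(counts.values()))
    for c in counts.values():
        coeff //= factorial(c)
    return coeff * prod(p_doc[w] ** c for w, c in counts.items())

q = ["a", "b", "a"]
print(ordered_likelihood(q))      # 0.5*0.3*0.5 = 0.075
print(multinomial_likelihood(q))  # 3!/(2!1!) * 0.5^2 * 0.3 = 0.225
```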

Page 9

Unigram Model (2/4)

• Use each document itself as a sample for estimating its corresponding unigram (multinomial) model
  – If Maximum Likelihood Estimation (MLE) is adopted:

[Figure: a toy document D = { wa, wa, wa, wa, wb, wb, wb, wc, wc, wd }, |D| = 10, giving
P(wa|M_D)=0.4, P(wb|M_D)=0.3, P(wc|M_D)=0.2, P(wd|M_D)=0.1, P(we|M_D)=0.0, P(wf|M_D)=0.0]

\hat{P}(w_i | M_D) = \frac{C(w_i, D)}{|D|},   where |D| = \sum_{i} C(w_i, D)

C(w_i, D): number of times w_i occurs in D;   |D|: length of D

The zero-probability problem: if we and wf do not occur in D, then P(we | M_D) = P(wf | M_D) = 0.

This will cause a problem in predicting the query likelihood (see the equation for the query likelihood on the preceding slide).
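
A minimal MLE sketch, reproducing the toy document from the figure above:

```python
from collections import Counter

doc = ["wa"] * 4 + ["wb"] * 3 + ["wc"] * 2 + ["wd"]   # Doc D, |D| = 10
counts = Counter(doc)

def p_mle(w):
    """P(w|M_D) = C(w,D)/|D|; zero for any unseen word."""
    return counts[w] / len(doc)

for w in ["wa", "wb", "wc", "wd", "we"]:
    print(w, p_mle(w))   # wa 0.4, wb 0.3, wc 0.2, wd 0.1, we 0.0
# A single unseen query word (we, wf) zeroes out the whole
# product P(Q|M_D): the zero-probability problem.
```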

Page 10

Unigram Model (3/4)

• Smooth the document-specific unigram model with a collection model (a mixture of two multinomials):

  P(Q | M_D) = \prod_{i=1}^{N} \big[ \lambda \cdot P(w_i | M_D) + (1 - \lambda) \cdot P(w_i | M_C) \big]

• The role of the collection unigram model P(w_i | M_C):
  – Helps to solve the zero-probability problem
  – Helps to differentiate the contributions of different missing terms in a document (global information, like IDF?)

• The collection unigram model can be estimated in a similar way as what we do for the document-specific unigram model
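
A sketch of the resulting mixture scorer in log space (a hypothetical helper; λ and the toy counts are placeholders, and every query word is assumed to occur somewhere in the collection):

```python
from math import log

def smoothed_log_likelihood(query, doc_counts, col_counts, lam=0.7):
    """log P(Q|M_D) = sum_i log[ lam*P(wi|M_D) + (1-lam)*P(wi|M_C) ]."""
    dlen = sum(doc_counts.values())
    clen = sum(col_counts.values())
    score = 0.0
    for w in query:
        p_d = doc_counts.get(w, 0) / dlen   # document model (MLE)
        p_c = col_counts.get(w, 0) / clen   # collection model (MLE)
        score += log(lam * p_d + (1 - lam) * p_c)
    return score

doc = {"cat": 2, "sat": 1}
col = {"cat": 10, "sat": 5, "dog": 5}
print(smoothed_log_likelihood(["cat", "dog"], doc, col))
```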

Page 11

Unigram Model (4/4)

• An evaluation on the Topic Detection and Tracking (TDT) corpora

  – Language Model

      mAP           Unigram   Unigram+Bigram
      TDT2  TQ/TD   0.6327    0.5427
            TQ/SD   0.5658    0.4803
      TDT3  TQ/TD   0.6569    0.6141
            TQ/SD   0.6308    0.5808

  – Vector Space Model

      mAP           Unigram   Unigram+Bigram
      TDT2  TQ/TD   0.5548    0.5623
            TQ/SD   0.5122    0.5225
      TDT3  TQ/TD   0.6505    0.6531
            TQ/SD   0.6216    0.6233

P_{Unigram}(Q | M_D) = \prod_{i=1}^{N} \big[ \lambda \cdot P(w_i | M_D) + (1 - \lambda) \cdot P(w_i | M_C) \big]

P_{Unigram+Bigram}(Q | M_D) = \prod_{i=1}^{N} \big[ \lambda_1 \cdot P(w_i | M_D) + \lambda_2 \cdot P(w_i | w_{i-1}, M_D) + \lambda_3 \cdot P(w_i | M_C) + (1 - \lambda_1 - \lambda_2 - \lambda_3) \cdot P(w_i | w_{i-1}, M_C) \big]

Page 12

Maximum Mutual Information

• Documents can be ranked based on their mutual information with the query

• Document ranking by mutual information (MI) is equivalent to ranking by likelihood:

MI(Q, D) = \log \frac{P(Q, D)}{P(Q) P(D)} = \log P(Q | D) - \log P(Q)

\log P(Q) is the same for all documents, and hence can be ignored:

\arg\max_D MI(Q, D) = \arg\max_D P(Q | D)

Page 13

Probabilistic Latent Semantic Analysis (PLSA)

• Also called the Aspect Model or Probabilistic Latent Semantic Indexing (PLSI) (Thomas Hofmann, 1999)
  – Graphical model representation (a kind of Bayesian network):

Reference: T. Hofmann. Unsupervised Learning by Probabilistic Latent Semantic Analysis. Machine Learning, January-February 2001.

Language model:  D → Q,        with P(D) and P(w | D)
PLSA:            D → T_k → Q,  with P(D), P(T_k | D), and P(w | T_k)

sim(Q, D) = P(D | Q) = \frac{P(Q | D) P(D)}{P(Q)} \propto P(Q | D) P(D)

Language model:

  P(Q | D) = \prod_{w \in Q} \big[ \lambda P(w | M_D) + (1 - \lambda) P(w | M_C) \big]^{C(w, Q)}

PLSA:

  sim(Q, D) = P(Q | D) = \prod_{w \in Q} P(w | D)^{C(w, Q)} = \prod_{w \in Q} \Big[ \sum_{k=1}^{K} P(w | T_k) P(T_k | D) \Big]^{C(w, Q)}

The latent variables => the unobservable class variables T_k (topics or domains)
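
A sketch of the PLSA score in matrix form (shapes and names are mine, not the slides'):

```python
import numpy as np

def plsa_word_prob(p_w_given_t, p_t_given_d):
    """P(w|D) = sum_k P(w|T_k) P(T_k|D) for every word/doc pair.

    p_w_given_t: |V| x K matrix, columns sum to 1
    p_t_given_d: K x |D| matrix, columns sum to 1
    returns:     |V| x |D| matrix of P(w|D)
    """
    return p_w_given_t @ p_t_given_d

def plsa_query_likelihood(query_counts, p_w_given_d, doc_index):
    """P(Q|D) = prod_w P(w|D)^C(w,Q) for one document."""
    p = 1.0
    for w, c in query_counts.items():
        p *= p_w_given_d[w, doc_index] ** c
    return p

# toy check: 3 words, 2 topics, 2 docs
Pwt = np.array([[.6, .1], [.3, .3], [.1, .6]])
Ptd = np.array([[.8, .2], [.2, .8]])
Pwd = plsa_word_prob(Pwt, Ptd)                        # columns sum to 1
print(plsa_query_likelihood({0: 2, 1: 1}, Pwd, doc_index=0))
```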

Page 14

PLSA: Formulation

• Definition
  – P(D): the probability of selecting a document D
  – P(T_k | D): the probability of picking a latent class T_k for the document D
  – P(w | T_k): the probability of generating a word w from the class T_k

Page 15

PLSA: Assumptions

• Bag-of-words: treat docs as a memoryless source; words are generated independently

• Conditional independence: the doc D and the word w are independent, conditioned on the state of the associated latent variable T_k

P(w, D | T_k) \approx P(w | T_k) P(D | T_k)

sim(Q, D) = P(Q | D) = \prod_{w} P(w | D)^{C(w, Q)}

P(w | D) = \sum_{k=1}^{K} P(w, T_k | D) = \sum_{k=1}^{K} \frac{P(w, D, T_k)}{P(D)} = \sum_{k=1}^{K} \frac{P(w, D | T_k) P(T_k)}{P(D)}
         = \sum_{k=1}^{K} \frac{P(w | T_k) P(D | T_k) P(T_k)}{P(D)} = \sum_{k=1}^{K} P(w | T_k) P(T_k | D)

Page 16

PLSA: Training (1/2)

• Probabilities are estimated by maximizing the collection likelihood using the Expectation-Maximization (EM) algorithm

L_C = \sum_{D} \sum_{w} C(w, D) \log P(w | D) = \sum_{D} \sum_{w} C(w, D) \log \Big[ \sum_{T_k} P(w | T_k) P(T_k | D) \Big]

EM tutorial: Jeff A. Bilmes, "A Gentle Tutorial of the EM Algorithm and its Application to Parameter Estimation for Gaussian Mixture and Hidden Markov Models," U.C. Berkeley TR-97-021

Page 17

PLSA: Training (2/2)

• E (expectation) step

  P(T_k | w, D) = \frac{P(w | T_k) P(T_k | D)}{\sum_{T_{k'}} P(w | T_{k'}) P(T_{k'} | D)}

• M (maximization) step

  \hat{P}(w | T_k) = \frac{\sum_{D} C(w, D) P(T_k | w, D)}{\sum_{w'} \sum_{D} C(w', D) P(T_k | w', D)}

  \hat{P}(T_k | D) = \frac{\sum_{w} C(w, D) P(T_k | w, D)}{\sum_{w'} C(w', D)}

Page 18

PLSA: Latent Probability Space (1/2)

[Figure: example latent classes found by PLSA: image sequence analysis; medical imaging (context of contour / boundary detection); phonetic segmentation]

P(w_j, D_i) = \sum_{T_k} P(w_j, D_i, T_k) = \sum_{T_k} P(w_j | T_k) P(T_k) P(D_i | T_k)

In matrix form, with U := [P(w_j | T_k)]_{j,k}, \Sigma := diag(P(T_k)), V := [P(D_i | T_k)]_{i,k}:

P(W, D) = U \cdot \Sigma \cdot V^T

Dimensionality K = 128 (latent classes)

Page 19

PLSA: Latent Probability Space (2/2)

[Figure: matrix view of the decomposition. The m×n matrix P = [P(w_j, D_i)], with rows w_1 … w_m and columns D_1 … D_n, factors as U (m×k) · Σ_k (k×k) · V^T (k×n), where the columns of U hold P(w_j | T_k), Σ_k = diag(P(T_k)) over topics t_1 … t_K, and the rows of V hold P(D_i | T_k)]

P(w_j, D_i) = \sum_{T_k} P(w_j | T_k) P(T_k) P(D_i | T_k)
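
The factorization can be checked numerically; a sketch with hypothetical trained parameters (|V| = 4 words, K = 2 topics, |D| = 3 docs):

```python
import numpy as np

U = np.array([[.5, .1], [.3, .2], [.1, .3], [.1, .4]])  # P(w_j|T_k), columns sum to 1
Sigma = np.diag([.6, .4])                                # P(T_k)
V = np.array([[.2, .5], [.5, .2], [.3, .3]])             # P(D_i|T_k), columns sum to 1

P = U @ Sigma @ V.T   # P(w_j, D_i) = sum_k P(w_j|T_k) P(T_k) P(D_i|T_k)
print(P.sum())        # a proper joint distribution sums to 1
```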

Page 20

PLSA: One more example on TDT1 dataset

[Figure: example topics extracted from the TDT1 dataset, e.g. aviation / space missions, family love / Hollywood love]

Page 21

PLSA: Experiment Results (1/4)

• Experimental results
  – Two ways to smooth the empirical distribution with PLSA:
    • PLSA-U*: combine the cosine score with that of the vector space model, as is done for LSA (see next slide)
    • PLSA-Q*: combine the multinomials individually
  – Both provide almost identical performance
  – It is not known if PLSA was used alone

P_{PLSA-Q*}(w | D) = \lambda \cdot P_{Empirical}(w | D) + (1 - \lambda) \cdot P_{PLSA}(w | D)

P_{Empirical}(w | D) = \frac{c(w, D)}{c(D)}

P_{PLSA}(w | D) = \sum_{k=1}^{K} P(w | T_k) P(T_k | D)

P_{PLSA-Q*}(Q | D) = \prod_{w \in Q} \big[ \lambda \cdot P_{Empirical}(w | D) + (1 - \lambda) \cdot P_{PLSA}(w | D) \big]^{c(w, Q)}

Page 22

PLSA: Experiment Results (2/4)

PLSA-U*
• Use the low-dimensional representations P(T_k | Q) and P(T_k | D) (viewed in a K-dimensional latent space) to evaluate relevance by means of the cosine measure:

  R_{PLSA-U*}(Q, D) = \frac{\sum_k P(T_k | Q) P(T_k | D)}{\sqrt{\sum_k P(T_k | Q)^2} \sqrt{\sum_k P(T_k | D)^2}}

  where P(T_k | Q) = \frac{\sum_{w \in Q} C(w, Q) P(T_k | w, Q)}{\sum_{w' \in Q} C(w', Q)}   (the query is folded in online)

• Combine the cosine score with that of the vector space model

• Use an ad hoc approach to re-weight the different model components (dimensions) by \lambda:

  \tilde{R}_{PLSA-U*}(Q, D) = \lambda \cdot R_{PLSA-U*}(Q, D) + (1 - \lambda) \cdot R_{VSM}(Q, D)
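
A sketch of the PLSA-U* score (assuming fold-in posteriors P(T_k | w, Q) are available as one vector per query word; names are mine):

```python
import numpy as np

def p_t_given_q(query_counts, p_t_given_w_q):
    """P(T_k|Q) = sum_w C(w,Q) P(T_k|w,Q) / sum_w C(w,Q)."""
    total = sum(query_counts.values())
    K = next(iter(p_t_given_w_q.values())).shape[0]
    vec = np.zeros(K)
    for w, c in query_counts.items():
        vec += c * p_t_given_w_q[w]
    return vec / total

def r_plsa_u_star(q_vec, d_vec):
    """Cosine between P(T|Q) and P(T|D) in the K-dim latent space."""
    return q_vec @ d_vec / (np.linalg.norm(q_vec) * np.linalg.norm(d_vec))
```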

Page 23

PLSA: Experiment Results (3/4)

• Why R_{PLSA-U*}(Q, D_i) = \frac{\sum_k P(T_k | Q) P(T_k | D_i)}{\sqrt{\sum_k P(T_k | Q)^2} \sqrt{\sum_k P(T_k | D_i)^2}} ?

– Reminder: in LSA, the relations between any two docs can be formulated as

  A'^T A' = (U' \Sigma' V'^T)^T (U' \Sigma' V'^T) = V' \Sigma'^T U'^T U' \Sigma' V'^T = (V' \Sigma')(V' \Sigma')^T

  sim(\hat{D}_i, \hat{D}_s) = cosine(\hat{D}_i \hat{\Sigma}, \hat{D}_s \hat{\Sigma}),   where \hat{D}_i and \hat{D}_s are row vectors

– PLSA mimics LSA in the similarity measure:

  R(D_i, D_s) = \frac{\sum_k [P(D_i | T_k) P(T_k)] [P(D_s | T_k) P(T_k)]}{\sqrt{\sum_k [P(D_i | T_k) P(T_k)]^2} \sqrt{\sum_k [P(D_s | T_k) P(T_k)]^2}}

              = \frac{\sum_k [P(T_k | D_i) P(D_i)] [P(T_k | D_s) P(D_s)]}{\sqrt{\sum_k [P(T_k | D_i) P(D_i)]^2} \sqrt{\sum_k [P(T_k | D_s) P(D_s)]^2}}   (since P(T_k | D_i) P(D_i) = P(D_i | T_k) P(T_k))

              = \frac{\sum_k P(T_k | D_i) P(T_k | D_s)}{\sqrt{\sum_k P(T_k | D_i)^2} \sqrt{\sum_k P(T_k | D_s)^2}}

Page 24

PLSA: Experiment Results (4/4)

Page 25

PLSA vs. LSA

• Decomposition/Approximation
  – LSA: least-squares criterion measured on the L2 (Frobenius) norm of the word-doc matrix
  – PLSA: maximization of the likelihood function, equivalent to minimizing the cross entropy (Kullback-Leibler divergence) between the empirical distribution and the model

• Computational complexity
  – LSA: SVD decomposition
  – PLSA: EM training, which can be time-consuming over its iterations