
INFO 4300 / CS4300 Information Retrieval

slides adapted from Hinrich Schütze’s, linked from http://informationretrieval.org/

IR 2: The term vocabulary and postings lists

Paul Ginsparg

Cornell University, Ithaca, NY

1 Sep 2009

1 / 42

Administrativa

Course Webpage: http://www.infosci.cornell.edu/Courses/info4300/2009fa/

Lectures: Tuesday and Thursday 11:40-12:55, Hollister B14

Instructor: Paul Ginsparg, [email protected], 255-7371, Cornell Information Science, 301 College Avenue

Instructor’s Assistant: Corinne Russell, [email protected], 255-5925, Cornell Information Science, 301 College Avenue

Instructor’s Office Hours: Wed 1-2pm or e-mail the instructor to schedule an appointment

Teaching Assistant: TBD, @cornell.edu. The Teaching Assistant does not have scheduled office hours but is available to help you by email.

2 / 42

More Administrativa

(Omitted last time, and no one asked until afterwards . . .)

During this course there will be four assignments which require programming (last year’s due dates: 21 Sep, 11 Oct, 9 Nov, 5 Dec), and two examinations: a mid-term and a final.

The course grade will be based on course assignments, participation in the discussions, and the examinations.

The weightings given to these components are expected to be as follows (but may be changed): Assignments 40%, Participation 20%, Examinations 40%.

Course text at: http://informationretrieval.org/

3 / 42

Overview

1 Recap

2 The term vocabulary
  General + Non-English
  English

4 / 42

Outline

1 Recap

2 The term vocabulary
  General + Non-English
  English

5 / 42

Major Steps

1. Collect documents

2. Tokenize text

3. Linguistic preprocessing

4. Index documents

6 / 42

Inverted index

For each term t, we store a list of all documents that contain t.

Brutus −→ 1 2 4 11 31 45 173 174

Caesar −→ 1 2 4 5 6 16 57 132 . . .

Calpurnia −→ 2 31 54 101

...

(The terms on the left make up the dictionary; the docID lists on the right are the postings.)

7 / 42

Intersecting two postings lists

Brutus −→ 1 → 2 → 4 → 11 → 31 → 45 → 173 → 174

Calpurnia −→ 2 → 31 → 54 → 101

Intersection =⇒ 2 → 31

Linear in the length of the postings lists.
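The merge works as in the following Python sketch (function and variable names are illustrative, not from the course materials): walk the two sorted docID lists with two pointers, advancing whichever pointer sits on the smaller docID, so the cost is linear in the combined length of the lists.

    def intersect_postings(p1, p2):
        """Intersect two sorted postings lists of docIDs in O(len(p1) + len(p2))."""
        answer = []
        i = j = 0
        while i < len(p1) and j < len(p2):
            if p1[i] == p2[j]:        # docID in both lists: keep it
                answer.append(p1[i])
                i += 1
                j += 1
            elif p1[i] < p2[j]:       # advance the pointer on the smaller docID
                i += 1
            else:
                j += 1
        return answer

    brutus = [1, 2, 4, 11, 31, 45, 173, 174]
    calpurnia = [2, 31, 54, 101]
    print(intersect_postings(brutus, calpurnia))  # [2, 31]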

8 / 42

Constructing the inverted index: Sort postings

Unsorted (term, docID) pairs, in order of appearance:

I 1, did 1, enact 1, julius 1, caesar 1, I 1, was 1, killed 1, i’ 1, the 1, capitol 1, brutus 1, killed 1, me 1, so 2, let 2, it 2, be 2, with 2, caesar 2, the 2, noble 2, brutus 2, hath 2, told 2, you 2, caesar 2, was 2, ambitious 2

=⇒ sorted by term, then by docID:

ambitious 2, be 2, brutus 1, brutus 2, capitol 1, caesar 1, caesar 2, caesar 2, did 1, enact 1, hath 1, I 1, I 1, i’ 1, it 2, julius 1, killed 1, killed 1, let 2, me 1, noble 2, so 2, the 1, the 2, told 2, you 2, was 1, was 2, with 2
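As a sketch of this sort-based construction in Python (the tokenization here is a naive lowercase/whitespace split, purely for illustration): collect (term, docID) pairs, sort them by term and docID, then merge consecutive pairs into postings lists.

    docs = {
        1: "I did enact Julius Caesar I was killed i' the Capitol Brutus killed me",
        2: "So let it be with Caesar The noble Brutus hath told you Caesar was ambitious",
    }

    # 1. Collect (term, docID) pairs.
    pairs = [(tok.lower(), doc_id)
             for doc_id, text in docs.items()
             for tok in text.split()]

    # 2. Sort by term, then by docID.
    pairs.sort()

    # 3. Merge: one postings list per term; repeated docIDs collapse to one entry.
    index = {}
    for term, doc_id in pairs:
        postings = index.setdefault(term, [])
        if not postings or postings[-1] != doc_id:
            postings.append(doc_id)

    print(index["brutus"])  # [1, 2]
    print(index["caesar"])  # [1, 2]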

9 / 42

Westlaw: Example queries

Information need: Information on the legal theories involved in preventing the disclosure of trade secrets by employees formerly employed by a competing company
Query: “trade secret” /s disclos! /s prevent /s employe!

Information need: Requirements for disabled people to be able to access a workplace
Query: disab! /p access! /s work-site work-place (employment /3 place)

Information need: Cases about a host’s responsibility for drunk guests
Query: host! /p (responsib! liab!) /p (intoxicat! drunk!) /p guest

10 / 42

Outline

1 Recap

2 The term vocabulary
  General + Non-English
  English

11 / 42

Terms and documents

Last lecture: Simple Boolean retrieval system

Our assumptions were:

We know what a document is.
We know what a term is.

Both issues can be complex in reality.

We’ll look a little bit at what a document is.

But mostly at terms: How do we define and process the vocabulary of terms of a collection?

12 / 42

Parsing a document

Before we can even start worrying about terms . . .

. . . need to deal with format and language of each document.

What format is it in? pdf, word, excel, html etc.

What language is it in?

What character set is in use?

Each of these is a classification problem, which we will study later in this course

Alternative: use heuristics

13 / 42

Format/Language: Complications

A single index usually contains terms of several languages.

Sometimes a document or its components contain multiple languages/formats.
Example: a French email with a Spanish pdf attachment.

What is the document unit for indexing?

A file?

An email?

An email with 5 attachments?

A group of files (PPT or LaTeX in HTML)?

Issues with books (again precision vs. recall)

14 / 42

What is a document?

Take-away: potentially non-trivial; in many cases requires some design decisions.

15 / 42

Outline

1 Recap

2 The term vocabulary
  General + Non-English
  English

16 / 42

Definitions

Word – A delimited string of characters as it appears in thetext.

Term – A “normalized” word (case, morphology, spelling etc.); an equivalence class of words.

Token – An instance of a word or term occurring in a document.

Type – The same as a term in most cases: an equivalenceclass of tokens.

17 / 42

Type/token distinction: Example

In June, the dog likes to chase the cat in the barn.

How many word tokens? How many word types?

18 / 42

Recall: Inverted index construction

Input:

Friends, Romans, countrymen. So let it be with Caesar . . .

Output:

friend roman countryman so . . .

Each token is a candidate for a postings entry.

What are valid tokens to emit?

19 / 42

Why tokenization is difficult – even in English

Example: Mr. O’Neill thinks that the boys’ stories about Chile’s capital aren’t amusing.

Tokenize this sentence:

(neill, oneill, o’neill, o’ neill, o neill)
(aren’t, arent, are n’t, aren t)

choices determine which Boolean queries will match
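A purely illustrative Python sketch (these regular expressions are assumptions, not the tokenizer used in the course) of how two different tokenization rules produce different index terms for this sentence:

    import re

    sentence = ("Mr. O'Neill thinks that the boys' stories "
                "about Chile's capital aren't amusing.")

    # Choice 1: letters only; apostrophes split words apart
    # (yields 'O', 'Neill' and 'aren', 't' among the tokens).
    letters_only = re.findall(r"[A-Za-z]+", sentence)

    # Choice 2: allow one internal apostrophe; keeps "O'Neill" and "aren't" whole.
    with_apostrophes = re.findall(r"[A-Za-z]+(?:'[A-Za-z]+)?", sentence)

    print(letters_only)
    print(with_apostrophes)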

20 / 42

One word or two? (or several)

Hewlett-Packard

State-of-the-art

co-education

the hold-him-back-and-drag-him-away maneuver

data base

San Francisco

Los Angeles-based company

cheap San Francisco-Los Angeles fares

York University vs. New York University

21 / 42

Numbers

3/20/91

20/3/91

Mar 20, 1991

B-52

100.2.86.144

(800) 234-2333

800.234.2333

Older IR systems may not index numbers . . .

. . . but generally it’s a useful feature.

22 / 42

Chinese: No whitespace

莎拉波娃现在居住在美国东南部的佛罗里达。今年4月9日，莎拉波娃在美国第一大城市纽约度过了18岁生日。生日派对上，莎拉波娃露出了甜美的微笑。

(Sharapova now lives in Florida, in the southeastern United States. On April 9 this year she celebrated her 18th birthday in New York, the largest city in the United States. At the birthday party, Sharapova wore a sweet smile.)

23 / 42

Ambiguous segmentation in Chinese

和尚

The two characters can be treated as one word meaning ‘monk’ or as a sequence of two words meaning ‘and’ and ‘still’.

24 / 42

Other cases of “no whitespace”

Compounds in Dutch and German

Computerlinguistik → Computer + Linguistik

Lebensversicherungsgesellschaftsangestellter

→ leben + versicherung + gesellschaft + angestellter

Inuit: tusaatsiarunnanngittualuujunga (I can’t hear very well.)

Swedish, Finnish, Greek, Urdu, many other languages

25 / 42

Japanese

[Japanese example passage (kanji, hiragana, katakana, and Latin characters, with no spaces) not legible in this transcript.]

Different “alphabets”: Chinese characters, hiragana syllabary for inflectional endings and function words, katakana syllabary for transcription of foreign words and other uses, and Latin. No spaces (as in Chinese). End user can express query entirely in hiragana!

26 / 42

Arabic script

The written word كتاب is composed of the letters ك ت ا ب (read right to left): /kitābun/ ‘a book’.

27 / 42

Arabic script: Bidirectionality

استقلت الجزائر في سنة 1962 بعد 132 عاما من الاحتلال الفرنسي.

The Arabic text reads right to left, but the embedded numbers 1962 and 132 read left to right: ← → ← → ← START

‘Algeria achieved its independence in 1962 after 132 years of French occupation.’

Bidirectionality is not a problem if text is coded in Unicode.

28 / 42

Accents and diacritics

Accents: résumé vs. resume (simple omission of the accent)

Umlauts: Universität vs. Universitaet (substitution with the special letter sequence “ae”)

Most important criterion: How are users likely to write their queries for these words?

Even in languages that standardly have accents, users often do not type them. (Polish?)
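One common way to fold diacritics, sketched in Python (whether to fold at all should follow how users actually type their queries): decompose to NFD, drop the combining marks, and recompose.

    import unicodedata

    def fold_diacritics(term: str) -> str:
        """Remove accents/umlauts: decompose, drop combining marks, recompose."""
        decomposed = unicodedata.normalize("NFD", term)
        stripped = "".join(ch for ch in decomposed if not unicodedata.combining(ch))
        return unicodedata.normalize("NFC", stripped)

    print(fold_diacritics("résumé"))       # resume
    print(fold_diacritics("Universität"))  # Universitat (not the German "Universitaet")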

29 / 42

Outline

1 Recap

2 The term vocabulary
  General + Non-English
  English

30 / 42

Normalization

Need to “normalize” terms in indexed text as well as query terms into the same form.

Example: We want to match U.S.A. and USA

We most commonly implicitly define equivalence classes of terms.

Alternatively: do asymmetric expansion

window → window, windows
windows → Windows, windows
Windows (no expansion)

More powerful, but less efficient

Why don’t you want to put window, Window, windows, and Windows in the same equivalence class?
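A minimal sketch of query-time asymmetric expansion (the table simply encodes the window/windows/Windows example above; the names are illustrative):

    # Each entered query term maps to the set of index terms actually searched.
    EXPANSIONS = {
        "window":  {"window", "windows"},
        "windows": {"Windows", "windows"},
        "Windows": {"Windows"},            # no expansion
    }

    def expand_query_term(term: str) -> set:
        """Return the index terms to search for an entered query term."""
        return EXPANSIONS.get(term, {term})

    print(expand_query_term("window"))   # {'window', 'windows'}
    print(expand_query_term("Windows"))  # {'Windows'}

The index keeps the original terms; only the query side is expanded, which is more powerful than a single equivalence class but costs extra postings lookups.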

31 / 42

Normalization: Other languages

Normalization and language detection interact.

PETER WILL NICHT MIT. (‘Peter doesn’t want to come along.’) → MIT = mit

He got his PhD from MIT. → MIT ≠ mit

32 / 42

Case folding

Reduce all letters to lower case

Possible exceptions: capitalized words in mid-sentence

MIT vs. mit

Fed vs. fed

It’s often best to lowercase everything since users will use lowercase regardless of correct capitalization.

33 / 42

Stop words

stop words = extremely common words which would appear to be of little value in helping select documents matching a user need

Examples: a, an, and, are, as, at, be, by, for, from, has, he, in, is, it, its, of, on, that, the, to, was, were, will, with

Stop word elimination used to be standard in older IR systems.

But you need stop words for phrase queries, e.g. “King of Denmark”

Most web search engines index stop words.
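A small sketch of stop word removal at indexing time (the stop list is just the one from the slide); dropping these tokens is exactly what breaks phrase queries such as “King of Denmark”:

    STOP_WORDS = {
        "a", "an", "and", "are", "as", "at", "be", "by", "for", "from", "has",
        "he", "in", "is", "it", "its", "of", "on", "that", "the", "to", "was",
        "were", "will", "with",
    }

    def remove_stop_words(tokens):
        """Drop extremely common terms before indexing."""
        return [t for t in tokens if t.lower() not in STOP_WORDS]

    print(remove_stop_words("the King of Denmark".split()))  # ['King', 'Denmark']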

34 / 42

More equivalence classing

Soundex: (phonetic equivalence, Müller = Mueller)

Thesauri: (semantic equivalence, car = automobile)

35 / 42

Lemmatization

Reduce inflectional/variant forms to base form

Example: am, are, is → be

Example: car, cars, car’s, cars’ → car

Example: the boy’s cars are different colors

→ the boy car be different color

Lemmatization implies doing “proper” reduction to dictionary headword form (the lemma).

Inflectional morphology (cutting → cut) vs. derivational morphology (destruction → destroy)
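A toy dictionary-based sketch of the idea in Python (the exception table and suffix rules are illustrative stand-ins for a real morphological analyzer, not a complete lemmatizer):

    # Irregular forms are looked up; regular forms fall through to crude suffix rules.
    EXCEPTIONS = {"am": "be", "are": "be", "is": "be"}

    def lemmatize(word: str) -> str:
        """Map an inflected form to its dictionary headword (lemma); toy version."""
        w = word.lower()
        if w in EXCEPTIONS:
            return EXCEPTIONS[w]
        if w.endswith("'s"):                           # possessive: boy's -> boy
            w = w[:-2]
        w = w.rstrip("'")                              # trailing apostrophe: cars' -> cars
        if w.endswith("s") and not w.endswith("ss"):   # crude plural rule: cars -> car
            w = w[:-1]
        return w

    print([lemmatize(w) for w in "the boy's cars are different colors".split()])
    # ['the', 'boy', 'car', 'be', 'different', 'color']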

36 / 42

Stemming

Definition of stemming: Crude heuristic process that chops off the ends of words in the hope of achieving what “principled” lemmatization attempts to do with a lot of linguistic knowledge.

Language dependent

Often inflectional and derivational

Example for derivational: automate, automatic, automation

all reduce to automat

37 / 42

Porter algorithm

Most common algorithm for stemming English

Results suggest that it is at least as good as other stemmingoptions

Conventions + 5 phases of reductions

Phases are applied sequentially

Each phase consists of a set of commands.

Sample command: Delete final ement if what remains is longer than 1 character
replacement → replac
cement → cement

Sample convention: Of the rules in a compound command, select the one that applies to the longest suffix.

38 / 42

Porter stemmer: A few rules

Rule         Example
SSES → SS    caresses → caress
IES → I      ponies → poni
SS → SS      caress → caress
S →          cats → cat
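A Python sketch of just these four rules (this is only the S-ending step shown on the slide, not a full Porter implementation); following the convention above, the longest matching suffix is tried first:

    def porter_s_rules(word: str) -> str:
        """Apply the four S-ending rules; the longest matching suffix wins."""
        if word.endswith("sses"):
            return word[:-2]      # SSES -> SS   caresses -> caress
        if word.endswith("ies"):
            return word[:-2]      # IES  -> I    ponies   -> poni
        if word.endswith("ss"):
            return word           # SS   -> SS   caress   -> caress
        if word.endswith("s"):
            return word[:-1]      # S    ->      cats     -> cat
        return word

    for w in ["caresses", "ponies", "caress", "cats"]:
        print(w, "->", porter_s_rules(w))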

39 / 42

Three stemmers: A comparison

Sample text: Such an analysis can reveal features that are not easily visible from the variations in the individual genes and can lead to a picture of expression that is more biologically transparent and accessible to interpretation

Porter stemmer: such an analysi can reveal featur that ar not easili visibl from the variat in the individu gene and can lead to a pictur of express that is more biolog transpar and access to interpret

Lovins stemmer: such an analys can reve featur that ar not eas vis from th vari in th individu gen and can lead to a pictur of expres that is mor biolog transpar and acces to interpres

Paice stemmer: such an analys can rev feat that are not easy vis from the vary in the individ gen and can lead to a pict of express that is mor biolog transp and access to interpret

40 / 42

Does stemming improve effectiveness?

In general, stemming increases effectiveness for some queries,and decreases effectiveness for others.

The Porter stemmer equivalence class oper contains all of operate, operating, operates, operation, operative, operatives, operational.

Queries where stemming hurts: “operational AND research”, “operating AND system”, “operative AND dentistry”

41 / 42

What does Google do?

Stop words

Normalization

Tokenization

Lowercasing

Stemming

Non-Latin alphabets

Umlauts

Compounds

Numbers

42 / 42