
Some crawling algorithms

5th March lecture

Robot (8)

Efficient Crawling through URL Ordering [Cho 98]

• Default ordering is based on breadth-first search.

• Efficient crawling fetches important pages first.

Importance Definition

• Similarity of a page to a driving query

• Backlink count of a page

• Forward link of a page

• PageRank of a page

• Domain of a page (.edu is better than .com)

• Combination of the above, e.g. w1*Apples + w2*Oranges + …

Robot (9)

A method for fetching pages related to a driving query first [Cho 98].

• Suppose the query is “computer”.

• A page is related (hot) if “computer” appears in the title or appears 10 times in the body of the page.

• Some heuristics for finding a hot page:

– The anchor of its URL contains “computer”.

– Its URL contains “computer”.

– Its URL is within 3 links from a hot page.

Call such a URL a hot URL.

Robot (10)

Crawling Algorithm

hot_queue = url_queue = empty;              /* hot_queue stores hot URLs, url_queue stores the rest */
enqueue(url_queue, starting_url);
while (hot_queue or url_queue is not empty) {
    url = dequeue2(hot_queue, url_queue);   /* dequeue from hot_queue first if it is not empty */
    page = fetch(url);
    if (page is hot) then hot[url] = true;
    enqueue(crawled_urls, url);
    url_list = extract_urls(page);
    for each u in url_list
        if (u not in url_queue and u not in hot_queue and u not in crawled_urls)   /* u is a new URL */
            if (u is a hot URL) enqueue(hot_queue, u);
            else                enqueue(url_queue, u);
}
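A minimal Python sketch of the same two-queue strategy. The helpers fetch_page, extract_urls, is_hot_url and is_hot_page are assumed to be supplied by the caller (e.g. built around the driving query “computer”); none of these names come from [Cho 98].

from collections import deque

def crawl(starting_url, fetch_page, extract_urls, is_hot_url, is_hot_page, limit=1000):
    """Two-queue crawler: hot URLs are always dequeued before ordinary ones."""
    hot_queue, url_queue = deque(), deque()
    crawled, hot = set(), {}
    url_queue.append(starting_url)
    while (hot_queue or url_queue) and len(crawled) < limit:
        url = hot_queue.popleft() if hot_queue else url_queue.popleft()
        page = fetch_page(url)
        hot[url] = is_hot_page(page)
        crawled.add(url)
        for u in extract_urls(page):
            if u not in crawled and u not in hot_queue and u not in url_queue:
                # new URL: route it to the hot queue or the ordinary queue
                (hot_queue if is_hot_url(u) else url_queue).append(u)
    return crawled, hot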

Focused Crawling

• Classifier: is the crawled page P relevant to the topic?
  – an algorithm that maps a page to relevant/irrelevant
  – semi-automatic
  – based on page vicinity
• Distiller: is the crawled page P likely to lead to relevant pages?
  – an algorithm that maps a page to likely/unlikely
  – could be just authority/hub (A/H) computation, taking the HUBS
  – the distiller determines the priority of following links off of P

Measuring Crawler efficiency

Indexing and Retrieval Issues

Efficient Retrieval (1)

• Document-term matrix

          t1    t2   . . .   tj   . . .   tm      nf
    d1   w11   w12   . . .  w1j   . . .  w1m    1/|d1|
    d2   w21   w22   . . .  w2j   . . .  w2m    1/|d2|
    .     .     .            .            .       .
    di   wi1   wi2   . . .  wij   . . .  wim    1/|di|
    .     .     .            .            .       .
    dn   wn1   wn2   . . .  wnj   . . .  wnm    1/|dn|

• wij is the weight of term tj in document di

• Most wij’s will be zero.

Naïve retrieval

Consider a query q = (q1, q2, …, qj, …, qm), with nf = 1/|q|.

How to evaluate q (i.e., compute the similarity between q and every document)?

Method 1: Compare q with every document directly.

• document data structure:

di : ((t1, wi1), (t2, wi2), . . ., (tj, wij), . . ., (tm, wim ), 1/|di|)

– Only terms with positive weights are kept.

– Terms are in alphabetic order.

• query data structure:

q : ((t1, q1), (t2, q2), . . ., (tj, qj), . . ., (tm, qm ), 1/|q|)

Naïve retrieval

Method 1: Compare q with documents directly (cont.)

• Algorithm

initialize all sim(q, di) = 0;
for each document di (i = 1, …, n) {
    for each term tj (j = 1, …, m)
        if tj appears in both q and di
            sim(q, di) += qj * wij;
    sim(q, di) = sim(q, di) * (1/|q|) * (1/|di|);
}
sort documents in descending order of similarity and
display the top k to the user;
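A small Python sketch of Method 1, assuming the query and each document are given as sparse dicts mapping terms to positive weights (the names are illustrative):

import math

def naive_retrieval(query, docs, k=10):
    """Compare the query against every document directly (Method 1)."""
    q_norm = 1.0 / math.sqrt(sum(w * w for w in query.values()))
    scores = {}
    for doc_id, terms in docs.items():
        d_norm = 1.0 / math.sqrt(sum(w * w for w in terms.values()))
        dot = sum(qw * terms[t] for t, qw in query.items() if t in terms)
        scores[doc_id] = dot * q_norm * d_norm
    # sort documents in descending order of similarity and return the top k
    return sorted(scores.items(), key=lambda x: x[1], reverse=True)[:k]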

Inverted Files

Observation: Method 1 is not efficient as most non-zero entries in the document-term matrix need to be accessed.

Method 2: Use Inverted File Index

Several data structures:

1. For each term tj, create a list (the inverted file list) that contains all document ids that have tj:

   I(tj) = { (d1, w1j), (d2, w2j), …, (di, wij), …, (dn, wnj) }

   • di is the document id number of the ith document.
   • Only entries with non-zero weights are kept.

Inverted files

Method 2: Use Inverted File Index (continued)

Several data structures:

2. Normalization factors of documents are pre-computed and stored in an array: nf[i] stores 1/|di|.

3. Create a hash table for all terms in the collection.

    . . .
    tj   →   pointer to I(tj)
    . . .

• Inverted file lists are typically stored on disk.
• The number of distinct terms is usually very large.

How Inverted Files are Created

• Postings, sorted by term (Term, Doc #, Freq):

    a(2,1)  aid(1,1)  all(1,1)  and(2,1)  come(1,1)  country(1,1)  country(2,1)
    dark(2,1)  for(1,1)  good(1,1)  in(2,1)  is(1,1)  it(2,1)  manor(2,1)
    men(1,1)  midnight(2,1)  night(2,1)  now(1,1)  of(1,1)  past(2,1)  stormy(2,1)
    the(1,2)  the(2,2)  their(1,1)  time(1,1)  time(2,1)  to(1,2)  was(2,2)

• Merged dictionary (Term, N docs, Tot Freq), each entry pointing to its postings list:

    a(1,1)  aid(1,1)  all(1,1)  and(1,1)  come(1,1)  country(2,2)  dark(1,1)
    for(1,1)  good(1,1)  in(1,1)  is(1,1)  it(1,1)  manor(1,1)  men(1,1)
    midnight(1,1)  night(1,1)  now(1,1)  of(1,1)  past(1,1)  stormy(1,1)
    the(2,4)  their(1,1)  time(2,2)  to(1,2)  was(1,2)
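A brief Python sketch of this construction, assuming each document is already tokenized (illustrative only); in practice the dictionary lives in a memory-resident hash table and the postings lists on disk:

from collections import Counter, defaultdict

def build_inverted_file(docs):
    """docs: {doc_id: [token, ...]}.
    Returns dictionary[term] = (n_docs, total_freq) and postings[term] = [(doc_id, freq), ...]."""
    postings = defaultdict(list)
    for doc_id in sorted(docs):
        for term, freq in sorted(Counter(docs[doc_id]).items()):
            postings[term].append((doc_id, freq))
    dictionary = {t: (len(p), sum(f for _, f in p)) for t, p in postings.items()}
    return dictionary, dict(postings)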

Retrieval using Inverted files

Algorithm

initialize all sim(q, di) = 0;
for each term tj in q {
    find I(tj) using the hash table;
    for each (di, wij) in I(tj)
        sim(q, di) += qj * wij;
}
for each document di
    sim(q, di) = sim(q, di) * (1/|q|) * nf[i];
sort documents in descending order of similarity and
display the top k to the user;
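A compact Python version of this loop, assuming the inverted lists I(tj), the nf[] array and the query are held in plain dicts (names are illustrative):

import math

def retrieve(query, inverted, nf, k=10):
    """query: {term: weight}; inverted: {term: [(doc_id, w), ...]}; nf: {doc_id: 1/|d|}."""
    sim = {}
    for term, q_w in query.items():
        for doc_id, w in inverted.get(term, []):     # only postings of query terms are touched
            sim[doc_id] = sim.get(doc_id, 0.0) + q_w * w
    q_norm = 1.0 / math.sqrt(sum(w * w for w in query.values()))
    ranked = sorted(((d, s * q_norm * nf[d]) for d, s in sim.items()),
                    key=lambda x: x[1], reverse=True)
    return ranked[:k]

Fed with the I(t1)…I(t5) lists and nf values from the Efficient Retrieval (8) example below, this should reproduce the ranked similarities .87, .82, .78, .29.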

Use something like this as part of your project.

Retrieval using inverted indices

Some observations about Method 2:

• If a document d does not contain any term of a given query q, then d will not be involved in the evaluation of q.

• Only non-zero entries in the columns in the document-term matrix corresponding to the query terms are used to evaluate the query.

• computes the similarities of multiple documents simultaneously (w.r.t. each query word)

Efficient Retrieval (8)

Example (Method 2): Suppose

q = { (t1, 1), (t3, 1) }, 1/|q| = 0.7071

d1 = { (t1, 2), (t2, 1), (t3, 1) }, nf[1] = 0.4082

d2 = { (t2, 2), (t3, 1), (t4, 1) }, nf[2] = 0.4082

d3 = { (t1, 1), (t3, 1), (t4, 1) }, nf[3] = 0.5774

d4 = { (t1, 2), (t2, 1), (t3, 2), (t4, 2) }, nf[4] = 0.2774

d5 = { (t2, 2), (t4, 1), (t5, 2) }, nf[5] = 0.3333

I(t1) = { (d1, 2), (d3, 1), (d4, 2) }

I(t2) = { (d1, 1), (d2, 2), (d4, 1), (d5, 2) }

I(t3) = { (d1, 1), (d2, 1), (d3, 1), (d4, 2) }

I(t4) = { (d2, 1), (d3, 1), (d4, 1), (d5, 1) }

I(t5) = { (d5, 2) }

Efficient Retrieval (9)

After t1 is processed:

sim(q, d1) = 2, sim(q, d2) = 0, sim(q, d3) = 1

sim(q, d4) = 2, sim(q, d5) = 0

After t3 is processed:

sim(q, d1) = 3, sim(q, d2) = 1, sim(q, d3) = 2

sim(q, d4) = 4, sim(q, d5) = 0

After normalization:

sim(q, d1) = .87, sim(q, d2) = .29, sim(q, d3) = .82

sim(q, d4) = .78, sim(q, d5) = 0

Efficiency versus Flexibility

• Storing computed document weights is good for efficiency but bad for flexibility.

– Recomputation is needed if the tf-weight or idf-weight formulas change and/or the tf and df information changes.

• Flexibility is improved by storing raw tf and df information but efficiency suffers.

• A compromise:
  – Store pre-computed tf weights of documents.
  – Apply the idf weights to the query-term tf weights instead of to the document-term tf weights.

Indexing & Retrieval in Google

Slides from http://www.cs.huji.ac.il/~sdbi/2000/google/index.htm

7th March Lecture

System Anatomy

•High Level Overview

Major Data Structures

• Big Files
  – virtual files spanning multiple file systems
  – addressable by 64-bit integers
  – handle allocation & deallocation of file descriptors, since the OS does not provide enough
  – support rudimentary compression

Major Data Structures (2)

• Repository
  – tradeoff between speed & compression ratio
  – zlib (3 to 1) was chosen over bzip (4 to 1)
  – requires no other data structure to access it

Major Data Structures (3)

• Document Index
  – keeps information about each document
  – a fixed-width ISAM (Index Sequential Access Mode) index
  – includes various statistics
    • a pointer into the repository and, if the page has been crawled, a pointer to its info lists
  – compact data structure
  – a record can be fetched in 1 disk seek during search

Major Data Structures (4)

• URLs → docID file
  – used to convert URLs to docIDs
  – a list of URL checksums with their docIDs
  – sorted by checksum
  – given a URL, a binary search is performed
  – conversion is done in batch mode

Major Data Structures (4)

• Lexicon
  – can fit in memory for a reasonable price
    • currently 256 MB
    • contains 14 million words
  – 2 parts:
    • a list of the words
    • a hash table

Major Data Structures (4)

• Hit Lists
  – record the position, font and capitalization of each word occurrence
  – account for most of the space used in the indexes
  – 3 encoding alternatives: simple, Huffman, hand-optimized
  – the hand-optimized encoding uses 2 bytes for every hit:

    plain hit:   cap: 1 | font size: 3 | position: 12
    fancy hit:   cap: 1 | font size: 7 | type: 4 | position: 8
    anchor hit:  cap: 1 | font size: 7 | type: 4 | hash: 4 | pos: 4
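A tiny Python sketch of packing the 2-byte plain hit above; the left-to-right field order inside the 16 bits is an assumption, since the slide only gives the field widths:

def pack_plain_hit(cap, font_size, position):
    """Pack a plain hit into 16 bits: cap (1 bit) | font size (3 bits) | position (12 bits)."""
    assert cap in (0, 1) and 0 <= font_size < 8 and 0 <= position < 4096
    return (cap << 15) | (font_size << 12) | position

def unpack_plain_hit(hit):
    """Inverse of pack_plain_hit: returns (cap, font_size, position)."""
    return (hit >> 15) & 0x1, (hit >> 12) & 0x7, hit & 0xFFF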

Major Data Structures (4)

• Hit Lists (2) – forward-barrel and inverted-barrel layout (figure):
  – Forward barrels (43 GB total): each record is a docID (27 bits) followed, for each word, by wordID (24 bits) + nhits (8 bits) + the hits themselves; the per-document word list ends with a null wordID.
  – Lexicon (293 MB): entries of (wordID, ndocs) that point into the inverted barrels.
  – Inverted barrels (41 GB): for each word, a list of entries of the form docID (27 bits) + nhits (8 bits) + hit list.

Major Data Structures (5)

• Forward Index
  – partially ordered
  – uses 64 barrels
  – each barrel holds a range of wordIDs
  – requires slightly more storage
  – each wordID is stored as a relative difference from the minimum wordID of its barrel
  – saves considerable time in the sorting

Major Data Structures (6)

• Inverted Index
  – 64 barrels (the same as the forward index)
  – for each wordID, the Lexicon contains a pointer to the barrel that the wordID falls into
  – that pointer points to a doclist of docIDs together with their hit lists
  – the order of the docIDs is important:
    • by docID, or by a document word-ranking
  – there are two sets of inverted barrels: the short barrels and the full barrels

Major Data Structures (7)

• Crawling the Web
  – fast distributed crawling system
  – the URLserver & the crawlers are implemented in Python
  – each crawler keeps about 300 connections open
  – at peak, the rate is about 100 pages (600 KB) per second
  – uses an internal cached DNS lookup
  – asynchronous IO to handle events
  – a number of queues
  – robust & carefully tested

Major Data Structures (8)

• Indexing the Web – Parsing
  • the parser must handle errors:
    – HTML typos
    – kilobytes of zeros in the middle of a tag
    – non-ASCII characters
    – HTML tags nested hundreds deep
  • they developed their own parser
    – it involved a fair amount of work
    – it did not cause a bottleneck

Major Data Structures (9)

• Indexing Documents into Barrels
  – turning words into wordIDs
  – an in-memory hash table – the Lexicon
  – new additions are logged to a file
  – parallelization:
    • a shared lexicon of 14 million words
    • a log of all the extra words

Major Data Structures (10)

• Indexing the Web – Sorting
  • creating the inverted index
  • produces two types of barrels:
    – for titles and anchors (short barrels)
    – for full text (full barrels)
  • sorts every barrel separately
  • runs the sorters in parallel
  • the sorting is done in main memory

Ranking looks at the short barrels first and then at the full barrels.

Searching

• Algorithm:
  1. Parse the query
  2. Convert the words into wordIDs
  3. Seek to the start of the doclist in the short barrel for every word
  4. Scan through the doclists until there is a document that matches all of the search terms
  5. Compute the rank of that document
  6. If we are at the end of the short barrels, start on the doclists of the full barrels, unless we already have enough documents
  7. If we are not at the end of any doclist, go to step 4
  8. Sort the matched documents by rank and return the top K
     (the search may jump to this step after 40k matching pages)

The Ranking System

• The information used:
  – position, font size, capitalization
  – anchor text
  – PageRank
• Hit types:
  – title, anchor, URL, etc.
  – small font, large font, etc.

The Ranking System (2)

• Each hit type has its own weight
  – count-weights increase linearly with counts at first but quickly taper off; this gives the IR score of the doc
  – (IDF weighting??)
• The IR score is combined with PageRank to give the final rank
• For a multi-word query:
  – a proximity score is computed for every set of hits, with a proximity-type weight
  – 10 grades of proximity

Feedback

• A trusted user may optionally evaluate the results

• The feedback is saved

• When modifying the ranking function we can see the impact of this change on all previous searches that were ranked

Results

• Produce better results than major commercial search engines for most searches

• Example: query “bill clinton”
  – returns results from whitehouse.gov
  – the email address of the president
  – all the results are high-quality pages
  – no broken links
  – no “bill” without “clinton” & no “clinton” without “bill”

Storage Requirements

• Using Compression on the repository

• about 55 GB for all the data used by the SE

• most of the queries can be answered by just the short inverted index

• with better compression, a high quality SE can fit onto a 7GB drive of a new PC

Storage Statistics

Total Size of Fetched Pages                 147.8 GB
Compressed Repository                        53.5 GB
Short Inverted Index                          4.1 GB
Temporary Anchor Data                         6.6 GB
Document Index incl. Variable Width Data      9.7 GB
Links Database                                3.9 GB
Total Without Repository                     55.2 GB

Web Page Statistics

Number of Web Pages Fetched     24 million
Number of URLs Seen             76.5 million
Number of Email Addresses       1.7 million
Number of 404s                  1.6 million

System Performance

• It took 9 days to download 26 million pages
• 48.5 pages per second
• The indexer & crawler ran simultaneously
• The indexer runs at 54 pages per second
• The sorters run in parallel on 4 machines; the whole sorting process took 24 hours

Clustering

Clustering in Web Search

• Global clustering
  – generate clusters from the entire document database
  – offline
  – e.g. the Google approach
• Local clustering
  – cluster just the search results for the current query
  – online
  – e.g. Manjara (uses LSI ideas)
  – Grouper (uses suffix trees)
  – Northern Exposure (manual/collaborative)

Notice the similarity with clustering-based query expansion.

What is clustering?

• The process of grouping a set of physical or abstract objects into classes of similar objects.

• It is also called unsupervised learning.

• It is a common and important task that finds many applications.

Classical clustering methods

• Partitioning methods
  – k-means (and EM), k-medoids
• Hierarchical methods
  – agglomerative, divisive, BIRCH
• Distance measures
  – minimum, maximum, mean, average

Clustering -- Example 1

• For simplicity, 1-dimensional objects and k = 2.
• Objects: 1, 2, 5, 6, 7
• k-means:
  – randomly select 5 and 6 as the initial centroids;
  – => two clusters {1,2,5} and {6,7}; meanC1 = 8/3, meanC2 = 6.5
  – => {1,2}, {5,6,7}; meanC1 = 1.5, meanC2 = 6
  – => no change.
  – Aggregate dissimilarity = 0.5^2 + 0.5^2 + 1^2 + 1^2 = 2.5
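A tiny 1-dimensional k-means sketch in plain Python (illustrative); started from centroids 5 and 6 it goes through exactly the steps above:

def kmeans_1d(points, centroids, iters=100):
    """Lloyd's k-means on 1-D points; returns the final clusters and centroids."""
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p in points:
            # assign each point to its nearest centroid
            i = min(range(len(centroids)), key=lambda j: abs(p - centroids[j]))
            clusters[i].append(p)
        new_centroids = [sum(c) / len(c) for c in clusters]
        if new_centroids == centroids:          # no change: converged
            break
        centroids = new_centroids
    return clusters, centroids

# kmeans_1d([1, 2, 5, 6, 7], [5, 6])  ->  ([[1, 2], [5, 6, 7]], [1.5, 6.0])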

Clustering -- Example 2

• For simplicity, we again use 1-dimensional objects.
• Objects: 1, 2, 5, 6, 7
• Agglomerative clustering:
  – find the two closest objects/clusters and merge them;
  – => {1,2}, so we now have {1.5, 5, 6, 7};
  – => {1,2}, {5,6}, so {1.5, 5.5, 7};
  – => {1,2}, {{5,6},7}.
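And a matching sketch of the agglomerative variant, merging by centroid distance (illustrative):

def agglomerative_1d(points, target=2):
    """Repeatedly merge the two clusters whose centroids are closest."""
    clusters = [[p] for p in points]
    centroid = lambda c: sum(c) / len(c)
    while len(clusters) > target:
        # find the closest pair of clusters by centroid distance
        i, j = min(((a, b) for a in range(len(clusters)) for b in range(a + 1, len(clusters))),
                   key=lambda ab: abs(centroid(clusters[ab[0]]) - centroid(clusters[ab[1]])))
        clusters[i] += clusters[j]
        del clusters[j]
    return clusters

# agglomerative_1d([1, 2, 5, 6, 7])  ->  [[1, 2], [5, 6, 7]]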

Challenges

• Scalability
• Dealing with different types of attributes
• Clusters with arbitrary shapes
• Automatically determining input parameters
• Dealing with noise (outliers)
• Order insensitivity
• High dimensionality
• Interpretability and usability

Scalable Techniques for Clustering the Web

Slides provided by

Taher Haveliwala

Overview

• Choose document representation

• Choose similarity metric

• Compute pairwise document similarities

• Generate clusters

Document Representation

• Start with union of anchor text and content of the page

• Remove stopwords (~ 750)

• Remove high frequency & low frequency terms

• Use stemming

• Apply TFIDF scaling
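A rough Python sketch of this pipeline. The toy stopword set, the frequency cutoffs and the crude suffix stripping stand in for the real ~750-word stoplist, the high/low-frequency filter and a proper stemmer; doc_freq and n_docs are assumed to come from a collection-wide pass:

import math, re
from collections import Counter

STOPWORDS = {"a", "an", "and", "for", "here", "i", "is", "of", "the", "this", "to", "what"}  # toy list

def make_bag(anchor_texts, page_text, doc_freq, n_docs, min_df=2, max_df_frac=0.5):
    """Union of anchor text and page content -> TFIDF-weighted bag."""
    text = (" ".join(anchor_texts) + " " + page_text).lower()
    tokens = [t for t in re.findall(r"[a-z]+", text) if t not in STOPWORDS]
    stems = [t[:-1] if t.endswith("s") else t for t in tokens]       # crude stemming
    bag = {}
    for term, tf in Counter(stems).items():
        df = doc_freq.get(term, 1)
        if df < min_df or df > max_df_frac * n_docs:                 # drop rare / very common terms
            continue
        bag[term] = tf * math.log(n_docs / df)                       # TFIDF scaling
    return bag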

Bag Generation

(Figure: the bag of a page is built from its own content plus the anchor text of links pointing to it. Example pages such as http://www.foobar.com/ and http://www.baz.com/ link to the “MusicWorld” page at http://www.music.com/ (“Enter our site”) with anchors like “...click here for a great music page...”, “...click here for great sports page...”, “...this music is great...”, “...what I had for lunch...”.)

Similarity

• Similarity metric for pages U1, U2 that were assigned bags B1, B2, respectively:

  – sim(U1, U2) = |B1 ∩ B2| / |B1 ∪ B2|

• The threshold is set to 20%.
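In Python, with each bag reduced to a set of terms, this is just Jaccard similarity (a minimal sketch):

def jaccard(b1, b2):
    """sim(U1, U2) = |B1 ∩ B2| / |B1 ∪ B2| for bags represented as sets."""
    return len(b1 & b2) / len(b1 | b2) if (b1 or b2) else 0.0

# Pages U1 and U2 are kept as a similar pair if jaccard(B1, B2) >= 0.20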

Reality Check

www.foodchannel.com:
  www.epicurious.com/a_home/a00_home/home.html        .37
  www.gourmetworld.com                                .36
  www.foodwine.com                                    .325
  www.cuisinenet.com                                  .3125
  www.kitchenlink.com                                 .3125
  www.yumyum.com                                      .3
  www.menusonline.com                                 .3
  www.snap.com/directory/category/0,16,-324,00.html   .2875
  www.ichef.com                                       .2875
  www.home-canning.com                                .275

Pair Generation

• Find all pairs of pages (U1, U2) satisfying sim(U1, U2) ≥ 20%
• Ignore all URL pairs with sim < 20%

• How do we avoid the join bottleneck?

Locality Sensitive Hashing

• Idea: use a special kind of hashing.
• Locality Sensitive Hashing (LSH) provides a solution:
  – min-wise hash functions [Broder ’98]
  – LSH [Indyk, Motwani ’98], [Cohen et al. 2000]
• Properties:
  – similar URLs are hashed together with high probability
  – dissimilar URLs are not hashed together

Locality Sensitive Hashing

(Figure: similar pages land in the same bucket, e.g. sports.com and golf.com in one bucket; music.com, opera.com and sing.com in another.)

Avoiding n² comparisons through hashing

• Two steps:
  – Min-hash (MH): a way to consistently sample words from bags
  – Locality Sensitive Hashing (LSH): similar pages get hashed to the same bucket while dissimilar ones do not

Step 1: Min-hash

• Step 1: generate m min-hash signatures for each URL (m = 80)
  – For i = 1…m:
    • generate a random ordering h_i on the words
    • mh_i(u) = argmin { h_i(w) | w ∈ B_u }
  – Pr( mh_i(u) = mh_i(v) ) = sim(u, v)

Step 1: Min-hash

Round 1: ordering = [cat, dog, mouse, banana]

  Set A: {mouse, dog}   →  MH-signature = dog
  Set B: {cat, mouse}   →  MH-signature = cat

Step 1: Min-hash

Round 2: ordering = [banana, mouse, cat, dog]

  Set A: {mouse, dog}   →  MH-signature = mouse
  Set B: {cat, mouse}   →  MH-signature = mouse
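A small Python sketch of Step 1 with random word orderings (the helper names and seed are illustrative; every bag is assumed to be a subset of the vocabulary). The trailing comment shows what the slide's Round 1 ordering produces:

import random

def minhash_signatures(bags, vocabulary, m=80, seed=0):
    """bags: {url: set(words)}.  Returns {url: [m min-hash values]}."""
    rng = random.Random(seed)
    orderings = []
    for _ in range(m):
        order = list(vocabulary)
        rng.shuffle(order)                                   # a random ordering h_i on the words
        orderings.append({w: rank for rank, w in enumerate(order)})
    return {url: [min(bag, key=h.get) for h in orderings] for url, bag in bags.items()}

# With the ordering [cat, dog, mouse, banana]:
#   Set A = {mouse, dog}  ->  min-hash = dog
#   Set B = {cat, mouse}  ->  min-hash = cat        (as in Round 1 above)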

Step 2: LSH

• Generate l LSH signatures for each URL, each using k of the min-hash values (l = 125, k = 3)
  – For i = 1…l:
    • randomly select k min-hash indices and concatenate the corresponding min-hash values to form the i-th LSH signature
• Generate a candidate pair if u and v have an LSH signature in common in any round
  – Pr( lsh(u) = lsh(v) ) = Pr( mh(u) = mh(v) )^k

The LSH signature drives down the probability of false positives: the higher k is, the lower the false-positive rate. But it also increases the probability of false negatives. Solution: use l signatures instead of one.

Step 2: LSH

Set A: {mouse, dog, horse, ant}
  MH1 = horse, MH2 = mouse, MH3 = ant, MH4 = dog
  LSH134 = horse-ant-dog, LSH234 = mouse-ant-dog

Set B: {cat, ice, shoe, mouse}
  MH1 = cat, MH2 = mouse, MH3 = ice, MH4 = shoe
  LSH134 = cat-ice-shoe, LSH234 = mouse-ice-shoe

Extracting Similar Pairs

Do l times:
    generate k distinct random indices, each from the interval {1…m}
    for each URL u:
        create an LSH-signature for u by concatenating
        the MH-signatures selected by the k indices
    sort all URLs by their LSH-signatures
    for each run of URLs with matching LSH-signatures,
        output all pairs

A postprocessing stage then checks, for each pair (u1, u2) that is output, that u1 and u2 indeed agree on at least 20% of their MH signatures.
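A hedged Python sketch of this pair-extraction stage on top of the min-hash signatures (parameter values follow the slides; the function and variable names are illustrative). Bucketing URLs by their LSH signature plays the role of the sort-and-scan over signature runs:

import itertools, random

def lsh_candidate_pairs(mh, l=125, k=3, m=80, seed=0):
    """mh: {url: [m min-hash values]}.  Returns URL pairs sharing an LSH signature in some round."""
    rng = random.Random(seed)
    candidates = set()
    for _ in range(l):
        idx = rng.sample(range(m), k)                    # k distinct random indices from {1..m}
        buckets = {}
        for url, sig in mh.items():
            lsh_sig = tuple(sig[i] for i in idx)         # concatenate the k selected MH values
            buckets.setdefault(lsh_sig, []).append(url)
        for urls in buckets.values():                    # each "run" of matching signatures
            candidates.update(itertools.combinations(sorted(urls), 2))
    return candidates

def filter_pairs(candidates, mh, threshold=0.20):
    """Postprocessing: keep (u, v) only if they agree on at least 20% of their MH signatures."""
    return [(u, v) for u, v in candidates
            if sum(a == b for a, b in zip(mh[u], mh[v])) >= threshold * len(mh[u])]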

Analysis

False-positive probability (two dissimilar bags collide):
  < 1/2^48 (over 20 million docs we have about 4×10^14 pairs, so this says that only about 5 of those pairs may be false positives)

False-negative probability:
  If two docs have 20% similarity, the probability that their MH signatures at a given index are equal is 2/10.
  For an LSH signature to be equal we need three MH signatures to be equal: (2/10)^k = 1/125 (since k = 3).
  To make sure that such a pair will actually collide, we need at least l = 125 LSH signatures.

Number of MH signatures m = 80; length of an LSH signature k = 3.
k needs to be picked so that both the false-positive and the false-negative probabilities are kept in check.

Sort & Filter

• Using all buckets from all LSH rounds, generate candidate pairs

• Sort candidate pairs on first field

• Filter candidate pairs: keep pair (u, v) only if u and v agree on at least 20% of their MH-signatures

• Ready for “What’s Related?” queries...

Overview

• Choose document representation

• Choose similarity metric

• Compute pairwise document similarities

• Generate clusters

Clustering

• The set of document pairs represents the document-document similarity matrix with a 20% similarity threshold
• Scan through the pairs (they are sorted on the first component)
• For each run [(u, v1), …, (u, vn)]:
  – if u is not marked:
    • cluster = u + the unmarked neighbors of u
    • mark u and all neighbors of u

Center
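A short Python sketch of this pass over the sorted similar-pair list (the CENTER-style greedy cluster formation; names are illustrative, and pairs are assumed to be sorted on their first component):

def center_clusters(sorted_pairs):
    """sorted_pairs: [(u, v), ...] sorted by u.  Grow a cluster around each unmarked u."""
    neighbors = {}
    for u, v in sorted_pairs:
        neighbors.setdefault(u, []).append(v)            # one "run" per distinct u
    marked, clusters = set(), []
    for u, vs in neighbors.items():
        if u in marked:
            continue
        clusters.append([u] + [v for v in vs if v not in marked])
        marked.add(u)
        marked.update(vs)                                # mark u and all of its neighbors
    return clusters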

Results

Algorithm Step     Running Time (hours)
Bag generation     23
Bag sorting        4.7
Min-hash           26
LSH                16
Filtering          83
Sorting            107
CENTER             18

(20 million URLs on a Pentium II 450)

Sample Cluster

feynman.princeton.edu/~sondhi/205main.html
hep.physics.wisc.edu/wsmith/p202/p202syl.html
hepweb.rl.ac.uk/ppUK/PhysFAQ/relativity.html
pdg.lbl.gov/mc_particle_id_contents.html
physics.ucsc.edu/courses/10.html
town.hall.org/places/SciTech/qmachine
www.as.ua.edu/physics/hetheory.html
www.landfield.com/faqs/by-newsgroup/sci/sci.physics.relativity.html
www.pa.msu.edu/courses/1999spring/PHY492/desc_PHY492.html
www.phy.duke.edu/Courses/271/Synopsis.html
. . . (total of 27 urls) . . .