Knowledge Management with Documents
Qiang Yang, HKUST
Thanks: Professor Dik Lee, HKUST


Page 1:

Knowledge Management with Documents

Qiang Yang, HKUST

Thanks: Professor Dik Lee, HKUST

Page 2:

Keyword Extraction

Goal: given N documents, each consisting of words, extract the most significant subset of words (the keywords).

Example: [All the students are taking exams] --> [student, take, exam]

Keyword extraction process (a sketch of this pipeline follows below):
  remove stop words
  stem the remaining terms
  collapse terms using a thesaurus
  build an inverted index
  extract keywords and build a keyword index
  extract key phrases and build a key-phrase index

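A minimal sketch of this pipeline in Python (the stop-word list, thesaurus map, and stemming rules below are tiny stand-ins for illustration, not the resources used in the lecture):

from collections import defaultdict

# Stand-in resources; real systems use much larger lists and a full stemmer.
STOP_WORDS = {"a", "about", "again", "all", "are", "the", "to", "of"}
THESAURUS = {"test": "exam"}   # collapse synonyms onto one canonical term

def stem(word):
    # Crude suffix stripping; a Porter-style stemmer with context rules does better
    # (this version turns "taking" into "tak" rather than "take").
    for suffix in ("ing", "ness", "s"):
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

def index_document(doc_id, text, inverted_index):
    terms = [w.lower().strip(".,") for w in text.split()]
    terms = [t for t in terms if t not in STOP_WORDS]          # remove stop words
    terms = [THESAURUS.get(stem(t), stem(t)) for t in terms]   # stem, then map via thesaurus
    for term in terms:
        inverted_index[term].add(doc_id)                       # build the inverted index
    return terms

index = defaultdict(set)
print(index_document(1, "All the students are taking exams", index))
# ['student', 'tak', 'exam'] -- close to the slide's [student, take, exam]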

Page 3:

Stop Words and Stemming

From a given stop word list, e.g. [a, about, again, are, the, to, of, …], remove the listed words from the documents.

Or, determine the stop words from a large enough corpus of common English: sort the words in decreasing order of their occurrence frequency in the corpus.

Zipf's law: frequency * rank ≈ constant
  the most frequent words tend to be short
  the most frequent 20% of words account for about 60% of usage

Page 4:

Zipf's Law -- An Illustration

  Rank (R)   Term   Frequency (F)   R*F (x 10^6)
  1          the    69,971          0.070
  2          of     36,411          0.073
  3          and    28,852          0.086
  4          to     26,149          0.104
  5          a      23,237          0.116
  6          in     21,341          0.128
  7          that   10,595          0.074
  8          is     10,009          0.081
  9          was     9,816          0.088
  10         he      9,543          0.095

Page 5:

Resolving Power of Words

[Figure: words plotted in decreasing frequency order; the very high-frequency terms and the very low-frequency terms are non-significant, and the presumed resolving power is highest for the significant words in between.]

Page 6:

Stemming

The next task is stemming: transforming words to their root form, e.g. Computing, Computer, Computation --> comput

Suffix-based methods: remove "ability" from "computability"; similarly strip suffixes such as "...ness" and "...ive".
In practice: a suffix list plus context rules.
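A sketch of the suffix-list-plus-context-rule idea (the suffix list and the minimum-stem-length rule are illustrative assumptions; a production system would use a full Porter-style rule set):

# Illustrative suffix list with one context rule: only strip a suffix if the
# remaining stem keeps at least `min_stem` characters.
SUFFIXES = ["ability", "ation", "ness", "ive", "ing", "er"]

def suffix_stem(word, min_stem=3):
    word = word.lower()
    for suffix in sorted(SUFFIXES, key=len, reverse=True):   # try the longest suffix first
        if word.endswith(suffix) and len(word) - len(suffix) >= min_stem:
            return word[: -len(suffix)]
    return word

for w in ["Computing", "Computer", "Computation", "computability"]:
    print(w, "->", suffix_stem(w))
# All four words map to the root form "comput".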

Page 7:

Thesaurus Rules

A thesaurus aims at a classification of the words in a language: for a word, it gives related terms that are broader than, narrower than, the same as (synonyms), or opposed to (antonyms) the given word (other kinds of relationships may exist, e.g., composed of).

Static thesaurus tables: [anneal, strain], [antenna, receiver], …
Examples: Roget's Thesaurus; WordNet at Princeton.

Page 8:

Thesaurus Rules can also be Learned

From a search engine query log: after typing queries, users browse the results.
  If query1 and query2 lead to the same document, then Similar(query1, query2).
  If query1 leads to a document whose title contains keyword K, then Similar(query1, K).
  Then apply transitivity…

Microsoft Research China's work in WWW10 (Wen et al.) on Encarta online (a rough sketch of the click-through idea follows below).
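A rough sketch of the click-through idea (the log format and the sample entries are made up for illustration; this is a simplification, not the method of the WWW10 paper):

from collections import defaultdict
from itertools import combinations

# Each entry: (query, clicked document id) -- hypothetical sample data.
click_log = [
    ("knowledge management", "doc7"),
    ("KM tutorial", "doc7"),
    ("vector space model", "doc3"),
]

def learn_similar_queries(log):
    queries_by_doc = defaultdict(set)
    for query, doc in log:
        queries_by_doc[doc].add(query)
    similar = set()
    for queries in queries_by_doc.values():
        # If two queries lead to the same clicked document, treat them as similar.
        for q1, q2 in combinations(sorted(queries), 2):
            similar.add((q1, q2))
    return similar

print(learn_similar_queries(click_log))
# {('KM tutorial', 'knowledge management')}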

Page 9:

The Vector-Space Model

T distinct terms are available; call them index terms or the vocabulary. The index terms represent the important terms for an application. For example, the vocabulary of a computer-science collection might be:
  T1 = architecture, T2 = bus, T3 = computer, T4 = database, T5 = xml
A document is represented as a vector over these terms: <T1, T2, T3, T4, T5>, or with weights, <W(T1), W(T2), W(T3), W(T4), W(T5)>.

Page 10:

The Vector-Space Model

Assumption: words are uncorrelated.

Given:
1. N documents and a query; the query is considered a document too.
2. Each is represented by t terms.
3. Each term j in document i has weight d_ij.
4. How to compute the weights is dealt with later.

Term-document matrix:
        T1    T2   ...  Tt
  D1    d11   d12  ...  d1t
  D2    d21   d22  ...  d2t
  :      :     :         :
  Dn    dn1   dn2  ...  dnt

Query: Q = (q1, q2, ..., qt)

Page 11:

Graphic Representation

Example:
  D1 = 2T1 + 3T2 + 5T3
  D2 = 3T1 + 7T2 + T3
  Q  = 0T1 + 0T2 + 2T3

[Figure: D1, D2, and Q plotted as vectors in the three-dimensional space spanned by T1, T2, and T3.]

  Is D1 or D2 more similar to Q?
  How to measure the degree of similarity? Distance? Angle? Projection?

Page 12:

Similarity Measure - Inner Product

The similarity between document Di and query Q can be computed as the inner (dot) product:

  sim(Di, Q) = Di · Q

Binary: weight = 1 if the word is present, 0 otherwise.
Non-binary: the weight represents the degree of similarity. Example: TF/IDF, explained later.

  sim(Di, Q) = Σ_{j=1..t} d_ij * q_j

Page 13:

Inner Product -- Examples

Binary:
  D = 1, 1, 1, 0, 1, 1, 0
  Q = 1, 0, 1, 0, 0, 1, 1
  sim(D, Q) = 3

  Size of vector = size of vocabulary = 7
  (vocabulary: retrieval, database, architecture, computer, text, management, information)

Weighted:
  D1 = 2T1 + 3T2 + 5T3
  Q  = 0T1 + 0T2 + 2T3
  sim(D1, Q) = 2*0 + 3*0 + 5*2 = 10
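A small sketch reproducing both examples, treating the vectors as plain Python lists:

def inner_product(d, q):
    # Inner-product similarity: sum of the pairwise products of term weights.
    return sum(dj * qj for dj, qj in zip(d, q))

# Binary example (vocabulary size 7)
D = [1, 1, 1, 0, 1, 1, 0]
Q = [1, 0, 1, 0, 0, 1, 1]
print(inner_product(D, Q))                    # 3

# Weighted example: D1 = 2T1 + 3T2 + 5T3, Q = 0T1 + 0T2 + 2T3
print(inner_product([2, 3, 5], [0, 0, 2]))    # 10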

Page 14:

Properties of Inner Product

The inner-product similarity is unbounded.
It favors long documents: a long document has a large number of unique terms, each of which may occur many times.
It measures how many terms matched, but not how many terms did not match.

Page 15:

Cosine Similarity Measure

Cosine similarity measures the cosine of the angle between two vectors: the inner product normalized by the vector lengths.

[Figure: D1, D2, and Q as vectors in the space spanned by t1, t2, t3; cosine similarity compares the angles between them.]

  CosSim(Di, Q) = ( Σ_{k=1..t} d_ik * q_k ) / sqrt( Σ_{k=1..t} d_ik^2 * Σ_{k=1..t} q_k^2 )

Page 16:

Cosine Similarity: an Example

D1 = 2T1 + 3T2 + 5T3    CosSim(D1, Q) = 5 / sqrt(38) = 0.81
D2 = 3T1 + 7T2 + T3     CosSim(D2, Q) = 1 / sqrt(59) = 0.13
Q  = 0T1 + 0T2 + 2T3

D1 is about 6 times better than D2 using cosine similarity, but only 5 times better using the inner product.
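A sketch of the cosine measure on the same vectors:

import math

def cosine_similarity(d, q):
    # Inner product normalized by the lengths of the two vectors.
    dot = sum(dk * qk for dk, qk in zip(d, q))
    norm = math.sqrt(sum(dk * dk for dk in d)) * math.sqrt(sum(qk * qk for qk in q))
    return dot / norm if norm else 0.0

D1, D2, Q = [2, 3, 5], [3, 7, 1], [0, 0, 2]
print(round(cosine_similarity(D1, Q), 2))   # 0.81
print(round(cosine_similarity(D2, Q), 2))   # 0.13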

Page 17:

Document and Term Weights

Document term weights are calculated using frequencies in documents (tf) and in the collection (idf):

  tf_ij = frequency of term j in document i
  df_j  = document frequency of term j = number of documents containing term j
  idf_j = inverse document frequency of term j = log2(N / df_j), where N is the number of documents in the collection

Inverse document frequency is an indication of a term's value as a document discriminator.

Page 18:

Term Weight Calculations

Weight of the jth term in the ith document:

  d_ij = tf_ij * idf_j = tf_ij * log2(N / df_j)

TF: Term Frequency. A term that occurs frequently in a document but rarely in the rest of the collection gets a high weight.

Let max_l{tf_il} be the frequency of the most frequent term in document i.
Normalization: normalized term frequency = tf_ij / max_l{tf_il}

Page 19:

An Example of TF

Document = (A Computer Science Student Uses Computers)
Vector model based on the keywords (Computer, Engineering, Student):
  tf(Computer) = 2, tf(Engineering) = 0, tf(Student) = 1, max(tf) = 2

TF weights:
  Computer = 2/2 = 1
  Engineering = 0/2 = 0
  Student = 1/2 = 0.5

Page 20:

Inverse Document Frequency

df_j gives the number of documents, among the N documents, in which term j appears.
IDF is inversely related to DF; typically idf_j = log2(N / df_j).
Example: given 1000 documents, if "computer" appears in 200 of them, IDF = log2(1000 / 200) = log2(5) ≈ 2.32.

Page 21:

TF * IDF

  d_ij = (tf_ij / max_l{tf_il}) * idf_j = (tf_ij / max_l{tf_il}) * log2(N / df_j)

Can be used to obtain non-binary weights.
Used with great success in the SMART Information Retrieval System by the late Gerard Salton and M. J. McGill, Cornell University, 1983.
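A sketch of the normalized tf-idf weighting from the last few slides (the toy document collection is made up for illustration, and no stemming or stop-word removal is applied):

import math
from collections import Counter

docs = {
    "d1": "a computer science student uses computers",
    "d2": "database systems for computer engineering",
    "d3": "students take exams",
}

def tf_idf_weights(docs):
    n_docs = len(docs)
    term_counts = {d: Counter(text.split()) for d, text in docs.items()}
    df = Counter()
    for counts in term_counts.values():
        df.update(counts.keys())               # document frequency of each term
    weights = {}
    for d, counts in term_counts.items():
        max_tf = max(counts.values())          # frequency of the most frequent term in this doc
        weights[d] = {
            term: (tf / max_tf) * math.log2(n_docs / df[term])   # d_ij = (tf/max tf) * log2(N/df)
            for term, tf in counts.items()
        }
    return weights

w = tf_idf_weights(docs)
print(round(w["d1"]["computer"], 2))   # "computer" is in 2 of the 3 docs, so idf = log2(3/2)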

Page 22:

Implementation based on Inverted Files

In practice, document vectors are not stored directly; an inverted organization provides much better access speed.

The index file can be implemented as a hash file, a sorted list, or a B-tree.

[Figure: an inverted file. Each index term (system, computer, database, science) has a document frequency df (shown as 3, 2, 4, 1) and a postings list of (Dj, tf_j) pairs such as (D2, 4), (D5, 2), (D1, 3), (D7, 4).]
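A sketch of an inverted file that keeps, for each index term, a postings list of (document, tf) pairs; df is simply the length of that list (a plain dictionary stands in for the hash file, sorted list, or B-tree):

from collections import Counter, defaultdict

def build_inverted_file(docs):
    # term -> list of (doc_id, term frequency) postings
    postings = defaultdict(list)
    for doc_id, text in docs.items():
        for term, tf in Counter(text.split()).items():
            postings[term].append((doc_id, tf))
    return postings

docs = {
    "D1": "database systems and database design",
    "D2": "computer science",
    "D5": "computer systems",
}
inv = build_inverted_file(docs)
print(inv["database"], "df =", len(inv["database"]))   # [('D1', 2)] df = 1
print(inv["computer"], "df =", len(inv["computer"]))   # [('D2', 1), ('D5', 1)] df = 2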

Page 23:

A Simple Search Engine

We now have enough tools to build a simple search engine (documents == web pages):
1. Starting from well-known web sites, crawl to obtain N web pages (for very large N).
2. Apply stop-word removal, stemming, and the thesaurus to select K keywords.
3. Build an inverted index for the K keywords.
4. For any incoming user query Q:
   a. For each document D, compute the cosine similarity score between Q and D.
   b. Select all documents whose score is over a certain threshold T.
   c. Let this result set of documents be M.
   d. Return M to the user.
(A compact sketch of steps 3-4 follows below.)
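A compact sketch of steps 3-4, assuming crawling and keyword selection have already produced a small vocabulary and a set of pages (both made up here); scoring uses the cosine measure defined earlier:

import math
from collections import Counter

def vectorize(text, vocabulary):
    counts = Counter(text.lower().split())
    return [counts[t] for t in vocabulary]

def cosine(d, q):
    dot = sum(x * y for x, y in zip(d, q))
    norm = math.sqrt(sum(x * x for x in d)) * math.sqrt(sum(y * y for y in q))
    return dot / norm if norm else 0.0

def search(query, pages, vocabulary, threshold=0.1):
    q_vec = vectorize(query, vocabulary)
    scored = [(page_id, cosine(vectorize(text, vocabulary), q_vec))
              for page_id, text in pages.items()]
    # M: all documents whose score is over the threshold T, best first
    return sorted([(p, s) for p, s in scored if s > threshold],
                  key=lambda ps: ps[1], reverse=True)

vocabulary = ["computer", "database", "science", "student"]
pages = {"p1": "computer science student", "p2": "database systems"}
print(search("computer student", pages, vocabulary))   # p1 matches, p2 is filtered out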

Page 24:

Remaining Questions

How to crawl?
How to evaluate the results: given 3 search engines, which one is better? Is there a quantitative measure?

Page 25:

Measurement

Let M documents be returned out of a total of N documents.
  N = N1 + N2: N1 documents are relevant to the query, N2 are not.
  M = M1 + M2: M1 returned documents are relevant to the query, M2 are not.

Precision = M1 / M
Recall    = M1 / N1
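A small sketch of these two measures over sets of document ids (the example sets are made up):

def precision_recall(returned, relevant):
    # returned: the M documents returned; relevant: the N1 relevant documents
    m1 = len(returned & relevant)                       # returned and relevant
    precision = m1 / len(returned) if returned else 0.0
    recall = m1 / len(relevant) if relevant else 0.0
    return precision, recall

returned = {"d1", "d2", "d3"}
relevant = {"d2", "d3", "d7", "d9"}
print(precision_recall(returned, relevant))   # (0.666..., 0.5)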

Page 26:

Retrieval Effectiveness - Precision and Recall

  recall    = (number of relevant documents retrieved) / (total number of relevant documents)
  precision = (number of relevant documents retrieved) / (total number of documents retrieved)

[Figure: the entire document collection split into relevant documents and retrieved documents.]

                  retrieved                 not retrieved
  relevant        retrieved & relevant      not retrieved but relevant
  irrelevant      retrieved & irrelevant    not retrieved & irrelevant

Page 27:

Precision and Recall

Precision:
  evaluates the correlation of the query to the database
  an indirect measure of the completeness of the indexing algorithm
Recall:
  the ability of the search to find all of the relevant items in the database

Among the three numbers, only two are always available:
  the total number of items retrieved
  the number of relevant items retrieved
The total number of relevant items is usually not available.

Page 28:

Relationship between Recall and Precision

[Figure: precision (y-axis, 0 to 1) versus recall (x-axis, 0 to 1). At one end the system returns most of the relevant documents but includes much junk too; at the other it returns only relevant documents but misses many useful ones; the ideal is high precision and high recall.]

Page 29:

Computation of Recall and Precision

Suppose the total number of relevant documents is 5.

  n    doc #   relevant   Recall   Precision
  1     588       x        0.2       1.00
  2     589       x        0.4       1.00
  3     576                0.4       0.67
  4     590       x        0.6       0.75
  5     986                0.6       0.60
  6     592       x        0.8       0.67
  7     984                0.8       0.57
  8     988                0.8       0.50
  9     578                0.8       0.44
  10    985                0.8       0.40
  11    103                0.8       0.36
  12    591                0.8       0.33
  13    772       x        1.0       0.38
  14    990                1.0       0.36

For example: after 1 document, R = 1/5 = 0.2, P = 1/1 = 1; after 2, R = 2/5 = 0.4, P = 2/2 = 1; after 3, R = 2/5 = 0.4, P = 2/3 = 0.67; after 13, R = 5/5 = 1, P = 5/13 = 0.38.
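A sketch that reproduces the running recall/precision values above (document numbers and relevance flags copied from the table; 5 relevant documents in total):

ranked = [(588, True), (589, True), (576, False), (590, True), (986, False),
          (592, True), (984, False), (988, False), (578, False), (985, False),
          (103, False), (591, False), (772, True), (990, False)]
total_relevant = 5

found = 0
for n, (doc, is_relevant) in enumerate(ranked, start=1):
    found += is_relevant
    print(n, doc, round(found / total_relevant, 2), round(found / n, 2))
# e.g. row 1: 0.2 1.0    row 6: 0.8 0.67    row 13: 1.0 0.38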

Page 30:

Computation of Recall and Precision

[Figure: the (recall, precision) pairs from the table on the previous page plotted as a curve, with recall on the x-axis (0.2 to 1.0), precision on the y-axis (0.2 to 1.0), and the points labeled 1 through 14 in rank order.]

Page 31:

Compare Two or More Systems

Compute recall and precision values for two or more systems.
F1 score: see http://en.wikipedia.org/wiki/F1_score
Superimpose the results in the same graph: the curve closest to the upper right-hand corner of the graph indicates the best performance.

[Figure: precision-recall curves for two systems, "Stem" and "Thesaurus", superimposed on one graph; recall on the x-axis (0.1 to 1), precision on the y-axis (0 to 1).]

Page 32:

The TREC Benchmark

TREC: Text Retrieval Conference.
Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
Participants are given parts of a standard set of documents and queries in different stages for testing and training.
Participants submit the P/R values on the final document and query set and present their results at the conference.
http://trec.nist.gov/

Page 33:

Link Based Search Engines

Qiang Yang, HKUST

Page 34:

Search Engine Topics

Text-based search engines:
  document-based ranking: TF-IDF, vector-space model
  no relationship between pages is modeled
  cannot tell which page is important without a query
Link-based search engines: Google, Hubs and Authorities techniques
  can pick out important pages

Page 35:

The PageRank Algorithm

Fundamental question to ask: what is the importance level I(P) of a page P?

Information retrieval with cosine + TF-IDF does not capture related hyperlinks.
Link-based view:
  important pages (nodes) have many other pages linking to them
  important pages also point to other important pages

Page 36:

The Google Crawler Algorithm

"Efficient Crawling Through URL Ordering", Junghoo Cho, Hector Garcia-Molina, Lawrence Page, Stanford.
  http://www.www8.org
  http://www-db.stanford.edu/~cho/crawler-paper/
"Modern Information Retrieval", BY-RN, pages 380-382.
Sergey Brin, Lawrence Page. The Anatomy of a Large-Scale Hypertextual Web Search Engine. The Seventh International WWW Conference (WWW 98), Brisbane, Australia, April 14-18, 1998.
  http://www.www7.org

Page 37:

Back Link Metric

IB(P) = total number of backlinks of P.
IB(P) is impossible to know exactly, so use IB'(P), the number of backlinks the crawler has seen so far.

[Figure: a web page P with three incoming links, so IB(P) = 3.]

Page 38:

Page Rank Metric

  IR(P) = (1 - d) + d * Σ_{i=1..N} IR(T_i) / C_i

where T_1, ..., T_N are the pages that link to P, C_i is the number of out-links from T_i, and d is the damping factor; 1 - d is the probability that a user jumps to page P at random.

[Figure: pages T1, T2, ..., TN linking to web page P, with d = 0.9 and C = 2 out-links shown for one of them.]

Page 39:

Matrix Formulation

Consider a random walk on the web (denote IR(P) by r(P)).
Let B_ij be the probability of going directly from page i to page j, and let r_i be the limiting probability (page rank) of being at page i. Then, for every j,

  r_j = Σ_{i=1..n} B_ij * r_i,   i.e.   B^T r = r

Thus, the final page rank vector r is a principal eigenvector of B^T.

Page 40:

How to compute PageRank?

For a given network of web pages:
  initialize the page rank of all pages (to one)
  set the parameter d = 0.90
  iterate through the network L times
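A sketch of this iteration on the three-page example that follows (the graph edges A->C, B->C, C->A are inferred from the numbers on the next slides, and updates are applied in place, node by node, as the slides describe):

def pagerank(pages, in_links, out_degree, d=0.9, iterations=10):
    # pages: node names; in_links[p]: pages linking to p; out_degree[p]: out-links of p
    rank = {p: 1.0 / len(pages) for p in pages}     # the example slides start from 1/3 each
    for _ in range(iterations):
        for p in pages:                              # update in order, reusing new values
            rank[p] = (1 - d) + d * sum(rank[t] / out_degree[t] for t in in_links[p])
        total = sum(rank.values())
        rank = {p: r / total for p, r in rank.items()}   # normalize
    return rank

pages = ["A", "B", "C"]
in_links = {"A": ["C"], "B": [], "C": ["A", "B"]}    # assumed link structure
out_degree = {"A": 1, "B": 1, "C": 1}
print(pagerank(pages, in_links, out_degree, d=0.9, iterations=1))
# roughly {'A': 0.38, 'B': 0.095, 'C': 0.52}, matching the k=2 (normalized) slide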

Page 41:

Example: iteration k=1

[Figure: a three-page graph with nodes A, B, C and links A->C, B->C, C->A.]

IR(P) = 1/3 for all nodes, d = 0.9

  node   I(P)
  A      1/3
  B      1/3
  C      1/3

Page 42:

Example: k=2

  node   I(P)
  A      0.4
  B      0.1
  C      0.55

  IR(P) = 0.1 + 0.9 * Σ_{i=1..l} IR(T_i) / C_i,   where l is the in-degree of P

Note: A, B, and C's IR values are updated in the order A, then B, then C; use the new value of A when calculating B, etc.

Page 43:

Example: k=2 (normalize)

  node   I(P)
  A      0.38
  B      0.095
  C      0.52

Page 44:

Crawler Control

All crawlers maintain several queues of URLs to pursue next.
  Google initially maintains 500 queues.
  Each queue corresponds to a web site being pursued.
Important considerations:
  limited buffer space
  limited time
  avoid overloading target sites
  avoid overloading network traffic

Page 45:

Crawler Control

Thus, it is important to visit important pages first.
Let G be a lower-bound threshold on I(P).
Crawl and Stop: select only pages with I(P) > G to crawl, and stop after crawling K pages.

Page 46:

Test Result: 179,000 pages

[Figure: percentage of the Stanford Web crawled vs. PST, the percentage of hot pages visited so far.]

Page 47:

Google Algorithm (very simplified)

First, compute the page rank of each page on the WWW (query independent).
Then, in response to a query q, return pages that contain q and have the highest page ranks.
A problem/feature of Google: it favors big commercial sites.

Page 48:

How powerful is Google?

A PageRank for 26 million web pages can be computed in a few hours on a medium-size workstation.
It currently has indexed a total of 1.3 billion pages.

Page 49:

Hubs and Authorities, 1998

Kleinberg, Cornell University: http://www.cs.cornell.edu/home/kleinber/

Main idea: type "java" into a text-based search engine and get 200 or so pages. Which ones are authoritative?
  http://java.sun.com
What about others?
  www.yahoo.com/Computer/ProgramLanguages

Page 50:

Hubs and Authorities

[Figure: hub pages on one side, authority pages on the other, and other pages around them.]

An authority is a page pointed to by many strong hubs.
A hub is a page that points to many strong authorities.

Page 51:

H&A Search Engine Algorithm

First submit query Q to a text search engine

Second, among the results returned select ~200, find their neighbors, compute Hubs and Authorities

Third, return Authorities found as final result

Important Issue: how to find Hubs and Authorities?

Page 52:

Link Analysis: Weights

Let B_ij = 1 if page i links to page j, 0 otherwise.
h_i = hub weight of page i; a_i = authority weight of page i.

Weight normalization:

  (3)   Σ_{i=1..N} h_i^2 = 1   and   Σ_{i=1..N} a_i^2 = 1

But, for simplicity, we will use

  (3')  Σ_{i=1..N} h_i = 1   and   Σ_{i=1..N} a_i = 1

Page 53:

Link Analysis: Update a-weights

[Figure: hub pages with weights h1, h2 pointing to a page with authority weight a.]

  (1)   a_i = Σ_{j : B_ji = 1} h_j,   i.e.   a = B^T h

Page 54:

Link Analysis: Update h-weights

[Figure: a page with hub weight h pointing to pages with authority weights a1, a2.]

  (2)   h_i = Σ_{j : B_ij = 1} a_j,   i.e.   h = B a

Page 55:

H&A: algorithm

1. Set value for K, the number of iterations2. Initialize all a and h weights to 13. For l=1 to K, do

a. Apply equation (1) to obtain new ai weights

b. Apply equation (2) to obtain all new hi weights, using the new ai weights obtained in the last step

c. Normalize ai and hi weights using equation (3)
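A sketch of these steps on the A/B/C example used on the next slides, with the sum normalization (3'); the link structure A->C, B->C, C->A is the one implied by the worked numbers:

def hits(nodes, links, k=2):
    # links: set of (source, target) pairs; B_ij = 1 exactly when (i, j) is in links
    a = {n: 1.0 for n in nodes}
    h = {n: 1.0 for n in nodes}
    for _ in range(k):
        a = {n: sum(h[i] for i, j in links if j == n) for n in nodes}   # eq. (1): a = B^T h
        h = {n: sum(a[j] for i, j in links if i == n) for n in nodes}   # eq. (2): h = B a
        a_sum, h_sum = sum(a.values()), sum(h.values())
        a = {n: v / a_sum for n, v in a.items()}                        # eq. (3'): weights sum to 1
        h = {n: v / h_sum for n, v in h.items()}
    return a, h

nodes = ["A", "B", "C"]
links = {("A", "C"), ("B", "C"), ("C", "A")}   # assumed from the worked example
print(hits(nodes, links, k=1))
# a = {A: 1/3, B: 0, C: 2/3},  h = {A: 2/5, B: 2/5, C: 1/5}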

Page 56:

Does it converge?

Yes; the Kleinberg paper includes a proof, which needs linear algebra and eigenvector analysis.
We will skip the proof and only use the result: the a and h weight values converge after a sufficiently large number of iterations K.

Page 57:

Example: k=1

[Figure: the three-page graph with nodes A, B, C and links A->C, B->C, C->A.]

h = 1 and a = 1 for all nodes

  node   a   h
  A      1   1
  B      1   1
  C      1   1

Page 58:

Example: k=1 (update a)

  node   a   h
  A      1   1
  B      0   1
  C      2   1

Page 59:

Example: k=1 (update h)

  node   a   h
  A      1   2
  B      0   2
  C      2   1

Page 60:

Example: k=1 (normalize)

Use equation (3'):

  node   a     h
  A      1/3   2/5
  B      0     2/5
  C      2/3   1/5

Page 61:

Example: k=2 (update a, update h, normalize)

Use equation (1):

  node   a     h
  A      1/5   4/9
  B      0     4/9
  C      4/5   1/9

If we choose a threshold of 1/2, then C is an authority, and there are no hubs.

Page 62:

Search Engine Using H&A

For each query q:
  enter q into a text-based search engine
  find the top 200 pages
  find the neighbors of the 200 pages by one link; let the resulting set be S
  find hubs and authorities in S
  return the authorities as the final result

Page 63:

Conclusions

Link-based analysis is very powerful for finding the important pages.
It models the web as a graph and ranks pages based on in-degree and out-degree.
Google: crawl only important pages.
H&A: post-analysis of the search result.