Salvatore Orlando "Mining query logs to improve web search engines' operations"
ITEC547 Text Mining Web Technologies Search Engines.
Outline of Presentation
1. Early Search Engines
2. Indexing Text for Search
3. Indexing Multimedia
4. Queries
5. Searching an Index
Search Engines
• Open Text (1995-1997)
• Magellan (1995-2001)
• Infoseek (Go) (1995-2001)
• Snap (NBCi) (1997-2001)
• Direct Hit (1998-2002)
• Lycos (1994; reborn 1999)
• WebCrawler (1994; reborn 2001)
• Yahoo (1994; reborn 2002)
• Excite (1995; reborn 2001)
• HotBot (1996; reborn 2002)
• Ask Jeeves (1998; reborn 2002)
• Teoma (2000-2001)
• AltaVista (1995- )
• LookSmart (1996- )
• Overture (1998- )
Information Retrieval
• The indexing and retrieval of textual documents.
• Searching for pages on the World Wide Web is the most recent and perhaps most widely used IR application.
• Concerned firstly with retrieving documents relevant to a query.
• Concerned secondly with retrieving efficiently from large sets of documents.
Typical IR Task
• Given:
  – A corpus of textual natural-language documents.
  – A user query in the form of a textual string.
• Find:
  – A ranked set of documents that are relevant to the query.
Typical IR System Architecture
[Diagram: a document corpus and a query string feed into the IR system, which returns a ranked list of documents (1. Doc1, 2. Doc2, 3. Doc3, …).]
EARLY SEARCH ENGINES
• Initially used in academic or specialized domains.
  – Legal and specialized domains consume a large amount of textual info.
• Use of expensive proprietary hardware and software.
  – High computational and storage requirements.
• Boolean query model.
• Iterative search model.
  – Fetch documents in many steps.
Medline of National Library of Medicine
• Developed in the late 1960s and made available in 1971.
• Based on an inverted file organization.
• Boolean query language.
  – Queries broken down and numbered into segments.
  – Results of a query fed into the next query segment.
• Each user assigned a time slot.
  – If the cycle is not completed in the time slot, the most recent results are returned.
• Query and browse operations performed as separate steps.
  – Following a query, results are viewed.
  – Modifications start a new query-browse cycle.
Dialog
• Broader subject content.
• Specialized collections of data available for payment.
• Boolean queries.
  – Each term numbered and executed separately, then combined.
  – Word patterns.
  – For multiword queries, the proximity operator W.
Why Index
• Simplest approach: search the text sequentially.
  – Only feasible when the collection is small.
• Static and semi-static indexes.
• Inverted index
  – A mapping from content, such as words or numbers, to its locations in a database file, in a document, or in a set of documents.
• Documents / positions in documents / weights.
• Fuzzy matching / stemming / stopwords.
Inverted Index
Example
• T0: "it is what it is"
• T1: "what is it"
• T2: "it is a banana"
• "a": {2}
• "banana": {2}
• "is": {0, 1, 2}
• "it": {0, 1, 2}
• "what": {0, 1}
Full Inverted Index
Example
• T0: "it is what it is"
• T1: "what is it"
• T2: "it is a banana"
• "a": {(2, 2)}
• "banana": {(2, 3)}
• "is": {(0, 1), (0, 4), (1, 1), (2, 1)}
• "it": {(0, 0), (0, 3), (1, 2), (2, 0)}
• "what": {(0, 2), (1, 0)}
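The two index variants above can be built in a few lines. A minimal sketch: the record-level index maps each term to the set of documents containing it, while the full (word-level) index maps each term to (document, position) pairs, reproducing the postings shown for T0-T2.

```python
# Build record-level and full (word-level) inverted indexes
# for the three example texts from the slides.
docs = ["it is what it is", "what is it", "it is a banana"]

record_index = {}   # term -> set of document ids
full_index = {}     # term -> list of (doc id, word position) pairs

for doc_id, text in enumerate(docs):
    for pos, term in enumerate(text.split()):
        record_index.setdefault(term, set()).add(doc_id)
        full_index.setdefault(term, []).append((doc_id, pos))

print(sorted(record_index["is"]))   # [0, 1, 2]
print(full_index["what"])           # [(0, 2), (1, 0)]
```

The full index costs more space but enables positional (phrase and proximity) queries that a record-level index cannot answer.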
Google Index
• A unique DocID associated with each URL.
• Hit: a word occurrence.
  – wordID: a 24-bit number.
  – Word position.
  – Font size relative to the rest of the document.
  – Plain hit: occurs in the document body.
  – Fancy hit: occurs in the URL, title, anchor text, or meta tags.
• The word occurrences of a web page are distributed across a set of barrels.
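To make the hit idea concrete, here is an illustrative bit-packing of a plain hit into 16 bits: one capitalization bit, three bits of relative font size, and twelve bits of word position. The exact field widths are an assumption for demonstration only, not Google's production format.

```python
# Hypothetical 16-bit plain-hit layout (illustrative, not Google's
# actual encoding): [cap:1][font_size:3][position:12]

def pack_plain_hit(capitalized: bool, font_size: int, position: int) -> int:
    # Guard the assumed field widths.
    assert 0 <= font_size < 8 and 0 <= position < 4096
    return (int(capitalized) << 15) | (font_size << 12) | position

def unpack_plain_hit(hit: int):
    return bool(hit >> 15), (hit >> 12) & 0x7, hit & 0xFFF

hit = pack_plain_hit(True, 3, 1042)
print(unpack_plain_hit(hit))  # (True, 3, 1042)
```

Packing hits this tightly is what lets billions of word occurrences fit in the barrels.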
Indexing Multimedia
• Forming an index for multimedia:
  – Use context: the surrounding text.
  – Add a manual description.
  – Analyze automatically and attach a description.
Queries
• Boolean query
  – No relevance measure.
  – May be hard to understand.
• Multimedia query
  – Find images of Everest.
  – Find x-rays showing the human rib cage.
  – Find companies whose stock prices have similar patterns.
Relevance
• Relevance is a subjective judgment and may include:
  – Being on the proper subject.
  – Being timely (recent information).
  – Being authoritative (from a trusted source).
  – Satisfying the goals of the user and his/her intended use of the information (information need).
Keyword Search
• Simplest notion of relevance is that the query string appears verbatim in the document.
• Slightly less strict notion is that the words in the query appear frequently in the document, in any order (bag of words).
Problems with Keywords
• May not retrieve relevant documents that include synonymous terms.
  – "restaurant" vs. "café"
  – "PRC" vs. "China"
• May retrieve irrelevant documents that include ambiguous terms.
  – "bat" (baseball vs. mammal)
  – "Apple" (company vs. fruit)
  – "bit" (unit of data vs. act of eating)
Relevance Feedback
• User enters query terms.
  – Keywords may be weighted or not.
• Links are returned.
  – The user marks the relevant and irrelevant ones.
• If there is no negative feedback, the second term is 0.
• The T's are terms from the relevant and irrelevant sets marked by the user.
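The update the slide alludes to is most likely the standard Rocchio formula, sketched below under that assumption: the new query vector moves toward the marked relevant documents and away from the irrelevant ones, and with no negative feedback the subtracted term is zero.

```python
# Rocchio-style relevance feedback (a sketch; the alpha/beta/gamma
# values are conventional defaults, not from the slides).
from collections import defaultdict

def rocchio(query, relevant, irrelevant, alpha=1.0, beta=0.75, gamma=0.15):
    new_q = defaultdict(float)
    for t, w in query.items():
        new_q[t] += alpha * w                      # keep the original query
    for doc in relevant:
        for t, w in doc.items():
            new_q[t] += beta * w / len(relevant)   # move toward relevant docs
    for doc in irrelevant:
        for t, w in doc.items():
            new_q[t] -= gamma * w / len(irrelevant)  # move away from irrelevant
    return {t: w for t, w in new_q.items() if w > 0}  # drop negative weights

# Hypothetical term-weight vectors for illustration.
q = rocchio({"jaguar": 1.0},
            relevant=[{"jaguar": 1.0, "car": 0.8}],
            irrelevant=[{"jaguar": 1.0, "cat": 0.9}])
print(q)  # "car" gains weight; "cat" is suppressed
```

The feedback loop thus disambiguates the query ("jaguar" the car vs. the animal) without the user rewording it.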
Searching an Inverted Index
• Tokenize the query, search index vocabulary for each query token
• Get a list of documents associated with each token
• Combine the list of documents using constraints specified in the query
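The three steps above can be sketched directly; here the combining constraint is assumed to be an implicit AND (intersection of posting lists), using the toy index from the earlier example.

```python
# Conjunctive (AND) query over a record-level inverted index.
index = {
    "it":     {0, 1, 2},
    "is":     {0, 1, 2},
    "what":   {0, 1},
    "a":      {2},
    "banana": {2},
}

def search(query, index):
    tokens = query.lower().split()                      # 1. tokenize the query
    postings = [index.get(t, set()) for t in tokens]    # 2. fetch posting lists
    if not postings:
        return set()
    return set.intersection(*postings)                  # 3. combine (AND)

print(search("is banana", index))  # {2}
```

An OR query would use `set.union` instead; ranked retrieval would score the union rather than intersect.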
Google Search
1. Tokenize the query and remove stopwords.
2. Translate the query words into wordIDs using the lexicon.
3. For every wordID, get the list of documents from the short inverted barrel and build a composite set of documents.
4. Scan the composite list of documents:
   i. Skip to the next document if the current document does not match.
   ii. Compute a rank using the query and features.
   iii. If there are no more documents, go to step 3 and use the full inverted barrels to find more docs.
   iv. If there are a sufficient number of docs, go to step 5.
5. Sort the final document list by rank.
How are results ranked?
• Weight type.
• Location: title, URL, anchor, body.
• Size: relative font size.
• Capitalization.
• Count of occurrences.
• Closeness (proximity).
Evaluation
• Response time.
• Quality:
  – Recall: % of correct items that are selected.
  – Precision: % of selected items that are correct.
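A worked example of the two quality measures, using hypothetical document sets:

```python
# Recall and precision from the relevant (correct) set and the
# selected (retrieved) set.
relevant = {"d1", "d2", "d3", "d4"}   # the correct items
selected = {"d2", "d3", "d5"}         # what the engine returned

hits = relevant & selected            # correctly selected items
recall = len(hits) / len(relevant)    # 2/4 = 0.5
precision = len(hits) / len(selected) # 2/3 ≈ 0.667

print(recall, precision)
```

The two measures trade off: returning everything maximizes recall but destroys precision, and vice versa.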
Ranking Algorithms: Hyperlink
• Popularity ranking
  – Rank "popular" documents higher among the set of documents with specific keywords.
• Determining "popularity"
  – Access rate? How to get accurate data?
  – Bookmarks? Might be private.
  – Links to related pages? Use a web crawler to analyze external links.
Popularity/Prestige
• Transfer of prestige
  – A link from a popular page x to a page y is treated as conferring more prestige to page y than a link from a not-so-popular page z.
• Count of in-links/out-links.
Hypertext Induced Topic Search (HITS)
• The HITS algorithm computes popularity using a set of related pages only.
• Important web pages: cited by other important web pages or by a large number of less-important pages.
• Initially all pages have the same importance.
Hubs and Authorities
• Hub: a page that stores links to many related pages.
  – May not in itself contain actual information on a topic.
• Authority: a page that contains actual information on a topic.
  – May not store links to many related pages.
• Each page gets a prestige value as a hub (hub-prestige) and another prestige value as an authority (authority-prestige).
Hubs and Authorities algorithm
1. Locate and build the subgraph.
2. Assign initial values to the hub and authority scores of each node.
3. Run a loop until convergence:
   i. Assign to the authority score of node x the sum of the hub scores of all nodes y that link to x.
   ii. Assign to the hub score of node x the sum of the authority scores of all nodes y that x links to.
   iii. Normalize the hub and authority scores of all nodes.
   iv. Check for convergence: is the difference < threshold?
4. Return the list of nodes sorted in descending order of hub and authority scores.
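The loop above can be sketched on a tiny made-up link graph (adjacency list, node → nodes it links to); a fixed iteration count stands in for the convergence test.

```python
# HITS iteration on a toy graph: "c" is linked to by everyone,
# so it should emerge as the top authority.
graph = {"a": ["b", "c"], "b": ["c"], "c": [], "d": ["c"]}

hub = {n: 1.0 for n in graph}
auth = {n: 1.0 for n in graph}

for _ in range(50):  # fixed iterations instead of a threshold check
    # i. authority(x) = sum of hub scores of nodes linking to x
    new_auth = {x: sum(hub[y] for y in graph if x in graph[y]) for x in graph}
    # ii. hub(x) = sum of authority scores of nodes x links to
    new_hub = {x: sum(new_auth[y] for y in graph[x]) for x in graph}
    # iii. normalize both score vectors (Euclidean norm)
    na = sum(v * v for v in new_auth.values()) ** 0.5
    nh = sum(v * v for v in new_hub.values()) ** 0.5
    auth = {n: v / na for n, v in new_auth.items()}
    hub = {n: v / nh for n, v in new_hub.items()}

print(max(auth, key=auth.get))  # c  (top authority)
print(max(hub, key=hub.get))    # a  (top hub: links to both b and c)
```

Note the two score vectors reinforce each other: good hubs point at good authorities, and good authorities are pointed at by good hubs.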
PageRank Algorithm
• Ranks based on citation statistics (in/out links).
• The rank of a page depends on the ranks of the pages that link to it.
PageRank algorithm
1. Locate and build the subgraph.
2. Save the number of out-links from every node in an array.
3. Assign a default PageRank to all nodes.
4. Run a loop until convergence:
   i. Compute a new PageRank score for every node: for each node y that links to node x, add y's PageRank score divided by y's number of out-links, then add the default rank source.
   ii. Check convergence: is the difference between the new and old PageRank < threshold?
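The steps above can be sketched as follows; the "default rank source" is modelled here as the conventional (1 - d)/N teleportation term with an assumed damping factor d = 0.85.

```python
# PageRank power iteration on a toy graph.
graph = {"a": ["b", "c"], "b": ["c"], "c": ["a"], "d": ["c"]}
N = len(graph)
d = 0.85                                              # assumed damping factor

out = {n: len(links) for n, links in graph.items()}   # 2. out-link counts
pr = {n: 1.0 / N for n in graph}                      # 3. default PageRank

for _ in range(100):                                  # 4. loop
    new_pr = {}
    for x in graph:
        # i. sum of PR(y)/outlinks(y) over pages y linking to x,
        #    plus the rank source term
        incoming = sum(pr[y] / out[y] for y in graph if x in graph[y])
        new_pr[x] = (1 - d) / N + d * incoming
    delta = sum(abs(new_pr[n] - pr[n]) for n in graph)
    pr = new_pr
    if delta < 1e-9:                                  # ii. convergence check
        break

print(max(pr, key=pr.get))  # c (three in-links, so highest rank)
```

Unlike HITS, PageRank is query-independent: the scores can be computed once over the whole crawl and reused for every query.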