Machine Learned Relevance at A Large Scale Search Engine

Machine Learned Relevance at a Large Scale Search Engine Salford Analytics and Data Mining Conference 2012

Transcript of Machine Learned Relevance at A Large Scale Search Engine

Page 1: Machine Learned Relevance at A Large Scale Search Engine

Machine Learned Relevance at a Large Scale Search Engine
Salford Analytics and Data Mining Conference 2012

Page 2: Machine Learned Relevance at A Large Scale Search Engine

Machine Learned Relevance at a Large Scale Search Engine

Salford Data Mining – May 25, 2012

Presented by:
Dr. Eric Glover – [email protected]
Dr. James Shanahan – [email protected]

Page 3: Machine Learned Relevance at A Large Scale Search Engine

About the Authors
James G. Shanahan – PhD in Machine Learning, University of Bristol, UK
– 20+ years in the fields of AI and information science
– Principal and Founder, Boutique Data Consultancy
  • Clients include: Adobe, Digg, SearchMe, AT&T, Ancestry, SkyGrid, Telenav
– Affiliated with University of California Santa Cruz (UCSC)
– Adviser to Quixey
– Previously:
  • Chief Scientist, Turn Inc. (a CPX ad network, DSP)
  • Principal Scientist, Clairvoyance Corp (CMU spinoff)
  • Co-founder of Document Souls (task-centric info access system)
  • Research Scientist, Xerox Research (XRCE)
  • AI Research Engineer, Mitsubishi Group

Page 4: Machine Learned Relevance at A Large Scale Search Engine

About the Authors
Eric Glover – PhD in CSE (AI) from U of M in 2001
– Fellow at Quixey, where, among other things, he focuses on the architecture and processes related to applied machine learning for relevance and evaluation methodologies
– More than a dozen years of search engine experience, including: NEC Labs, Ask Jeeves, SearchMe, and his own startup
– Multiple relevant publications, ranging from classification to automatically discovering topical hierarchies
– Dissertation studied personalizing web search through the incorporation of user preferences and machine learning
– More than a dozen filed patents

Page 5: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 6: Machine Learned Relevance at A Large Scale Search Engine

Google

Page 7: Machine Learned Relevance at A Large Scale Search Engine

Search Engine: SearchMe
SearchMe: a search engine that lets you see and hear what you're searching for

Page 8: Machine Learned Relevance at A Large Scale Search Engine

6 Steps to MLR in Practice

Systems modeling is inherently interactive and iterative.

1. Understand the domain and define problems
2. Collect requirements and data
3. Feature engineering
4. Modeling: extract patterns/models
5. Interpret and evaluate discovered knowledge
6. Deploy the system in the wild (and test)

Page 9: Machine Learned Relevance at A Large Scale Search Engine

How is ML for Search Unique?
Many machine learning (ML) systems start with source data:
– Goal is to analyze, model, predict
– Features are often pre-defined, in a well-studied area

MLR for search engines is different from many other ML applications:
– Does not start with labeled data
  • Need to pay judges to provide labels
– Opportunity to invent new features (feature engineering)
– Often requires real-time operation
  • Processing tens of billions of possible results; microseconds matter
– Requires domain-specific metrics for evaluation

Page 10: Machine Learned Relevance at A Large Scale Search Engine

If we can't measure "it", then…
…we should think twice about doing "it".
Measurement has enabled us to compare systems, and also to machine learn them.
Search is about measurement, measurement, and measurement.

Page 11: Machine Learned Relevance at A Large Scale Search Engine

Improve in a Measured Way

Page 12: Machine Learned Relevance at A Large Scale Search Engine

From Information Needs to Queries
The idea of using computers to search for relevant pieces of information was popularized in the article "As We May Think" by Vannevar Bush in 1945.

An information need is an individual's or group's desire to locate and obtain information to satisfy a conscious or unconscious need.

Within the context of web search, information needs are expressed as textual queries (possibly with constraints).
E.g., "Analytics Data Mining Conference" program

Metric: "relevance" as a measure of how well a system is performing

Page 13: Machine Learned Relevance at A Large Scale Search Engine

Relevance is a Huge Challenge
Relevance typically denotes how well a retrieved object (document) or set of objects meets the information need of the user. Relevance is often viewed as multifaceted.
– A core facet of relevance relates to topical relevance or aboutness,
  • i.e., to what extent the topic of a result matches the topic of the query or information need.
– Another facet of relevance is based on user perception, and is sometimes referred to as user relevance; it encompasses other concerns of the user such as timeliness, authority, or novelty of the result.
– In local-search-type queries, yet another facet of relevance comes into play: geographical aboutness,
  • i.e., to what extent the location of a result (a business listing) matches the location of the query or information need.

Page 14: Machine Learned Relevance at A Large Scale Search Engine

From Cranfield to TREC
Text REtrieval Conference/Competition
– http://trec.nist.gov/
– Run by NIST (National Institute of Standards & Technology)
Started in 1992.
Collections: > 6 gigabytes (5 CD-ROMs), > 1.5 million docs
– Newswire & full-text news (AP, WSJ, Ziff, FT)
– Government documents (Federal Register, Congressional Record)
– Radio transcripts (FBIS)
– Web "subsets"
– Tweets

Page 15: Machine Learned Relevance at A Large Scale Search Engine


The TREC Benchmark
TREC: Text REtrieval Conference (http://trec.nist.gov/). Originated from the TIPSTER program sponsored by the Defense Advanced Research Projects Agency (DARPA).
Became an annual conference in 1992, co-sponsored by the National Institute of Standards and Technology (NIST) and DARPA.
Participants are given parts of a standard set of documents and TOPICS (from which queries have to be derived) in different stages for training and testing.
Participants submit the P/R values for the final document and query corpus and present their results at the conference.

Page 16: Machine Learned Relevance at A Large Scale Search Engine

[Diagram: classic IR architecture – the user's information need is expressed as a text query, which is parsed (with optional query reformulation); the document collections are pre-processed and indexed; the query is matched against the index to retrieve results.]

Page 17: Machine Learned Relevance at A Large Scale Search Engine

[Diagram: the same IR architecture, with the match step generalized to "rank or match" and an explicit evaluation component added.]

Page 18: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 19: Machine Learned Relevance at A Large Scale Search Engine

Difficulties in Evaluating IR Systems

Effectiveness is related to the relevancy of the set of returned items.
Relevancy is not typically binary but continuous. Even if relevancy is binary, it can be a difficult judgment to make. Relevancy, from a human standpoint, is:
– Subjective: depends upon a specific user's judgment.
– Situational: relates to the user's current needs.
– Cognitive: depends on human perception and behavior.
– Dynamic: changes over time.

Page 20: Machine Learned Relevance at A Large Scale Search Engine

Relevance as a Measure
Relevance is everything!
How relevant is the document retrieved
– for the user's information need?
Subjective, but one assumes it's measurable; measurable to some extent.
– How often do people agree a document is relevant to a query?
  • More often than expected
How well does it answer the question?
– Complete answer? Partial?
– Background information?
– Hints for further exploration?

Page 21: Machine Learned Relevance at A Large Scale Search Engine

What to Evaluate?
What can be measured that reflects users' ability to use the system? (Cleverdon 66)
– Coverage of information
– Form of presentation
– Effort required / ease of use
– Time and space efficiency
– Effectiveness

Recall – proportion of relevant material actually retrieved
Precision – proportion of retrieved material actually relevant

Typically a 5-point scale is used: 5 = best, 1 = worst.
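These two definitions are easy to state in code. The following is a small illustrative Python sketch (added here, not from the slides); the function name and the toy document IDs are made up.

```python
def precision_recall(retrieved, relevant):
    """Compute precision and recall for one query.

    retrieved: iterable of doc ids returned by the system
    relevant:  iterable of doc ids judged relevant for the query
    """
    retrieved, relevant = set(retrieved), set(relevant)
    hits = len(retrieved & relevant)
    precision = hits / len(retrieved) if retrieved else 0.0
    recall = hits / len(relevant) if relevant else 0.0
    return precision, recall

# Example: 3 of 5 returned docs are relevant; 6 relevant docs exist in total.
p, r = precision_recall(["d1", "d2", "d3", "d4", "d5"],
                        ["d1", "d3", "d5", "d7", "d8", "d9"])
print(p, r)  # 0.6 0.5
```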

Page 22: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 23: Machine Learned Relevance at A Large Scale Search Engine

Data Collection is a Challenge
Most search engines do not start with labeled data (relevance judgments). Good labeled data is required to perform evaluations and to perform learning. It is not practical to hand-label all possibilities for modern large-scale search engines. Using 3rd-party sources such as Mechanical Turk is often very noisy/inconsistent.

Data collection is non-trivial:
– A custom system (specific to the domain) is often required
– Phrasing of the "questions", the options (including a skip option), UI design, and judge training are critical to increase the chance of consistency

Can leverage judgment collection to aid in feature engineering:
– Judges can provide reasons and observations

Page 24: Machine Learned Relevance at A Large Scale Search Engine
Page 25: Machine Learned Relevance at A Large Scale Search Engine

Relevance/Usefulness/Ranking
Web search: topical relevance or aboutness, trustworthiness of the source
Local search: topical relevance and geographical applicability
Functional App Search:
– Task relevance – the user must be convinced the app results can solve the need
– Finding the "best" apps that address the user's task needs
– Very domain- and user-specific
Advertising:
– Performance measure – expected revenue: P(click) * revenue(click)
– Consistency with the user's search (showing irrelevant ads hurts the brand)

Page 26: Machine Learned Relevance at A Large Scale Search Engine

Commonly Used Search Metrics
Early search systems used binary judgments (relevant/not relevant) and evaluated based on precision and recall.
– Recall is difficult to assess for large sets.
Modern search systems often use DCG or nDCG:
– Easy to collect and compare large sets of "independent judgments"
  • Independent judgments map easily to MSE-minimization learners
– Relevance is not binary, and depends on the order of results
Other measures exist:
– Subjective "how did I do", but these are difficult to use for MLR or to compare
– Pairwise comparison – measure the number of out-of-order pairs
  • Lots of recent research on pairwise-based MLR
  • Most companies use "independent judgments"

Page 27: Machine Learned Relevance at A Large Scale Search Engine

Metrics for Web Search

Existing metrics such as precision and recall are limited:
– Not always a clear-cut binary decision: relevant vs. not relevant
– Not position-sensitive:
  p: relevant, n: not relevant
  ranking 1: p n p n n
  ranking 2: n n n p p
How do you measure recall over the whole web?
– How many of the potentially billions of results will get looked at? Which ones actually need to be good?
Normalized Discounted Cumulative Gain (NDCG)
– K. Järvelin and J. Kekäläinen (TOIS 2002)
– Gain: relevance of a document is no longer binary
– Sensitive to the position of the highest-rated documents
  • Log-discounting of gains according to position
– Normalize the DCG by the "ideal set" DCG (NDCG)

Page 28: Machine Learned Relevance at A Large Scale Search Engine

Cumulative Gain

With graded relevance judgments, we can compute the gain at each rank.

Cumulative Gain at rank n:

CG_n = \sum_{i=1}^{n} rel_i

(where rel_i is the graded relevance of the document at position i)


Page 29: Machine Learned Relevance at A Large Scale Search Engine

Discounting Based on Position

Users care more about high-ranked documents, so we discount results by 1/log2(rank)

Discounted Cumulative Gain:

DCG_n = rel_1 + \sum_{i=2}^{n} \frac{rel_i}{\log_2 i}


Page 30: Machine Learned Relevance at A Large Scale Search Engine

Normalized Discounted Cumulative Gain (NDCG)
To compare DCGs, normalize values so that an ideal ranking would have a normalized DCG of 1.0.
Ideal ranking: the same results re-ordered by decreasing graded relevance; its DCG is the ideal DCG (IDCG).


Page 31: Machine Learned Relevance at A Large Scale Search Engine

Normalized Discounted Cumulative Gain (NDCG)
Normalize by the DCG of the ideal ranking:

NDCG_n = \frac{DCG_n}{IDCG_n}

NDCG ≤ 1 at all ranks.
NDCG is comparable across different queries.
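To make the DCG/NDCG definitions above concrete, here is a short Python sketch (added for illustration, not part of the deck) that computes DCG and NDCG for a ranked list of graded relevance values, using the 1/log2(rank) discount described above with rank 1 undiscounted.

```python
import math

def dcg(rels):
    """DCG of a ranked list of graded relevance values (rank 1 first)."""
    return sum(rel if i == 0 else rel / math.log2(i + 1)
               for i, rel in enumerate(rels))

def ndcg(rels):
    """NDCG = DCG of the ranking divided by DCG of the ideal (sorted) ranking."""
    ideal = dcg(sorted(rels, reverse=True))
    return dcg(rels) / ideal if ideal > 0 else 0.0

# Example: graded judgments (5 = perfect, 1 = worst) in the order the engine returned them.
ranking = [3, 5, 1, 4, 2]
print(dcg(ranking), ndcg(ranking))
```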


Page 32: Machine Learned Relevance at A Large Scale Search Engine

Machine Learning Uses in a Commercial Search Engine
– Query parsing
– Spam classification
– Result categorization
– Behavioral categories
– Search engine results ranking

Page 33: Machine Learned Relevance at A Large Scale Search Engine

6 Steps to MLR in Practice

Systems modeling is inherently interactive and iterative.

1. Understand the domain and define problems
2. Collect requirements and data
3. Feature engineering
4. Modeling: extract patterns/models
5. Interpret and evaluate discovered knowledge
6. Deploy the system in the wild (and test)

Page 34: Machine Learned Relevance at A Large Scale Search Engine

In Practice
– QPS (queries per second) constraints; deploying the model
– Imbalanced data
– Relevance changes over time; non-stationary behavior
– Speed vs. accuracy (SVMs, …)
– Practical: grid search, 8–16 nodes, 500 trees, millions of records, interactions
– Variable selection: 1000 → 100s of variables; add random variables
– ~6-week cycle: training time is days, lab evaluation is weeks, then live A/B testing
– Why TreeNet? Handles missing values and categorical variables

Page 35: Machine Learned Relevance at A Large Scale Search Engine

MLR – Typical Approach by Companies
1. Define goals and the "specific problem"
2. Collect human-judged training data:
   – Given a large set of <query, result> tuples
     • Judges rate "relevance" on a 1-to-5 scale (5 = "perfect", 1 = "worst")
3. Generate training data from the provided <query, result> tuples
   – <q,r> → Features; the input to the model is <F, judgment>
4. Train a model, typically minimizing MSE (Mean Squared Error)
5. Lab evaluation using DCG-type metrics
6. Deploy the model in a test system and evaluate

Page 36: Machine Learned Relevance at A Large Scale Search Engine

MLR Training Data
1. Collect human-judged training data:
   – Given a large set of <query, result> tuples
   – Judges rate "relevance" on a 1-to-5 scale (5 = "perfect", 1 = "worst")
2. Featurize the training data from the provided <query, result> tuples:
   – <q,r> → Features; the input to the model is <F, judgment>

Instance\Attr     x0   x1   x2   …   xn   Label
<query1, Doc1>     1    3    0   ..   7     4
<query1, Doc2>     1    …    …   ..   …     5
…                  …    …    …   …    …     …
<queryn, Docn>     1    0    4   ...  8     3
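The featurization step can be sketched as follows. This is a hypothetical Python illustration: the judged tuples, the featurize helper, and its toy features are all invented placeholders for whatever query, result, and query-result features a real system would compute; the 1-5 judgments are mapped to [0, 1] targets as described later in the deck.

```python
import numpy as np

# Hypothetical judged tuples: (query, doc_text, judgment on a 1-5 scale)
judged = [
    ("offline san diego map", "San Diego offline maps for your phone ...", 5),
    ("offline san diego map", "Online restaurant reviews for San Diego ...", 2),
    ("kids games", "Fun puzzle games for kids aged 3-8 ...", 4),
]

def featurize(query, doc):
    """Toy query/result/query-result features (placeholders for real ones)."""
    q_terms = query.lower().split()
    d_terms = doc.lower().split()
    overlap = len(set(q_terms) & set(d_terms))
    return [
        len(q_terms),                     # query length
        len(d_terms),                     # result length
        overlap,                          # raw term overlap
        overlap / max(len(q_terms), 1),   # fraction of query terms matched
    ]

F = np.array([featurize(q, d) for q, d, _ in judged], dtype=float)
y = np.array([(j - 1) / 4.0 for _, _, j in judged])  # map 1-5 judgments to [0, 1] targets
print(F.shape, y)
```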

Page 37: Machine Learned Relevance at A Large Scale Search Engine

The Evaluation Disconnect
Evaluation inside a supervised learner tries to minimize the MSE of the targets:
– For each tuple (F_i, x_i), the learner predicts a target y_i
  • The error is f(y_i - x_i), typically (y_i - x_i)^2
  • The optimum is some function of the "errors", i.e. try to minimize the total error
Evaluation of the deployed model is different from evaluation of the learner – typically DCG or nDCG.
Individual-result error calculation is different from error based on result ordering:
– A small error in the predicted target for a result could have a substantial impact on result ordering
– Likewise, the "best result ordering" might not exactly match the predicted targets for any results
– An affine transform of the targets produces no change in DCG, but a large change in the calculated MSE (illustrated in the sketch below)
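A small self-contained sketch (not from the talk) makes the last point concrete: an affine transform of the predicted scores leaves the induced ordering, and therefore DCG, unchanged, while the MSE against the judgment targets changes substantially. The dcg helper mirrors the earlier definition; the scores and targets are made-up illustrative numbers.

```python
import math

def dcg(rels):
    return sum(r if i == 0 else r / math.log2(i + 1) for i, r in enumerate(rels))

def mse(pred, target):
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred)

targets = [1.0, 0.25, 0.75, 0.0]           # judged relevance targets for 4 results of one query
scores  = [0.9, 0.30, 0.70, 0.1]           # model-predicted scores
shifted = [0.5 * s + 2.0 for s in scores]  # affine transform of the same scores

def ranked_rels(scores, targets):
    """Relevance targets re-ordered by descending predicted score."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    return [targets[i] for i in order]

print(mse(scores, targets), mse(shifted, targets))   # MSE changes a lot
print(dcg(ranked_rels(scores, targets)),
      dcg(ranked_rels(shifted, targets)))            # DCG is identical
```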

Page 38: Machine Learned Relevance at A Large Scale Search Engine

From Grep to Machine Learnt Ranking

[Figure: relative performance (e.g., DCG) of ranking approaches over time – Pre-1990s: Boolean, VSM, TF-IDF; 1990s: graph features, language models; 2000s: machine learning, behavioral data; 2010s: personalization, social (marked "??").]

Page 39: Machine Learned Relevance at A Large Scale Search Engine

Real-World MLR Systems
SearchMe was a visual/media search engine – about 3 billion pages in the index, and hundreds of unique features used to predict the score (and ultimately rank results). Results could be video, audio, images, or regular web pages.
– The goal was, for a given input query, to return the best ordering of relevant results – in an immersive UI (mixing different result types simultaneously).
Quixey – Functional App Search™ – over 1M apps, many sources of data for each app (multiple stores, reviews, blog sites, etc.) – the goal is, given a "functional query" such as "a good offline san diego map for iphone" or "kids games for android":
– Find the most relevant apps (ranked properly)
– Dozens of sources of data for each app; many potential features used to:
  • Predict "quality", "text relevance", and other meta-features
  • Calculate a meaningful score used to make decisions by partners
  • Rank order and raw score both matter (important to know "how good" an app is)
Local search (Telenav, YellowPages)

Page 40: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline Introduction: Search and Machine Learned Ranking Relevance and evaluation methodologies Data collection and metrics Quixey – Functional Application Search™ System Architecture, features, and model training Alternative approaches Conclusion

Page 41: Machine Learned Relevance at A Large Scale Search Engine

Quixey: What is an App?
An app is a piece of computer software designed to help a user perform specific tasks.
– Contrast with systems software and middleware.
Apps were originally intended for productivity
– (email, calendar, and contact databases), but consumer and business demand has caused rapid expansion into other areas such as games, factory automation, GPS and location-based services, banking, order tracking, and ticket purchases.
Apps run on various devices (phones, tablets, game consoles, cars).

Page 42: Machine Learned Relevance at A Large Scale Search Engine

My house is awash with platforms

Page 43: Machine Learned Relevance at A Large Scale Search Engine

My car...

NPR programs such as Car Talk are available 24/7 on the NPR News app for Ford SYNC

Page 44: Machine Learned Relevance at A Large Scale Search Engine

My life...

Page 45: Machine Learned Relevance at A Large Scale Search Engine
Page 46: Machine Learned Relevance at A Large Scale Search Engine


Page 47: Machine Learned Relevance at A Large Scale Search Engine

Own "The Millionaires App" for $1,000

Page 48: Machine Learned Relevance at A Large Scale Search Engine

Law Students App ..

Page 49: Machine Learned Relevance at A Large Scale Search Engine

Apps for Pets ..

Page 50: Machine Learned Relevance at A Large Scale Search Engine

Pablo Picatso! ..

Page 51: Machine Learned Relevance at A Large Scale Search Engine

50 Best iPhone Apps 2011 [Time]

Games: Angry Birds, Scrabble, Plants v. Zombies, Doodle Jump, Fruit Ninja, Cut the Rope, Pictureka, Wurdle, GeoDefense Swarm

On the Go: Kayak, Yelp, Word Lens, Weather Channel, OpenTable, Wikipedia, Hopstop, AroundMe, Google Earth, Zipcar

Lifestyle: Amazon, Epicurious, Mixology, Paypal, Shop Savvy, Mint, WebMD, Lose It!, Springpad

Music & Photography: Mog, Pandora, SoundHound, Bloom, Camera+, Photoshop Express, Hipstamatic, Instagram, ColorSplash

Entertainment: Netflix, IMDb, ESPN Scorecenter, Instapaper, Kindle, Pulse News

Social: Facebook, Twitter, Google, AIM, Skype, Foursquare, Bump

[http://www.time.com/time/specials/packages/completelist/0,29569,2044480,00.html#ixzz1s1pAMNWM]

Page 52: Machine Learned Relevance at A Large Scale Search Engine


Page 53: Machine Learned Relevance at A Large Scale Search Engine

Examples of Functional Search™


Page 54: Machine Learned Relevance at A Large Scale Search Engine
Page 55: Machine Learned Relevance at A Large Scale Search Engine
Page 56: Machine Learned Relevance at A Large Scale Search Engine

App World: Integrating Multi-Data Sources

[Diagram: the app catalog integrates many overlapping sources – App Store 1 (A1, A2, A3), App Store 2 (A2, A4, A5), App Store 3 (A5, A7, A8), app developers and their homepages, blogs ("blah blah Angry Birds"), and app review sites ("blah blah Learn Spanish") – with question marks indicating the matching problem of linking external mentions to the right catalog app (A1, A5, A7, …).]

Page 57: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 58: Machine Learned Relevance at A Large Scale Search Engine

Search Architecture (Online)

[Diagram: a query enters query processing, which issues data-storage queries (DBQ) against data storage holding indexes and feature data built by offline processing and data building; simple scoring (a set reducer) produces a consideration set; feature generation and result scoring (using the ML models) are applied to that set; result sorting then yields the shown results.]

Page 59: Machine Learned Relevance at A Large Scale Search Engine

Architecture Details – Online Flow (a minimal sketch of this flow follows the list):
1. Given a "query", generate query-specific features, Fq
2. Using Fq, generate appropriate "database queries"
3. Cheaply pare down the initial possible results
4. Obtain result features Fr for the remaining consideration set
5. Generate query-result features Fqr for the remaining consideration set
6. Given all features, score each result (assuming independent scoring)
7. Present and organize the "best results" (not necessarily linearized by score)
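Here is a minimal toy Python sketch of this seven-step flow, added for illustration. Every component (the toy inverted index, the popularity table, the linear scoring rule) is an invented stand-in, not the actual Quixey or SearchMe implementation.

```python
# Minimal toy implementation of the online flow; every component here is a
# simplified stand-in, not the real architecture.

TOY_INDEX = {
    "map":   ["SD Offline Maps", "World Atlas"],
    "games": ["Angry Birds", "Sudoku"],
}
TOY_POPULARITY = {"SD Offline Maps": 0.6, "World Atlas": 0.3,
                  "Angry Birds": 0.9, "Sudoku": 0.7}

def search(query, k=10):
    terms = query.lower().split()                                   # 1. query-specific features Fq
    db_queries = [t for t in terms if t in TOY_INDEX]               # 2. database/index queries from Fq
    candidates = {doc for t in db_queries for doc in TOY_INDEX[t]}  # 3. cheap consideration set
    scored = []
    for doc in candidates:
        fr = TOY_POPULARITY.get(doc, 0.0)                           # 4. result feature Fr (popularity)
        fqr = len(set(terms) & set(doc.lower().split()))            # 5. query-result feature Fqr (overlap)
        score = 0.7 * fr + 0.3 * fqr                                # 6. independent scoring (toy linear model)
        scored.append((score, doc))
    scored.sort(reverse=True)                                       # 7. order the "best results"
    return scored[:k]

print(search("offline map"))
```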

Page 60: Machine Learned Relevance at A Large Scale Search Engine

Examples of Possible Features
Query features:
– popularity/frequency of the query
– number of words in the query, individual POS tags per term/token
– collection term-frequency information (per term/token)
– geo-location of the user
Result features:
– (web) in-links/PageRank, anchortext match (might be processed with the query)
– (app) download rate, app popularity, platform(s), star rating(s), review text
– (app) ML quality score, etc.
Query-result features:
– BM25 (per text block)
– frequency in specific sections
– lexical similarity of query to title
– etc.
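Since BM25 is named as a query-result feature, here is a compact Python sketch of the standard Okapi BM25 score for one text block, added for illustration; the collection statistics in the usage example are made up, and real systems tune k1 and b.

```python
import math
from collections import Counter

def bm25(query_terms, doc_terms, doc_freq, n_docs, avg_len, k1=1.2, b=0.75):
    """Okapi BM25 score of one document (text block) for a bag of query terms.

    doc_freq: dict term -> number of documents containing the term
    n_docs:   total number of documents in the collection
    avg_len:  average document length in the collection
    """
    tf = Counter(doc_terms)
    score = 0.0
    for term in query_terms:
        df = doc_freq.get(term, 0)
        if df == 0:
            continue
        idf = math.log((n_docs - df + 0.5) / (df + 0.5) + 1.0)
        norm_tf = tf[term] * (k1 + 1) / (tf[term] + k1 * (1 - b + b * len(doc_terms) / avg_len))
        score += idf * norm_tf
    return score

# Toy usage with made-up collection statistics.
print(bm25(["offline", "map"],
           "san diego offline maps with offline search".split(),
           doc_freq={"offline": 120, "map": 300}, n_docs=10000, avg_len=40))
```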

Page 61: Machine Learned Relevance at A Large Scale Search Engine

Features Are Key
Typically MLR systems use both textual and non-textual features:
• What makes one app better than another?
• Text match alone is insufficient
• Popularity alone is insufficient
No single feature or simple combination is sufficient. At both SearchMe and Quixey we built learned "meta-features" (next slide).

query: games

App                   Title text match   Non-title freq of "game"   App popularity   How good for query
Angry Birds           low                high                       very high        very high
Sudoku (genina.com)   low                low                        high             high
PacMan                low                high                       high             high
Cave Shooter          low/medium         medium                     low              medium
Stupid Maze Game      very high          medium                     very low         low

Page 62: Machine Learned Relevance at A Large Scale Search Engine

Features Are Key: Learned Meta-Features
Meta-features can combine multiple simple features into fewer "super-features".
SearchMe: SpamScore, SiteAuthority, category-related
Quixey: App-Quality, TextMatch (as distinct from overall relevance)
SpamScore and App-Quality are complex learned meta-features:
– Potentially hundreds of "smaller features" feed into a simpler model
– SpamScore considered: average PageRank, number of ads, distinct concepts, several language-related features
– App-Quality is learned (TreeNet) and designed to be resistant to gaming
  • An app developer might pay people to give high ratings
  • Has a well-defined meaning

Page 63: Machine Learned Relevance at A Large Scale Search Engine

Idea of Metafeatures (Example)
In this case, each metafeature is independently solved on different training data.

Final model trained directly on F1 … F10:
– Many data points (expensive)
– Many complex trees
– Judgments prone to human errors

vs.

Final model trained on MF1, MF2, MF3, where each metafeature is itself learned from a subset of F1 … F10:
– Explicit, human-decided metafeatures produce simpler, faster models
– Requires fewer total training points
– Humans can define metafeatures to minimize human errors, and possibly use different targets

(A small sketch of this two-stage setup follows.)
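A minimal sketch of the two-stage idea, under the assumption that each metafeature is simply a separate model trained on a subset of the raw features against its own target, whose outputs then feed a small final model. scikit-learn's GradientBoostingRegressor is used here as a generic stand-in for TreeNet, and all data is synthetic.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 10))                       # raw features F1..F10 (synthetic)
quality_target = X[:, :5].mean(axis=1)               # pretend judgments for a "quality" metafeature
text_target = X[:, 5:].mean(axis=1)                  # pretend judgments for a "text match" metafeature
relevance = 0.6 * quality_target + 0.4 * text_target + rng.normal(0, 0.1, 500)

# Stage 1: each metafeature is its own model, trained on a subset of raw features
# against its own target (possibly judged with different instructions).
mf_quality = GradientBoostingRegressor().fit(X[:, :5], quality_target)
mf_text = GradientBoostingRegressor().fit(X[:, 5:], text_target)

# Stage 2: the final relevance model sees only the (few) metafeature outputs.
MF = np.column_stack([mf_quality.predict(X[:, :5]), mf_text.predict(X[:, 5:])])
final_model = GradientBoostingRegressor().fit(MF, relevance)

print(final_model.predict(MF[:3]))
```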

Page 64: Machine Learned Relevance at A Large Scale Search Engine

Data and Feature Engineering are Key!
Selection of "good" query/result pairs for labeling, and good metafeatures:
– Should cover various areas of the sub-space (i.e., popular and rare queries)
– Be sure to only pick examples which "can be learned" and are representative
  • Misspellings are a bad choice if there is no spell-corrector
  • "Exceptions", i.e. special cases (e.g., Spanish results for an English engine), are bad and should be avoided unless features can capture this
– Distribution is important
  • Bias the data to focus on business goals
    – If the goal is to be the best for "long queries", include more "long queries"
Features are critical – they must be able to capture the variations (good metafeatures).
Feature engineering is probably the single most important (and most difficult) aspect of MLR.

Page 65: Machine Learned Relevance at A Large Scale Search Engine

Applying TreeNet for MLR
Starting with a set of <query, result> pairs, obtain human judgments [1-5] and features.
– 5 = perfect, 1 = worst (maps to a target in [0-1])

Query, Result, Judgment:
q1, r1, 2
q1, r2, 5
q1, r3, 2
q2, r1, 4
…

Query, Result, Features:
q1, r1, f1,1, f1,2, f1,3, …, f1,n
q1, r2, f2,1, f2,2, f2,3, …, f2,n
…

[Diagram: TreeNet produces candidate models M1, M2, M3, …; each candidate model M is deployed in the search engine against a set of test queries (q1, …, qn); the returned results are human-judged and a DCG calculation is used to compare the candidate models (sketched in code below).]
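The evaluation loop in that diagram can be outlined as follows; this is an illustrative sketch (not the actual Salford/Quixey tooling) that assumes each candidate model exposes a scoring function and that human relevance grades are available for the test queries' results.

```python
import math

def dcg(rels):
    return sum(r if i == 0 else r / math.log2(i + 1) for i, r in enumerate(rels))

def mean_dcg(model, test_queries, candidates, judgments, k=10):
    """Average DCG@k over test queries for one candidate model.

    candidates[q]     -> list of results for query q
    judgments[(q, r)] -> human relevance grade for (query, result)
    model(q, r)       -> predicted score (stand-in for a TreeNet model)
    """
    total = 0.0
    for q in test_queries:
        ranked = sorted(candidates[q], key=lambda r: -model(q, r))[:k]
        total += dcg([judgments.get((q, r), 0) for r in ranked])
    return total / len(test_queries)

def pick_best_model(models, test_queries, candidates, judgments):
    """Choose the candidate model with the highest average DCG on the test queries."""
    return max(models, key=lambda m: mean_dcg(m, test_queries, candidates, judgments))

# Toy usage with two trivial "models" and made-up judgments.
cands = {"q1": ["r1", "r2", "r3", "r4"]}
judg = {("q1", "r1"): 5, ("q1", "r2"): 2, ("q1", "r3"): 4, ("q1", "r4"): 1}
m_a = lambda q, r: {"r1": 0.9, "r2": 0.7, "r3": 0.5, "r4": 0.1}[r]
m_b = lambda q, r: {"r1": 0.9, "r2": 0.5, "r3": 0.7, "r4": 0.1}[r]
best = pick_best_model([m_a, m_b], ["q1"], cands, judg)
print("best is m_b:", best is m_b)
```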

Page 66: Machine Learned Relevance at A Large Scale Search Engine
Page 67: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 68: Machine Learned Relevance at A Large Scale Search Engine

Choosing the Best Model – the Disconnect
TreeNet uses mean-squared-error minimization:
– The "best" model is the one with the lowest MSE, where the error is:
  • abs(target – predicted_score)
– Each result is independent
DCG minimizes rank-ordering error:
– The ranking is query-dependent
It might require evaluating several TreeNet models before a real DCG improvement:
– Try new features
– TreeNet options (learn rate, max trees), change splits of the data
– Collect more/better data (clean errors), consider active learning

Page 69: Machine Learned Relevance at A Large Scale Search Engine

Assumptions Made (Are There Choices?)
MSE is used because the input data is independent judgment pairs.
Assumptions of consistency over time and between users (stationarity of judgments):
– Is Angry Birds v1 a perfect score for "popular game" in 10 years?
– Directions need to be very clear to ensure user consistency
  • The independent model assumes all users are consistent with each other
Alternatively, collect judgments in a different form:
– Pairwise comparisons: <q1,r1> is better than <q1,r2>, etc.
– Evaluate a "set" of results
– Use a different, more granular scale for judgments
– Full ordering (lists)

Page 70: Machine Learned Relevance at A Large Scale Search Engine

Other Ways to Do MLR
Changing data collection:
– Use inferred as opposed to direct data
  • Click/user behavior to infer relevance targets
– From independent judgments to pairwise or listwise
Pairwise SVM (see the sketch after this list):
– R. Herbrich, T. Graepel, K. Obermayer. "Support Vector Learning for Ordinal Regression." In Proceedings of ICANN 1999.
– T. Joachims. "A Support Vector Method for Multivariate Performance Measures." In Proceedings of ICML 2005. (http://www.cs.cornell.edu/People/tj/svm_light/svm_perf.html)
Listwise learning:
– LambdaRank, Chris Burges et al., 2007
– LambdaMART, Qiang Wu, Chris J.C. Burges, Krysta M. Svore and Jianfeng Gao, 2008
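To illustrate the pairwise reformulation mentioned above, the sketch below converts per-query graded judgments into preference pairs and trains a linear model on feature differences, which is the core trick behind RankSVM-style methods. It uses scikit-learn's LinearSVC as a generic linear classifier on synthetic data, so it is a schematic of the idea rather than any of the cited systems.

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)

# Synthetic per-query data: feature matrix and graded judgments for each result.
queries = {
    "q1": (rng.normal(size=(5, 4)), np.array([5, 3, 3, 1, 2])),
    "q2": (rng.normal(size=(4, 4)), np.array([4, 4, 2, 1])),
}

# Build pairwise training examples: feature difference, label +1 if the first
# result is judged better, -1 otherwise (pairs are formed only within a query).
diffs, labels = [], []
for X, y in queries.values():
    for i in range(len(y)):
        for j in range(len(y)):
            if y[i] > y[j]:
                diffs.append(X[i] - X[j]); labels.append(1)
                diffs.append(X[j] - X[i]); labels.append(-1)

clf = LinearSVC(C=1.0).fit(np.array(diffs), np.array(labels))

# The learned weight vector induces a scoring function; ranking = sort by score.
X_q1, _ = queries["q1"]
scores = X_q1 @ clf.coef_.ravel()
print(np.argsort(-scores))  # predicted ranking of q1's results
```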

Page 71: Machine Learned Relevance at A Large Scale Search Engine

Talk Outline
– Introduction: Search and Machine Learned Ranking
– Relevance and evaluation methodologies
– Data collection and metrics
– Quixey – Functional Application Search™
– System architecture, features, and model training
– Alternative approaches
– Conclusion

Page 72: Machine Learned Relevance at A Large Scale Search Engine

Conclusion
Machine learning is very important to search.
– Metafeatures reduce model complexity and lower costs
  • Divide and conquer (parallel development)
– MLR is real, and is just one part of ML in search
Major challenges include data collection and feature engineering.
– Must pay for data – non-trivial, but you have a say in what you collect
– Features must be reasonable for the given problem (domain-specific)
Evaluation is critical.
– How to evaluate effectively is important to ensure improvement
– MSE vs. DCG disconnect
TreeNet can be, and is, an effective tool for machine learning in search.

Page 73: Machine Learned Relevance at A Large Scale Search Engine

Quixey is hiring

If you want a cool internship, or a great job, contact us afterwards or e-mail:

[email protected] and mention this presentation

Page 74: Machine Learned Relevance at A Large Scale Search Engine

Questions
James_DOT_Shanahan_AT_gmail_DOT_com

Eric_AT_Quixey_DOT_com

Page 75: Machine Learned Relevance at A Large Scale Search Engine

3250 Ash St., Palo Alto, CA 94306

888.707.4441
www.quixey.com