MLConf Seattle 2015 - ML@Quora


Transcript of MLConf Seattle 2015 - ML@Quora

Page 1: MLConf Seattle 2015 - ML@Quora

ML @ Quora

ML Algorithms for Growing the World’s Knowledge

Seattle, 05/01/2015

Xavier Amatriain (@xamat)

Page 2: MLConf Seattle 2015 - ML@Quora

About Quora

Page 3: MLConf Seattle 2015 - ML@Quora

Our Mission

“To share and grow the world’s knowledge”

• Millions of questions & answers

• Millions of users

• Thousands of topics

• ...

Page 4: MLConf Seattle 2015 - ML@Quora

Lots of data relations

[Diagram: a graph of Users, Questions, Answers, Topics, and Upvotes/Downvotes connected by relations such as FOLLOW, ENDORSE, WRITE, HAVE, WANT ANSWERS, and UPVOTE/DOWNVOTE]

Page 5: MLConf Seattle 2015 - ML@Quora

Complex network propagation effects

Page 6: MLConf Seattle 2015 - ML@Quora

Importance of topics & semantics

Page 7: MLConf Seattle 2015 - ML@Quora

What we care about

• Demand

• Quality

• Relevance

Page 8: MLConf Seattle 2015 - ML@Quora

Machine Learning@Quora

Page 9: MLConf Seattle 2015 - ML@Quora

Ranking - Answer ranking

What is a good Quora answer?

• truthful

• reusable

• provides explanation

• well formatted

• ...

Page 10: MLConf Seattle 2015 - ML@Quora

Ranking - Answer ranking

How are those dimensions translated into features?

• Features that relate to the text quality itself

• Interaction features (upvotes/downvotes, clicks, comments…)

• User features (e.g. expertise in topic)
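
To make the feature groups above concrete, here is a minimal sketch of assembling text-quality, interaction, and user features into one vector per answer and fitting a gradient-boosted model (one of the model families listed later in the deck). It is not Quora’s actual pipeline: the feature names, the toy data, and the quality label are all hypothetical.

```python
# Hedged sketch: combine text-quality, interaction, and user features into one
# vector per answer and fit a gradient-boosted model on a quality label.
# All feature names, labels, and data below are hypothetical toy values.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

def answer_features(a):
    return [
        a["text_length"], a["has_formatting"],      # text-quality features
        a["upvotes"], a["downvotes"], a["clicks"],  # interaction features
        a["author_topic_expertise"],                # user features
    ]

toy_answers = [
    {"text_length": 850, "has_formatting": 1, "upvotes": 40, "downvotes": 1,
     "clicks": 900, "author_topic_expertise": 0.9, "quality": 0.95},
    {"text_length": 60, "has_formatting": 0, "upvotes": 2, "downvotes": 5,
     "clicks": 120, "author_topic_expertise": 0.1, "quality": 0.20},
    {"text_length": 300, "has_formatting": 1, "upvotes": 10, "downvotes": 2,
     "clicks": 400, "author_topic_expertise": 0.5, "quality": 0.60},
]

X = np.array([answer_features(a) for a in toy_answers])
y = np.array([a["quality"] for a in toy_answers])

model = GradientBoostingRegressor(n_estimators=50, max_depth=2).fit(X, y)
# At ranking time, the answers under a question would be sorted by model.predict().
```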

Page 11: MLConf Seattle 2015 - ML@Quora

Ranking - Feed

• Personalized learning-to-rank approach

• Goal: Present the most interesting stories for a user at a given time

• Interesting = topical relevance + social relevance + timeliness

• Stories = questions + answers

Page 12: MLConf Seattle 2015 - ML@Quora

Ranking - Feed

• Features

  • Quality of question/answer

  • Topics the user is interested in / knows about

  • Users the user is following

  • What is trending/popular

  • …

• Different temporal windows

• Multi-stage solution with different “streams”
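
A hedged sketch of the two feed slides above: candidate “streams” propose stories, and a personalized scorer combining topical relevance, social relevance, and timeliness orders them. In the talk this combination is a learned learning-to-rank model; the fixed weights, the 24-hour decay, and the stream interface below are placeholder assumptions.

```python
# Hedged sketch of a two-stage feed: streams generate candidates, one scorer
# ranks them by topical relevance + social relevance + timeliness.
import math

def topical_relevance(story, user):
    # Fraction of the story's topics that the user already follows.
    return len(story["topics"] & user["followed_topics"]) / max(len(story["topics"]), 1)

def social_relevance(story, user):
    # 1.0 if the story's author is someone the user follows.
    return 1.0 if story["author"] in user["followed_users"] else 0.0

def timeliness(story, now, half_life_hours=24.0):
    # Exponential decay with a hypothetical 24-hour half-life.
    age_hours = (now - story["created_at"]) / 3600.0
    return math.exp(-math.log(2) * age_hours / half_life_hours)

def score(story, user, now, weights=(0.5, 0.3, 0.2)):
    # In the talk this combination is learned (learning-to-rank);
    # fixed weights only keep the sketch short.
    return (weights[0] * topical_relevance(story, user)
            + weights[1] * social_relevance(story, user)
            + weights[2] * timeliness(story, now))

def rank_feed(streams, user, now):
    # Stage 1: each candidate "stream" (trending, network activity, topic
    # interest, ...) proposes stories. Stage 2: one scorer orders them all.
    candidates = {s["id"]: s for stream in streams for s in stream(user)}
    return sorted(candidates.values(), key=lambda s: score(s, user, now), reverse=True)
```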

Page 13: MLConf Seattle 2015 - ML@Quora

Recommendations - Topics

Goal: Recommend new topics for the user to follow

• Based on:

• Other topics followed

• Users followed

• User interactions

• Topic-related features

• ...
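
One hedged way to turn the signals above into topic suggestions is to factorize the user-by-topic follow matrix (matrix factorization appears on the Models slide) and recommend unfollowed topics with the highest reconstructed affinity. The talk does not say this is the exact method, and the follow matrix below is toy data.

```python
# Hedged sketch: factorize a toy user x topic follow matrix and recommend
# the unfollowed topics with the highest reconstructed scores.
import numpy as np
from sklearn.decomposition import NMF

# Rows = users, columns = topics; 1 means the user follows the topic.
follows = np.array([
    [1, 1, 0, 0, 1],
    [1, 0, 1, 0, 0],
    [0, 1, 1, 1, 0],
    [1, 1, 0, 1, 0],
])

model = NMF(n_components=2, init="random", random_state=0, max_iter=500)
user_factors = model.fit_transform(follows)   # users x latent factors
topic_factors = model.components_             # latent factors x topics
scores = user_factors @ topic_factors         # reconstructed affinities

def recommend_topics(user_idx, k=2):
    # Mask topics the user already follows, then take the top-k scores.
    masked = np.where(follows[user_idx] == 1, -np.inf, scores[user_idx])
    return np.argsort(masked)[::-1][:k]

print(recommend_topics(0))   # indices of the top unfollowed topics for user 0
```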

Page 14: MLConf Seattle 2015 - ML@Quora

Recommendations - Users

Goal: Recommend new users to follow

• Based on:

• Other users followed

• Topics followed

• User interactions

• User-related features

• ...
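
For user recommendations, a hedged illustration of the “other users followed” signal: score candidates by how many of your follows also follow them, with a small boost for shared topic interests. The graph, the boost weight, and the scoring rule are invented for the example, not Quora’s actual algorithm.

```python
# Hedged sketch: "people followed by the people you follow", boosted by
# shared topics. All data and weights are toy values.
from collections import Counter

follows = {                      # user -> set of users they follow
    "alice": {"bob", "carol"},
    "bob":   {"carol", "dave"},
    "carol": {"dave", "erin"},
}
topics = {                       # user -> set of topics they follow
    "alice": {"ml", "startups"},
    "dave":  {"ml"},
    "erin":  {"cooking"},
}

def recommend_users(user, k=3):
    scores = Counter()
    for friend in follows.get(user, set()):
        for candidate in follows.get(friend, set()):
            if candidate != user and candidate not in follows[user]:
                scores[candidate] += 1.0                     # co-follow signal
    for candidate in scores:                                 # topic-overlap boost
        shared = topics.get(user, set()) & topics.get(candidate, set())
        scores[candidate] += 0.5 * len(shared)
    return [u for u, _ in scores.most_common(k)]

print(recommend_users("alice"))   # e.g. ['dave', 'erin']
```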

Page 15: MLConf Seattle 2015 - ML@Quora

Related Questions

• Given interest in question A (source), what other questions will be interesting?

• Not only about similarity, but also “interestingness”

• Features such as:

• Textual

• Co-visit

• Topics

• …

• Important for logged-out use case
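
A hedged sketch of the “co-visit” feature mentioned above: count how often two questions are viewed in the same session, normalize, and blend the result with a textual similarity score, since relatedness is about “interestingness” as well as similarity. The session logs, normalization, and blend weight are assumptions for illustration.

```python
# Hedged sketch: cosine-style co-visit similarity from toy session logs,
# blended with a textual similarity score supplied by the caller.
from collections import Counter
from itertools import combinations

sessions = [                 # toy view logs: question ids seen in one session
    ["q1", "q2", "q3"],
    ["q1", "q2"],
    ["q2", "q4"],
]

visit_counts = Counter(q for s in sessions for q in set(s))
covisits = Counter()
for s in sessions:
    for a, b in combinations(sorted(set(s)), 2):
        covisits[(a, b)] += 1

def covisit_score(a, b):
    # Co-occurrences normalized by each question's total visits.
    pair = tuple(sorted((a, b)))
    return covisits[pair] / (visit_counts[a] * visit_counts[b]) ** 0.5

def related_score(a, b, text_similarity, w_covisit=0.6):
    # Blend usage-based and textual signals.
    return w_covisit * covisit_score(a, b) + (1 - w_covisit) * text_similarity

print(covisit_score("q1", "q2"))   # 2 / sqrt(2 * 3) ≈ 0.816
```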

Page 16: MLConf Seattle 2015 - ML@Quora

Duplicate Questions

• Important issue for Quora

• Want to make sure we don’t disperse knowledge across duplicates of the same question

• Solution: binary classifier trained with labelled data

• Features

• Textual vector space models

• Usage-based features

• ...
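
A minimal sketch of the classifier described on this slide, assuming TF-IDF cosine similarity as the textual vector-space feature and logistic regression (from the Models slide) as the binary classifier. The question pairs and labels are toy examples; the real system also folds in usage-based features.

```python
# Hedged sketch: one textual feature (TF-IDF cosine similarity of the two
# questions) feeding a binary logistic regression. Toy pairs and labels only.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics.pairwise import cosine_similarity

pairs = [  # (question A, question B, 1 = duplicate)
    ("How do I learn machine learning?", "What is the best way to learn ML?", 1),
    ("How do I learn machine learning?", "What is the capital of France?", 0),
    ("Best way to learn ML?", "How can I start learning machine learning?", 1),
    ("What is the capital of France?", "How old is the Eiffel Tower?", 0),
]

vectorizer = TfidfVectorizer().fit([q for a, b, _ in pairs for q in (a, b)])

def pair_features(a, b):
    va, vb = vectorizer.transform([a]), vectorizer.transform([b])
    return [cosine_similarity(va, vb)[0, 0]]   # a single textual feature

X = [pair_features(a, b) for a, b, _ in pairs]
y = [label for _, _, label in pairs]
clf = LogisticRegression().fit(X, y)

candidate = pair_features("How to learn ML?", "Ways to learn machine learning?")
print(clf.predict_proba([candidate])[0, 1])    # P(duplicate) for the new pair
```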

Page 17: MLConf Seattle 2015 - ML@Quora

User Trust/Expertise Inference

Goal: Infer a user’s trustworthiness in relation to a given topic

• We take into account:

• Answers written on topic

• Upvotes/downvotes received

• Endorsements

• ...

• Trust/expertise propagates through the network

• Must be taken into account by other algorithms
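
The slide says trust and expertise propagate through the network but not how; as one hedged illustration, here is a PageRank-style iteration over endorsement edges within a topic, seeded by per-user signals such as upvotes received on answers. The graph, seed values, and damping factor are invented for the example.

```python
# Hedged sketch: propagate topic-level trust along endorsement edges,
# PageRank-style, starting from per-user seed scores.
def propagate_trust(endorsements, seed, damping=0.85, iterations=20):
    # endorsements: {endorser: [endorsed users]}; seed: {user: base trust}
    trust = dict(seed)
    for _ in range(iterations):
        incoming = {}
        for endorser, targets in endorsements.items():
            if not targets:
                continue
            share = trust.get(endorser, 0.0) / len(targets)   # split the endorser's trust
            for target in targets:
                incoming[target] = incoming.get(target, 0.0) + share
        trust = {u: (1 - damping) * seed.get(u, 0.0) + damping * incoming.get(u, 0.0)
                 for u in set(seed) | set(incoming)}
    return trust

seed = {"alice": 1.0, "bob": 0.2, "carol": 0.1}       # e.g. from upvotes received
endorsements = {"alice": ["bob"], "bob": ["carol"], "carol": []}
print(propagate_trust(endorsements, seed))
```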

Page 18: MLConf Seattle 2015 - ML@Quora

Trending Topics

Goal: Highlight current events that are interesting to the user

• We take into account:

• Global “Trendiness”

• Social “Trendiness”

• User’s interest

• ...

• Trending topics are a great discovery mechanism
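
A hedged sketch of folding the three signals above into one per-user trending score. The ratio-based definitions of global and social “trendiness” and the weights are illustrative assumptions, not the production formula.

```python
# Hedged sketch: per-user trending score = weighted mix of global trendiness,
# social trendiness, and the user's own interest. Toy numbers and weights.
def global_trendiness(activity_now, activity_baseline):
    # How much the topic's current activity exceeds its usual level.
    return activity_now / max(activity_baseline, 1.0)

def social_trendiness(followed_active_on_topic, followed_total):
    # Share of the user's network currently active on the topic.
    return followed_active_on_topic / max(followed_total, 1)

def trending_score(topic, user, weights=(0.5, 0.3, 0.2)):
    g = global_trendiness(topic["activity_now"], topic["activity_baseline"])
    s = social_trendiness(topic["followed_active"], user["followed_total"])
    i = user["interest"].get(topic["name"], 0.0)   # e.g. from topics followed
    return weights[0] * g + weights[1] * s + weights[2] * i

topic = {"name": "World Cup", "activity_now": 500, "activity_baseline": 50, "followed_active": 4}
user = {"followed_total": 20, "interest": {"World Cup": 0.7}}
print(trending_score(topic, user))
```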

Page 19: MLConf Seattle 2015 - ML@Quora

Spam Detection/Moderation

• Very important for Quora to maintain the quality of its content

• Purely manual approaches do not scale

• Hard to get algorithms 100% right

• ML algorithms detect content/user issues

• Output of the algorithms feeds manually curated moderation queues
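
A small sketch of the last point, assuming the classifier emits a spam probability per item: act automatically only on very confident cases and send the uncertain middle band to the manually curated queues. The thresholds are hypothetical.

```python
# Hedged sketch: route items by spam-classifier score; only the uncertain
# band reaches human moderators. Thresholds are made up for illustration.
def route(item_id, spam_probability, auto_threshold=0.98, review_threshold=0.6):
    if spam_probability >= auto_threshold:
        return ("auto_hide", item_id)          # very confident: act automatically
    if spam_probability >= review_threshold:
        return ("moderation_queue", item_id)   # uncertain: humans decide
    return ("keep", item_id)                   # likely fine: no action

print(route("answer:123", 0.75))               # ('moderation_queue', 'answer:123')
```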

Page 20: MLConf Seattle 2015 - ML@Quora

Content Creation Prediction

• Quora’s algorithms don’t only optimize for the probability of reading

• Important to predict the probability of a user answering a question

• Parts of our system completely rely on that prediction

• E.g. A2A (ask to answer) suggestions
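
A hedged sketch of the prediction described here, P(user answers question), using logistic regression (from the Models slide) over hypothetical user-question features and toy labels; an A2A-style suggester could rank candidate answerers by this probability.

```python
# Hedged sketch: logistic regression estimating the probability that a user
# will answer a given question. Features, data, and labels are toy values.
import numpy as np
from sklearn.linear_model import LogisticRegression

# One row per (user, question) pair: [topic expertise, follows the question's
# topics (0/1), past answers in this topic, days since last answer]
X = np.array([
    [0.9, 1, 12, 1],
    [0.1, 0, 0, 30],
    [0.6, 1, 3, 5],
    [0.2, 0, 1, 60],
])
y = np.array([1, 0, 1, 0])   # 1 = the user answered when asked

clf = LogisticRegression().fit(X, y)
candidate = np.array([[0.8, 1, 7, 2]])
print(clf.predict_proba(candidate)[0, 1])   # estimated probability of answering
```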

Page 21: MLConf Seattle 2015 - ML@Quora

Models

● Logistic Regression

● Elastic Nets

● Gradient Boosted Decision Trees

● Random Forests

● Neural Networks

● LambdaMART

● Matrix Factorization

● LDA

● ...

Page 22: MLConf Seattle 2015 - ML@Quora

Conclusions

Page 23: MLConf Seattle 2015 - ML@Quora

Conclusions

• At Quora we have not only Big, but also “rich” data

• Our algorithms need to understand and optimize complex aspects such as quality, interestingness, or user expertise

• We believe ML will be one of the keys to our success

• We have many interesting problems, and many unsolved challenges

Page 24: MLConf Seattle 2015 - ML@Quora

We’re Hiring!

http://www.quora.com/careers/