CS 5306 / INFO 5306:

Crowdsourcing and Human Computation

Lecture 14, 10/19/17

Haym Hirsh

Extra Credit Opportunities

• 10/19/17 4:15pm, Gates G01, Tom Kalil

• 10/26/17 4:15pm, Gates G01, Tapan Parikh, Cornell Tech

• 11/16/17 4:15pm, Gates G01, James Grimmelmann, Cornell Law

• 12/1/17 12:15pm, Gates G01, Sudeep Bhatia, Penn Psychology

AI Successes

Machine Learning

Human Annotated Data

Human-Annotated Data

• November 2005: Amazon Mechanical Turk. May 2006: “AI gets a brain”, Barr J, Cabrera LF. Queue.

Human data annotation cited as an example use

• April 2008: “Crowdsourcing user studies with Mechanical Turk”, Kittur A, Chi EH, Suh B. In Proceedings of the SIGCHI conference on human factors in computing systems.

AMT for human subjects in HCI

• June 2008: “Utility data annotation with Amazon Mechanical Turk”, Sorokin A, Forsyth D. In Computer Vision and Pattern Recognition Workshops, 2008. CVPRW'08.

Image annotation

• August 2008: “Get another label? Improving data quality and data mining using multiple, noisy labelers”, Sheng VS, Provost F, Ipeirotis PG. In Proceedings of the 14th ACM SIGKDD international conference on knowledge discovery and data mining.

Be smart in using human annotators

• October 2008: “Cheap and fast---but is it good?: evaluating non-expert annotations for natural language tasks”, Snow R, O'Connor B, Jurafsky D, Ng AY. In Proceedings of the conference on empirical methods in natural language processing.

Human language annotation

4 non-experts = 1 expert
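
The Sheng et al. and Snow et al. results above both rest on aggregating several cheap, noisy labels per item. As a back-of-the-envelope illustration (not taken from either paper), the sketch below computes the accuracy of a simple majority vote over k independent labelers who are each correct with probability p; the independence and equal-accuracy assumptions are simplifications.

```python
from math import comb

def majority_vote_accuracy(p: float, k: int) -> float:
    """Probability that a majority of k independent labelers, each correct
    with probability p, yields the correct label on a binary task
    (ties broken by a fair coin flip)."""
    win = sum(comb(k, i) * p**i * (1 - p)**(k - i) for i in range(k // 2 + 1, k + 1))
    if k % 2 == 0:  # an even split is resolved correctly half the time
        win += 0.5 * comb(k, k // 2) * (p * (1 - p)) ** (k // 2)
    return win

# Illustrative numbers only: if each non-expert is right 80% of the time,
# a handful of them together start to look like a single reliable expert.
for k in (1, 3, 5):
    print(k, round(majority_vote_accuracy(0.8, k), 3))
# 1 0.8
# 3 0.896
# 5 0.942
```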

ImageNet

• June 2009: “Imagenet: A large-scale hierarchical image database”, Deng J, Dong W, Socher R, Li LJ, Li K, Fei-Fei L. In Computer Vision and Pattern Recognition, 2009. CVPR 2009.

Cornell: OpenSurfaces

The Data Quality Problem

• You don’t know if the annotations are correct
  • Requesters often blame workers (bots, spamming, …)
  • Workers often blame requesters (poorly written task descriptions, …)

• Approaches:
  • Presume you want to find the “wrong” answers and workers

  • Downvote the inaccurate people: “Maximum likelihood estimation of observer error-rates using the EM algorithm”, Dawid AP, Skene AM. Applied Statistics. 1979;28(1):20-28. (A minimal EM sketch appears after this list.)

  • Provide the right incentives: “Mechanisms for making crowds truthful”, Jurca R, Faltings B. Journal of Artificial Intelligence Research. 2009;34(1):209.

  • Majority vote

  • Task design to avoid bad work
    • Training
    • Feedback
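
The Dawid-Skene approach cited above jointly estimates each worker's error rates and the items' true labels with EM. The sketch below is a minimal reimplementation of that idea for categorical labels (the variable names, smoothing, and toy data are my own choices, and it assumes every item has at least one label); it is not the authors' code.

```python
import numpy as np

def dawid_skene(labels, n_items, n_workers, n_classes, iters=50):
    """Minimal EM sketch in the spirit of Dawid & Skene (1979).

    labels: list of (item, worker, label) triples, labels in 0..n_classes-1.
    Returns (posterior over true labels, per-worker confusion matrices).
    Assumes every item has at least one label."""
    # Initialize item posteriors from per-item label proportions (soft majority vote).
    T = np.zeros((n_items, n_classes))
    for i, w, l in labels:
        T[i, l] += 1.0
    T /= T.sum(axis=1, keepdims=True)

    for _ in range(iters):
        # M-step: class priors and per-worker confusion matrices from expected counts.
        prior = T.mean(axis=0)
        conf = np.full((n_workers, n_classes, n_classes), 1e-6)  # small smoothing
        for i, w, l in labels:
            conf[w, :, l] += T[i]
        conf /= conf.sum(axis=2, keepdims=True)

        # E-step: posterior over each item's true label given worker reliabilities.
        logT = np.tile(np.log(prior + 1e-12), (n_items, 1))
        for i, w, l in labels:
            logT[i] += np.log(conf[w, :, l])
        logT -= logT.max(axis=1, keepdims=True)
        T = np.exp(logT)
        T /= T.sum(axis=1, keepdims=True)
    return T, conf

# Hypothetical toy run: 3 workers label 4 items with 2 classes.
data = [(0, 0, 1), (0, 1, 1), (0, 2, 0), (1, 0, 0), (1, 1, 0), (1, 2, 0),
        (2, 0, 1), (2, 1, 1), (2, 2, 1), (3, 0, 0), (3, 1, 1), (3, 2, 0)]
posterior, confusion = dawid_skene(data, n_items=4, n_workers=3, n_classes=2)
print(posterior.argmax(axis=1))  # estimated true label per item
```

Majority vote, listed above, is the special case where every worker is trusted equally; the EM estimate instead downweights workers whose estimated confusion matrices look unreliable.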

"Revolt: Collaborative Crowdsourcing for Labeling Machine Learning Datasets", Chang JC, Amershi S, and Kamar E. In

Proceedings of the 2017 CHI Conference on Human Factors.

Vote – Explain – Categorize

“Collaboration” takes place during the Categorize stage

Define new categories that propagate to other workers in real time
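
To make the Vote - Explain - Categorize flow concrete, here is a small illustrative model of a Revolt-style session in which disagreement triggers explanations and worker-defined categories are shared immediately. The class and method names are assumptions for exposition, not the system's actual implementation.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional, Set

@dataclass
class RevoltItem:
    """One item moving through Vote -> Explain -> Categorize (illustrative model)."""
    item_id: str
    votes: Dict[str, str] = field(default_factory=dict)         # worker -> label
    explanations: Dict[str, str] = field(default_factory=dict)  # worker -> reason
    category: Optional[str] = None

class RevoltSession:
    def __init__(self, labels=("relevant", "not relevant")):
        self.labels = labels
        self.shared_categories: Set[str] = set()  # visible to every worker

    def vote(self, item: RevoltItem, worker: str, label: str) -> None:
        item.votes[worker] = label

    def needs_explanation(self, item: RevoltItem) -> bool:
        # Disagreement among voters sends the item to the Explain stage.
        return len(set(item.votes.values())) > 1

    def explain(self, item: RevoltItem, worker: str, reason: str) -> None:
        item.explanations[worker] = reason

    def categorize(self, item: RevoltItem, worker: str, category: str) -> None:
        # A category defined by one worker propagates to the others in real time.
        self.shared_categories.add(category)
        item.category = category
```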

"CrowdVerge: Predicting If People Will Agree on the Answer to a Visual Question", Gurari, D. and Grauman, K., In Proceedings of the 2017 CHI

Conference on Human Factors in Computing Systems.

• Many projects get a fixed number of workers for each annotation, which wastes money

• (Compare to computing an AND function – you stop when you know the answer)

• (GalaxyZoo gets a variable number of labels based on annotator disagreement)

• (Compare to VoxPL, which makes that judgment on a statistical basis)

• Idea here: Learn to predict whether people will agree
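
Putting the bullets above together: a predicted-agreement score can set the initial label budget per item, and collection can still stop early or extend, GalaxyZoo-style, based on observed consensus. The sketch below assumes hypothetical `predict_agreement(item)` and `ask_worker(item)` callables; it is an illustrative policy, not CrowdVerge's or VoxPL's actual method.

```python
def labels_needed(predicted_agreement: float,
                  min_labels: int = 1, max_labels: int = 5) -> int:
    """Spend fewer crowd labels on items where annotators are expected to
    agree, more where disagreement is likely (illustrative linear policy)."""
    k = round(max_labels - predicted_agreement * (max_labels - min_labels))
    return max(min_labels, min(max_labels, k))

def collect_labels(item, ask_worker, predict_agreement,
                   consensus: float = 0.8, hard_cap: int = 9):
    """Adaptive collection: start with the predicted budget, then keep asking
    until a clear majority emerges or the hard cap is reached.
    ask_worker and predict_agreement are hypothetical callables supplied by the caller."""
    labels = []
    budget = labels_needed(predict_agreement(item))
    while True:
        labels.append(ask_worker(item))
        if len(labels) >= budget:
            top = max(set(labels), key=labels.count)
            if labels.count(top) / len(labels) >= consensus or len(labels) >= hard_cap:
                return top, labels
```

Compared with a fixed number of labels per item, a policy like this spends the budget where the answer is genuinely contested.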

"CrowdVerge: Predicting If People Will Agree on the Answer to a Visual Question", Gurari, D. and Grauman, K., In Proceedings of the 2017 CHI

Conference on Human Factors in Computing Systems.

• Why do people disagree on an annotation (not assuming malevolent workers!)?
  • Crowd worker skill

• Expertise

• “crowd worker may inadequately answer a seemingly simple question”

• Ambiguity in question and visual content

• Insufficient visual evidence

• Subjective questions

• Synonymous answers

• Varying levels of answer granularity
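
The last two sources of disagreement (synonymous answers, varying granularity) concern answer form rather than substance, so raw string matching overstates disagreement. A small sketch, with made-up synonym and coarsening tables, shows the effect of normalizing answers before measuring agreement.

```python
from collections import Counter

# Made-up normalization tables, for illustration only.
SYNONYMS = {"sofa": "couch", "cellphone": "phone", "mobile phone": "phone"}
COARSEN = {"golden retriever": "dog", "labrador": "dog", "beagle": "dog"}

def normalize(answer: str) -> str:
    """Lowercase, strip, map synonyms, and coarsen overly specific answers."""
    a = answer.strip().lower()
    a = SYNONYMS.get(a, a)
    return COARSEN.get(a, a)

def agreement_rate(answers) -> float:
    """Fraction of answers matching the most common normalized answer."""
    counts = Counter(normalize(a) for a in answers)
    return counts.most_common(1)[0][1] / len(answers)

raw = ["couch", "Sofa", "couch "]
print(agreement_rate(raw))  # 1.0 after normalization; exact-string match is only 1/3
```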