
Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers

New York University Stern School

Victor Sheng, Foster Provost, Panos Ipeirotis


Outsourcing KDD preprocessing

Traditionally, data mining teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing

– Raghu from his Innovation Lecture

“the best you can expect are noisy labels”

Now, we can outsource preprocessing tasks, such as labeling, feature extraction, verifying information extraction, etc.

– using Mechanical Turk, Rent-a-Coder, etc.

– quality may be lower than expert labeling (much lower?)

– but low costs can allow massive scale

The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.

ESP Game (by Luis von Ahn)


Other “free” labeling schemes

Open Mind initiative (www.openmind.org)

Other GWAP games
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)

Web 2.0 systems?
– Can/should tagging be directed?


Noisy labels can be problematic

Many tasks rely on high-quality labels for objects:
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization

Noisy labels can lead to degraded task performance

Quality and Classification Performance

[Figure: classifier accuracy (40-100%) vs. number of training examples, 1 to 300 (Mushroom data set); one learning curve per labeler quality P = 0.5, 0.6, 0.8, 1.0]

As labeling quality increases, classification quality increases.

Here, labels are values for the target variable.
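To make the figure's point concrete, here is a minimal simulation sketch. It is an assumption-laden stand-in for the authors' setup (synthetic data and a scikit-learn decision tree rather than the paper's Mushroom experiments): each training label is kept with probability P and flipped otherwise, and accuracy is measured on a clean test set.

```python
# Minimal sketch: effect of labeling quality P on classification accuracy.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=3000, n_features=20, random_state=0)
X_train, y_train, X_test, y_test = X[:2000], y[:2000], X[2000:], y[2000:]

def noisy_labels(y, p):
    """Each label is kept with probability p, flipped otherwise."""
    flip = rng.random(len(y)) > p
    return np.where(flip, 1 - y, y)

for p in (0.5, 0.6, 0.8, 1.0):
    for n in (100, 300, 1000, 2000):
        clf = DecisionTreeClassifier(random_state=0)
        clf.fit(X_train[:n], noisy_labels(y_train[:n], p))
        acc = accuracy_score(y_test, clf.predict(X_test))
        print(f"P={p:.1f}  n={n:4d}  test accuracy={acc:.3f}")
```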

Summary of results

Repeated labeling can improve data quality and model quality (but not always)

When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap

When labels are relatively cheap, repeated labeling can do much better (omitted)

Round-robin repeated labeling does well

Selective repeated labeling improves substantially

Majority Voting and Label Quality

Ask multiple labelers, keep the majority label as the "true" label.

Quality is the probability of the integrated label being correct; P is the probability of an individual labeler being correct.

[Figure: integrated quality vs. number of labelers (1, 3, 5, ..., 13); one curve per individual labeler quality P = 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 1.0]
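The curves follow directly from the binomial distribution. A short sketch, assuming an odd number of independent labelers who are each correct with probability p (so ties cannot occur):

```python
# Integrated quality of the majority-vote label for k independent labelers,
# each correct with probability p (odd k, so no ties).
from math import comb

def majority_quality(p, k):
    """Probability that the majority of k labels is the correct label."""
    return sum(comb(k, i) * p**i * (1 - p)**(k - i)
               for i in range(k // 2 + 1, k + 1))

for p in (0.4, 0.6, 0.7, 0.8, 0.9, 1.0):
    print(p, [round(majority_quality(p, k), 3) for k in (1, 3, 5, 7, 9, 11, 13)])
```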

Tradeoffs for Modeling

Get more labels → improve label quality → improve classification
Get more examples → improve classification

[Figure: classifier accuracy vs. number of training examples (Mushroom data set); one learning curve per labeler quality P = 0.5, 0.6, 0.8, 1.0]

Basic Labeling Strategies

Single Labeling
– Get as many data points as possible, one label each

Round-robin Repeated Labeling
– Fixed Round Robin (FRR): keep labeling the same set of points
– Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the point with the fewest labels so far (see the sketch after this list)
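A minimal sketch of GRR as just described; the simulated noisy labelers and the data structures are illustrative assumptions, not the authors' code:

```python
# Generalized Round Robin (GRR): always give the next label to the example
# that currently has the fewest labels. A min-heap keeps that lookup cheap.
import heapq
import random

def grr(n_examples, n_labels_total, true_labels, p=0.6, seed=0):
    """Simulate GRR with labelers that are correct with probability p.
    Returns the multiset of labels collected for each example."""
    rng = random.Random(seed)
    heap = [(0, i) for i in range(n_examples)]   # (label count, example id)
    heapq.heapify(heap)
    collected = [[] for _ in range(n_examples)]
    for _ in range(n_labels_total):
        count, i = heapq.heappop(heap)
        label = true_labels[i] if rng.random() < p else 1 - true_labels[i]
        collected[i].append(label)
        heapq.heappush(heap, (count + 1, i))
    return collected

labels = grr(n_examples=5, n_labels_total=20, true_labels=[0, 1, 1, 0, 1])
print(labels)   # each of the 5 examples ends up with 4 labels
```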

Fixed Round Robin vs. Single Labeling

[Figure: accuracy vs. number of labels acquired; FRR on a fixed set of 100 examples vs. single labeling (SL); labeling quality p = 0.6]

With high noise, repeated labeling is better than single labeling.

Fixed Round Robin vs. Single Labeling

[Figure: accuracy vs. number of labels acquired; FRR on a fixed set of 50 examples vs. single labeling; labeling quality p = 0.8]

With low noise, more (singly labeled) examples is better.

Gen. Round Robin vs. Single Labeling

[Figure: accuracy vs. data acquisition cost, 80 to 6480 (mushroom, p = 0.6); GRR vs. single labeling (SL); P = 0.6 is the labeling quality, k = 5 is the number of labels per example]

Repeated labeling is better than single labeling.

Tradeoffs for Modeling

Get more labels → improve label quality → improve classification
Get more examples → improve classification

[Figure: classifier accuracy vs. number of training examples (Mushroom data set); one learning curve per labeler quality P = 0.5, 0.6, 0.8, 1.0]


Selective Repeated-Labeling

We have seen:
– With enough examples and noisy labels, getting multiple labels is better than single labeling
– When we consider costly preprocessing, the benefit is magnified (omitted -- see paper)

Can we do better than the basic strategies?

Key observation: we have additional information to guide the selection of data for repeated labeling
– the current multiset of labels

Example: {+,-,+,+,-,+} vs. {+,+,+,+}


Natural Candidate: Entropy

Entropy is a natural measure of label uncertainty:

E({+,+,+,+,+,+}) = 0
E({+,-,+,-,+,-}) = 1

Strategy: Get more labels for examples with high-entropy label multisets

$$E(S) \;=\; -\frac{|S_{pos}|}{|S|}\,\log_2\!\frac{|S_{pos}|}{|S|} \;-\; \frac{|S_{neg}|}{|S|}\,\log_2\!\frac{|S_{neg}|}{|S|}$$

where $S_{pos}$ and $S_{neg}$ are the positive and negative labels in the multiset $S$.
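A few lines of Python (a sketch of the measure above, not code from the paper) reproduce the entropy values used on these slides:

```python
# Entropy of a binary label multiset, as defined above.
from math import log2

def label_entropy(pos, neg):
    """Entropy E(S) of a multiset with `pos` positive and `neg` negative labels."""
    n = pos + neg
    total = 0.0
    for count in (pos, neg):
        if count:                      # 0 * log2(0) is taken as 0
            frac = count / n
            total -= frac * log2(frac)
    return total

print(label_entropy(6, 0))   # E({+,+,+,+,+,+}) = 0.0
print(label_entropy(3, 3))   # E({+,-,+,-,+,-}) = 1.0
print(label_entropy(3, 2))   # ~0.97, same as (600+, 400-): entropy is scale invariant
```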


What Not to Do: Use Entropy

[Figure: labeling quality vs. number of labels acquired, 0 to 2000 (waveform, p = 0.6); entropy-based selection vs. GRR]

Entropy-based selection improves at first, but hurts in the long run.

Why Not Entropy?

In the presence of noise, entropy will be high even with many labels.

Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).


Estimating Label Uncertainty (LU)

Observe the +'s and -'s and compute Pr{+ | obs} and Pr{- | obs}

Label uncertainty = tail of the Beta distribution

[Figure: Beta probability density function over [0.0, 1.0], with the tail at 0.5 marked S_LU]

Label Uncertainty (examples, p = 0.7)

– 5 labels, (3+, 2-): entropy ≈ 0.97, tail CDF = 0.34
– 10 labels, (7+, 3-): entropy ≈ 0.88, tail CDF = 0.11
– 20 labels, (14+, 6-): entropy ≈ 0.88, tail CDF = 0.04

As labels accumulate, entropy stays high, but the tail CDF (the label uncertainty) shrinks.
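The tail probabilities above can be reproduced with a Beta posterior under a uniform prior; a sketch, assuming SciPy is available:

```python
# Label uncertainty as the tail of the Beta posterior over the example's class,
# assuming a uniform Beta(1, 1) prior: the posterior is Beta(pos + 1, neg + 1).
from scipy.stats import beta

def label_uncertainty(pos, neg):
    """Posterior mass on the losing side of 0.5, i.e., the chance the majority label is wrong."""
    cdf_at_half = beta.cdf(0.5, pos + 1, neg + 1)
    return min(cdf_at_half, 1.0 - cdf_at_half)

print(round(label_uncertainty(3, 2), 2))    # 0.34
print(round(label_uncertainty(7, 3), 2))    # 0.11
print(round(label_uncertainty(14, 6), 2))   # 0.04
```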


Label Uncertainty vs. Round Robin

[Figure: labeling quality vs. number of labels acquired, 0 to 2000 (waveform, p = 0.6); LU vs. GRR]

Similar results across a dozen data sets.

Recall: Gen. Round Robin vs. Single Labeling

[Figure: accuracy vs. data acquisition cost (mushroom, p = 0.6); GRR vs. single labeling (SL); P = 0.6 is the labeling quality, k = 5 is the number of labels per example]

Multi-labeling is better than single labeling.

Label Uncertainty vs. Round Robin

[Figure (shown again): labeling quality vs. number of labels acquired (waveform, p = 0.6); LU vs. GRR]

Similar results across a dozen data sets.


Another strategy: Model Uncertainty (MU)

Learning a model of the data provides an alternative source of information about label certainty.

Model uncertainty: get more labels for instances that cannot be modeled well.

Intuition:
– for data quality: low-certainty "regions" may be due to incorrect labeling of the corresponding instances
– for modeling: why improve training-data quality where the model is already certain?

[Figure: clusters of + and - training instances, with two examples marked "?" in the low-certainty regions]


Yet another strategy: Label & Model Uncertainty (LMU)

Label and model uncertainty (LMU): avoid examples where either strategy is certain

$$S_{LMU} \;=\; \sqrt{S_{LU} \cdot S_{MU}}$$
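A sketch of how the scores might be combined. The model-uncertainty score below, based on a single classifier's predicted class probabilities, is a simplification (the paper averages over models learned by cross-validation); `label_uncertainty` is the helper from the earlier sketch and `clf` is any fitted scikit-learn classifier, both illustrative assumptions:

```python
# Combine label uncertainty (LU) and model uncertainty (MU) into the LMU score.
import numpy as np

def model_uncertainty(clf, x):
    """Higher when the model is less sure of its prediction for x."""
    probs = clf.predict_proba(np.asarray(x).reshape(1, -1))[0]
    return 1.0 - float(np.max(probs))   # 0 when certain; 0.5 is maximal for two classes

def lmu_score(pos, neg, clf, x):
    """Geometric mean of the label- and model-uncertainty scores."""
    return float(np.sqrt(label_uncertainty(pos, neg) * model_uncertainty(clf, x)))

# Selective repeated labeling then spends the next labels on the examples
# with the highest LMU scores.
```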

Comparison

[Figure: labeling quality vs. number of labels acquired, 0 to 2000 (waveform, p = 0.6); curves for GRR, Model Uncertainty (MU), Label Uncertainty (LU), and Label & Model Uncertainty (LMU)]

Model Uncertainty alone also improves quality.


Comparison: Model Quality

[Figure: classifier accuracy vs. number of labels acquired, 0 to 2000 (spambase, p = 0.6); curves for GRR, MU, LU, and Label & Model Uncertainty (LMU)]

Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.

Summary of results

Micro-task outsourcing (e.g., Mechanical Turk, Rent-a-Coder, the ESP game) has changed the landscape for data formulation

Repeated labeling can improve data quality and model quality (but not always)

When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap

When labels are relatively cheap, repeated labeling can do much better (omitted)

Round-robin repeated labeling can do well

Selective repeated labeling improves substantially


Opens up many new directions…

Strategies using “learning-curve gradient”

Estimating the quality of each labeler

Example-conditional quality

Increased compensation vs. labeler quality

Multiple “real” labels

Truly “soft” labels

Selective repeated tagging


Thanks!

Q & A?