Get Another Label? Improving Data Quality and Data Mining Using Multiple, Noisy Labelers
New York University Stern School
Victor Sheng, Foster Provost, Panos Ipeirotis
Outsourcing KDD preprocessing
Traditionally, data mining teams have invested substantial internal resources in data formulation, information extraction, cleaning, and other preprocessing
– "the best you can expect are noisy labels" (Raghu, from his Innovation Lecture)
Now, we can outsource preprocessing tasks, such as labeling, feature extraction, verifying information extraction, etc.
– using Mechanical Turk, Rent-a-Coder, etc.
– quality may be lower than expert labeling (perhaps much lower)
– but low costs can allow massive scale
The ideas may also apply to focusing user-generated tagging, crowdsourcing, etc.
ESP Game (by Luis von Ahn)
Other “free” labeling schemes
Open Mind initiative (www.openmind.org)
Other GWAP games
– Tag a Tune
– Verbosity (tag words)
– Matchin (image ranking)
Web 2.0 systems?
– Can/should tagging be directed?
Noisy labels can be problematic
Many tasks rely on high-quality labels for objects:
– learning predictive models
– searching for relevant information
– finding duplicate database records
– image recognition/labeling
– song categorization
Noisy labels can lead to degraded task performance
Quality and Classification Performance
[Figure: classifier accuracy vs. number of training examples (Mushroom), one curve per labeling quality P = 0.5, 0.6, 0.8, 1.0.]
As labeling quality increases, classification quality increases.
Here, labels are values for the target variable.
Summary of results
Repeated labeling can improve data quality and model quality (but not always)
When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
When labels are relatively cheap, repeated labeling can do much better (omitted)
Round-robin repeated labeling does well
Selective repeated labeling improves substantially
Majority Voting and Label Quality
[Figure: integrated quality vs. number of labelers (1 to 13), one curve per individual labeler quality P = 0.4 through 1.0.]
Ask multiple labelers; keep the majority label as the "true" label.
Quality is the probability of being correct.
P is the probability of an individual labeler being correct.
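The integrated-quality curve for an odd number of labelers can be computed directly. Below is a minimal sketch (mine, not from the talk) assuming independent labelers, each correct with probability p; the majority is correct exactly when more than half of the individual labels are:

```python
from math import comb

def majority_quality(p: float, n: int) -> float:
    """Probability that the majority vote of n independent labelers,
    each correct with probability p, yields the correct label.
    n is assumed odd, so ties cannot occur."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

# With p = 0.7: 1 labeler -> 0.70, 5 -> 0.84, 11 -> 0.92
for n in (1, 5, 11):
    print(n, round(majority_quality(0.7, n), 2))
```

The same sum also explains the P < 0.5 curve: when individual labelers are worse than chance, adding labelers drives the integrated quality toward 0.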
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Figure: the same accuracy vs. number-of-examples plot (Mushroom) as above, for P = 0.5, 0.6, 0.8, 1.0.]
Basic Labeling Strategies
Single Labeling
– get as many data points as possible, one label each
Round-robin Repeated Labeling
– Fixed Round Robin (FRR): keep labeling the same set of points
– Generalized Round Robin (GRR): repeatedly label data points, giving the next label to the point with the fewest labels so far (sketched below)
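As a concrete reading of GRR, here is a minimal sketch (my illustration; names are hypothetical) that always routes the next label to an example with the fewest labels acquired so far:

```python
import heapq

def grr_order(examples, budget):
    """Generalized Round Robin: yield the example to label next,
    always picking one with the fewest labels acquired so far."""
    heap = [(0, i, ex) for i, ex in enumerate(examples)]  # (label count, tiebreak, example)
    heapq.heapify(heap)
    for _ in range(budget):
        count, i, ex = heapq.heappop(heap)
        yield ex
        heapq.heappush(heap, (count + 1, i, ex))
```

On a fresh dataset this gives every example one label before any example gets a second, which is exactly the round-robin behavior described above.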
Fixed Round Robin vs. Single Labeling
p = 0.6 (labeling quality), #examples = 100
[Figure: learning curves for FRR (100 examples) vs. single labeling (SL).]
With high noise, repeated labeling is better than single labeling.
Fixed Round Robin vs. Single Labeling
p = 0.8 (labeling quality), #examples = 50
[Figure: learning curves for FRR (50 examples) vs. single labeling.]
With low noise, more (single-labeled) examples are better.
Gen. Round Robin vs. Single Labeling
[Figure: accuracy vs. data acquisition cost (mushroom, p = 0.6) for GRR vs. SL.]
P = 0.6, k = 5 (P: labeling quality, k: #labels)
Repeated labeling is better than single labeling.
Tradeoffs for Modeling
Get more labels → improve label quality → improve classification
Get more examples → improve classification
[Figure: the same accuracy vs. number-of-examples plot (Mushroom) as above, for P = 0.5, 0.6, 0.8, 1.0.]
Selective Repeated-Labeling
We have seen:
– with enough examples and noisy labels, getting multiple labels is better than single labeling
– when we consider costly preprocessing, the benefit is magnified (omitted; see paper)
Can we do better than the basic strategies?
Key observation: we have additional information to guide selection of data for repeated labeling
– the current multiset of labels
Example: {+,-,+,+,-,+} vs. {+,+,+,+}
Natural Candidate: Entropy
Entropy is a natural measure of label uncertainty:
E({+,+,+,+,+,+}) = 0   E({+,-,+,-,+,-}) = 1
Strategy: Get more labels for examples with high-entropy label multisets
$E(S) = -\frac{|S_+|}{|S|}\log_2\frac{|S_+|}{|S|} - \frac{|S_-|}{|S|}\log_2\frac{|S_-|}{|S|}$
where $S_+$ and $S_-$ are the positive and negative labels in the multiset $S$.
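A one-function check (my sketch) reproduces the two example values above:

```python
from math import log2

def label_entropy(pos: int, neg: int) -> float:
    """Entropy of a label multiset with pos positive and neg negative labels."""
    total = pos + neg
    e = 0.0
    for count in (pos, neg):
        if count:
            e -= (count / total) * log2(count / total)
    return e

print(label_entropy(6, 0))  # E({+,+,+,+,+,+}) = 0.0
print(label_entropy(3, 3))  # E({+,-,+,-,+,-}) = 1.0
```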
What Not to Do: Use Entropy
[Figure: labeling quality vs. number of labels (waveform, p = 0.6), ENTROPY vs. GRR.]
Improves at first, hurts in long run
Why not entropy?
In the presence of noise, entropy will be high even with many labels.
Entropy is scale invariant: (3+, 2-) has the same entropy as (600+, 400-).
Estimating Label Uncertainty (LU)
Observe +’s and –’s and compute Pr{+|obs} and Pr{-|obs}
Label uncertainty = tail of beta distribution
[Figure: Beta probability density function on [0, 1]; the shaded tail below 0.5 is the label-uncertainty score S_LU.]
Label Uncertainty
p = 0.7, 5 labels
(3+, 2-): entropy ≈ 0.97, CDF = 0.34
Label Uncertainty
p = 0.7, 10 labels
(7+, 3-): entropy ≈ 0.88, CDF = 0.11
Label Uncertainty
p = 0.7, 20 labels
(14+, 6-): entropy ≈ 0.88, CDF = 0.04
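The CDF values on the three slides above can be reproduced with a Beta posterior. This is my reading of the computation, assuming a uniform Beta(1, 1) prior: observing (pos, neg) labels gives a Beta(pos + 1, neg + 1) posterior over the probability that the true label is positive, and the uncertainty score is the tail on the minority side of 0.5:

```python
from scipy.stats import beta

def label_uncertainty(pos: int, neg: int) -> float:
    """Posterior probability that the majority label is wrong:
    the Beta(pos + 1, neg + 1) tail on the minority side of 0.5."""
    posterior = beta(pos + 1, neg + 1)
    return min(posterior.cdf(0.5), posterior.sf(0.5))

print(round(label_uncertainty(3, 2), 2))   # (3+, 2-)  -> 0.34
print(round(label_uncertainty(7, 3), 2))   # (7+, 3-)  -> 0.11
print(round(label_uncertainty(14, 6), 2))  # (14+, 6-) -> 0.04
```

Unlike entropy, this score keeps shrinking as consistent labels accumulate, which is exactly the behavior the previous slide asked for.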
Label Uncertainty vs. Round Robin
[Figure: labeling quality vs. number of labels (waveform, p = 0.6), GRR vs. LU.]
Similar results across a dozen data sets.
Recall: Gen. Round Robin vs. Single Labeling
[Figure: accuracy vs. data acquisition cost (mushroom, p = 0.6) for GRR vs. SL.]
P = 0.6, k = 5 (P: labeling quality, k: #labels)
Multi-labeling is better than single labeling.
Label Uncertainty vs. Round Robin
[Figure: the same labeling-quality plot (waveform, p = 0.6), GRR vs. LU, shown again.]
Similar results across a dozen data sets.
Another strategy: Model Uncertainty (MU)
Learning a model of the data provides an alternative source of information about label certainty
Model uncertainty: get more labels for instances that cannot be modeled well
Intuition?
– for data quality: low-certainty "regions" may be due to incorrect labeling of the corresponding instances
– for modeling: why improve training-data quality if the model is already certain there?
[Figure: a cloud of "+" and "-" training examples; "?" marks low-certainty regions where the model cannot classify confidently.]
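The slide leaves the scorer unspecified; one plausible sketch (mine, using scikit-learn as a stand-in for whatever learner is actually used) scores each training example by how close the model's predicted class probability is to 0.5:

```python
from sklearn.ensemble import RandomForestClassifier

def model_uncertainty(X, y):
    """Score each training example by the model's own uncertainty:
    1.0 where predicted P(+) is 0.5, 0.0 where it is 0 or 1."""
    clf = RandomForestClassifier(n_estimators=100).fit(X, y)
    p_pos = clf.predict_proba(X)[:, 1]   # predicted P(class = +) per example
    return 1.0 - 2.0 * abs(p_pos - 0.5)
```

In practice one would want out-of-sample probability estimates (e.g., bagged or cross-validated) rather than scoring on the training fit, which tends to be overconfident.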
Yet another strategy: Label & Model Uncertainty (LMU)
Label and model uncertainty (LMU): avoid examples where either strategy is certain
$S_{LMU} = \sqrt{S_{LU} \cdot S_{MU}}$
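Combining the two scores and selecting examples for relabeling is then short; the function names below are mine:

```python
from math import sqrt

def lmu_score(s_lu: float, s_mu: float) -> float:
    """Geometric mean: small whenever either score signals certainty."""
    return sqrt(s_lu * s_mu)

def pick_for_relabeling(scores_lu, scores_mu, k):
    """Indices of the k examples with the highest combined uncertainty."""
    combined = [lmu_score(a, b) for a, b in zip(scores_lu, scores_mu)]
    return sorted(range(len(combined)), key=combined.__getitem__,
                  reverse=True)[:k]
```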
Comparison
[Figure: labeling quality vs. number of labels (waveform, p = 0.6) for GRR, MU, LU, and LMU; Label & Model Uncertainty performs best, Label Uncertainty next, then GRR.]
Model Uncertainty alone also improves quality.
Comparison: Model Quality
[Figure: accuracy vs. number of labels (spambase, p = 0.6) for GRR, MU, LU, and LMU; Label & Model Uncertainty is best.]
Across 12 domains, LMU is always better than GRR. LMU is statistically significantly better than LU and MU.
Summary of results
Micro-task outsourcing (e.g., Mechanical Turk, Rent-a-Coder, the ESP Game) has changed the landscape for data formulation
Repeated labeling can improve data quality and model quality (but not always)
When labels are noisy, repeated labeling can be preferable to single labeling even when labels aren’t particularly cheap
When labels are relatively cheap, repeated labeling can do much better (omitted)
Round-robin repeated labeling can do well
Selective repeated labeling improves substantially
Opens up many new directions…
Strategies using “learning-curve gradient”
Estimating the quality of each labeler
Example-conditional quality
Increased compensation vs. labeler quality
Multiple “real” labels
Truly “soft” labels
Selective repeated tagging
Thanks!
Q & A?