
Page 1

Algorithmic Fairness in Machine Learning

Thursday, April 4, 2019
CompSci 216: Everything Data

Page 2

Human Decision Making

Data

Jane likes Bollywood musicals.

Decision Maker

Bob

Decision

Bob: “You should watch Les Miserables, it’s also a musical!”

Jane: “Nice try, Bob, but you clearly don’t understand how to generalize from your prior experience.”

Suppose we want to recommend a movie.

Page 3

Human Decision Making

Data

Jane is a woman.

Decision Maker

Bob

Decision

Or even worse:

Bob: “I bet you’d like one of these dumb women’s movies.”

Jane: “Actually Bob, that’s a sexist recommendation that doesn’t reflect well on you as a person or your understanding of cinema.”

Page 4

What if we use machine learning algorithms instead? They will generalize well and be less biased, right?

Page 5

Algorithmic Decision Making

Data

Netflix database, Jane’s watch history

Decision Maker

Decision

“A blackbox collaborative filtering algorithm suggests you would like this movie.”

Jane: “Wow Netflix, that was a great recommendation, and you didn’t negatively stereotype me in order to generalize from your data!”

Page 6

Problem solved! Right?

Page 7

Machine Bias
"There's software used across the country to predict future criminals. And it's biased against blacks."
by Julia Angwin, Jeff Larson, Surya Mattu and Lauren Kirchner, ProPublica, May 23, 2016

Bernard Parker, left, was rated high risk; Dylan Fugett was rated low risk. (Josh Ritchie for ProPublica)

Page 8

Recidivism Prediction

• In many parts of the U.S., when someone is arrested and accused of a crime, a judge decides whether to grant bail.

• In practice, this decides whether a defendant gets to wait for their trial at home or in jail.

• Judges are allowed or even encouraged to make this decision based on how likely a defendant is to re-commit crimes, i.e., recidivate.

Page 9

Recidivism Prediction

Data

Criminal history of defendant (and others)

Decision Maker

Decision

High risk of recommitting a crime → Do not grant bail.

Low risk of recommitting a crime → Grant bail.

Software used in practice tends to predict that African Americans will recidivate at much higher rates than White Americans.

How do we quantify and correct for that?

Page 10

Outline

• Introduction to Algorithmic Decision Making
• Review of Supervised Learning and Prediction: Binary Classification
• Disparate Impact
  • Defining Disparate Impact
  • Certifying and Removing Disparate Impact (Feldman et al., KDD 2015)
  • Limitations and other approaches

Page 11

Binary Classification

• Suppose we want a cat classifier. We need labeled training data.

[Training images omitted: three labeled "= cat" and one labeled "!= cat".]

Page 12

Binary Classification

• We learn a binary classifier, which is a function f from the input space (pictures, for example) to a binary class (e.g., 1 or 0).
• To classify a new data point, apply the function to make a prediction.

Ideally, when we apply f to a new cat image, we get:

• f(new cat image) = 1.

Page 13

Binary Classification

• More generally… We are given training data: a matrix where every row represents a data point and every column is a feature, along with the true target value for every data point.

• What we "learn" is a function from the feature space to the prediction target. E.g., if there are m features, the feature space might be ℝ^m, in which case a binary classifier is a function

  f : ℝ^m → {0, 1}.
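For intuition, one concrete instance of such a function is a linear threshold rule; a toy sketch (the weights and the example point below are made up):

import numpy as np

w = np.array([0.4, -1.2, 2.0])    # one weight per feature (here m = 3)
b = 0.5                           # bias term

def f(x):
    # Map a feature vector in R^m to {0, 1} by thresholding a linear score.
    return int(x @ w + b > 0)

print(f(np.array([1.0, 0.3, 0.1])))   # score 0.74 > 0, so the prediction is 1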

Page 14

Support Vector Machines

• A support vector machine (with a linear kernel) just learns a linear function of the feature variables.

• In other words, it defines a hyperplane in the feature space, mapping points on one side to 0 and the other side to 1.

• It chooses the hyperplane that minimizes the total hinge loss over the training points: for a point x with label y coded as ±1, the loss is max(0, 1 − y·(w·x + b)), which is zero when the point is on the correct side with a margin and grows as it falls on the wrong side.
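A rough sketch of what this looks like in code (assuming scikit-learn is available; the toy GPA/SAT numbers are invented):

from sklearn.svm import LinearSVC
import numpy as np

# Toy training data: each row is a data point, each column a feature (GPA, SAT).
X_train = np.array([[3.5, 1400.0], [3.8, 1300.0], [3.3, 1500.0], [4.0, 1600.0]])
y_train = np.array([1, 0, 0, 1])

clf = LinearSVC()             # linear kernel: learns a separating hyperplane
clf.fit(X_train, y_train)     # fits by minimizing (regularized) hinge loss
print(clf.predict(np.array([[3.6, 1450.0]])))   # classify a new point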

Page 15

https://en.wikipedia.org/wiki/Support_vector_machine

Page 16

Justice

• Fact: sometimes we make errors in prediction. So what?
• When prediction is used for decision making, it impacts the lives of real people. For example:
  • Recidivism prediction for granting bail
  • Predicting creditworthiness to give loans
  • Predicting success in school/job to decide on admission/hiring
• Big question of justice: are people being treated as they deserve?
  • ("Justice is the constant and perpetual wish to render every one their due." – Corpus Juris Civilis, Codex Justinianus, 534.)
• This seems hard. Potentially any error is an injustice to that person.

Page 17

Fairness

• Smaller question of fairness: are people being treated equally?
  • Is our classifier working as well for black cats as white cats?

• Accompanying question: what is the relevant sense of “treated equally?”

Page 18

Outline

• Introduction to Algorithmic Decision Making
• Review of Supervised Learning and Prediction: Binary Classification
• Disparate Impact
  • Defining Disparate Impact
  • Certifying and Removing Disparate Impact (Feldman et al., KDD 2015)
  • Limitations and other approaches

Page 19

Disparate Impact

• Suppose we are contracted by Duke admissions to build a machine learning classifier that predicts whether students will succeed in college. For simplicity, assume we admit students who will succeed.

Gender  Age  GPA  SAT   Succeed
0       19   3.5  1400  1
1       18   3.8  1300  0
1       22   3.3  1500  0
1       18   3.5  1500  1
…       …    …    …     …
0       18   4.0  1600  1

Page 20

Disparate Impact

• Let D = (X, Y, C) be a labeled data set, where X = 0 means protected, C = 1 is the positive class, and Y is everything else.
• We say that a classifier f has disparate impact (DI) of τ (0 < τ < 1) if:

  Pr(f(Y) = 1 | X = 0) / Pr(f(Y) = 1 | X = 1) ≤ τ

that is, if the protected class is positively classified less than τ times as often as the unprotected class. (Legally, τ = 0.8 is common.)
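A direct way to measure this on a set of predictions is just to compare positive-prediction rates; a small sketch (the function name is made up, x and y_pred are binary arrays, and the toy numbers match the admissions example two slides later):

import numpy as np

def disparate_impact(x, y_pred):
    # Ratio of positive-prediction rates: protected group (x == 0) over unprotected (x == 1).
    return y_pred[x == 0].mean() / y_pred[x == 1].mean()

x = np.array([0, 0, 1, 1, 1, 1])          # protected attribute
y_pred = np.array([1, 0, 1, 1, 1, 1])     # the "SAT >= 1400" classifier's output
print(disparate_impact(x, y_pred))        # 0.5, which fails the 0.8 rule of thumb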

Page 21

Disparate Impact

• Arguably this is the only good measure if you think the data are biased and you have a strong prior belief that protected status is uncorrelated with outcomes.

• "In Griggs v. Duke Power Co. [20], the US Supreme Court ruled a business hiring decision illegal if it resulted in disparate impact by race even if the decision was not explicitly determined based on race. The Duke Power Co. was forced to stop using intelligence test scores and high school diplomas, qualifications largely correlated with race, to make hiring decisions. The Griggs decision gave birth to the legal doctrine of disparate impact..." (Feldman et al., KDD 2015).

Page 22

Disparate Impact

• Suppose our ground truth data looks like this.

• The first thought is that we shouldn’t use the gender bit to train our model. Suppose we learn a classifier that predicts success if SAT ≥ 1400.

Gender  SAT   Succeed
0       1400  1
0       1300  0
1       1400  0
1       1400  0
1       1500  1
1       1500  1

• But then we would admit 50% of Gender 0 and 100% of Gender 1, giving a disparate impact of 0.5.

• Note that this holds even though both genders succeed at the same rate (50%) in our training data!

Page 23

Disparate Impact

• What if data generation itself is biased?
• Consider the recidivism context. What if police are more likely to detect recidivism for one population? Let
  • C = 1 mean recidivate (commit a crime after release), and C = 0 otherwise.
  • X = 1 mean race 1, and X = 0 mean race 0.
  • Z = 1 if police detect the crime, and Z = 0 otherwise.
• Suppose
  • Pr(C=1 | X=1) = Pr(C=1 | X=0) = 0.5, but
  • Pr(Z=1 | X=1, C=1) = 0.5, whereas Pr(Z=1 | X=0, C=1) = 1.
• Remember, we only see Z in our data set. So the data set itself could show disparate impact (again, even when the underlying phenomenon of interest doesn't).
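A quick simulation of this hypothetical (a sketch; the probabilities are the ones in the bullets above, and the variable names are made up):

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

x = rng.integers(0, 2, size=n)                 # race indicator
c = rng.random(n) < 0.5                        # true recidivism: 50% in both groups
detect_prob = np.where(x == 1, 0.5, 1.0)       # police detect the crime 50% vs. 100% of the time
z = c & (rng.random(n) < detect_prob)          # what ends up in the data set

print(z[x == 0].mean())   # about 0.50: observed rate for X = 0
print(z[x == 1].mean())   # about 0.25: observed rate for X = 1, despite identical true rates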

Page 24

Disparate Impact

• Does this actually happen?
• The data at right counts the number of traffic stops (to check for contraband) by race.
  • The highlighted circle is Raleigh.
• What would happen if we use this data to build a model for predicting which cars police should stop?

Data from Stanford Open Policing Project at https://openpolicing.stanford.edu/findings/

Page 25

Outline

• Introduction to Algorithmic Decision Making
• Review of Supervised Learning and Prediction: Binary Classification
• Disparate Impact
  • Defining Disparate Impact
  • Certifying and Removing Disparate Impact (Feldman et al., KDD 2015)
  • Limitations and other approaches

Page 26

Certifying Disparate Impact

• Suppose you are given D = (X, Y, C).

• Can we verify that a new classifier learned on Y aiming to predict C will not have disparate impact with respect to X?

• Big idea: A classifier learned from Y will not have disparate impact if X cannot be predicted from Y.

• Therefore, we can check a data set itself for possible problems, even without knowing what algorithm will be used.

Page 27

Certifying Disparate Impact – Definitions

• Balanced Error Rate: Let g: Y → X be a predictor of the protected class. Then the balanced error rate is defined as

  BER(g(Y), X) = [ Pr(g(Y) = 0 | X = 1) + Pr(g(Y) = 1 | X = 0) ] / 2

• Predictability: D is ε-predictable if there exists g: Y → X with BER(g(Y), X) ≤ ε.
• Theorem (simplified). If D = (X, Y, C) admits a classifier f with disparate impact 0.8, then D is (1/2 − β/8)-predictable, where β = Pr(f(Y) = 1 | X = 0).
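So certifying a data set amounts to trying to predict X from Y and checking the balanced error rate of the best predictor you can find; a sketch of the metric itself (the function name is made up; x_true is the protected attribute, x_pred is some model's attempt to predict it from Y):

import numpy as np

def balanced_error_rate(x_true, x_pred):
    # BER(g(Y), X) = ( Pr[g(Y)=0 | X=1] + Pr[g(Y)=1 | X=0] ) / 2
    err_on_x1 = (x_pred[x_true == 1] == 0).mean()
    err_on_x0 = (x_pred[x_true == 0] == 1).mean()
    return (err_on_x1 + err_on_x0) / 2

x_true = np.array([0, 0, 1, 1, 1, 1])
x_pred = np.array([0, 1, 1, 1, 1, 0])          # output of some predictor trained on Y
print(balanced_error_rate(x_true, x_pred))     # (0.25 + 0.5) / 2 = 0.375, i.e., better than chance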

Page 28

Removing Disparate Impact

• Suppose we find that X and Y do admit disparate impact. What do we do about it?
• Can we define a "repair" protocol that corrects the problems with a data set?
• We want to change D so that it is no longer predictable. How can we do this?
• Formally, given (X, Y), we want to construct a repaired data set (X, Ȳ) such that for all g: Ȳ → X, BER(g(Ȳ), X) > ε, where ε depends on the strength of guarantee we want.

Page 29

Removing Disparate Impact

• For simplicity, suppose that Y is a single well-ordered numerical attribute like SAT score.
• Claim. Perfect repair is always possible.
• Proof. Just set Y to 0 for every individual.
  • Recall that BER(g(Y), X) = [ Pr(g(Y) = 0 | X = 1) + Pr(g(Y) = 1 | X = 0) ] / 2.
  • On the repaired data, any predictor of X is a constant function of Y, so its balanced error rate is exactly ½, i.e., no better than random guessing. □

Page 30

Removing Disparate Impact

• We would like a smarter way, one that preserves the ability to classify accurately.

• More specifically, we want to transform Y in a way that preserves rankings within the protected group and within the nonprotected group (but not necessarily across).

• Ideally, this leads to a smooth transformation that still allows us to perform reasonably accurate classification. How?

Page 31

Removing Disparate Impact

• Assume we have a single well-ordered numerical attribute and that the protected and unprotected groups have equal size.
• Algorithm.

1. Let F_x(y) be the percentage of agents with protected status x whose numerical score is at most y.

2. Take a data point (x_i, y_i). Calculate F_{x_i}(y_i).

3. Find y_i′ such that F_{1−x_i}(y_i′) = F_{x_i}(y_i).

4. Repair: ȳ_i = median(y_i, y_i′).

• In a picture…

Page 32

Removing Disparate Impact

Page 33

Removing Disparate Impact

• Here is an equivalent formulation that works when we have potentially different group sizes.
• Let S_0 and S_1 be the data points with X = 0 and X = 1 respectively. Suppose |S_0| = k·|S_1| for some integer k.
• Algorithm.

1. Sort S_0 and S_1 in non-decreasing order of the attribute y.
2. For i = 0 to |S_1| − 1:
   • B_i = { S_1[i].y, S_0[k·i].y, S_0[k·i + 1].y, …, S_0[k·i + k − 1].y }
   • ȳ_i = median(B_i)
   • Set y to ȳ_i for all points in B_i

Page 34

Removing Disparate Impact

• Let's see this on an example.

Gender  SAT   Succeed
0       1400  1
0       1300  0
1       1400  0
1       1400  0
1       1500  1
1       1500  1

Gender 0, sorted by SAT:

Gender  SAT   Succeed
0       1300  0
0       1400  1

Gender 1, sorted by SAT:

Gender  SAT   Succeed
1       1400  0
1       1400  0
1       1500  1
1       1500  1

Page 35

Removing Disparate Impact

• Let's see this on an example.

Gender  SAT   Succeed
0       1400  1
0       1300  0
1       1400  0
1       1400  0
1       1500  1
1       1500  1

Gender 0, after repair:

Gender  SAT   Succeed
0       1400  0
0       1500  1

Gender 1, after repair (unchanged):

Gender  SAT   Succeed
1       1400  0
1       1400  0
1       1500  1
1       1500  1

• Note that on this repaired data set, the simple classifier that predicts success for SAT at least 1500 is perfectly correct.

• On the unrepaired data set, no classifier could be 100% accurate using only SAT information.
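A small sketch of the bucketed median repair for a single numeric column (the function name is made up; on this toy data it reproduces the repaired SAT values shown above):

import numpy as np

def repair_single_attribute(x, y):
    # Bucketed median repair of one numeric column y w.r.t. a binary protected attribute x.
    # Assumes the larger group's size is an integer multiple of the smaller group's size.
    y = y.astype(float).copy()
    small, large = sorted((0, 1), key=lambda g: (x == g).sum())
    idx_small = np.where(x == small)[0][np.argsort(y[x == small])]   # smaller group, sorted by y
    idx_large = np.where(x == large)[0][np.argsort(y[x == large])]   # larger group, sorted by y
    k = len(idx_large) // len(idx_small)                             # group-size ratio
    for i, si in enumerate(idx_small):
        bucket = np.concatenate(([si], idx_large[k * i: k * (i + 1)]))
        y[bucket] = np.median(y[bucket])        # everyone in the bucket gets the bucket's median
    return y

x = np.array([0, 0, 1, 1, 1, 1])
sat = np.array([1400, 1300, 1400, 1400, 1500, 1500])
print(repair_single_attribute(x, sat))          # [1500. 1400. 1400. 1400. 1500. 1500.]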

Page 36

Removing Disparate Impact

• If Y is more than just one attribute, Feldman et al. repair each attribute individually.

• The same basic idea can be extended to a partial repair algorithm that still allows some disparate impact but modifies the data less.

• Are there reasons not to do this?

Page 37

Outline

• Introduction to Algorithmic Decision Making
• Review of Supervised Learning and Prediction: Binary Classification
• Disparate Impact
  • Defining Disparate Impact
  • Certifying and Removing Disparate Impact (Feldman et al., KDD 2015)
  • Limitations and other approaches

Page 38

Disparate Impact – Limitations

• Typically forbids the "perfect" classifier: if the two groups' true positive-class rates differ enough, even the classifier that predicts the ground truth exactly has disparate impact.
• Allows "laziness." For example, here is a disparate-impact-free classifier:
  • Accept the top 50% (by SAT score) of men who apply.
  • Accept a random sample of 50% of the women who apply.
• Arguably this is a biased classifier, but it doesn't have disparate impact.
• It also assumes that there is not a fundamental difference between the two groups. If that assumption isn't true, disparate impact might not make sense, and could be viewed as "anti-meritocratic."

Page 39

Disparate Impact - Limitations

• Consider an example where disparate impact might not be the right notion: predicting default on a loan (for simplicity, say for a given amount of $200,000).
• There is some latent value C = 1 if an applicant will default, and C = 0 otherwise. We want to build a classifier f(Y) where Y is the credit history, income, wealth, etc. of the individual.
• Wealth is not equally distributed by race, so it's quite likely that f(Y) will predict higher rates of default among some minority groups.
• We could repair the data to remove disparate impact. Should we?

Page 40

Disparate Impact - Limitations

• Maybe not. Note that in the loan setting (reading a "positive" as being approved for the loan), false positives are pretty costly (you could lose your home!).
• Of course, so are false negatives (you might not be able to buy a home!).
• Removing disparate impact could give a lot more false positives to the minority group, and false negatives to the majority group: everyone loses.
• (Caveat: these are subtle and complicated issues that cannot be answered on a slide.)

Page 41

Another Approach: Equality of Opportunity

• Is there another notion that might be more appropriate here?
• Let D = (X, Y, C) be a labeled data set, where X = 0 means protected, C = 1 is the positive class, and Y is everything else.
• Classifier f satisfies true positive parity if

  Pr(f(Y) = 1 | X = 0, C = 1) = Pr(f(Y) = 1 | X = 1, C = 1)

• Pros:
  • Allows the "perfect" classifier
  • Captures a meritocratic ideal
  • Does not assume that the true class (C) and protected group (X) are independent.
• Cons:
  • No guarantees if your data are biased
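One way to check this on held-out data is to compare the groups' true positive rates directly; a sketch (all names are made up, and x, c, y_pred are binary arrays):

import numpy as np

def true_positive_rate(x, c, y_pred, group):
    # Pr[ f(Y)=1 | X=group, C=1 ]: among group members who truly belong to the
    # positive class, the fraction the classifier accepts.
    mask = (x == group) & (c == 1)
    return y_pred[mask].mean()

x = np.array([0, 0, 0, 1, 1, 1])
c = np.array([1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1])
print(true_positive_rate(x, c, y_pred, 0))   # 0.5
print(true_positive_rate(x, c, y_pred, 1))   # 1.0, so true positive parity fails here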

Page 42

Another Approach: Predictive Value Parity

• Yet another concern: if the decision maker knows we are trying to be fair, does she have an incentive to take race, sex, etc. into account?
• A classifier f satisfies positive predictive value parity if

  Pr(C = 1 | X = 0, f(Y) = 1) = Pr(C = 1 | X = 1, f(Y) = 1)

• Pros:
  • Also allows the perfect classifier
  • Provides proper incentives for the decision maker to use the classifier without rebalancing based on the protected attribute
• Cons:
  • Still no guarantees if your data are biased
  • Feels like a minimal requirement, not a fairness guarantee
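Positive predictive value parity conditions the other way around, on the prediction rather than the ground truth; a parallel sketch with the same kind of toy arrays:

import numpy as np

def positive_predictive_value(x, c, y_pred, group):
    # Pr[ C=1 | X=group, f(Y)=1 ]: among group members the classifier accepts,
    # the fraction that truly belong to the positive class.
    mask = (x == group) & (y_pred == 1)
    return c[mask].mean()

x = np.array([0, 0, 0, 1, 1, 1])
c = np.array([1, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 1, 1])
print(positive_predictive_value(x, c, y_pred, 0))   # 1.0
print(positive_predictive_value(x, c, y_pred, 1))   # about 0.67, so predictive value parity fails here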

Page 43

Incompatibility

• Suppose that Pr(C = 1 | X = 0) ≠ Pr(C = 1 | X = 1) and that we cannot compute the "perfect" classifier.

• Claim. No two of the following can be guaranteed simultaneously:
  • Disparate Impact of 1
  • True Positive Parity
  • Positive Predictive Value Parity

• There are real tradeoffs here!

Page 44

Survey of Approaches to Fairness in Supervised Learning

• Group Fairness: Statistical Parity
  • Disparate Impact: We should make predictions at the same rate for both groups.
  • Equality of Opportunity: We should make predictions at the same rate for both groups, conditioned on the ground truth.
  • Predictive Value Parity: Of those we predicted as 1, the same fraction should really be 1's (ground truth) for both groups.
• Individual Fairness
  • Fairness Through Awareness: Similar individuals should be treated similarly.
• Causal Inference
  • We should make the same prediction in a counterfactual world where the group membership is flipped.

Page 45

Some High Level Takeaways

• Algorithmic decision making isn't necessarily fairer or less biased than human decision making,

• Especially when humans generate the data,

• And there are fundamentally different ways of construing fairness.

• The most important thing for a data scientist is to be aware of these issues and watch out for them: there is no one solution.