
Page 1:

Bayes Classifiers II: More Examples

CAP5610 Machine Learning

Instructor: Guo-Jun QI

Page 2:

Recap: Bayes Classifier

• MAP rule: given an input feature vector X=(X1,X2,…,XN) with N attributes, the optimal binary prediction of its class Y is made by

$Y^* = \arg\max_{Y \in \{0,1\}} P(Y \mid X)$

• Bayes rule: $P(Y \mid X) \propto P(X \mid Y)\,P(Y)$, where

• P(X|Y) is the class-conditional distribution for class Y, and

• P(Y) is the prior distribution.
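
To make the rule concrete, here is a minimal Python sketch (not from the slides) of MAP prediction; the `likelihoods` callables and `priors` array are hypothetical stand-ins for whatever model supplies P(X|Y) and P(Y):

```python
import numpy as np
from scipy.stats import norm

def map_predict(x, likelihoods, priors):
    """MAP rule: Y* = argmax_y P(Y=y | x), computed via Bayes rule.

    likelihoods: list of callables, likelihoods[y](x) returns P(x | Y=y)
    priors:      array with priors[y] = P(Y=y)
    """
    # P(Y=y | x) is proportional to P(x | y) P(y); the evidence P(x) is
    # the same for every class, so it can be dropped from the argmax.
    scores = np.array([lik(x) * p for lik, p in zip(likelihoods, priors)])
    return int(np.argmax(scores))

# Toy binary example with Gaussian class-conditionals:
y_hat = map_predict(0.3, [norm(-1, 1).pdf, norm(1, 1).pdf], np.array([0.5, 0.5]))
```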

Page 3:

Bayes error

• Two types of prediction error

$$P(\text{error} \mid X) =
\begin{cases}
P(Y = C_1 \mid X), & \text{if } P(Y = C_2 \mid X) > P(Y = C_1 \mid X) \\
P(Y = C_2 \mid X), & \text{if } P(Y = C_1 \mid X) > P(Y = C_2 \mid X)
\end{cases}
= \min\{P(Y = C_1 \mid X),\, P(Y = C_2 \mid X)\}$$

• An example with one-dimensional X (a numeric sketch follows below)
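
As a concrete sketch (assumed parameters, not from the slides), the Bayes error for two one-dimensional Gaussian classes can be computed numerically as the area under the smaller of the two joint densities:

```python
import numpy as np
from scipy.stats import norm

# Hypothetical one-dimensional example: two Gaussian classes, equal priors.
p1, p2 = 0.5, 0.5
x = np.linspace(-10.0, 10.0, 200_001)
dx = x[1] - x[0]
joint1 = p1 * norm.pdf(x, loc=-1.0, scale=1.0)  # P(Y=C1) P(x | Y=C1)
joint2 = p2 * norm.pdf(x, loc=+1.0, scale=1.0)  # P(Y=C2) P(x | Y=C2)

# Bayes error = integral over x of min{P(Y=C1|x), P(Y=C2|x)} p(x),
# i.e., the area under the smaller of the two joint densities.
bayes_error = np.sum(np.minimum(joint1, joint2)) * dx
print(f"Bayes error ~= {bayes_error:.4f}")  # analytic value here: Phi(-1) ~= 0.1587
```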

Page 4:

Bayes error

• The shaded area under the curve corresponds to the prediction errors incurred by the Bayes classifier

Page 5:

Optimality

• The Bayes classifier is the optimal classifier: it achieves the smallest possible prediction error.

• The prediction error of the nearest-neighbor (NN) classifier is asymptotically upper bounded by twice the Bayes error (i.e., when the training set is very large).

Page 6:

Decision region and boundary

• The log ratio of posteriors divides the feature space into two regions at the threshold 0.

[Figure: feature space with axes $X_1$ and $X_2$, split by the decision boundary into region $R_1$, where $\log \frac{P(Y=1 \mid X)}{P(Y=2 \mid X)} > 0$, and region $R_2$, where $\log \frac{P(Y=1 \mid X)}{P(Y=2 \mid X)} < 0$.]

Page 7:

The boundary

• is determined by the equation:

$$\log \frac{P(Y = 1 \mid X)}{P(Y = 2 \mid X)} = \log P(Y = 1 \mid X) - \log P(Y = 2 \mid X) = 0$$

• Discriminant function: $f(X) = \log P(Y = 1 \mid X) - \log P(Y = 2 \mid X)$

Page 8:

Examples of binary decision boundary

• Gaussian class-conditional density (one-dimensional X): $P(X \mid Y = y) = \frac{1}{\sqrt{2\pi}\,\sigma_y} \exp\left(-\frac{(X - \mu_y)^2}{2\sigma_y^2}\right)$, with $y \in \{1, 2\}$.

Page 9:

High dimensional case

• Gaussian class-conditional density in high-dimensional space

• Decision boundary equation:

• A quadratic surface in high-dimensional space

$$f(X) = \log \frac{P(Y = 1)\,P(X \mid Y = 1)}{P(Y = 2)\,P(X \mid Y = 2)}
= \frac{-(X - \mu_1)\Sigma_1^{-1}(X - \mu_1)' + (X - \mu_2)\Sigma_2^{-1}(X - \mu_2)'}{2}
+ \frac{1}{2}\log \frac{|\Sigma_2|}{|\Sigma_1|} + \log \frac{P(Y = 1)}{P(Y = 2)} = 0$$
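
A minimal numpy sketch of this decision rule, assuming the Gaussian parameters and priors are given (the function name and example values are hypothetical):

```python
import numpy as np

def quadratic_discriminant(x, mu1, cov1, mu2, cov2, p1, p2):
    """f(x) = log[P(Y=1) P(x|Y=1)] - log[P(Y=2) P(x|Y=2)]; predict class 1 iff f(x) > 0."""
    d1, d2 = x - mu1, x - mu2
    # Quadratic terms: -(1/2) d1' Sigma1^{-1} d1 + (1/2) d2' Sigma2^{-1} d2
    quad = -0.5 * d1 @ np.linalg.solve(cov1, d1) + 0.5 * d2 @ np.linalg.solve(cov2, d2)
    _, logdet1 = np.linalg.slogdet(cov1)
    _, logdet2 = np.linalg.slogdet(cov2)
    return quad + 0.5 * (logdet2 - logdet1) + np.log(p1 / p2)

# Hypothetical 2-D example:
mu1, mu2 = np.array([0.0, 0.0]), np.array([2.0, 2.0])
f = quadratic_discriminant(np.array([0.5, 0.5]), mu1, np.eye(2), mu2, 2 * np.eye(2), 0.5, 0.5)
```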

Page 10:

Special Case

• If $\Sigma_1 = \Sigma_2 = \Sigma$, the discriminant function boils down to

$$f(X) = XW + b, \qquad W = \Sigma^{-1}(\mu_1 - \mu_2)', \qquad b = -\tfrac{1}{2}\,\mu_1 \Sigma^{-1} \mu_1' + \tfrac{1}{2}\,\mu_2 \Sigma^{-1} \mu_2' + \log \frac{P(Y = 1)}{P(Y = 2)}$$

where W is an N-dimensional vector and b is a real number.

• f(X) is linear! The decision boundary is a hyperplane in high-dimensional space.
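
A short sketch of computing W and b from the class parameters under the shared-covariance assumption (names are illustrative):

```python
import numpy as np

def linear_discriminant_params(mu1, mu2, cov, p1, p2):
    """Shared covariance: f(x) = x @ w + b, predict class 1 iff f(x) > 0."""
    w = np.linalg.solve(cov, mu1 - mu2)           # w = Sigma^{-1} (mu1 - mu2)
    b = (-0.5 * mu1 @ np.linalg.solve(cov, mu1)   # -(1/2) mu1' Sigma^{-1} mu1
         + 0.5 * mu2 @ np.linalg.solve(cov, mu2)  # +(1/2) mu2' Sigma^{-1} mu2
         + np.log(p1 / p2))                       # + log P(Y=1)/P(Y=2)
    return w, b
```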

Page 11:

Decision boundary

• We have obtained our first linear classifier from the Bayes classifier model.

Page 12:

Linear classifier

• W and b are obtained indirectly from the two class-conditional distributions

• We spend much effort estimating the Gaussian covariance matrix and mean vectors, but what we really need are simply W and b. Can we learn W and b directly? Yes!

• Directly estimating W and b reduces the number of parameters we have to estimate.

$$f(X) = XW + b, \qquad W = \Sigma^{-1}(\mu_1 - \mu_2)', \qquad b = -\tfrac{1}{2}\,\mu_1 \Sigma^{-1} \mu_1' + \tfrac{1}{2}\,\mu_2 \Sigma^{-1} \mu_2' + \log \frac{P(Y = 1)}{P(Y = 2)}$$

Page 13:

Bayes classifier for continuous feature vectors

• Input feature vector X=(X1,…,XN) with N attributes

• Step 1: design class-conditional density for 𝑌 ∈ {0,1, … , 9}

• Naïve Gaussian Bayes classifier

$$P(X \mid Y) = P(X_1 \mid Y) \cdots P(X_N \mid Y)$$

where each individual term is

$$P(X_n \mid Y = y) = \frac{1}{\sqrt{2\pi}\,\sigma_{y,n}} \exp\left(-\frac{(X_n - \mu_{y,n})^2}{2\sigma_{y,n}^2}\right)$$
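
A minimal sketch of evaluating this class-conditional density in log space (the function and argument names are assumptions, not from the slides):

```python
import numpy as np

def naive_gaussian_log_likelihood(x, mu, sigma2):
    """log P(x | Y=y) under the naive Gaussian model.

    x:      feature vector, shape (N,)
    mu:     per-feature means mu_{y,n} for one class, shape (N,)
    sigma2: per-feature variances sigma_{y,n}^2, shape (N,)
    """
    log_terms = -0.5 * np.log(2 * np.pi * sigma2) - (x - mu) ** 2 / (2 * sigma2)
    return log_terms.sum()  # independence: product of densities -> sum of logs
```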

Page 14:

Bayes classifier for continuous feature vectors

• Maximum likelihood estimation of $\mu_{y,n}$ and $\sigma_{y,n}$ for

$$P(X_n \mid Y = y) = \frac{1}{\sqrt{2\pi}\,\sigma_{y,n}} \exp\left(-\frac{(X_n - \mu_{y,n})^2}{2\sigma_{y,n}^2}\right)$$

gives

$$\mu_{y,n} = \frac{1}{m_y} \sum_{i=1}^{m} X_{i,n}\,\delta(Y_i = y), \qquad \sigma_{y,n}^2 = \frac{1}{m_y} \sum_{i=1}^{m} \delta(Y_i = y)\,(X_{i,n} - \mu_{y,n})^2$$

for a training set of pairs $(X_i, Y_i)$, where $X_{i,n}$ is the nth feature of $X_i$, $m_y$ is the number of training examples of class y, and $\delta(Y_i = y)$ is the indicator function.
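
Here is a minimal sketch of these MLE formulas in numpy; the variance floor is an addition of this sketch (constant features, such as blank MNIST margin pixels, would otherwise have zero variance):

```python
import numpy as np

def fit_naive_gaussian(X, Y, num_classes=10, var_floor=1e-3):
    """MLE of per-class, per-feature Gaussian parameters.

    X: (m, N) training features; Y: (m,) integer labels.
    Returns mu and sigma2, each of shape (num_classes, N).
    """
    m, N = X.shape
    mu = np.zeros((num_classes, N))
    sigma2 = np.zeros((num_classes, N))
    for y in range(num_classes):
        Xy = X[Y == y]                                     # the m_y examples of class y
        mu[y] = Xy.mean(axis=0)                            # mu_{y,n}
        sigma2[y] = np.maximum(Xy.var(axis=0), var_floor)  # sigma_{y,n}^2, floored
    return mu, sigma2
```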

Page 15:

Bayes classifier for continuous feature vectors

• Step 2: Estimate the prior distribution

$$P(Y = d) = \theta_d, \qquad \text{with } \sum_{d=0}^{9} \theta_d = 1$$

• Solution 1: Maximum likelihood estimation

$$\theta_d = \frac{\#\text{ of digit } d \text{ in training set}}{\#\text{ of total digits in training set}}$$

Page 16:

Bayes classifier for continuous feature vectors

• Solution 2: Maximum a posteriori (MAP) parameter estimation

• Impose a prior P(𝜃) on the parameter 𝜃 of the prior distribution P(y|𝜃): a prior on the prior

• Instead of estimating only a single point for 𝜃, we estimate a posterior distribution P(𝜃|𝐷𝑌) over all possible 𝜃, where 𝐷𝑌 denotes the class labels of the training examples

• Dirichlet distribution $P(\theta) \propto \theta_0^{\alpha_0 - 1} \cdots \theta_9^{\alpha_9 - 1}$, conjugate to P(y|𝜃)

• By the property of conjugate distributions, we have

$$\arg\max_{\theta_d} P(\theta_d \mid D_Y) = \frac{\#\text{ of digit } d \text{ in training set} + \alpha_d}{\#\text{ of training examples} + \sum_{d=0}^{9} \alpha_d}$$
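
Both prior estimates fit in a few lines; in this sketch (hypothetical names), zero pseudo-counts recover the MLE of Solution 1, and nonzero Dirichlet pseudo-counts give the MAP estimate of Solution 2:

```python
import numpy as np

def estimate_prior(Y, alpha):
    """theta_d = (count of digit d + alpha_d) / (m + sum_d alpha_d).

    Y: (m,) training labels; alpha: (10,) pseudo-counts (zeros -> MLE).
    """
    counts = np.bincount(Y, minlength=len(alpha)).astype(float)
    return (counts + alpha) / (len(Y) + alpha.sum())

# e.g., alpha_d = 1% of the training set size for every digit d:
# theta = estimate_prior(Y_train, np.full(10, 0.01 * len(Y_train)))
```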

Page 17:

Bayes classifier for continuous feature vectors

• Given a test example X, its most probable digit is

$$Y^* = \arg\max_{Y \in \{0,1,\dots,9\}} P(X_1 \mid Y) \cdots P(X_N \mid Y)\,P(Y) = \arg\max_{Y \in \{0,1,\dots,9\}} \left[\log P(X_1 \mid Y) + \cdots + \log P(X_N \mid Y) + \log P(Y)\right]$$

• A trick: work with log probabilities

• Converts multiplication into summation

• Avoids arithmetic underflow: with large N, the product of probabilities becomes too small to be represented correctly in floating point (see the sketch below)
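
A sketch of the log-space prediction, reusing the (hypothetical) parameters produced by the fitting sketches above:

```python
import numpy as np

def predict_digit(x, mu, sigma2, log_prior):
    """Y* = argmax_y [log P(x_1|y) + ... + log P(x_N|y) + log P(y)].

    x: (N,) test example; mu, sigma2: (10, N); log_prior: (10,).
    """
    # Per-class sum of per-feature Gaussian log densities, computed for all
    # ten classes at once by broadcasting x against the (10, N) parameters.
    log_lik = (-0.5 * np.log(2 * np.pi * sigma2)
               - (x - mu) ** 2 / (2 * sigma2)).sum(axis=1)
    return int(np.argmax(log_lik + log_prior))
```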

Page 18:

Machine Problem 2

• Apply the Naïve Gaussian Bayes classifier to MNIST.

• Use the two solutions (MLE/MAP) to estimate the prior distribution P(Y)

• Estimate the independent Gaussian density for each dimension of feature vector

• Report error rates with three test protocols; you need to tune 𝛼𝑑 for d=0,…,9:

• Test/training

• Test/validation/training

• 5-fold/10-fold cross-validation

• Tune 𝛼𝑑 = 1%, 2%, 4%, 8%, 16% of training examples.

• Compare your result with KNN. Which is better? Why?

• Does KNN lose correlations between different features (pixels, in the MNIST case)?

• Do you observe overfitting? Do {𝛼𝑑} help to reduce overfitting?

Page 19:

Machine Problem 2

• Due in three weeks (11:59am, before Thursday’s class, Feb 16)

• Submit to [email protected].

• Report and source code

Page 20:

Why not use a multivariate Gaussian

• with a full covariance matrix to model the class-conditional distribution?

• In MNIST, the feature space dimension is N = 28×28 = 784; how many parameters are there in a full covariance matrix?

• $\frac{N(N+1)}{2} = \frac{784 \times 785}{2} = 307{,}720$, compared with 50,000 training examples

• Underdetermined: The parameters cannot be completely determined.

Page 21:

Text document: length-varying feature vectors

• Input

• For each document m, a vector of length n represents all the words appearing in the document: $X_n^m$ is the n-th word in document m.

• The domain of $X_n^m$ is the vocabulary of a dictionary (e.g., Webster's dictionary).

• Document lengths vary, so the feature vector $X^m = (X_1^m, X_2^m, \dots)$ for a document does not have a fixed size.

• Output

• $Y^m$ defines the category of document m: {Ads, Non-Ads}

Page 22:

Class-conditional distribution

• Assume independence between words in a document

• A graphical model representation of the independence decomposition of a distribution:

• Each circle node represents a random variable

• Each arrow represents a conditional distribution, conditioned on the parent node

$$P(X_1^m, X_2^m, \dots, X_n^m \mid Y^m) = P(X_1^m \mid Y^m)\,P(X_2^m \mid Y^m) \cdots P(X_n^m \mid Y^m)$$

[Figure: directed graphical model with node $Y^m$ pointing to child nodes $X_1^m, X_2^m, \dots, X_n^m$; equivalently, a plate over $X_j^m$ for j = 1, 2, …, n.]

Page 23:

Class-conditional distribution

• Class-conditional distribution

$$P(X_1^m, X_2^m, \dots, X_n^m \mid Y^m) = P(X_1^m \mid Y^m) \cdots P(X_n^m \mid Y^m) = \prod_{i=1}^{V} P(w_i \mid Y^m)^{Count_i}$$

where $V$ is the vocabulary size and $Count_i$ is the number of times word $w_i$ appears in the document.

• MLE of $P(w_i \mid Y^m)$ for each word $w_i$ in the vocabulary:

$$P(w_i \mid Y^m = y) = \frac{\#\text{ of word } w_i \text{ in documents of class } y}{\#\text{ of words in documents of class } y}$$

Page 24:

Class-conditional distribution

• MAP estimate of 𝑃(𝑤𝑖|𝑌𝑚) for each word wi in a vocabulary.

$$P(w_i \mid Y^m = y) = \frac{\#\text{ of word } w_i \text{ in documents of class } y \;+\; \alpha_i}{\#\text{ of words in documents of class } y \;+\; \sum_i \alpha_i}$$

• Here a soft count 𝛼𝑖 is added to each word wi (see the sketch below)
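
A minimal sketch of both word-probability estimates (hypothetical names; setting alpha to zeros recovers the MLE of the previous slide):

```python
import numpy as np

def estimate_word_probs(doc_word_counts, labels, y, alpha):
    """P(w_i | Y=y) with soft counts alpha_i added to each word.

    doc_word_counts: (num_docs, V) matrix of per-document word counts Count_i
    labels:          (num_docs,) class labels
    alpha:           (V,) soft counts (zeros -> plain MLE)
    """
    counts = doc_word_counts[labels == y].sum(axis=0)  # word counts over class-y docs
    return (counts + alpha) / (counts.sum() + alpha.sum())
```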

Page 25:

Class prior

• Similar MLE/MAP methods can be applied to estimate P(Y)

• Given a test example X, decide the optimal y:

$$P(X_1, X_2, \dots, X_n \mid Y) = \prod_{i=1}^{V} P(w_i \mid Y)^{Count_i}$$

$$y^* = \arg\max_{y}\; P(Y = y) \prod_{i=1}^{V} P(w_i \mid Y = y)^{Count_i}$$
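
As with the digit classifier, the argmax is best computed in log space; a short sketch with assumed array shapes:

```python
import numpy as np

def classify_document(word_counts, log_priors, log_word_probs):
    """y* = argmax_y [log P(Y=y) + sum_i Count_i * log P(w_i | Y=y)].

    word_counts:    (V,) Count_i for the test document
    log_priors:     (C,) log P(Y=y)
    log_word_probs: (C, V) log P(w_i | Y=y)
    """
    scores = log_priors + log_word_probs @ word_counts
    return int(np.argmax(scores))
```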

Page 26:

Summary

• Recap Bayes classifier, Bayes error

• Decision region and boundary with Gaussian class-conditional distributions

• Linear hyperplane and quadratic surface in high-dimensional space

• Gaussian Naive Bayes, Machine Problem 2

• Bayes classifier with length-varying feature vector for text document

• Basic concept of graphical model