Pattern Recognition and Machine Learning
![Page 1: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/1.jpg)
PATTERN RECOGNITION AND MACHINE LEARNING
CHAPTER 2: PROBABILITY DISTRIBUTIONS
![Page 2: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/2.jpg)
Two approaches to density estimation:
1. Parametric
2. Nonparametric
![Page 3: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/3.jpg)
1. Parametric Estimation of Density
Basic building blocks: p(x|θ). Need to determine θ given the data {x_1, ..., x_N}. Representation: a point estimate θ* or a distribution over θ?
Recall Curve Fitting
![Page 4: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/4.jpg)
Assumptions
1. There is a known parametric form.
2. Data is i.i.d.
Note: given data, you can fit infinitely many densities p(x).
![Page 5: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/5.jpg)
Binary Variables (1)
Experiment 1: coin flipping, with heads = 1 and tails = 0.
Bernoulli Distribution
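The slide's equation was lost in transcription; for reference, the Bernoulli distribution over a binary variable x ∈ {0, 1} with parameter μ = p(x = 1) is:

```latex
\mathrm{Bern}(x \mid \mu) = \mu^{x}(1-\mu)^{1-x},
\qquad \mathbb{E}[x] = \mu, \qquad \operatorname{var}[x] = \mu(1-\mu).
```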
![Page 6: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/6.jpg)
Binary Variables (2)
Experiment 2: N coin flips.
Binomial Distribution
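The corresponding equation is also missing; the binomial distribution over the number m of heads in N flips is:

```latex
\mathrm{Bin}(m \mid N, \mu) = \binom{N}{m} \mu^{m}(1-\mu)^{N-m},
\qquad \mathbb{E}[m] = N\mu, \qquad \operatorname{var}[m] = N\mu(1-\mu).
```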
![Page 7: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/7.jpg)
Binomial Distribution
![Page 8: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/8.jpg)
Parameter Estimation (1)
MLE for the Bernoulli distribution. Given: an i.i.d. data set D = {x_1, ..., x_N}.
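The derivation on the slide did not come through; the standard result is that the log likelihood depends on the data only through the sum of the observations, and maximizing it gives the sample mean:

```latex
\ln p(\mathcal{D}\mid\mu) = \sum_{n=1}^{N}\bigl\{x_n \ln\mu + (1-x_n)\ln(1-\mu)\bigr\},
\qquad
\mu_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N} x_n = \frac{m}{N},
```

where m is the number of observations with x = 1 (heads).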
![Page 9: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/9.jpg)
Parameter Estimation (2)
Example: if the first N tosses all land heads up, then μ_ML = 1.
Prediction: all future tosses will land heads up.
This is overfitting to D.
![Page 10: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/10.jpg)
Conjugate Prior
Recall Bayes' formula:
p(w|x, t) ∝ p(t|x, w) p(w). Assume the functional form of the prior is the
same as the functional form of the posterior.
We need to find the conjugate prior of the likelihood.
![Page 11: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/11.jpg)
Beta Distribution
A distribution over μ ∈ [0, 1].
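The density itself is missing from the transcript; the beta distribution, with hyperparameters a and b, is:

```latex
\mathrm{Beta}(\mu \mid a, b) = \frac{\Gamma(a+b)}{\Gamma(a)\,\Gamma(b)}\,
\mu^{a-1}(1-\mu)^{b-1},
\qquad \mathbb{E}[\mu] = \frac{a}{a+b}.
```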
![Page 12: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/12.jpg)
Gamma Function: Extension of Factorial Function
![Page 13: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/13.jpg)
Beta Distribution
![Page 14: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/14.jpg)
Bayesian Bernoulli
The Beta distribution provides the conjugate prior for the Bernoulli distribution.
![Page 15: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/15.jpg)
Prior ∙ Likelihood = Posterior
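Written out in PRML's notation, with m heads and l = N − m tails, the beta prior combines with the Bernoulli likelihood to give another beta distribution:

```latex
p(\mu \mid m, l, a, b) = \frac{\Gamma(m+a+l+b)}{\Gamma(m+a)\,\Gamma(l+b)}\,
\mu^{m+a-1}(1-\mu)^{l+b-1}.
```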
![Page 16: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/16.jpg)
Properties of the Posterior
As the size of the data set, N, increases, the posterior becomes more and more sharply peaked (its variance shrinks toward zero).
![Page 17: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/17.jpg)
Sequential Approach for Density Estimation
Suppose we receive observations one at a time. What is the probability that the next coin toss will land heads up?
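The answer given in PRML is the posterior predictive probability under the beta posterior above:

```latex
p(x = 1 \mid \mathcal{D}) = \int_{0}^{1} \mu\, p(\mu \mid \mathcal{D})\, \mathrm{d}\mu
= \frac{m + a}{m + a + l + b}.
```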
![Page 18: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/18.jpg)
Multinomial Variables
1-of-K coding scheme:
![Page 19: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/19.jpg)
Lagrange Multipliers
Maximize f(x, y) subject to g(x, y) = c. Define L(x, y, λ) = f(x, y) + λ (g(x, y) − c) and maximize L with respect to x, y, and λ.
![Page 20: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/20.jpg)
ML Parameter estimation
Given:
To ensure Σ_k μ_k = 1, use a Lagrange multiplier, λ.
![Page 21: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/21.jpg)
The Multinomial Distribution
![Page 22: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/22.jpg)
The Dirichlet Distribution
Defined over the simplex in K dimensions; the conjugate prior for the multinomial distribution.
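The density is missing from the transcript; the Dirichlet distribution over μ = (μ_1, ..., μ_K) with parameters α is:

```latex
\mathrm{Dir}(\boldsymbol{\mu} \mid \boldsymbol{\alpha})
= \frac{\Gamma(\alpha_0)}{\Gamma(\alpha_1)\cdots\Gamma(\alpha_K)}
\prod_{k=1}^{K} \mu_k^{\alpha_k - 1},
\qquad \alpha_0 = \sum_{k=1}^{K} \alpha_k .
```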
![Page 23: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/23.jpg)
Bayesian Multinomial (1)
The conjugate prior of the multinomial is the Dirichlet distribution.
![Page 24: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/24.jpg)
Bayesian Multinomial (2)
![Page 25: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/25.jpg)
The King of the Distributions: Gaussian
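For reference (the slide's formulas did not survive transcription), the Gaussian in one dimension and in D dimensions:

```latex
\mathcal{N}(x \mid \mu, \sigma^2) = \frac{1}{\sqrt{2\pi\sigma^2}}
\exp\!\Bigl\{-\frac{(x-\mu)^2}{2\sigma^2}\Bigr\},
\qquad
\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Sigma})
= \frac{1}{(2\pi)^{D/2} |\boldsymbol{\Sigma}|^{1/2}}
\exp\!\Bigl\{-\tfrac{1}{2}(\mathbf{x}-\boldsymbol{\mu})^{\mathsf{T}}
\boldsymbol{\Sigma}^{-1}(\mathbf{x}-\boldsymbol{\mu})\Bigr\}.
```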
![Page 26: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/26.jpg)
Central Limit Theorem
The distribution of the sum of N i.i.d. random variables becomes increasingly Gaussian as N grows.
Example: N uniform [0, 1] random variables.
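A minimal numerical sketch of this example (the sample sizes and seed here are my own assumptions, not from the slides): average N uniform [0, 1] variables and compare the empirical variance of the mean with the Gaussian-limit prediction 1/(12N).

```python
import numpy as np

rng = np.random.default_rng(0)
trials = 100_000  # number of repeated experiments per N

for n in (1, 2, 10):
    # mean of n i.i.d. Uniform(0, 1) variables, repeated `trials` times
    means = rng.uniform(0.0, 1.0, size=(trials, n)).mean(axis=1)
    print(f"N={n:2d}  empirical var={means.var():.4f}  "
          f"Gaussian-limit var={1.0 / (12 * n):.4f}")
```

A histogram of `means` for N = 10 is already visually close to a Gaussian centred at 0.5.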
![Page 27: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/27.jpg)
Geometry of the Multivariate Gaussian
![Page 28: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/28.jpg)
Moments of the Multivariate Gaussian (1)
The term that is odd in z = x − μ vanishes thanks to anti-symmetry, leaving E[x] = μ.
![Page 29: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/29.jpg)
Moments of the Multivariate Gaussian (2)
![Page 30: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/30.jpg)
Partitioned Gaussian Distributions
Partition the Gaussian variable x into two parts, x = (x_a, x_b).
What is the marginal distribution of each part, p(x_a)?
What is the conditional distribution, p(x_a | x_b)?
![Page 31: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/31.jpg)
Partitioned Conditionals and Marginals: they are all Gaussian…
![Page 32: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/32.jpg)
Partitioned Conditionals and Marginals
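The formulas themselves are missing from the transcript; the standard PRML results, with precision matrix Λ = Σ⁻¹ partitioned conformably with x = (x_a, x_b), are:

```latex
p(\mathbf{x}_a \mid \mathbf{x}_b)
= \mathcal{N}\bigl(\mathbf{x}_a \mid \boldsymbol{\mu}_{a|b}, \boldsymbol{\Lambda}_{aa}^{-1}\bigr),
\qquad
\boldsymbol{\mu}_{a|b} = \boldsymbol{\mu}_a
- \boldsymbol{\Lambda}_{aa}^{-1}\boldsymbol{\Lambda}_{ab}(\mathbf{x}_b - \boldsymbol{\mu}_b),
\qquad
p(\mathbf{x}_a) = \mathcal{N}(\mathbf{x}_a \mid \boldsymbol{\mu}_a, \boldsymbol{\Sigma}_{aa}).
```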
![Page 33: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/33.jpg)
Bayes’ Theorem for Gaussian Variables
Given
we have
where
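The three blank slots above ("Given", "we have", "where") correspond to the linear-Gaussian results in PRML; restoring them:

```latex
p(\mathbf{x}) = \mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}, \boldsymbol{\Lambda}^{-1}),
\qquad
p(\mathbf{y} \mid \mathbf{x}) = \mathcal{N}(\mathbf{y} \mid \mathbf{A}\mathbf{x} + \mathbf{b}, \mathbf{L}^{-1}),
\\[4pt]
p(\mathbf{y}) = \mathcal{N}\bigl(\mathbf{y} \mid \mathbf{A}\boldsymbol{\mu} + \mathbf{b},\;
\mathbf{L}^{-1} + \mathbf{A}\boldsymbol{\Lambda}^{-1}\mathbf{A}^{\mathsf{T}}\bigr),
\qquad
p(\mathbf{x} \mid \mathbf{y}) = \mathcal{N}\bigl(\mathbf{x} \mid
\boldsymbol{\Sigma}\{\mathbf{A}^{\mathsf{T}}\mathbf{L}(\mathbf{y}-\mathbf{b}) + \boldsymbol{\Lambda}\boldsymbol{\mu}\},\;
\boldsymbol{\Sigma}\bigr),
\\[4pt]
\boldsymbol{\Sigma} = (\boldsymbol{\Lambda} + \mathbf{A}^{\mathsf{T}}\mathbf{L}\mathbf{A})^{-1}.
```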
![Page 34: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/34.jpg)
Maximum Likelihood for the Gaussian (1)
Given i.i.d. data X = {x_1, ..., x_N}, the log likelihood function is given by
Sufficient statistics: Σ_n x_n and Σ_n x_n x_nᵀ.
![Page 35: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/35.jpg)
Maximum Likelihood for the Gaussian (2)
Set the derivative of the log likelihood function to zero,
and solve to obtain
Similarly
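The resulting estimators (lost in transcription) are the sample mean and sample covariance:

```latex
\boldsymbol{\mu}_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N} \mathbf{x}_n,
\qquad
\boldsymbol{\Sigma}_{\mathrm{ML}} = \frac{1}{N}\sum_{n=1}^{N}
(\mathbf{x}_n - \boldsymbol{\mu}_{\mathrm{ML}})(\mathbf{x}_n - \boldsymbol{\mu}_{\mathrm{ML}})^{\mathsf{T}}.
```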
![Page 36: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/36.jpg)
Maximum Likelihood for the Gaussian (3)
Under the true distribution
Hence define
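The point of this slide is the bias of the ML covariance estimate; the standard expectations and the resulting unbiased estimator are:

```latex
\mathbb{E}[\boldsymbol{\mu}_{\mathrm{ML}}] = \boldsymbol{\mu},
\qquad
\mathbb{E}[\boldsymbol{\Sigma}_{\mathrm{ML}}] = \frac{N-1}{N}\,\boldsymbol{\Sigma},
\qquad
\widetilde{\boldsymbol{\Sigma}} = \frac{1}{N-1}\sum_{n=1}^{N}
(\mathbf{x}_n - \boldsymbol{\mu}_{\mathrm{ML}})(\mathbf{x}_n - \boldsymbol{\mu}_{\mathrm{ML}})^{\mathsf{T}}.
```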
![Page 37: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/37.jpg)
Sequential Estimation
Contribution of the Nth data point, x_N.
(The update decomposes into the old estimate, a correction weight, and a correction given x_N.)
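The update itself, with the three pieces labelled as on the slide, is:

```latex
\boldsymbol{\mu}_{\mathrm{ML}}^{(N)} =
\underbrace{\boldsymbol{\mu}_{\mathrm{ML}}^{(N-1)}}_{\text{old estimate}}
+ \underbrace{\frac{1}{N}}_{\text{correction weight}}
\underbrace{\bigl(\mathbf{x}_N - \boldsymbol{\mu}_{\mathrm{ML}}^{(N-1)}\bigr)}_{\text{correction given } \mathbf{x}_N}.
```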
![Page 38: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/38.jpg)
The Robbins-Monro Algorithm (1)
Consider θ and z governed by a joint distribution p(z, θ) and define the regression function f(θ) ≡ E[z | θ].
Seek θ⋆ such that f(θ⋆) = 0.
![Page 39: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/39.jpg)
The Robbins-Monro Algorithm (2)
Assume we are given samples from p(z, θ), one at a time.
![Page 40: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/40.jpg)
The Robbins-Monro Algorithm (3)
Successive estimates of θ⋆ are then given by the update below.
Conditions on a_N for convergence:
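Restoring the missing formulas (following PRML): the sequential update and the three conditions on the step sizes a_N are

```latex
\theta^{(N)} = \theta^{(N-1)} + a_{N-1}\, z\bigl(\theta^{(N-1)}\bigr),
\qquad
\lim_{N\to\infty} a_N = 0,
\qquad
\sum_{N=1}^{\infty} a_N = \infty,
\qquad
\sum_{N=1}^{\infty} a_N^2 < \infty .
```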
![Page 41: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/41.jpg)
Robbins-Monro for Maximum Likelihood (1)
Regarding the derivative of the expected log likelihood (as a function of θ) as a regression function, finding its root is equivalent to finding the maximum likelihood solution θ_ML. Thus maximum likelihood can be performed sequentially.
![Page 42: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/42.jpg)
Robbins-Monro for Maximum Likelihood (2)
Example: estimate the mean of a Gaussian.
The distribution of z is Gaussian with mean μ − μ_ML.
For the Robbins-Monro update equation, a_N = σ²/N.
![Page 43: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/43.jpg)
Bayesian Inference for the Gaussian (1)
Assume σ² is known. Given i.i.d. data X = {x_1, ..., x_N}, the likelihood function for μ is given by
This has a Gaussian shape as a function of μ (but it is not a distribution over μ).
![Page 44: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/44.jpg)
Bayesian Inference for the Gaussian (2)
Combined with a Gaussian prior over μ,
this gives the posterior
Completing the square over μ, we see that
![Page 45: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/45.jpg)
Bayesian Inference for the Gaussian (3)
… where
Note:
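The missing "where" expressions are the standard posterior mean and variance for the mean of a Gaussian with known σ²; the note is that N = 0 recovers the prior, while N → ∞ gives μ_N → μ_ML and σ_N² → 0:

```latex
p(\mu \mid \mathcal{D}) = \mathcal{N}(\mu \mid \mu_N, \sigma_N^2),
\qquad
\mu_N = \frac{\sigma^2}{N\sigma_0^2 + \sigma^2}\,\mu_0
      + \frac{N\sigma_0^2}{N\sigma_0^2 + \sigma^2}\,\mu_{\mathrm{ML}},
\qquad
\frac{1}{\sigma_N^2} = \frac{1}{\sigma_0^2} + \frac{N}{\sigma^2}.
```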
![Page 46: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/46.jpg)
Bayesian Inference for the Gaussian (4)
Example: for N = 0, 1, 2 and 10.
![Page 47: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/47.jpg)
Bayesian Inference for the Gaussian (5)
Sequential Estimation
The posterior obtained after observing N − 1 data points becomes the prior when we observe the Nth data point.
![Page 48: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/48.jpg)
Bayesian Inference for the Gaussian (6)
Now assume μ is known. The likelihood function for λ = 1/σ² is given by
This has a Gamma shape as a function of λ.
![Page 49: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/49.jpg)
Bayesian Inference for the Gaussian (7)
The Gamma distribution
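For reference, the Gamma distribution over the precision λ is:

```latex
\mathrm{Gam}(\lambda \mid a, b) = \frac{1}{\Gamma(a)}\, b^{a} \lambda^{a-1} e^{-b\lambda},
\qquad \mathbb{E}[\lambda] = \frac{a}{b}, \qquad \operatorname{var}[\lambda] = \frac{a}{b^2}.
```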
![Page 50: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/50.jpg)
Bayesian Inference for the Gaussian (8)
Now we combine a Gamma prior, Gam(λ | a_0, b_0), with the likelihood function for λ to obtain
which we recognize as a Gamma distribution, Gam(λ | a_N, b_N).
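The updated hyperparameters (restored from PRML, interpretable as adding N/2 effective observations) are:

```latex
a_N = a_0 + \frac{N}{2},
\qquad
b_N = b_0 + \frac{1}{2}\sum_{n=1}^{N}(x_n - \mu)^2 = b_0 + \frac{N}{2}\,\sigma_{\mathrm{ML}}^2 .
```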
![Page 51: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/51.jpg)
Bayesian Inference for the Gaussian (9)
If both μ and λ are unknown, the joint likelihood function is given by
We need a prior with the same functional dependence on μ and λ.
![Page 52: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/52.jpg)
Bayesian Inference for the Gaussian (10)
The Gaussian-gamma distribution
• Quadratic in μ.
• Linear in λ.
• Gamma distribution over λ.
• Independent of μ.
![Page 53: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/53.jpg)
Bayesian Inference for the Gaussian (11)
The Gaussian-gamma distribution
![Page 54: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/54.jpg)
Bayesian Inference for the Gaussian (12)
Multivariate conjugate priors:
• μ unknown, Λ known: p(μ) Gaussian.
• Λ unknown, μ known: p(Λ) Wishart.
• Λ and μ unknown: p(μ, Λ) Gaussian-Wishart.
![Page 55: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/55.jpg)
Student's t-Distribution
Obtained as an infinite mixture of Gaussians: integrating a Gaussian against a Gamma prior over its precision, where ν = 2a and λ = a/b.
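Restoring the formulas (PRML's form of the marginalization and the resulting density):

```latex
p(x \mid \mu, a, b) = \int_{0}^{\infty}
\mathcal{N}\bigl(x \mid \mu, \tau^{-1}\bigr)\,\mathrm{Gam}(\tau \mid a, b)\,\mathrm{d}\tau
= \mathrm{St}(x \mid \mu, \lambda, \nu),
\\[4pt]
\mathrm{St}(x \mid \mu, \lambda, \nu) =
\frac{\Gamma\bigl(\tfrac{\nu+1}{2}\bigr)}{\Gamma\bigl(\tfrac{\nu}{2}\bigr)}
\Bigl(\frac{\lambda}{\pi\nu}\Bigr)^{1/2}
\Bigl[1 + \frac{\lambda (x-\mu)^2}{\nu}\Bigr]^{-\frac{\nu+1}{2}} .
```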
![Page 56: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/56.jpg)
Student’s t-Distribution
![Page 57: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/57.jpg)
Student’s t-Distribution
Robustness to outliers: Gaussian vs t-distribution.
![Page 58: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/58.jpg)
Student’s t-Distribution
The D-variate case:
where Δ² = (x − μ)ᵀ Λ (x − μ) is the squared Mahalanobis distance.
Properties:
![Page 59: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/59.jpg)
Periodic variables
• Examples: calendar time, direction, …
• We require p(θ) ≥ 0, ∫₀^{2π} p(θ) dθ = 1, and p(θ + 2π) = p(θ).
![Page 60: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/60.jpg)
von Mises Distribution (1)
These requirements are satisfied by the von Mises distribution (below), whose normalizer I_0(m) is the zeroth-order modified Bessel function of the first kind.
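The density and its normalizer, lost in transcription, are:

```latex
p(\theta \mid \theta_0, m) = \frac{1}{2\pi I_0(m)} \exp\bigl\{m \cos(\theta - \theta_0)\bigr\},
\qquad
I_0(m) = \frac{1}{2\pi}\int_{0}^{2\pi} \exp\{m\cos\theta\}\,\mathrm{d}\theta .
```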
![Page 61: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/61.jpg)
von Mises Distribution (4)
![Page 62: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/62.jpg)
Maximum Likelihood for von Mises
Given a data set, D = {θ_1, ..., θ_N}, the log likelihood function is given by
Maximizing with respect to θ_0 we directly obtain
Similarly, maximizing with respect to m we get
which can be solved numerically for m_ML.
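The two maximization results referred to above are, in PRML's notation (with A(m) = I_1(m)/I_0(m)):

```latex
\theta_0^{\mathrm{ML}} = \tan^{-1}\!\left\{\frac{\sum_n \sin\theta_n}{\sum_n \cos\theta_n}\right\},
\qquad
A(m_{\mathrm{ML}}) = \frac{1}{N}\sum_{n=1}^{N}\cos\bigl(\theta_n - \theta_0^{\mathrm{ML}}\bigr).
```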
![Page 63: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/63.jpg)
Mixtures of Gaussians (1)
Old Faithful data set
Single Gaussian vs. mixture of two Gaussians
![Page 64: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/64.jpg)
Mixtures of Gaussians (2)
Combine simple models into a complex model, as below: each term is a component density, weighted by a mixing coefficient (the example uses K = 3).
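The mixture density referred to on the slide is:

```latex
p(\mathbf{x}) = \sum_{k=1}^{K} \pi_k\,
\mathcal{N}(\mathbf{x} \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k),
\qquad 0 \le \pi_k \le 1, \qquad \sum_{k=1}^{K}\pi_k = 1 .
```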
![Page 65: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/65.jpg)
Mixtures of Gaussians (3)
![Page 66: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/66.jpg)
Mixtures of Gaussians (4)
Determining the parameters μ, Σ, and π using maximum (log) likelihood: the objective is the log of a sum, so there is no closed-form maximum.
Solution: use standard iterative numerical optimization methods or the expectation maximization algorithm (Chapter 9).
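For reference, that log likelihood is:

```latex
\ln p(\mathbf{X} \mid \boldsymbol{\pi}, \boldsymbol{\mu}, \boldsymbol{\Sigma})
= \sum_{n=1}^{N} \ln\Bigl\{\sum_{k=1}^{K} \pi_k\,
\mathcal{N}(\mathbf{x}_n \mid \boldsymbol{\mu}_k, \boldsymbol{\Sigma}_k)\Bigr\}.
```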
![Page 67: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/67.jpg)
The Exponential Family (1)
where η is the natural parameter and
so g(η) can be interpreted as a normalization coefficient.
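The general form being discussed (its equation is missing from the transcript) is:

```latex
p(\mathbf{x} \mid \boldsymbol{\eta}) = h(\mathbf{x})\, g(\boldsymbol{\eta})
\exp\bigl\{\boldsymbol{\eta}^{\mathsf{T}} \mathbf{u}(\mathbf{x})\bigr\},
\qquad
g(\boldsymbol{\eta}) \int h(\mathbf{x})
\exp\bigl\{\boldsymbol{\eta}^{\mathsf{T}} \mathbf{u}(\mathbf{x})\bigr\}\,\mathrm{d}\mathbf{x} = 1 .
```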
![Page 68: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/68.jpg)
The Exponential Family (2.1)
The Bernoulli Distribution
Comparing with the general form we see that
and so
Logistic sigmoid
![Page 69: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/69.jpg)
The Exponential Family (2.2)
The Bernoulli distribution can hence be written as
where
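Filling in the missing expressions for the Bernoulli case: the natural parameter is the log-odds, its inverse is the logistic sigmoid, and the exponential-family form reads

```latex
\eta = \ln\frac{\mu}{1-\mu},
\qquad
\mu = \sigma(\eta) = \frac{1}{1 + e^{-\eta}},
\qquad
p(x \mid \eta) = \sigma(-\eta)\exp(\eta x),
```

with u(x) = x, h(x) = 1, and g(η) = σ(−η).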
![Page 70: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/70.jpg)
The Exponential Family (3.1)
The Multinomial Distribution
where η_k = ln μ_k and u(x) = x.
NOTE: The η_k parameters are not independent, since the corresponding μ_k must satisfy Σ_k μ_k = 1.
![Page 71: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/71.jpg)
The Exponential Family (3.2)
Let μ_K = 1 − Σ_{k=1}^{K−1} μ_k, so that only K − 1 parameters remain free. This leads to
and
Here the η_k parameters are independent. Note that
and
Softmax
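The link function referred to as "Softmax" is:

```latex
\mu_k = \frac{\exp(\eta_k)}{1 + \sum_{j} \exp(\eta_j)} .
```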
![Page 72: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/72.jpg)
The Exponential Family (3.3)
The Multinomial distribution can then be written as
where
![Page 73: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/73.jpg)
The Exponential Family (4)
The Gaussian Distribution
where
![Page 74: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/74.jpg)
ML for the Exponential Family (1)
From the definition of g(η) we get
Thus
![Page 75: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/75.jpg)
ML for the Exponential Family (2)
Given a data set, X = {x_1, ..., x_N}, the likelihood function is given by
Thus we have
Sufficient statistic
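The two results these slides build toward (restored from PRML) relate the gradient of ln g(η) to the sufficient statistic, first in expectation and then at the maximum likelihood solution:

```latex
-\nabla \ln g(\boldsymbol{\eta}) = \mathbb{E}[\mathbf{u}(\mathbf{x})],
\qquad
-\nabla \ln g(\boldsymbol{\eta}_{\mathrm{ML}}) = \frac{1}{N}\sum_{n=1}^{N} \mathbf{u}(\mathbf{x}_n).
```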
![Page 76: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/76.jpg)
Conjugate priors
For any member of the exponential family, there exists a prior
Combining with the likelihood function, we get
The prior corresponds to ν pseudo-observations with value χ for the sufficient statistic.
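The prior and the resulting posterior (missing above) have the forms:

```latex
p(\boldsymbol{\eta} \mid \boldsymbol{\chi}, \nu) = f(\boldsymbol{\chi}, \nu)\,
g(\boldsymbol{\eta})^{\nu} \exp\bigl\{\nu\,\boldsymbol{\eta}^{\mathsf{T}}\boldsymbol{\chi}\bigr\},
\qquad
p(\boldsymbol{\eta} \mid \mathbf{X}, \boldsymbol{\chi}, \nu) \propto
g(\boldsymbol{\eta})^{\nu + N}
\exp\Bigl\{\boldsymbol{\eta}^{\mathsf{T}}\Bigl(\sum_{n=1}^{N}\mathbf{u}(\mathbf{x}_n) + \nu\boldsymbol{\chi}\Bigr)\Bigr\}.
```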
![Page 77: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/77.jpg)
Noninformative Priors (1)
With little or no information available a priori, we might choose a noninformative prior.
• λ discrete with K states: p(λ) = 1/K.
• λ ∈ [a, b], real and bounded: p(λ) = 1/(b − a).
• λ real and unbounded: a constant prior is improper!
A constant prior may no longer be constant after a change of variable; consider p(λ) constant and λ = η²: then p(η) = p(λ) |dλ/dη| ∝ η, which is not constant.
![Page 78: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/78.jpg)
Noninformative Priors (2)
Translation invariant priors. Consider a density of the form p(x | μ) = f(x − μ).
For a corresponding prior over μ, we have
for any A and B. Thus p(μ) = p(μ − c), and p(μ) must be constant.
![Page 79: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/79.jpg)
Noninformative Priors (3)
Example: the mean of a Gaussian, μ; the conjugate prior is also a Gaussian, N(μ | μ_0, σ_0²).
As σ_0² → ∞, this becomes constant over μ.
![Page 80: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/80.jpg)
Noninformative Priors (4)
Scale invariant priors. Consider p(x | σ) = (1/σ) f(x/σ) and make the change of variable x̃ = c x, σ̃ = c σ.
For a corresponding prior over σ, we have
for any A and B. Thus p(σ) ∝ 1/σ, and so this prior is improper too. Note that this corresponds to p(ln σ) being constant.
![Page 81: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/81.jpg)
Noninformative Priors (5)
Example: for the variance of a Gaussian, σ², we have:
If λ = 1/σ² and p(σ) ∝ 1/σ, then p(λ) ∝ 1/λ.
We know that the conjugate distribution for λ is the Gamma distribution, Gam(λ | a_0, b_0).
A noninformative prior is obtained in the limit a_0 = 0 and b_0 = 0.
![Page 82: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/82.jpg)
Nonparametric Methods (1)
Parametric distribution models are restricted to specific forms, which may not always be suitable; for example, consider modelling a multimodal distribution with a single, unimodal model.
Nonparametric approaches make few assumptions about the overall shape of the distribution being modelled.
![Page 83: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/83.jpg)
Nonparametric Methods (2)
Histogram methods partition the data space into distinct bins with widths Δ_i and count the number of observations, n_i, in each bin (the resulting density estimate is given after the list).
• Often, the same width is used for all bins, Δ_i = Δ.
• Δ acts as a smoothing parameter.
• In a D-dimensional space, using M bins in each dimension will require M^D bins!
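The histogram density estimate referred to above is:

```latex
p_i = \frac{n_i}{N \Delta_i}.
```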
![Page 84: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/84.jpg)
Nonparametric Methods (3)
Assume observations drawn from a density p(x) and consider a small region R containing x such that
The probability that K out of N observations lie inside R is Bin(K | N, P), and if N is large
If the volume of R, V, is sufficiently small, p(x) is approximately constant over R and
Thus
V small, yet K>0, therefore N large?
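Putting the three missing relations together gives the basic nonparametric estimate used in the rest of the chapter:

```latex
P = \int_{\mathcal{R}} p(\mathbf{x})\,\mathrm{d}\mathbf{x},
\qquad K \simeq N P,
\qquad P \simeq p(\mathbf{x})\,V
\quad\Longrightarrow\quad
p(\mathbf{x}) \simeq \frac{K}{N V}.
```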
![Page 85: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/85.jpg)
Nonparametric Methods (4)
Kernel Density Estimation: fix V, estimate K from the data. Let R be a hypercube centred on x and define the kernel function (Parzen window)
It follows that
and hence
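Restoring the Parzen-window details: the hypercube kernel, the resulting count K, and the density estimate are

```latex
k(\mathbf{u}) =
\begin{cases}
1, & |u_i| \le 1/2 \text{ for all } i = 1, \dots, D,\\
0, & \text{otherwise},
\end{cases}
\qquad
K = \sum_{n=1}^{N} k\!\Bigl(\frac{\mathbf{x}-\mathbf{x}_n}{h}\Bigr),
\qquad
p(\mathbf{x}) = \frac{1}{N}\sum_{n=1}^{N} \frac{1}{h^{D}}\,
k\!\Bigl(\frac{\mathbf{x}-\mathbf{x}_n}{h}\Bigr).
```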
![Page 86: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/86.jpg)
Nonparametric Methods (5)
To avoid discontinuities in p(x), use a smooth kernel, e.g. a Gaussian
Any kernel k(u) such that k(u) ≥ 0 and ∫ k(u) du = 1 will work.
h acts as a smoothing parameter.
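A minimal one-dimensional sketch of a Gaussian kernel density estimator (the data, grid, and bandwidth below are illustrative assumptions, not from the slides):

```python
import numpy as np

def kde_gaussian(x, data, h):
    """Estimate p(x) as the average of Gaussian kernels of width h centred on each data point."""
    diffs = (x[:, None] - data[None, :]) / h                  # shape (len(x), N)
    kernels = np.exp(-0.5 * diffs**2) / np.sqrt(2.0 * np.pi)  # standard normal kernel
    return kernels.mean(axis=1) / h                           # (1/N) sum_n (1/h) k((x - x_n)/h)

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(-2.0, 0.5, 200), rng.normal(1.0, 1.0, 300)])  # bimodal sample
grid = np.linspace(-5.0, 5.0, 11)
print(np.round(kde_gaussian(grid, data, h=0.3), 3))
```

A smaller h gives a spikier estimate and a larger h a smoother one, mirroring the role of Δ in the histogram.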
![Page 87: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/87.jpg)
Nonparametric Methods (6)
Nearest Neighbour Density Estimation: fix K, estimate V from the data. Consider a hypersphere centred on x and let it grow to a volume, V⋆, that includes K of the given N data points. Then p(x) ≃ K / (N V⋆).
K acts as a smoothing parameter.
![Page 88: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/88.jpg)
Nonparametric Methods (7)
Nonparametric models (other than histograms) require storing and computing with the entire data set.
Parametric models, once fitted, are much more efficient in terms of storage and computation.
![Page 89: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/89.jpg)
K-Nearest-Neighbours for Classification (1)
Given a data set with N_k data points from class C_k and Σ_k N_k = N, we have
and correspondingly
Since p(C_k) = N_k / N, Bayes' theorem gives
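Restoring the missing densities and the resulting posterior: draw a sphere of volume V around x containing K points, of which K_k belong to class C_k; then

```latex
p(\mathbf{x} \mid \mathcal{C}_k) = \frac{K_k}{N_k V},
\qquad
p(\mathbf{x}) = \frac{K}{N V},
\qquad
p(\mathcal{C}_k) = \frac{N_k}{N},
\qquad
p(\mathcal{C}_k \mid \mathbf{x}) =
\frac{p(\mathbf{x} \mid \mathcal{C}_k)\, p(\mathcal{C}_k)}{p(\mathbf{x})}
= \frac{K_k}{K}.
```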
![Page 90: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/90.jpg)
K-Nearest-Neighbours for Classification (2)
K = 1 and K = 3
![Page 91: Pattern Recognition and Machine Learning](https://reader035.fdocuments.in/reader035/viewer/2022062218/56816679550346895dda18e5/html5/thumbnails/91.jpg)
K-Nearest-Neighbours for Classification (3)
• K acts as a smoothing parameter.
• For N → ∞, the error rate of the 1-nearest-neighbour classifier is never more than twice the optimal error rate (that of a classifier using the true conditional class distributions).