Chapter 3 cont’d.

Adjacency, Histograms, & Thresholding


Transcript of Chapter 3 cont’d.

Page 1: Chapter 3 cont’d.

Chapter 3 cont’d.

Adjacency, Histograms, & Thresholding

Page 2: Chapter 3 cont’d.

RAGs

(Region Adjacency Graphs)

Page 3: Chapter 3 cont’d.

Define graph:

G = (V, E)

where V is a set of vertices or nodes

and E is a set of edges

Page 4: Chapter 3 cont’d.

Define graph:

G = (V, E)

where V is a set of vertices or nodes

and E is a set of edges,

represented by either unordered or ordered pairs of vertices.

Page 5: Chapter 3 cont’d.

RAGs (Region Adjacency Graphs)

Steps:
1. Label the image.
2. Scan and enter adjacencies in the graph.

(RAGs also represent containment.)
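
As a rough sketch of those two steps (my own illustration, not the course’s code), the following scans a label image and records each pair of 4-adjacent, differently labeled regions as an edge; the vector-of-vectors image representation is an assumption:

#include <algorithm>
#include <set>
#include <utility>
#include <vector>

// Build a region adjacency graph from a label image.
// labels[r][c] holds the region label of pixel (r, c).
// Each edge is an unordered pair of distinct region labels.
std::set<std::pair<int, int>> buildRAG(const std::vector<std::vector<int>>& labels) {
    std::set<std::pair<int, int>> edges;
    int rows = labels.size();
    int cols = rows > 0 ? labels[0].size() : 0;
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            int a = labels[r][c];
            // Only the right and down neighbors are checked (4-adjacency),
            // so each adjacent pair of pixels is examined exactly once.
            if (c + 1 < cols && labels[r][c + 1] != a)
                edges.insert({std::min(a, labels[r][c + 1]), std::max(a, labels[r][c + 1])});
            if (r + 1 < rows && labels[r + 1][c] != a)
                edges.insert({std::min(a, labels[r + 1][c]), std::max(a, labels[r + 1][c])});
        }
    }
    return edges;
}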

Page 6: Chapter 3 cont’d.

[Figure: an example label image with regions 0 (background), 1, 2, 3 and -1, -2, -3, together with its region adjacency graph.]

(Personally, I’d draw it this way!)

Page 7: Chapter 3 cont’d.

Define degree of a node.

What is special about nodes with degree 1?

Page 8: Chapter 3 cont’d.

But how do we obtain binary images (from gray or color images)?

Page 9: Chapter 3 cont’d.

Histograms & Thresholding

Page 10: Chapter 3 cont’d.

Gray to binary

Thresholding: G → B

const int t = 200;

if (G[r][c] > t)
    B[r][c] = 1;
else
    B[r][c] = 0;

How do we choose t?
1. interactively
2. automatically
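
As a concrete illustration of the per-pixel rule above, here is a minimal sketch (assumed vector-of-vectors image type, not the course’s code) that thresholds a whole gray image G into a binary image B:

#include <vector>

// B[r][c] = 1 where G[r][c] > t, else 0.
std::vector<std::vector<int>> thresholdImage(const std::vector<std::vector<int>>& G, int t) {
    std::vector<std::vector<int>> B(G.size(),
                                    std::vector<int>(G.empty() ? 0 : G[0].size(), 0));
    for (size_t r = 0; r < G.size(); ++r)
        for (size_t c = 0; c < G[r].size(); ++c)
            B[r][c] = (G[r][c] > t) ? 1 : 0;
    return B;
}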

Page 11: Chapter 3 cont’d.

Gray to binary

1. Interactively. How?

2. Automatically. Many, many, many, …, many methods.

   a) Experimentally (using a priori information).

   b) Supervised / training methods.

   c) Unsupervised: Otsu’s method (among many, many, many, many, … other methods).

Page 12: Chapter 3 cont’d.

Histogram

“Probability” of a given gray value in an image.

h(g) = count of pixels with gray value equal to g.

p(g) = h(g) / (w*h), where w*h = # of pixels in the entire image.

What is the range of possible values for p(g)?

Page 13: Chapter 3 cont’d.

Histogram

“Probability” of a given gray value in an image.

h(g) = count of pixels with gray value equal to g. What data type is used for counts?

p(g) = h(g) / (w*h), where w*h = # of pixels in the entire image.

What is the range of possible values for p(g)? So what data type is p(g)? What happens when h(g) is divided by w*h?
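
A minimal sketch of these definitions (assuming 8-bit gray values and the same image type as above; not the course’s code). The counts h(g) are integers, but p(g) must be floating point, so the count is cast to double before the division; otherwise integer division would truncate every p(g) to 0:

#include <vector>

// Compute h(g) (counts) and p(g) = h(g) / (w*h) for an 8-bit gray image.
void computeHistogram(const std::vector<std::vector<int>>& G,
                      std::vector<int>& h, std::vector<double>& p) {
    h.assign(256, 0);
    for (const auto& row : G)
        for (int g : row)
            ++h[g];                                    // h(g): integer count of gray value g
    int numPixels = G.size() * (G.empty() ? 0 : G[0].size());   // w*h
    p.assign(256, 0.0);
    if (numPixels == 0) return;                        // avoid dividing by zero on an empty image
    for (int g = 0; g < 256; ++g)
        p[g] = static_cast<double>(h[g]) / numPixels;  // p(g) in [0, 1]
}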

Page 14: Chapter 3 cont’d.

Histogram

Note: Sometimes we need to group gray values together in our histogram into “bins” or “buckets.”

E.g., we have 10 bins in our histogram and 100 possible different gray values. So we put 0..9 into bin 0, 10..19 into bin 1, …
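
For the 10-bin example, integer division maps a gray value to its bin (a tiny sketch under the slide’s assumption of 100 possible gray values):

// 0..9 -> bin 0, 10..19 -> bin 1, ..., 90..99 -> bin 9.
int binIndex(int gray) {
    const int numGrayValues = 100, numBins = 10;
    return gray / (numGrayValues / numBins);   // integer division selects the bin
}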

Page 15: Chapter 3 cont’d.

Histogram

Page 16: Chapter 3 cont’d.

Something is missing here!

Page 17: Chapter 3 cont’d.

Example of histogram

Page 18: Chapter 3 cont’d.

Example of histogram

We can even analyze the histogram just as we analyze images.

One common measure is entropy:

Page 19: Chapter 3 cont’d.

Entropy

Ice melting in a warm room is a common example of “entropy increasing”, described in 1862 by Rudolf Clausius as an increase in the disgregation of the molecules of the body of ice.

from http://en.wikipedia.org/wiki/Entropy

Page 20: Chapter 3 cont’d.

Entropy

“My greatest concern was what to call it. I thought of calling it ‘information’, but the word was overly used, so I decided to call it ‘uncertainty’. When I discussed it with John von Neumann, he had a better idea. Von Neumann told me, ‘You should call it entropy, for two reasons. In the first place your uncertainty function has been used in statistical mechanics under that name, so it already has a name. In the second place, and more important, nobody knows what entropy really is, so in a debate you will always have the advantage.’”

– Conversation between Claude Shannon and John von Neumann regarding what name to give to the “measure of uncertainty” or attenuation in phone-line signals (1949)

Page 21: Chapter 3 cont’d.

Example of histogram

We can even analyze the histogram just as we analyze images! One common measure is entropy:

Page 22: Chapter 3 cont’d.

Calculating entropy

The entropy of the histogram is H = − Σ_k p(k) · log2 p(k), summed over all bins k.

Notes:
1. p(k) is in [0,1].

2. If p(k) = 0 then don’t calculate log(p(k)). Why?

3. My calculator only has log base 10. How do I calculate log base 2?

4. Why ‘−’ to the left of the summation?
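
A minimal sketch of the calculation (assuming p(k) was computed as above; not the course’s code). It skips bins with p(k) = 0, and the comments answer notes 2–4:

#include <cmath>
#include <vector>

// Entropy of a normalized histogram p, in bits.
double entropy(const std::vector<double>& p) {
    double H = 0.0;
    for (double pk : p)
        if (pk > 0.0)                        // note 2: skip p(k) = 0, since log(0) is undefined
            H -= pk * std::log2(pk);         // note 3: log2(x) = log10(x) / log10(2)
    return H;                                // note 4: the leading '-' makes H >= 0
}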

Page 23: Chapter 3 cont’d.

Let’s calculate some histogram entropy values.

Say we have 3 bits per gray value, so our histogram has 8 bins. Calculate the entropy for the following histograms (image size is 10x10):

1. 99  0  0  0  0  0  0  1       entropy = 0.08
2. 99  0  1  0  0  0  0  0       entropy = 0.08
3. 20 20 10 10 10 10 10 10       entropy = 2.92  (most disorder)
4. 50  0  0 50  0  0  0  0       entropy = 1.00
5. 25  0 25  0 25  0 25  0       entropy = 2.00
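
As a quick check of these values (a usage sketch, assuming the entropy() function above is in scope), each histogram is normalized by the 100 pixels of the 10x10 image and its entropy printed:

#include <cstdio>
#include <vector>

int main() {
    std::vector<std::vector<double>> hists = {
        {99, 0, 0, 0, 0, 0, 0, 1}, {99, 0, 1, 0, 0, 0, 0, 0},
        {20, 20, 10, 10, 10, 10, 10, 10},
        {50, 0, 0, 50, 0, 0, 0, 0}, {25, 0, 25, 0, 25, 0, 25, 0}};
    for (auto& h : hists) {
        for (double& count : h) count /= 100.0;   // 10x10 image: 100 pixels
        std::printf("%.2f\n", entropy(h));        // prints 0.08, 0.08, 2.92, 1.00, 2.00
    }
    return 0;
}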

Page 24: Chapter 3 cont’d.

Example histograms

Same subject but different images and histograms (because of a difference in contrast).

Page 25: Chapter 3 cont’d.

Example of different thresholds

Page 26: Chapter 3 cont’d.

So how can we determine the threshold value automatically?

Page 27: Chapter 3 cont’d.

Example automatic thresholding methods

1. Otsu’s method

2. K-means clustering

Page 28: Chapter 3 cont’d.

Otsu’s method

Page 29: Chapter 3 cont’d.

Otsu’s method

Automatic thresholding method: automatically picks the “best” threshold t given an image histogram.

Assumes 2 groups are present in the image:

1. Those that are <= t.

2. Those that are > t.

Page 30: Chapter 3 cont’d.

Best choices for t.

Otsu’s method

Page 31: Chapter 3 cont’d.

Otsu’s method

For every possible t:

A. Calculate within-group variances:

   1. probability of being in group 1; probability of being in group 2

   2. determine mean of group 1; determine mean of group 2

   3. calculate variance for group 1; calculate variance for group 2

   4. calculate weighted sum of group variances

B. Remember which t gave rise to the minimum.

Page 32: Chapter 3 cont’d.

Otsu’s method: probability of being in each group

q1(t) = Σ_{i=0..t} p(i)        q2(t) = Σ_{i=t+1..max} p(i)

Page 33: Chapter 3 cont’d.

Otsu’s method: mean of individual groups

μ1(t) = [ Σ_{i=0..t} i·p(i) ] / q1(t)        μ2(t) = [ Σ_{i=t+1..max} i·p(i) ] / q2(t)

Page 34: Chapter 3 cont’d.

Otsu’s method: variance of individual groups

σ1²(t) = [ Σ_{i=0..t} (i − μ1(t))²·p(i) ] / q1(t)        σ2²(t) = [ Σ_{i=t+1..max} (i − μ2(t))²·p(i) ] / q2(t)

Page 35: Chapter 3 cont’d.

Otsu’s method: weighted sum of group variances

σ_W²(t) = q1(t)·σ1²(t) + q2(t)·σ2²(t)

Calculate this for every t and minimize: choose the t achieving min { σ_W²(t) | 0 ≤ t ≤ max }.

Demo Otsu.
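
Putting the four formulas together, here is a minimal sketch of the search (assuming a normalized histogram p as computed earlier; not the course’s own implementation):

#include <vector>

// Otsu’s threshold: the t that minimizes the weighted within-group variance.
int otsuThreshold(const std::vector<double>& p) {
    int maxVal = static_cast<int>(p.size()) - 1;
    int bestT = 0;
    double bestVar = 1e300;
    for (int t = 0; t < maxVal; ++t) {
        double q1 = 0, q2 = 0, mu1 = 0, mu2 = 0, var1 = 0, var2 = 0;
        for (int i = 0; i <= t; ++i)          { q1 += p[i]; mu1 += i * p[i]; }
        for (int i = t + 1; i <= maxVal; ++i) { q2 += p[i]; mu2 += i * p[i]; }
        if (q1 == 0.0 || q2 == 0.0) continue; // one group is empty: skip this t
        mu1 /= q1;  mu2 /= q2;                // group means
        for (int i = 0; i <= t; ++i)          var1 += (i - mu1) * (i - mu1) * p[i];
        for (int i = t + 1; i <= maxVal; ++i) var2 += (i - mu2) * (i - mu2) * p[i];
        var1 /= q1;  var2 /= q2;              // group variances
        double withinVar = q1 * var1 + q2 * var2;   // weighted sum of group variances
        if (withinVar < bestVar) { bestVar = withinVar; bestT = t; }  // remember the best t
    }
    return bestT;
}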

Page 36: Chapter 3 cont’d.

Demo of Otsu’s method

before

Page 37: Chapter 3 cont’d.

Demo of Otsu’s method

Otsu’s report

Page 38: Chapter 3 cont’d.

Demo of Otsu’s method

Otsu’s threshold

Page 39: Chapter 3 cont’d.
Page 40: Chapter 3 cont’d.

Generalized thresholding

Page 41: Chapter 3 cont’d.

Generalized thresholding

Single range of gray values

const int t1 = 200;
const int t2 = 500;

if (G[r][c] > t1 && G[r][c] < t2)
    B[r][c] = 1;
else
    B[r][c] = 0;

Page 42: Chapter 3 cont’d.

Even more general thresholding

Union of ranges of gray values.

const int t1 = 200,  t2 = 500;
const int t3 = 1200, t4 = 1500;

if (G[r][c] > t1 && G[r][c] < t2)
    B[r][c] = 1;
else if (G[r][c] > t3 && G[r][c] < t4)
    B[r][c] = 1;
else
    B[r][c] = 0;

Page 43: Chapter 3 cont’d.

Something is missing here!

Page 44: Chapter 3 cont’d.

K-means clustering

Page 45: Chapter 3 cont’d.

K-Means Clustering

In statistics and machine learning, k-means clustering is a method of cluster analysis which aims to partition n observations into k clusters in which each observation belongs to the cluster with the nearest mean. It is similar to the expectation-maximization algorithm for mixtures of Gaussians in that they both attempt to find the centers of natural clusters in the data.

from Wikipedia

Page 46: Chapter 3 cont’d.

K-Means Clustering

Clustering = the process of partitioning a set of pattern vectors into subsets called clusters.

K = number of clusters (must be known in advance).

Not an exhaustive search, so it may not find the globally optimal solution.

(see section 10.1.1)

Page 47: Chapter 3 cont’d.

Iterative K-Means Clustering Algorithm

Form K-means clusters from a set of nD feature vectors (a code sketch follows this list).

1. Set ic = 1 (iteration count).

2. Choose randomly a set of K means m1(1), m2(1), …, mK(1).

3. For each vector xi compute D(xi, mj(ic)) for each j = 1, …, K.

4. Assign xi to the cluster Cj with the nearest mean.

5. ic = ic + 1; update the means to get a new set m1(ic), m2(ic), …, mK(ic).

6. Repeat 3..5 until Cj(ic+1) = Cj(ic) for all j.
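
A minimal sketch of these six steps for scalar features such as pixel gray values (my own sketch, not the course’s code; the random initialization and data layout are assumptions):

#include <cmath>
#include <cstdlib>
#include <vector>

// Iterative K-means on scalar features x; returns a cluster index for each x[i]
// and leaves the final means in `means`.
std::vector<int> kMeans(const std::vector<double>& x, int K, std::vector<double>& means) {
    means.resize(K);
    for (int j = 0; j < K; ++j)                     // step 2: random initial means
        means[j] = x[std::rand() % x.size()];
    std::vector<int> assign(x.size(), -1), prev;
    do {                                            // steps 3-6 (ic is implicit in the loop)
        prev = assign;
        for (size_t i = 0; i < x.size(); ++i) {     // steps 3-4: assign to the nearest mean
            int best = 0;
            for (int j = 1; j < K; ++j)
                if (std::fabs(x[i] - means[j]) < std::fabs(x[i] - means[best]))
                    best = j;
            assign[i] = best;
        }
        std::vector<double> sum(K, 0.0);
        std::vector<int> cnt(K, 0);
        for (size_t i = 0; i < x.size(); ++i) { sum[assign[i]] += x[i]; ++cnt[assign[i]]; }
        for (int j = 0; j < K; ++j)                 // step 5: recompute the means
            if (cnt[j] > 0) means[j] = sum[j] / cnt[j];
    } while (assign != prev);                       // step 6: stop when clusters stop changing
    return assign;
}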

Page 48: Chapter 3 cont’d.

K-Means Clustering Example

0. Let K = 3.

1. Randomly (may not necessarily be actual data points) choose 3 means (i.e., cluster centers).

- figure from wikipedia

Page 49: Chapter 3 cont’d.

K-Means Clustering Example

1. Randomly (may not necessarily be actual data points) choose 3 means (i.e., cluster centers).

2. Assign each point to the nearest cluster center (mean).

- figure from wikipedia

Page 50: Chapter 3 cont’d.

K-Means Clustering Example

2. Assign each point to the nearest cluster center (mean).

3. Calculate the centroid of each cluster; these centroids become the new centers (means).

- figure from wikipedia

Page 51: Chapter 3 cont’d.

K-Means Clustering Example

3. Calculate the centroid of each cluster; these centroids become the new centers (means).

4. Repeat steps 2 and 3 until convergence.

- figure from wikipedia

Page 52: Chapter 3 cont’d.

K-Means for Optimal Thresholding

What are the features?

Page 53: Chapter 3 cont’d.

K-Means for Optimal Thresholding

What are the features? Individual pixel gray values.

Page 54: Chapter 3 cont’d.

K-Means for Optimal Thresholding

What value for K should be used?

Page 55: Chapter 3 cont’d.

K-Means for Optimal Thresholding

What value for K should be used? K = 2, to be like Otsu’s method.

Page 56: Chapter 3 cont’d.

Iterative K-Means Clustering Algorithm

Form 2 clusters from a set of pixel gray values.

1. Set ic = 1 (iteration count).

2. Choose 2 random gray values as our initial K means, m1(1) and m2(1).

3. For each pixel gray value xi compute fabs(xi − mj(ic)) for each j = 1, 2.

4. Assign xi to the cluster Cj with the nearest mean.

5. ic = ic + 1; update the means to get a new set m1(ic) and m2(ic).

6. Repeat 3..5 until Cj(ic+1) = Cj(ic) for all j.

Page 57: Chapter 3 cont’d.

Iterative K-Means Clustering Algorithm

Form 2 clusters from a set of pixel gray values.

1. Set ic = 1 (iteration count).

2. Choose 2 random gray values as our initial K means, m1(1) and m2(1).

3. For each pixel gray value xi compute fabs(xi − mj(ic)) for each j = 1, 2.

4. Assign xi to the cluster Cj with the nearest mean.

5. ic = ic + 1; update the means to get a new set m1(ic) and m2(ic).

6. Repeat 3..5 until Cj(ic+1) = Cj(ic) for all j.

This can be derived from the original image or from the histogram.

Page 58: Chapter 3 cont’d.

Iterative K-Means Clustering Algorithm

Example.

m1(1) = 260.83,  m2(1) = 539.00
m1(2) = 39.37,   m2(2) = 1045.65
m1(3) = 52.29,   m2(3) = 1098.63
m1(4) = 54.71,   m2(4) = 1106.28
m1(5) = 55.04,   m2(5) = 1107.24
m1(6) = 55.10,   m2(6) = 1107.44
m1(7) = 55.10,   m2(7) = 1107.44
…

Demo K-Means.Demo K-Means.

Page 59: Chapter 3 cont’d.

Demo of K Means method

before

Page 60: Chapter 3 cont’d.

Demo of K Means method

K Means reports for K=2 & K=3

Page 61: Chapter 3 cont’d.

Demo of K Means method

K Means, t=56

Page 62: Chapter 3 cont’d.

Demo of K Means method

K Means, t=54

Page 63: Chapter 3 cont’d.

Demo of K Means method

K Means, t=128

Page 64: Chapter 3 cont’d.

Otsu vs. K-Means

Otsu’s method as presented determines the single best threshold.

How many objects can it discriminate?

Suggest a modification to discriminate more.

Page 65: Chapter 3 cont’d.

Otsu vs. K-Means

How is Otsu’s method similar to K-Means?

What does Otsu’s method determine?

What does K-Means determine (for K=2)?

Page 66: Chapter 3 cont’d.

Otsu vs. K-Means

How is Otsu’s method similar to K-Means?

What does Otsu’s method determine? A single threshold, t.

What does K-Means determine (for K=2)? m1 and m2, the cluster means (centers). Once m1 and m2 are determined, how can they be used to determine the threshold?
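
One natural answer (an assumption, not stated on the slides) is to place the threshold midway between the two cluster means, which is exactly the nearest-mean boundary for scalar gray values:

// Given the two cluster means from K-means with K = 2 (m1 < m2 assumed):
int thresholdFromMeans(double m1, double m2) {
    return static_cast<int>((m1 + m2) / 2.0);  // gray values <= t are nearer m1, > t nearer m2
}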

Page 67: Chapter 3 cont’d.

Otsu vs. K-Means

A final word . . .

K-Means readily generalizes to:

1. an arbitrary number of classes (K)

2. many, many features (i.e., feature vectors / higher dimensions instead of only gray values)

K-Means will find a local optimum, but that may not be the global optimum!