
Medical Imaging and Pattern Recognition

Lecture 4: Visibility and Noise, Certainty in Medical Decisions

Oleh Tretiak

Copyright Oleh Tretiak, 2004


Lecture Overview

• Factors affecting visibility of objects in images

• Noise as a factor in image quality

• Probability and experimental findings

• Types of errors in medical diagnosis


How many blobs?

[Figure: blob patterns shown at contrast = 1, 2, 4, and 8.]


How many flowers?


Visibility of Objects

• If contrast is too small, the object can't be seen
  – Increase contrast!

• If the object is too small, it can't be seen
  – Magnify!


Visual Pathway - Anatomy


Two-Dimensional Systems

• We would like to have a system model for vision:

  x(u,v) → [ h ] → y(u,v)

• Input x(u,v): the image

• Output y(u,v): our mind's perception
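As an illustration (not from the original slides), a common way to make such a model concrete is to treat the system as linear and shift-invariant, so the output is the 2D convolution of the input with a point-spread function h. The sketch below assumes that linear model; the 3x3 averaging kernel is an arbitrary stand-in for h, not a model of the actual visual system.

    # Minimal sketch of a linear shift-invariant (LSI) system model:
    # the output y(u,v) is the 2D convolution of the input image x(u,v)
    # with the system's point-spread function h(u,v).
    import numpy as np
    from scipy.signal import convolve2d

    def lsi_system(x, h):
        """Apply the LSI model y = h * x (2D convolution, same size as x)."""
        return convolve2d(x, h, mode="same", boundary="symm")

    x = np.zeros((9, 9))
    x[4, 4] = 1.0              # a single bright pixel (an impulse)
    h = np.ones((3, 3)) / 9.0  # assumed 3x3 averaging kernel
    y = lsi_system(x, h)       # y reproduces h around the impulse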


‘Typical’ Visual Spatial Response

[Figure: typical visual spatial response curves, one at low contrast and one at high contrast.]


Mach Bands

[Figure: subjective (perceived) value plotted against objective value (intensity), illustrating Mach bands.]


The circles have the same objective intensity.


Image Noise

• Variations of intensity that have no bearing on the information in the image are called noise

• White noise means that the variation is uncorrelated from pixel to pixel


‘White Noise’ Pattern


Noise Patterns

White (left), low-frequency (middle), and high-frequency (right) noise. All have the same standard deviation.

The standard deviation is a measure of noise intensity.
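A rough sketch of how noise fields like these could be generated (an assumption-laden illustration: Gaussian white noise, a Gaussian low-pass filter of arbitrary width, and rescaling so all three share one standard deviation):

    # Sketch: white, low-frequency, and high-frequency noise with equal std.
    import numpy as np
    from scipy.ndimage import gaussian_filter

    rng = np.random.default_rng(0)
    target_std = 8.0                       # illustrative value
    white = rng.standard_normal((256, 256))
    low = gaussian_filter(white, sigma=3)  # low-pass: neighboring pixels correlated
    high = white - low                     # high-pass: what the low-pass removed

    def set_std(noise, s):
        """Rescale a (near) zero-mean noise field to standard deviation s."""
        return noise * (s / noise.std())

    white, low, high = (set_std(n, target_std) for n in (white, low, high))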


Effect of noise on image quality: UL ~ original 8-bit image; UR ~ white noise; LL ~ low pass noise; LR ~ high pass noise. Noise standard deviation is equal to 8.


Effect of noise on image quality: UL ~ original 8-bit image; UR ~ white noise; LL ~ low pass noise; LR ~ high pass noise. Noise standard deviation is equal to 32.


Conclusions

• Object visibility can be improved by increasing contrast or object size

• This is effective only when the object is free of noise

• All physical systems have noise, and this places a limit on visibility

[Figure: example panels labeled "low noise"; "low noise, contrast"; "high noise"; "high noise, contrast".]


Noise Limited Resolution

[Figure: images at 0.4, 4, and 40 photons/pixel.]
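A sketch of how photon-limited images like these could be simulated, assuming the count in each pixel is Poisson distributed; the synthetic disk object and its intensities are made up for illustration:

    # Sketch: quantum (photon) noise at different doses.  At N photons/pixel
    # the counts are Poisson distributed, so relative noise grows as N drops.
    import numpy as np

    rng = np.random.default_rng(0)
    yy, xx = np.mgrid[0:128, 0:128]
    disk = ((xx - 64) ** 2 + (yy - 64) ** 2 < 30 ** 2).astype(float)
    obj = 0.2 + 0.8 * disk                    # made-up background and disk intensities

    for mean_photons in (0.4, 4.0, 40.0):     # the per-pixel doses from the slide
        counts = rng.poisson(mean_photons * obj)
        estimate = counts / mean_photons      # images on a common scale for comparison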


Noise Tradeoff

• In X-ray and radionuclide systems, reducing noise requires a higher radiation dose

• In magnetic resonance imaging, reducing noise requires a longer imaging time

• Higher resolution produces more noise


Probability and Decisions

• We poll 100 people about whether they will vote for Bush or Kerry. 60 say they will vote for Kerry, 40 for Bush. Will Kerry win?

• We give vitamin C to a group of 10 people who have colds: 6 get better. In a group of 10 people who did not get vitamin C, 4 got better. Is vitamin C effective against the common cold?


Sampling

• Two possible outcomes in a trial (Bush/Kerry, Healthy/Sick)

• A very large population of individuals

• We select a small number of individuals, and find their outcomes.

• Can we draw conclusions about the large group from the small group?


Bernoulli Trials

• Probability of ‘success’ = p
  – In the whole population, the fraction of ‘successes’ is p

• Number of observations is n

• Number of successes is k

• Probability of this result is

  P(n, k) = n!/[k!(n-k)!] p^k (1-p)^(n-k)
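A quick check of this formula in Python (the choice n = 10, k = 6, p = 0.5, echoing the vitamin C example, is only for illustration):

    # Sketch: evaluate P(n, k) = n!/[k!(n-k)!] p^k (1-p)^(n-k)
    from math import comb

    def bernoulli_prob(n, k, p):
        """Probability of exactly k successes in n trials with success probability p."""
        return comb(n, k) * p**k * (1 - p)**(n - k)

    print(bernoulli_prob(10, 6, 0.5))   # about 0.205 -- no single outcome is very likely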


Probability plot, n = 10, p = 0.5

[Bar chart of the probabilities P(10, k) for k = 0, 1, ..., 10; the vertical axis runs from 0 to about 0.3.]

Probability of any specific outcome is pretty low. The result of 6/10 successes with vitamin C and 4/10 successes without could be due to a benefit of vitamin C, or it could be chance. It is not convincing.


Probability plot, n = 100, p = 0.5

• Probability of any individual outcome is very low

• Probability of getting 60 or more out of 100, if the probability were 0.5, is 0.03 (checked in the sketch below). That's unlikely.

• The result does not support that half the voters support each candidate.

[Bar chart of the probabilities P(100, k) for k = 0, ..., 100; the vertical axis runs from 0 to about 0.09.]
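The 0.03 figure can be verified by summing the Bernoulli probabilities for 60 or more successes; a minimal sketch:

    # Sketch: probability of 60 or more successes out of 100 when p = 0.5.
    from math import comb

    n, p = 100, 0.5
    tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(60, n + 1))
    print(round(tail, 3))   # about 0.028, matching the ~0.03 quoted above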


Probability and Experimental Conclusions

• We would like to predict the effect of a treatment on a large population on the basis of a sample.

• Chance can give a misleading outcome.

• Probability theory can tell us whether the result of the test
  1. Strongly supports the apparent outcome
  2. Fails to support the outcome (could be due to chance)


Medical Diagnosis

• A good test is one that tells us the truth

• In medical tests, there are two kinds of errors
  – Predict that the patients are healthy when they are sick
  – Predict that the patients are sick when they are healthy

• Both kinds of error are undesirable


Definition

• SPECIFICITY is accuracy for diagnosing healthy patients

• SENSITIVITY is accuracy for diagnosing sick patients

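In terms of counts, sensitivity = TP / (TP + FN) and specificity = TN / (TN + FP), where TP and FN are sick patients called sick and healthy, and TN and FP are healthy patients called healthy and sick. A minimal sketch with made-up counts:

    # Sketch: sensitivity and specificity from a 2x2 table of test outcomes.
    def sensitivity(tp, fn):
        return tp / (tp + fn)          # accuracy on the sick patients

    def specificity(tn, fp):
        return tn / (tn + fp)          # accuracy on the healthy patients

    print(sensitivity(tp=80, fn=20))   # 0.80 (illustrative counts)
    print(specificity(tn=95, fp=5))    # 0.95 (illustrative counts)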


Comparing Tests

• Method A: Specificity = 0.95, Sensitivity = 0.80

• Method B: Specificity = 0.90, Sensitivity = 0.85
  – Which is better?

• Cannot conclude which test is better on the basis of this information


Diagnostic Decisions

• We can have very high sensitivity by deciding every piece of data indicates disease (aggressive treatment). This will lead to low specificity.

• We can have very high specificity by requiring very strong evidence of disease (conservative treatment). This will lead to low sensitivity.

• The goal of improved diagnostic technology is to improve both sensitivity and specificity.
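One way to see this tradeoff concretely is to sweep the decision threshold on a test score and watch sensitivity and specificity move in opposite directions. The sketch below uses synthetic Gaussian scores; all numbers are made up and are not from the lecture.

    # Sketch: sensitivity/specificity tradeoff as the decision threshold moves.
    import numpy as np

    rng = np.random.default_rng(0)
    healthy = rng.normal(0.0, 1.0, 1000)     # test scores of healthy patients (synthetic)
    sick = rng.normal(1.5, 1.0, 1000)        # test scores of sick patients (synthetic)

    for threshold in (-2.0, 0.75, 3.0):      # aggressive, moderate, conservative
        sens = np.mean(sick >= threshold)    # fraction of sick called sick
        spec = np.mean(healthy < threshold)  # fraction of healthy called healthy
        print(f"threshold={threshold:+.2f}  sensitivity={sens:.2f}  specificity={spec:.2f}")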


Summary

• Probability theory and statistics are important tools in the study of medical imaging and pattern recognition.

• Imaging systems require a tradeoff among image resolution, noise, dose, and many other factors.

• Evaluation of diagnostic systems can only be done by using probability theory and statistics.