Computer Vision: Vision and Modeling
• Lucas-Kanade Extensions
• Support Maps / Layers:
Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)
- Bayesian Decision Theory
- Density Estimation
Computer Vision: Vision and Modeling
A Different View of Lucas-Kanade
$E(V) \;=\; \left\| \begin{pmatrix} \nabla I(1)^{\top} v_1 - I_t(1) \\ \nabla I(2)^{\top} v_2 - I_t(2) \\ \vdots \\ \nabla I(n)^{\top} v_n - I_t(n) \end{pmatrix} \right\|^2 \;=\; \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$
White board
High gradient has higher weight
Constrained Optimization
Minimize $E(V) = \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$ over a constrained set of motion fields V.
[Diagram: error surface E(V) over V, with the constrained subspace highlighted]
Constraints = Subspaces
[Diagram: error surface E(V) restricted to a constrained subspace of V]
- Analytically derived: Affine / Twist / Exponential Map
- Learned: Linear / non-linear sub-spaces
Motion Constraints
• Optical Flow: local constraints
• Region Layers: rigid/affine constraints
• Articulated: kinematic chain constraints
• Nonrigid: implicit / learned constraints
Constrained Function Minimization
Minimize $E(V) = \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$ subject to the constraint $V = M(\theta)$, where $\theta$ is a low-dimensional parameter vector.
[Diagram: error surface E(V) over the constrained subspace of motion fields]
2D Translation: Lucas-Kanade
Constraint (2D): all pixels share one translation,
$V = (dx, dy,\; dx, dy,\; \ldots,\; dx, dy)^{\top}$
$E(V) = \sum_i \left( \nabla I(i)^{\top} \begin{pmatrix} dx \\ dy \end{pmatrix} - I_t(i) \right)^2$
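A minimal NumPy sketch of this 2D-translation step, under the convention $\nabla I^{\top} v \approx I_t$ with $I_t$ taken as the frame difference (the function and variable names here are illustrative, not from the course code):

import numpy as np

def lk_translation(frame0, frame1):
    # One Lucas-Kanade step for a single global 2D translation (dx, dy).
    Iy, Ix = np.gradient(frame0.astype(float))        # spatial gradients (row, col order)
    It = frame0.astype(float) - frame1.astype(float)  # temporal difference
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)    # one brightness-constancy row per pixel
    b = It.ravel()
    # Least-squares solution of the stacked system; rows with large gradients
    # dominate the fit, which is the "high gradient has higher weight" observation.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v  # (dx, dy)

In practice this step is iterated with warping and embedded in a coarse-to-fine pyramid.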
2D Affine: Bergen et al, Shi-Tomasi
Constraint (6D): each pixel's motion follows one affine model,
$v_i = \begin{pmatrix} a_1 & a_2 \\ a_3 & a_4 \end{pmatrix} \begin{pmatrix} x_i \\ y_i \end{pmatrix} + \begin{pmatrix} dx \\ dy \end{pmatrix}$
$E(V) = \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$
Affine Extension
Affine Motion Model:
$u(x, y) = a_1 x + a_2 y + a_3$
$v(x, y) = a_4 x + a_5 y + a_6$
or in matrix form
$\begin{pmatrix} u \\ v \end{pmatrix} = \begin{pmatrix} a_1 & a_2 \\ a_4 & a_5 \end{pmatrix} \begin{pmatrix} x \\ y \end{pmatrix} + \begin{pmatrix} a_3 \\ a_6 \end{pmatrix}$
captures:
- 2D Translation
- 2D Rotation
- Scale in X / Y
- Shear
Matlab demo ->
Affine Extension
Affine Motion Model -> Lucas-Kanade:
$E(u, v) = \sum_{(x,y) \in ROI} \left( F(x + u, y + v) - G(x, y) \right)^2$
with $u = a_1 x + a_2 y + a_3$ and $v = a_4 x + a_5 y + a_6$.
Linearizing F around the current estimate and minimizing over the six parameters yields a $6 \times 6$ linear system $C \, (a_1, \ldots, a_6)^{\top} = D$, solved at each iteration.
Matlab demo ->
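A minimal NumPy sketch of one linearized step of this affine extension; it assembles the $6 \times 6$ normal equations $C\,a = D$ over the ROI. Here xs, ys are the pixel coordinates and Ix, Iy, It the flattened spatial/temporal derivatives (illustrative names, assumed to be precomputed):

import numpy as np

def affine_lk_step(Ix, Iy, It, xs, ys):
    # Jacobian of the residual w.r.t. (a1,...,a6) for the model
    #   u = a1*x + a2*y + a3,  v = a4*x + a5*y + a6
    J = np.stack([Ix * xs, Ix * ys, Ix, Iy * xs, Iy * ys, Iy], axis=1)  # n x 6
    C = J.T @ J      # 6 x 6 normal-equation matrix
    D = J.T @ It     # right-hand side
    return np.linalg.solve(C, D)  # update for (a1, ..., a6)

Each iteration warps F with the current affine estimate, recomputes the derivatives, and solves this small system again.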
2D Affine: Bergen et al, Shi-Tomasi
[Diagram: error E(V) over V, restricted to the 6D affine subspace]
K-DOF Models
Constraint with K degrees of freedom: $V = M(\theta)$, $\theta \in \mathbb{R}^{K}$
$E(V) = \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$, minimized over $\theta$
Quadratic Error Norm (SSD) ???
$E(V) = \sum_i \left( \nabla I(i)^{\top} v_i - I_t(i) \right)^2$, with $V = M(\theta)$
White board (outliers?)
Support Maps / Layers
- L2 Norm vs. Robust Norm
- Dangers of least-squares fitting
[Plot: quadratic L2 penalty as a function of the residual D]
Support Maps / Layers
- L2 Norm vs. Robust Norm
- Dangers of least-squares fitting
[Plots: L2 penalty vs. robust penalty as functions of the residual D]
Support Maps / Layers
- Robust norm: good for outliers
- but: nonlinear optimization
[Plot: robust penalty as a function of the residual D]
Support Maps / Layers
- Iterative technique: add a weight to each pixel equation (white board)
Support Maps / Layers
- How to compute the weights?
-> from the previous iteration: how well does the warped G match F?
-> probabilistic distance, e.g. Gaussian
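A minimal sketch of this iterative reweighting (IRLS), assuming the Gaussian weighting just mentioned; sigma is a scale parameter that would have to be chosen or estimated, and the names are illustrative (shown for the single-translation case):

import numpy as np

def robust_lk_translation(Ix, Iy, It, sigma=1.0, n_iters=10):
    # Iteratively reweighted least squares: pixels with large residuals
    # (outliers / other layers) get a small weight on the next pass.
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)
    b = It.ravel()
    w = np.ones_like(b)
    for _ in range(n_iters):
        sw = np.sqrt(w)
        v, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        r = A @ v - b                           # per-pixel residual
        w = np.exp(-0.5 * (r / sigma) ** 2)     # Gaussian weight = support map
    return v, w

The final weights w act as the per-pixel support map: they say how well each pixel is explained by the dominant motion.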
Error Norms / Optimization Techniques
SSD: Lucas-Kanade (1981) Newton-Raphson
SSD: Bergen et al. (1992) Coarse-to-Fine
SSD: Shi-Tomasi (1994) Good Features
Robust Norm: Jepson-Black (1993) EM
Robust Norm: Ayer-Sawhney (1995) EM + MRF
MAP: Weiss-Adelson (1996) EM + MRF
ML/MAP: Bregler-Malik (1998) Twists / EM
ML/MAP: Irani (+Anandan) (2000) SVD
• Lucas-Kanade Extensions
• Support Maps / Layers:
Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)
- Bayesian Decision Theory
- Density Estimation
Computer Vision: Vision and Modeling
Support Maps / Layers
- Black-Jepson-95
Support Maps / Layers
- More general: Layered Motion (Jepson/Black, Weiss/Adelson, …)
Support Maps / Layers
- Special cases of Layered Motion:
- Background subtraction
- Outlier rejection (== robust norm)
- Simplest case: each layer has uniform color
Support Maps / Layers
- Color Layers:
P(skin | F(x,y))
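A minimal sketch of such a color layer under an assumed Gaussian skin-color model in (r, g) chromaticity space; mu_skin and cov_skin are placeholders that would in practice be learned from labelled skin pixels:

import numpy as np

def skin_color_likelihood(frame, mu_skin, cov_skin):
    # frame: H x W x 3 RGB image.
    # Returns the per-pixel likelihood p(F(x,y) | skin); combining it with a
    # prior P(skin) via Bayes' rule gives the support map P(skin | F(x,y)).
    rgb = frame.astype(float) + 1e-6
    chroma = rgb[..., :2] / rgb.sum(axis=-1, keepdims=True)   # (r, g) chromaticities
    d = chroma - mu_skin
    inv_cov = np.linalg.inv(cov_skin)
    m2 = np.einsum('...i,ij,...j->...', d, inv_cov, d)        # squared Mahalanobis distance
    return np.exp(-0.5 * m2) / (2.0 * np.pi * np.sqrt(np.linalg.det(cov_skin)))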
• Lucas-Kanade Extensions
• Support Maps / Layers:
Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Duda+Hart+Stork: Chap. 1-5)
- Bayesian Decision Theory
- Density Estimation
Computer Vision: Vision and Modeling
Statistical Models / Probability Theory
• Statistical Models: represent uncertainty and variability
• Probability Theory: the proper mechanism for handling uncertainty
• Basic Facts (white board)
General Performance Criteria
Optimal Bayes
With Applications to Classification
Bayes Decision Theory
Example: Character Recognition
Goal: classify a new character so as to minimize the probability of misclassification
Bayes Decision Theory
• 1st Concept: Priors $P(C_k)$
a a b a b a a b a b a a a a b a a b a a b a a a a b b a b a b a a b a a
P(a) = 0.75, P(b) = 0.25
New character: ?
Bayes Decision Theory
• 2nd Concept: Conditional Probability $P(X \mid C_k)$
[Histograms of $P(X \mid a)$ and $P(X \mid b)$ over the feature X = # black pixels]
Bayes Decision Theory
• Example: given $P(X \mid a)$ and $P(X \mid b)$ and an observation X = 7, which class $C_k$?
Bayes Decision Theory
• Example: X = 8, which class $C_k$?
Bayes Decision Theory
• Example: X = 8. Well… with P(a) = 0.75 and P(b) = 0.25: $C_k = a$
Bayes Decision Theory
• Example: X = 9, with P(a) = 0.75 and P(b) = 0.25: which class $C_k$?
Bayes Decision Theory
• Bayes Theorem:
$P(C_k \mid X) = \dfrac{P(X \mid C_k)\,P(C_k)}{P(X)}$
Bayes Decision Theory
• Bayes Theorem:
$P(C_k \mid X) = \dfrac{P(X \mid C_k)\,P(C_k)}{P(X)} = \dfrac{P(X \mid C_k)\,P(C_k)}{\sum_j P(X \mid C_j)\,P(C_j)}$
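A minimal numeric sketch of this rule on the a/b character example; the priors are the ones from the slides, while the likelihood values for the observed X are made-up placeholders:

# Bayes' theorem on the two-character example.
priors = {'a': 0.75, 'b': 0.25}
likelihood = {'a': 0.10, 'b': 0.18}    # hypothetical P(X = 8 | class)

evidence = sum(likelihood[c] * priors[c] for c in priors)
posterior = {c: likelihood[c] * priors[c] / evidence for c in priors}
# posterior == {'a': 0.625, 'b': 0.375}: decide 'a' even though the
# likelihood of 'b' is larger, because the prior favours 'a'.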
Bayes Decision Theory
• Bayes Theorem: Posterior = (Likelihood × Prior) / Normalization factor
Bayes Decision Theory
• Example: the class-conditional densities $P(X \mid a)$ and $P(X \mid b)$
Bayes Decision Theory
• Example: scaled by the priors, $P(X \mid a)\,P(a)$ and $P(X \mid b)\,P(b)$
Bayes Decision Theory
• Example: the posteriors $P(a \mid X)$ and $P(b \mid X)$; for X > 8 decide class b
Bayes Decision Theory
Goal: classify a new character so as to minimize the probability of misclassification.
Decision boundaries: assign x to $C_k$ if
$P(C_k \mid x) \ge P(C_j \mid x) \quad \text{for all } j \ne k$
Bayes Decision Theory
Decision boundaries, equivalently in terms of likelihoods and priors:
$P(C_k \mid x) \ge P(C_j \mid x) \quad \text{for all } j \ne k$
$\Leftrightarrow \; p(x \mid C_k)\,P(C_k) \ge p(x \mid C_j)\,P(C_j) \quad \text{for all } j \ne k$
Bayes Decision Theory
Decision Regions: $R_1, \ldots, R_c$
[Figure: the input space partitioned into regions R1, R2, R3]
Bayes Decision Theory
Goal: minimize the probability of misclassification (two-class case):
$P(\text{error}) = P(x \in R_1, C_2) + P(x \in R_2, C_1)$
Bayes Decision Theory
$P(\text{error}) = P(x \in R_1, C_2) + P(x \in R_2, C_1)$
$= P(x \in R_1 \mid C_2)\,P(C_2) + P(x \in R_2 \mid C_1)\,P(C_1)$
Bayes Decision Theory
$P(\text{error}) = P(x \in R_1 \mid C_2)\,P(C_2) + P(x \in R_2 \mid C_1)\,P(C_1)$
$= \int_{R_1} p(x \mid C_2)\,P(C_2)\,dx + \int_{R_2} p(x \mid C_1)\,P(C_1)\,dx$
Bayes Decision Theory
$P(\text{error}) = \int_{R_1} p(x \mid C_2)\,P(C_2)\,dx + \int_{R_2} p(x \mid C_1)\,P(C_1)\,dx$
-> minimized by assigning each x to the class with the larger $p(x \mid C_k)\,P(C_k)$
Bayes Decision Theory
Discriminant functions: $y_1(x), \ldots, y_k(x)$
• class membership is based solely on the relative sizes of the discriminants
• reformulate the classification process in terms of discriminant functions:
x is assigned to $C_k$ if
$y_k(x) \ge y_j(x) \quad \text{for all } j \ne k$
Bayes Decision Theory
Discriminant function examples:
$y_k(x) = P(C_k \mid x)$
$y_k(x) = p(x \mid C_k)\,P(C_k)$
$y_k(x) = \ln p(x \mid C_k) + \ln P(C_k)$
Bayes Decision Theory
Discriminant function examples: 2-class problem
$y(x) \equiv y_1(x) - y_2(x)$, decide $C_1$ if $y(x) > 0$
$y(x) = P(C_1 \mid x) - P(C_2 \mid x)$
$y(x) = \ln \dfrac{p(x \mid C_1)}{p(x \mid C_2)} + \ln \dfrac{P(C_1)}{P(C_2)}$
Bayes Decision Theory
Why is $p(x \mid C_k)\,P(C_k)$ such a big deal?
Bayes Decision Theory
Why is $p(x \mid C_k)\,P(C_k)$ such a big deal?
Example #1: Speech Recognition
[Figure: waveform -> FFT / mel-scale filter bank -> feature vector x -> phone class y ∈ {/ah/, /eh/, …, /uh/} -> words (apple, …, zebra)]
Bayes Decision Theory
Example #1: Speech Recognition
[Figure: FFT / mel-scale filter-bank features; the phone /t/ in different contexts: /aal/, /aol/, /owl/]
Bayes Decision Theory
Example #1: Speech Recognition
How do humans do it?
Bayes Decision Theory
Example #1: Speech Recognition
“This machine can recognize speech” ??
Bayes Decision Theory
Example #1: Speech Recognition
“This machine can wreck a nice beach” !!
Bayes Decision Theory
Example #1: Speech Recognition
[Figure: FFT / mel-scale filter-bank features x; the acoustic model supplies $p(x \mid C_k)$]
Bayes Decision Theory
Example #1: Speech Recognition
Acoustic model $p(x \mid C_k)$: FFT / mel-scale filter-bank features x
Language model (the prior $P(C_k)$):
P(“wreck a nice beach”) = 0.001, P(“recognize speech”) = 0.02
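A rough reading of these numbers: if the acoustic likelihoods $p(x \mid C_k)$ of the two word sequences are comparable, the language-model prior decides the outcome, since the posterior ratio becomes roughly $P(\text{“recognize speech”}) / P(\text{“wreck a nice beach”}) = 0.02 / 0.001 = 20$ in favour of “recognize speech”.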
Bayes Decision Theory
Why is $p(x \mid C_k)\,P(C_k)$ such a big deal?
Example #2: Computer Vision
$p(x \mid C_k)$: low-level image measurements
$P(C_k)$: high-level model knowledge
Bayes
Why is $p(x \mid C_k)\,P(C_k)$ such a big deal?
Example #3: Curve Fitting
$E \;\leftrightarrow\; \ln p(x \mid C) + \ln P(C)$
Bayes
Why is $p(x \mid C_k)\,P(C_k)$ such a big deal?
Example #4: Snake Tracking
$E \;\leftrightarrow\; \ln p(x \mid C) + \ln P(C)$
• Lucas-Kanade Extensions
• Support Maps / Layers:
Robust Norm, Layered Motion, Background Subtraction, Color Layers
• Statistical Models (Forsyth+Ponce Chap. 6, Duda+Hart+Stork: Chap. 1-5)
- Bayesian Decision Theory
- Density Estimation
Computer Vision: Vision and Modeling
Probability Density Estimation
Estimate $p(x \mid C)$
Collect data: $x_1, x_2, x_3, x_4, x_5, \ldots$
[Figure: data samples along x; which density explains them?]
Probability Density Estimation
• Parametric Representations
• Non-Parametric Representations
• Mixture Models
Probability Density Estimation
• Parametric Representations
- Normal Distribution (Gaussian)
- Maximum Likelihood
- Bayesian Learning
Normal Distribution
$p(x) = \dfrac{1}{\sqrt{2\pi\sigma^2}} \exp\!\left( -\dfrac{(x - \mu)^2}{2\sigma^2} \right)$, with mean $\mu$ and variance $\sigma^2$
Multivariate Normal Distribution
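For reference, the standard $d$-dimensional Gaussian density with mean $\mu$ and covariance $\Sigma$ that the following slides build on:
$p(x) = \dfrac{1}{(2\pi)^{d/2}\,|\Sigma|^{1/2}} \exp\!\left( -\tfrac{1}{2} (x - \mu)^{\top} \Sigma^{-1} (x - \mu) \right)$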
Multivariate Normal Distribution
Why Gaussian?
• Simple analytical properties:
- linear transformations of Gaussians are Gaussian
- marginal and conditional densities of Gaussians are Gaussian
- any moment of a Gaussian density is an explicit function of the mean and covariance
• “Good” model of nature:
- Central Limit Theorem: the mean of M random variables is distributed normally in the limit.
Multivariate Normal Distribution
Discriminant functions:
$y_k(x) = \ln p(x \mid C_k) + \ln P(C_k)$
Multivariate Normal Distribution
Discriminant functions:
$y_k(x) = \ln p(x \mid C_k) + \ln P(C_k)$
equal priors + equal covariances: Mahalanobis distance
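Plugging the Gaussian density into $y_k(x)$ and dropping class-independent constants gives
$y_k(x) = -\tfrac{1}{2}(x - \mu_k)^{\top} \Sigma_k^{-1} (x - \mu_k) - \tfrac{1}{2}\ln|\Sigma_k| + \ln P(C_k)$,
so with equal priors and equal covariances the decision reduces to picking the class with the smallest Mahalanobis distance $(x - \mu_k)^{\top}\Sigma^{-1}(x - \mu_k)$.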
Multivariate Normal Distribution
How to “learn” it from examples:
• Maximum Likelihood
• Bayesian Learning
Maximum Likelihood
How to “learn” a density from examples:
[Figure: data samples along x; which of the candidate densities fits best?]
Maximum Likelihood
Likelihood that the density model $\theta$ generated the data X:
$L(\theta) = p(X \mid \theta) = \prod_{n=1}^{N} p(x_n \mid \theta)$
Maximum Likelihood
Likelihood that the density model $\theta$ generated the data X:
$L(\theta) = p(X \mid \theta) = \prod_{n=1}^{N} p(x_n \mid \theta)$
more convenient: $E = -\ln L(\theta) = -\sum_{n=1}^{N} \ln p(x_n \mid \theta)$
Maximum Likelihood
Learning = optimizing (maximizing the likelihood / minimizing E):
$E = -\ln L(\theta) = -\sum_{n=1}^{N} \ln p(x_n \mid \theta)$
Maximum Likelihood
Maximum Likelihood for a Gaussian density:
$E = -\ln L(\theta) = -\sum_{n=1}^{N} \ln p(x_n \mid \theta)$
Closed-form solution:
$\hat{\mu} = \dfrac{1}{N} \sum_{n=1}^{N} x_n$
$\hat{\Sigma} = \dfrac{1}{N} \sum_{n=1}^{N} (x_n - \hat{\mu})(x_n - \hat{\mu})^{\top}$
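A minimal NumPy sketch of this closed-form fit (data is an N x d array of samples):

import numpy as np

def fit_gaussian_ml(data):
    # Closed-form maximum-likelihood estimates for a multivariate Gaussian.
    mu = data.mean(axis=0)                      # (1/N) sum_n x_n
    centered = data - mu
    cov = centered.T @ centered / len(data)     # (1/N) sum_n (x_n - mu)(x_n - mu)^T
    return mu, cov

Note the 1/N normalization of the ML estimate, as opposed to the 1/(N-1) of the unbiased sample covariance.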