Foreground Detection : Combining Background
Subspace Learning with Object Smoothing Model
Author: Gengjian Xue, Li Song, Jun Sun, Jun Zhou
Reporter: Li Song
Shanghai Jiao Tong University
Outline
1. Introduction
2. Our method
3. Experiments
4. Conclusion
Introduction
Foreground detection:
- an active research subject in computer vision
- challenging in dynamic backgrounds
Previous works:
- Gaussian Mixture Models (GMM)
- Kernel Density Estimation (KDE)
- signal separation methods
Introduction
Basic form of signal separation methods:
$Y = D + F$
where $Y$ is the observed signal, and $D$ and $F$ are the background and foreground signals.
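As a minimal sketch of this additive model (sizes and values are made up for illustration, not taken from the paper): a static background repeated over frames gives a low-rank matrix, while moving objects give a sparse one:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 100-pixel frames over 20 time steps (illustration only).
# Background D: the same image repeated over time -> a rank-1 matrix
# when each column holds one vectorized frame.
background = rng.uniform(size=(100, 1))
D = np.tile(background, (1, 20))

# Foreground F: sparse, non-zero only where a moving object appears.
F = np.zeros((100, 20))
F[10:15, 5] = 1.0  # a small object visible in frame 5

# The observed signal is their superposition: Y = D + F.
Y = D + F
```

Separation methods such as RPCA exploit exactly this structure: low rank in D, sparsity in F.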
Introduction
Typical methods
Robust Principal Component Analysis (RPCA)
Independent Component Analysis (ICA)
Sparse representation method (Sparse)
Outline
1. Introduction
2. Our method
3. Experiments
4. Conclusion
Motivation
1. subspace learning for the background $D$
2. the spatially clustered property of the foreground $F$
Contribution
- A novel framework for foreground detection that simultaneously uses the properties of $D$ and $F$
- An effective solution method: subspace learning for $D$ plus an object smoothing model on $F$
Background subspace learning
The PCA-based method is computationally intensive.
The (2D)²PCA-based method:
1. Mean image computation:
$\bar{A} = \frac{1}{N}\sum_{i=1}^{N} A_i$
2. Covariance matrices construction:
$C_{row} = \frac{1}{N}\sum_{i=1}^{N} (A_i - \bar{A})^T (A_i - \bar{A})$
$C_{column} = \frac{1}{N}\sum_{i=1}^{N} (A_i - \bar{A})(A_i - \bar{A})^T$
Background subspace learning
3. Projection matrices construction: respectively select the $M$ leading eigenvectors of $C_{row}$ and $C_{column}$ to form $P_{row}$ and $P_{column}$
4. New image projection:
$Z = P_{column}^T (A_t - \bar{A}) P_{row}$
Background subspace learning
5. New image reconstruction:
$\tilde{A}_t = P_{column} Z P_{row}^T + \bar{A}$
6. Getting the difference matrix:
$G = A_t - \tilde{A}_t$
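Steps 1–6 can be sketched in NumPy (a minimal illustration under our reading of the slides, not the authors' implementation; the function and variable names are ours):

```python
import numpy as np

def background_difference(frames, A_t, M=3):
    """(2D)^2-PCA-style background subspace learning, steps 1-6.

    frames: list of N training images (r x c arrays); A_t: new image.
    Returns the difference matrix G = A_t - reconstructed background.
    """
    A = np.stack(frames)                 # N x r x c
    A_mean = A.mean(axis=0)              # step 1: mean image
    X = A - A_mean
    N = len(frames)
    # step 2: row and column covariance matrices
    C_row = np.einsum('nji,njk->ik', X, X) / N   # (A-mean)^T (A-mean), c x c
    C_col = np.einsum('nij,nkj->ik', X, X) / N   # (A-mean)(A-mean)^T, r x r
    # step 3: projection matrices from the M leading eigenvectors
    # (eigh returns eigenvalues in ascending order, so take the last M)
    P_row = np.linalg.eigh(C_row)[1][:, -M:]     # c x M
    P_col = np.linalg.eigh(C_col)[1][:, -M:]     # r x M
    # step 4: project the new image onto the learned subspace
    Z = P_col.T @ (A_t - A_mean) @ P_row         # M x M
    # step 5: reconstruct the background; step 6: difference matrix
    A_rec = P_col @ Z @ P_row.T + A_mean
    return A_t - A_rec
```

For a new frame equal to a static training background, the reconstruction is exact and G is zero; foreground objects appear as large entries in G.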
Background subspace learning
Results by thresholding the matrix $G$:
[Figure: (a) 1 eigenvector, (b) 10 eigenvectors, (c) 30 eigenvectors, (d) 40 eigenvectors]
Foreground refinement
Properties of $G$:
1. the foreground is clustered
2. the number, position, and size of the clusters are not known
3. isolated noises exist
Foreground refinement
Object smoothing model:
$\hat{H} = \arg\min_{H} \frac{1}{2}\sum_{i=1}^{r}\sum_{j=1}^{c}(G_{i,j}-H_{i,j})^2 + \lambda_1 E_1 + \lambda_2 E_2$
where the spatial smoothing term is
$E_1 = \sum_{i=1}^{r}\sum_{j=2}^{c}|H_{i,j}-H_{i,j-1}| + \sum_{i=2}^{r}\sum_{j=1}^{c}|H_{i,j}-H_{i-1,j}|$
Properties:
- operates on numerical matrices
- solved by a global optimization technique
- more flexible
Final results:
Foreground refinement
The full model adds a sparsity term
$E_2 = \sum_{i=1}^{r}\sum_{j=1}^{c}|H_{i,j}|$
to the objective $\frac{1}{2}\sum_{i=1}^{r}\sum_{j=1}^{c}(G_{i,j}-H_{i,j})^2 + \lambda_1 E_1 + \lambda_2 E_2$.
In practice we only use the spatial smoothing constraint $E_1$.
The final foreground is obtained by thresholding: pixel $(i,j)$ is foreground if $|\hat{H}_{i,j}| > Th$.
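A minimal sketch of the smoothing objective and the final thresholding step (evaluation only; the paper minimizes this model with the R 'flsa' fused-lasso solver, which is not reimplemented here):

```python
import numpy as np

def smoothing_objective(H, G, lam1):
    """0.5 * sum (G - H)^2 + lam1 * E1, where E1 is the spatial
    smoothing term: absolute differences of neighbouring entries."""
    fidelity = 0.5 * np.sum((G - H) ** 2)
    e1 = (np.abs(np.diff(H, axis=1)).sum()     # |H[i,j] - H[i,j-1]|
          + np.abs(np.diff(H, axis=0)).sum())  # |H[i,j] - H[i-1,j]|
    return fidelity + lam1 * e1

def foreground_mask(H, Th):
    """Final foreground: pixels where |H| exceeds the threshold Th."""
    return np.abs(H) > Th
```

The smoothing term penalizes isolated noise pixels (many neighbour differences) far more than compact clusters, which is why minimizing it refines the raw difference matrix.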
Outline
1. Introduction
2. Our method
3. Experiments
4. Conclusion
Experiments
- Three sequences for testing
- GMM, KDE, Sparse methods for comparison
- F-score metric is used for evaluation:
$F\text{-}score = \frac{2TP}{2TP + FN + FP}$
Parameter selection:
- Our method: 20 training frames, 3 eigenvectors, Th = 25
- Sparse method: 20 training frames, Th = 25
- GMM method: $K = 3$, $T_b = 0.7$
- KDE method: WindowLength = 100, $T_b = 0.3$
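The F-score metric can be computed from binary masks as follows (a small helper sketch; the function name is ours):

```python
import numpy as np

def f_score(pred, gt):
    """F-score = 2*TP / (2*TP + FN + FP) for binary foreground masks."""
    pred = np.asarray(pred, dtype=bool)
    gt = np.asarray(gt, dtype=bool)
    tp = np.sum(pred & gt)    # foreground detected as foreground
    fp = np.sum(pred & ~gt)   # background detected as foreground
    fn = np.sum(~pred & gt)   # foreground missed
    return 2 * tp / (2 * tp + fn + fp)
```

A perfect mask scores 1.0; both false alarms and misses pull the score down symmetrically.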
Experiments
Results comparisons:
[Figure: detection results on the wavingtrees, ripplingwater, and campus sequences (columns: GMM, KDE, Sparse, Ours)]
Experiments
F-score evaluation:

Method         GMM     KDE     Sparse  Ours
wavingtrees    68.07   73.08   76.31   86.81
ripplingwater  75.24   67.17   78.12   80.88
campus         34.42   51.05   41.93   68.37
Conclusion
- A framework combining subspace learning and an object smoothing model
- (2D)²PCA-based background subspace learning
- A flexible object smoothing model for foreground refinement
Backup slides
Why do you select 3 eigenvectors for projection matrices
construction ?
This value is set according to our experience. Actually, any number of eigenvectors in the range from 3 to 10 yields comparable results. For simplicity and fair comparison, we select 3 eigenvectors.
How do you solve the object smoothing model?
The model can be considered a simplified version of the 2D fused lasso model, for which the solution package 'flsa' in the 'R' language is available. Thus, we use this package to solve our model.
Some questions
How much time does this method cost to process a frame?
In this paper, we provide an alternative solution for this framework, which first learns the background subspace and then refines the foreground. As these two steps are separate and implemented in different languages (Matlab and R), it is difficult to give an accurate time cost. However, measuring it is one of our future works.
What is the difficulty in solving the novel framework?
Currently, it is difficult for us to give an effective way to quantitatively describe the performance of background subspace learning, so we provide an alternative solution method with fixed parameters instead. However, we are studying this problem and hope to solve it.