Understanding and evaluating blind
deconvolution algorithms
Anat Levin^{1,2}, Yair Weiss^{1,3}, Fredo Durand^1, Bill Freeman^{1,4}
^1 MIT CSAIL, ^2 Weizmann Institute, ^3 Hebrew University, ^4 Adobe
Blind deconvolution
• Rich literature, no perfect solution
Fergus et al. 06, Levin 06, Jia 07, Joshi et al. 08, Shan et al. 08
In this talk:
• No new algorithm
• What makes blind deconvolution hard?
• Quantitatively evaluate recent algorithms on the same dataset
[Figure: blurred image = blur kernel ⊗ sharp image]
Blind deconvolution
y = k ⊗ x + n
y: blurred image (input, known)
x: sharp image (unknown, need to estimate)
k: blur kernel (unknown, need to estimate)
n: noise
Natural image priors
[Figure: derivative histogram from a natural image]
Derivative distributions in natural images are sparse.
Parametric models: log p(x) = -Σ_i |x_i|^α, with α < 1, where x_i are the image derivatives
[Figure: log-probability curves per prior. Gaussian: -x², Laplacian: -|x|, sparse: -|x|^0.5 and -|x|^0.25]
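The effect of the exponent α can be checked numerically: a sparse prior (α < 1) assigns a lower penalty to one large derivative than to the same total contrast spread over several small ones, which is why it favors sharp edges. A minimal sketch (the signals and α values are illustrative):

```python
import numpy as np

def penalty(derivs, alpha):
    # -log p up to constants: sum of |derivative|^alpha
    return np.sum(np.abs(derivs) ** alpha)

# One large derivative vs. the same total contrast split over two steps.
edge   = np.array([1.0])        # sharp edge: a single derivative of 1
spread = np.array([0.5, 0.5])   # blurred edge: two derivatives of 0.5

for alpha in (2.0, 1.0, 0.5):
    print(alpha, penalty(edge, alpha), penalty(spread, alpha))
# alpha = 2 (Gaussian): the spread-out edge is cheaper (0.5 < 1)
# alpha = 1 (Laplacian): both cost the same (1 = 1)
# alpha = 0.5 (sparse): the single sharp edge is cheaper (1 < 1.41)
```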
Sparse priors in image processing
• Denoising
Simoncelli et al., Roth&Black
• Inpainting
Sapiro et al., Levin et al.
• Super resolution
Tappen et al.
• Transparency
Levin et al.
• Demosaicing
Tappen et al., Hel-Or et al.
• Non blind deconvolution
Levin et al.
Naïve MAPx,k estimation
Find a kernel k and latent image x minimizing:
-log p(x, k | y) = λ ‖k ⊗ x − y‖² + Σ_i |x_i|^α, α < 1
(convolution constraint) (sparse prior)
Should favor sharper x explanations
The MAPx,k paradox
Claim 1:
Let x be an arbitrarily large image sampled from a sparse prior p(x), and y = k* ⊗ x + n.
Then the delta explanation is favored:
p(im = blurred y, kernel = delta) > p(im = x, kernel = k*)
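Claim 1 is easy to illustrate on a 1D signal: scoring the true explanation (x, k*) against the delta explanation (y, delta) under the MAPx,k objective, the delta explanation wins because blur lowers the total derivative contrast. A sketch (the signal length, λ, and kernel are arbitrary choices, not the paper's setup):

```python
import numpy as np

def neg_log_posterior(x, k, y, lam=100.0, alpha=0.5):
    # MAPx,k objective: data term ||k*x - y||^2 plus sparse derivative prior
    residual = np.convolve(x, k, mode='valid') - y
    return lam * np.sum(residual ** 2) + np.sum(np.abs(np.diff(x)) ** alpha)

rng = np.random.default_rng(0)
x_true = np.cumsum(rng.laplace(scale=0.1, size=1000))  # piecewise-rough latent signal
k_true = np.array([0.2, 0.6, 0.2])                     # true blur kernel
y = np.convolve(x_true, k_true, mode='valid')

# Delta explanation: the "sharp" image is just the blurred input itself.
x_delta = np.concatenate([[y[0]], y, [y[-1]]])         # pad to the latent length
k_delta = np.array([0.0, 1.0, 0.0])

score_true  = neg_log_posterior(x_true,  k_true,  y)
score_delta = neg_log_posterior(x_delta, k_delta, y)
print(score_delta < score_true)  # the (wrong) delta explanation scores better
```

Both explanations fit the data exactly, so the comparison comes down to the sparse prior, and the blurred signal is cheaper.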
The MAPx,k failure
[Figure: sharp vs. blurred image pair]
The MAPx,k failure
Red windows = [ p(sharp x) > p(blurred x) ]
Tested with 15x15, 25x25, and 45x45 windows, using simple derivative filters [-1,1], [-1;1] and FoE filters (Roth & Black)
The MAPx,k failure - intuition
P(blurred step edge) < P(step edge):
sharp step edge: one derivative, |d1| = 1, cost |1|^0.5 = 1
blurred with k = [0.5, 0.5]: two derivatives, |d1| = |d2| = 0.5, cost 0.5^0.5 + 0.5^0.5 ≈ 1.41
The sum of derivative penalties is cheaper for the sharp step.
P(blurred impulse) > P(impulse):
sharp impulse: two derivatives, |d1| = |d2| = 1, cost 1 + 1 = 2
blurred with k = [0.5, 0.5]: cost 0.5^0.5 + 0.5^0.5 ≈ 1.41
The sum of derivative penalties is cheaper for the blurred impulse.
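The numbers above can be reproduced directly, assuming α = 0.5 and the derivative filter [-1, 1]:

```python
import numpy as np

def cost(signal, alpha=0.5):
    # sparse-prior penalty on the signal's derivatives
    return np.sum(np.abs(np.diff(signal)) ** alpha)

k = np.array([0.5, 0.5])
step    = np.array([0., 0., 0., 1., 1., 1.])
impulse = np.array([0., 0., 0., 1., 0., 0., 0.])

print(cost(step),    cost(np.convolve(step, k, mode='valid')))     # 1.0 vs ~1.41
print(cost(impulse), cost(np.convolve(impulse, k, mode='valid')))  # 2.0 vs ~1.41
# Blur makes the step edge more expensive but the impulse cheaper.
```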
k=[0.5,0.5]
P(blurred real image) > P(sharp real image)
Blur reduces derivative contrast. In a real image row, noise and texture behave as impulses, so the total derivative contrast is reduced by blur:
Σ_i |x_i|^0.5 = 8.5 (sharp) vs. Σ_i |x_i|^0.5 = 4.5 (blurred): the blurred row is cheaper
Why does MAPx,k fail?
• Too few measurements? Fails even with an infinitely large image
• Wrong prior? Fails even for signals sampled from the prior
• Choice of estimator:
MAPx,k (estimate x and k simultaneously): (x̂, k̂) = argmax_{x,k} p(x, k | y)
MAPk (estimate k alone, marginalizing over x): k̂ = argmax_k p(k | y), where p(k | y) = ∫ p(x, k | y) dx
Results in this paper:
Let x be an arbitrarily large image sampled from a sparse prior p(x), and y = k* ⊗ x + n. Then:
Claim 1 (the MAPx,k estimator fails): the delta explanation is favored,
p(im = blurred y, kernel = delta) > p(im = x, kernel = k*)
Claim 2 (the MAPk estimator succeeds): p(k | y) is maximized by the true kernel, k = k*
Intuition: dimensionality asymmetry
MAPx,k: estimation unreliable. The number of measurements is always lower than the number of unknowns: #y < #x + #k
MAPk: estimation reliable. Many measurements for large images: #y >> #k
blurred image y: ~10^5 measurements; sharp image x: large, ~10^5 unknowns; kernel k: small, ~10^2 unknowns
Approximate MAPk strategies
Marginalization over x, p(k | y) = ∫ p(x, k | y) dx, is challenging to compute.
Approximation strategies:
- Independence assumption in derivative space: Levin NIPS06
- Variational approximation: Miskin and Mackay 00, Fergus et al. SIGGRAPH06
- Laplace approximation: Brainard and Freeman 97, Bronstein et al. 05
Evaluation on 1D signals
[Figure: recovered kernels on 1D signals]
Exact MAPk and MAPk with the variational approximation (Fergus et al.): recover the correct kernel
MAPx,k: favors the delta solution
MAPk with a Gaussian prior: favors the correct solution despite the wrong prior!
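The "Gaussian prior succeeds" observation is easy to verify, because with x ~ N(0, σ_x² I) the marginalization over x is closed-form: y | k ~ N(0, σ_x² T_k T_kᵀ + σ_n² I), where T_k is the convolution matrix of k. A minimal sketch (the candidate kernel set, σ values, and signal sizes are illustrative):

```python
import numpy as np

def log_p_y_given_k(y, k, n_latent, sigma_x=1.0, sigma_n=0.1):
    # Build the 'valid' convolution matrix T so that T @ x == np.convolve(x, k, 'valid')
    T = np.zeros((len(y), n_latent))
    for i in range(len(y)):
        T[i, i:i + len(k)] = k[::-1]
    # Marginal covariance of y under the Gaussian prior on x
    C = sigma_x ** 2 * T @ T.T + sigma_n ** 2 * np.eye(len(y))
    _, logdet = np.linalg.slogdet(C)
    return -0.5 * (logdet + y @ np.linalg.solve(C, y))

rng = np.random.default_rng(0)
x = rng.laplace(scale=1.0, size=400)                   # sparse latent, NOT Gaussian
k_true = np.array([0.25, 0.5, 0.25])
y = np.convolve(x, k_true, mode='valid') + 0.1 * rng.standard_normal(398)

candidates = {
    'delta': np.array([0.0, 1.0, 0.0]),
    'box':   np.array([1/3, 1/3, 1/3]),
    'true':  k_true,
}
scores = {name: log_p_y_given_k(y, k, n_latent=400) for name, k in candidates.items()}
print(max(scores, key=scores.get))  # MAPk picks the true kernel despite the wrong prior
```

The delta kernel loses because it cannot explain the strong correlations blur induces in y, even though the assumed Gaussian prior is the wrong model for the sparse latent signal.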
Ground truth data acquisition
4 images x 8 kernels = 32 test images
Data available online: http://www.wisdom.weizmann.ac.il/~levina/
Comparison
- Fergus et al. SIGGRAPH06: MAPk, variational approximation
- Shan et al. SIGGRAPH08: adjusted MAPx,k
- MAPx,k
- MAPk, Gaussian prior
- Ground truth
Evaluation
Cumulative histogram of deconvolution successes:
bin r = #{ deconv error < r }
[Figure: success percentage (y-axis, 0-100) per error bin, comparing MAPk with a Gaussian prior, Fergus variational MAPk, Shan et al. SIGGRAPH08, and MAPx,k with a sparse prior]
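The success metric itself is simple to compute: given per-image error values, bin r holds the percentage of test images whose error falls below r. A sketch with made-up numbers (these are illustrative, not the paper's measurements):

```python
import numpy as np

# Hypothetical deconvolution errors for 8 test images (NOT the paper's data)
errors = np.array([1.2, 1.8, 2.5, 3.0, 1.1, 4.5, 2.2, 1.6])

bins = [1.5, 2.0, 2.5, 3.0, 3.5, 4.0]
# bin r = percentage of test images with deconv error < r
cumulative = [float(100.0 * np.mean(errors < r)) for r in bins]
print(cumulative)  # → [25.0, 50.0, 62.5, 75.0, 87.5, 87.5]
```

Because the histogram is cumulative, the curves are non-decreasing, and a higher curve means an algorithm succeeds on more images at every error threshold.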
Problem: uniform blur assumption is unrealistic
Variation of dot traces at 4 corners
Note: opposite conclusion by Fergus et al., 2006
Summary
• A good estimator is more important than the correct prior:
- MAPk approach can do deconvolution even with Gaussian prior
- MAPx,k approach fails even with sparse prior
• Spatially uniform blur assumption is invalid
• Comparing blind deconvolution algorithms on the same dataset, Fergus et al. 06 significantly outperforms all alternatives
Ground truth data available online