Page 1:

Probabilistic Inference and Learning in Computer Vision

An extract from the BMVC2000 pre-conference tutorial given by:

Prof. Andrew Blake

Microsoft Research

Page 2:

Learning low-level vision, Freeman and Pasztor, Proc. ICCV99. This paper proposes a persuasive general approach to inference in image arrays. The classic application is restoration of degraded images, including super-resolution. This is a classic Bayesian piece of work, the latest in an honourable succession that began with “intrinsic images” (Barrow and Tenenbaum 1978), moved on to regularisation (Poggio et al. 1983), via Markov random fields (MRFs) and Gibbs sampling (Geman and Geman 1984), to probabilistic graphical models (Pearl 1988). It characterises the striking new trend towards exemplar-based learning. It’s certainly bracing stuff; where’s the catch?

Learning graphical models of images, videos and their spatial transformations, Frey and Jojic, Proc. UAI2000. They have put together an exciting story that uses “latent variable modelling”, second nature in the probabilistic inference (NIPS) community, to explain and analyse images and image sequences. The exciting part is that, apparently, all you have to do is describe how an image is constructed, and you automatically get an analysis of the images. The trick is, you just take the same description and push it through the EM machine. It seems almost miraculous, in the same way that declarative programming (PROLOG) seems miraculous, that the analytical machinery is generated for you automatically. Is there a catch here, or should we all be doing this?
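The “EM machine” can be made concrete with a toy example. The sketch below is my own illustration, not code from either paper: it writes down a two-component 1-D Gaussian mixture as a generative description, then runs the standard EM iteration (E-step responsibilities, M-step re-estimation) to recover the parameters from synthetic data. All numbers here are invented for the demonstration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data from two 1-D Gaussian clusters (hypothetical).
data = np.concatenate([rng.normal(-2.0, 0.5, 200),
                       rng.normal(3.0, 1.0, 300)])

# Initial guesses for mixture weights, means, variances.
pi = np.array([0.5, 0.5])
mu = np.array([-1.0, 1.0])
var = np.array([1.0, 1.0])

def gauss(x, m, v):
    # Univariate Gaussian density N(x; m, v).
    return np.exp(-0.5 * (x - m) ** 2 / v) / np.sqrt(2 * np.pi * v)

for _ in range(50):
    # E-step: posterior responsibility of each cluster for each point.
    r = pi[None, :] * gauss(data[:, None], mu[None, :], var[None, :])
    r /= r.sum(axis=1, keepdims=True)
    # M-step: re-estimate parameters from the responsibilities.
    n = r.sum(axis=0)
    pi = n / len(data)
    mu = (r * data[:, None]).sum(axis=0) / n
    var = (r * (data[:, None] - mu[None, :]) ** 2).sum(axis=0) / n

print(np.sort(mu))  # the means should approach the true values -2 and 3
```

The point of the papers is that this two-step recipe, specified once from the generative description, extends mechanically to far richer models of images.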

Page 3:

Probabilistic Graphical Models for image motion analysis (Frey and Jojic, 99/00)

[Figure: two graphical models, the latent image model x → z and the mixture model c → z. Legend: a node such as z denotes a continuous random variable; a node such as c denotes a discrete random variable.]

• Latent image model. x is the unknown (or latent) image; z is the image produced by the model, or found in real life, e.g. p(z|x) = N(x, Ψ).

• Mixture model. c is the unknown cluster centre; z is the sampled value, e.g. p(z|c) = N(μ_c, Φ_c).
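Sampling from these two generative descriptions is a direct transcription of the densities. The sketch below is my own illustration; the latent vector x, the noise variance psi and the cluster parameters are all assumed values, not taken from the tutorial.

```python
import numpy as np

rng = np.random.default_rng(1)

# Latent image model: x is a latent "image" (here a small vector),
# z a noisy observation of it, p(z|x) = N(x, psi * I).
x = np.array([0.2, 0.8, 0.5, 0.1])   # hypothetical latent image
psi = 0.01                           # assumed isotropic noise variance
z = x + rng.normal(0.0, np.sqrt(psi), size=x.shape)

# Mixture model: draw a discrete cluster c with prior pi_c, then
# sample z from the cluster's Gaussian, p(z|c) = N(mu_c, phi_c).
pi_c = np.array([0.3, 0.7])          # cluster priors (assumed)
mu_c = np.array([-1.0, 2.0])         # cluster centres (assumed)
phi_c = np.array([0.5, 0.2])         # cluster variances (assumed)
c = rng.choice(2, p=pi_c)
z_mix = rng.normal(mu_c[c], np.sqrt(phi_c[c]))
```

In each case the continuous node is sampled by adding Gaussian noise, while the discrete node is sampled from its prior, exactly as the graphs on the slide prescribe.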

Page 4:

[Figure: graphical models, the transformed latent image model (x and discrete transformation l → z) and factor analysis (y → x → z).]

• Transformed latent image model. P(l = L) = π_L, p(z|x,l) = N(T_l x, Ψ).

• Principal Components / Factor Analysis. p(y) = N(0, I) (parameters), x = μ + Λy (expansion), p(z|x) = N(x, Ψ) (noise addition).
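These two generative chains can also be sampled directly. The sketch below is my own illustration: the dimensions, loadings and noise level are invented, and the discrete transformation T_l is modelled as a circular shift, which is one simple choice consistent with a finite family of spatial transformations, not necessarily the one used in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Factor analysis: a low-dimensional factor y generates the latent
# image x via the expansion x = mu + Lambda y, then noise is added.
d, k = 6, 2                         # image size, factor dimension (assumed)
mu = np.zeros(d)
Lam = rng.normal(size=(d, k))       # hypothetical factor loadings
y = rng.normal(size=k)              # p(y) = N(0, I)
x = mu + Lam @ y                    # expansion
psi = 0.01                          # assumed noise variance
z_fa = x + rng.normal(0.0, np.sqrt(psi), d)   # p(z|x) = N(x, psi*I)

# Transformed latent image model: a discrete transformation l is
# drawn with prior pi_l and applied before the noise,
# p(z|x,l) = N(T_l x, psi*I); here T_l is a circular shift by l.
pi_l = np.full(d, 1.0 / d)          # uniform prior over shifts (assumed)
l = rng.choice(d, p=pi_l)
z_t = np.roll(x, l) + rng.normal(0.0, np.sqrt(psi), d)
```

The continuous latent variables (y, x) and the discrete one (l) play exactly the roles of the circle and square nodes in the graphs above.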

Page 5:

Results: Image motion analysis by PGM

[Figure: the combined graphical model, in which cluster c and factor y generate the latent image x, and x together with transformation l generates the observation z; result panels are shown for clusters c = 1, 2, 3.]

p(x|c,y) = N(μ_c + Λ_c y, Φ_c), p(z|x,l) = N(T_l x, Ψ)
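The inference that EM performs in its E-step can be sketched for a simplified version of this combined model, with a discrete cluster c and shift l but no factor y; everything below (the cluster templates, noise level, and true labels) is invented for illustration. The posterior over (c, l) given z is proportional to the prior times the Gaussian likelihood N(z; T_l μ_c, Ψ), so with uniform priors MAP inference is an exhaustive scoring over the discrete settings:

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8                                # signal length (assumed)

# Two hypothetical cluster templates mu_c (a sine and a parabola).
mu_c = np.stack([np.sin(np.linspace(0, 2 * np.pi, d, endpoint=False)),
                 np.square(np.linspace(-1, 1, d))])
psi = 0.001                          # assumed noise variance

# Generate one observation from cluster 1, shifted by 3.
true_c, true_l = 1, 3
z = np.roll(mu_c[true_c], true_l) + rng.normal(0.0, np.sqrt(psi), d)

def log_lik(obs, mean):
    # log N(obs; mean, psi*I), up to an additive constant.
    return -0.5 * np.sum((obs - mean) ** 2) / psi

# E-step-style inference: score every (cluster, shift) hypothesis.
scores = np.array([[log_lik(z, np.roll(mu_c[c], l))
                    for l in range(d)] for c in range(2)])
c_hat, l_hat = np.unravel_index(np.argmax(scores), scores.shape)
```

Normalising `scores` (via a softmax) would give the full posterior responsibilities that the M-step then uses to re-estimate μ_c, Φ_c and Ψ.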

Page 6:


Video summary

Image segmentation

Sensor noise removal

Image stabilisation