Formation et Analyse d’Images, Session 7, Daniela Hall, 7 November 2005


1

Formation et Analyse d’Images, Session 7

Daniela Hall

7 November 2005

2

Course Overview

• Session 1 (19/09/05)
  – Overview
  – Human vision
  – Homogeneous coordinates
  – Camera models

• Session 2 (26/09/05)
  – Tensor notation
  – Image transformations
  – Homography computation

• Session 3 (3/10/05)
  – Camera calibration
  – Reflection models
  – Color spaces

• Session 4 (10/10/05)
  – Pixel-based image analysis

• 17/10/05: course replaced by Modelisation surfacique

3

Course overview

• Session 5 + 6 (24/10/05), 9:45 – 12:45
  – Contrast description
  – Hough transform

• Session 7 (7/11/05)
  – Kalman filter

• Session 8 (14/11/05)
  – Tracking of regions, pixels, and lines

• Session 9 (21/11/05)
  – Gaussian filter operators

• Session 10 (5/12/05)
  – Scale space

• Session 11 (12/12/05)
  – Stereo vision
  – Epipolar geometry

• Session 12 (16/01/06): exercises and questions

4

Session overview

1. Kalman filter

2. Robust tracking of targets

5

Kalman filter

• The Kalman filter is an optimal recursive estimator.

• Kalman filtering has been applied in areas such as

– aerospace,

– marine navigation,

– nuclear power plant instrumentation,

– manufacturing, and many others.

• The typical problem is to estimate position and speed from the measurements.

Tutorial: http://www.cs.unc.edu/~welch/kalman/kalmanIntro.html
G. Welch, G. Bishop: An Introduction to the Kalman Filter, TR 95-041, University of North Carolina, USA

6

Kalman filter

• The Kalman filter tries to estimate the state x of a discrete-time controlled process that is governed by the equation
  x_k = A x_{k-1} + B u_{k-1} + w_{k-1}

• with measurement
  z_k = H x_k + v_k

• Process noise: p(w) \sim N(0, Q)

• Measurement noise: p(v) \sim N(0, R)

• The process noise covariance Q and the measurement noise covariance R might in practice change over time, but they are assumed constant.

• Matrix A relates the state at time k to the state at the previous time k-1, in the absence of noise. In practice A might change over time, but it is assumed constant.

• Matrix B relates the optional control input u to the state x. We leave it aside for the moment.

• Matrix H relates the state to the measurement.
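To make the model concrete, here is a minimal simulation sketch of these two equations (not from the slides): a 1D constant-velocity state with hand-picked values for A, H, Q and R, and the control term B u omitted as above.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 1D constant-velocity model: state x = (position, velocity)
dt = 1.0
A = np.array([[1.0, dt],
              [0.0, 1.0]])     # relates the state at k-1 to the state at k
H = np.array([[1.0, 0.0]])     # only the position is measured
Q = 0.01 * np.eye(2)           # process noise covariance (assumed constant)
R = np.array([[0.25]])         # measurement noise covariance (assumed constant)

x = np.array([0.0, 1.0])       # true initial state
for k in range(5):
    w = rng.multivariate_normal(np.zeros(2), Q)   # process noise ~ N(0, Q)
    v = rng.multivariate_normal(np.zeros(1), R)   # measurement noise ~ N(0, R)
    x = A @ x + w              # x_k = A x_{k-1} + w_{k-1}   (B u dropped)
    z = H @ x + v              # z_k = H x_k + v_k
    print(f"k={k + 1}: true state {x}, measurement {z}")
```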

7

Notations

• Measurement z_k

• Measurement noise covariance R

• Process noise w_k

• Process noise covariance Q

• Kalman gain K

8

Kalman filter notations

A priori state estimate: \hat{x}_k^-

A posteriori state estimate: \hat{x}_k

A priori estimate error: e_k^- = x_k - \hat{x}_k^-

A posteriori estimate error: e_k = x_k - \hat{x}_k

A priori estimate error covariance: P_k^- = E\{ e_k^- (e_k^-)^T \}

A posteriori estimate error covariance: P_k = E\{ e_k e_k^T \}

9

Kalman filter

• Goal: find an a posteriori state estimate \hat{x}_k as a linear combination of an a priori estimate \hat{x}_k^- and a weighted difference between the actual measurement z_k and a measurement prediction H\hat{x}_k^-:
  \hat{x}_k = \hat{x}_k^- + K (z_k - H \hat{x}_k^-)

• The difference (z_k - H\hat{x}_k^-) is called the innovation or residual.

• K is the gain or blending factor that minimizes the a posteriori error covariance.
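A tiny scalar illustration (numbers invented, H = 1): an a priori estimate of 10, a measurement of 12 and a gain K = 0.25 give

  \hat{x}_k = \hat{x}_k^- + K (z_k - H \hat{x}_k^-) = 10 + 0.25\,(12 - 10) = 10.5

so the estimate moves a quarter of the way toward the measurement.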

10

Kalman gain K

• Matrix K is the gain that minimizes the a posteriori error covariance.
  – The equations that need to be minimized:
    \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
    e_k = x_k - \hat{x}_k = x_k - \hat{x}_k^- - K_k (z_k - H \hat{x}_k^-)
    P_k = E\{ e_k e_k^T \} = E\{ (x_k - \hat{x}_k^- - K_k (z_k - H \hat{x}_k^-)) (x_k - \hat{x}_k^- - K_k (z_k - H \hat{x}_k^-))^T \}
  – How to minimize:
    \frac{d\,\mathrm{trace}(P_k)}{dK_k} = 0, then solve for K_k
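Writing out the step the slide compresses (the standard derivation, assuming the process and measurement noise are uncorrelated): expanding P_k and setting the derivative of its trace to zero gives

  P_k = (I - K_k H) P_k^- (I - K_k H)^T + K_k R K_k^T
  \frac{d\,\mathrm{trace}(P_k)}{dK_k} = -2 (H P_k^-)^T + 2 K_k (H P_k^- H^T + R) = 0
  \Rightarrow K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

which is the form shown on the next slide.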

11

Kalman gain K

• One form of the result is
  K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}

• A small measurement covariance weights the residual more heavily:
  \lim_{R_k \to 0} K_k = H^{-1}

• A small a priori estimate error covariance weights the residual little:
  \lim_{P_k^- \to 0} K_k = 0
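For intuition, in the scalar case with H = 1 the gain reduces to

  K_k = \frac{P_k^-}{P_k^- + R}

so R \to 0 gives K_k \to 1 (the measurement is trusted fully) and P_k^- \to 0 gives K_k \to 0 (the prediction is trusted fully).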

12

Kalman gain K

• When the measurement error covariance R approaches 0, the actual measurement z_k is trusted more and more, while the predicted measurement H\hat{x}_k^- is trusted less and less.

• When the a priori estimate error covariance P_k^- approaches 0, the actual measurement z_k is trusted less and less, while the predicted measurement H\hat{x}_k^- is trusted more and more.

13

Discrete Kalman filter algorithm

• The Kalman filter estimates a process by using a form of feedback control.

• The filter estimates the process state at some time and then obtains feedback in form of noisy measurements.

• The Kalman filter equations fall into two groups:
  – time update equations
  – measurement update equations

• Time update equations project forward in time the current state and error covariance estimates to obtain a priori estimates.

• The measurement update equations implement feedback. They incorporate a new measurement into the a priori estimate to form an improved a posteriori estimate.

14

Kalman filter algorithm

Time update (« Predict »)

Measurement update (« Correct »)

The time update projects the current state estimate ahead in time. The measurement update adjusts the projected estimate by an actual measurement.

15

Kalman filter

• Time update equations (predict)
  – Project the state and covariance estimates forward in time:
    \hat{x}_k^- = A \hat{x}_{k-1}
    P_k^- = A P_{k-1} A^T + Q

• Measurement update equations (correct)
  – Compute the Kalman gain:
    K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
  – Measure the process: z_k
  – Compute the a posteriori estimate:
    \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
  – Compute the a posteriori error covariance estimate:
    P_k = (I - K_k H) P_k^-

• Initial estimates for \hat{x}_{k-1} and P_{k-1}
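A compact sketch of these two steps in code (a minimal implementation under the constant A, H, Q, R assumption; all names are mine, not from the slides):

```python
import numpy as np

def predict(x_post, P_post, A, Q):
    """Time update: project the state and covariance estimates ahead."""
    x_prior = A @ x_post                    # x^-_k = A x_{k-1}
    P_prior = A @ P_post @ A.T + Q          # P^-_k = A P_{k-1} A^T + Q
    return x_prior, P_prior

def correct(x_prior, P_prior, z, H, R):
    """Measurement update: blend the prediction with the measurement z."""
    S = H @ P_prior @ H.T + R               # innovation covariance
    K = P_prior @ H.T @ np.linalg.inv(S)    # Kalman gain
    x_post = x_prior + K @ (z - H @ x_prior)            # a posteriori state
    P_post = (np.eye(len(x_prior)) - K @ H) @ P_prior   # a posteriori covariance
    return x_post, P_post
```

Starting from initial estimates \hat{x}_{k-1} and P_{k-1}, one alternates predict and correct for every new measurement z_k.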

16

Filter parameters and tuning

• R: measurement noise covariance. It can be measured prior to filter operation (off-line).

• Q: process noise covariance. It cannot be measured, because we cannot directly observe the process we are measuring. If we choose Q large enough (lots of uncertainty), a poor process model can still produce acceptable results.

• Parameter tuning: we can increase filter performance by tuning the parameters R and Q. We can even use a distinct Kalman filter for tuning.

• If R and Q are constant, the estimation error covariance P_k and the Kalman gain K_k stabilize quickly and then stay constant. In this case, P_k and K_k can be precomputed.
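A small sketch of such a precomputation, iterating the covariance recursion until the gain stabilizes (same constant-matrix assumption as above; the iteration count is arbitrary):

```python
import numpy as np

def steady_state_gain(A, H, Q, R, iterations=200):
    """Iterate the covariance recursion with constant A, H, Q, R until K settles."""
    P = np.eye(A.shape[0])                          # arbitrary initial covariance
    for _ in range(iterations):
        P_prior = A @ P @ A.T + Q                   # time update of P
        K = P_prior @ H.T @ np.linalg.inv(H @ P_prior @ H.T + R)
        P = (np.eye(A.shape[0]) - K @ H) @ P_prior  # measurement update of P
    return K, P
```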

17

Session overview

1. Kalman filter

2. Robust tracking of targets

18

Robust tracking of objects

[Diagram: tracking loop. Trigger regions feed a Detection step that creates new targets; the list of targets is projected by a Predict step into a list of predictions, which a Correct step then adjusts using the measurements produced by detection.]
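A sketch of one pass through this loop in code; the Target class, the association convention and all names are my own illustration, not the slides' implementation:

```python
import numpy as np
from dataclasses import dataclass, field

@dataclass
class Target:
    """One tracked object: its state estimate and error covariance."""
    x: np.ndarray                                   # state (x, y, x', y')
    P: np.ndarray = field(default_factory=lambda: np.eye(4))

def track_frame(targets, new_detections, measurements, A, Q, H, R):
    """One pass of the loop: detect new targets, predict all targets, correct.

    measurements[i] is the observation associated with targets[i], or None."""
    # detections in the trigger regions create new targets
    targets = targets + [Target(x=d) for d in new_detections]
    measurements = measurements + [None] * len(new_detections)
    for t, z in zip(targets, measurements):
        # predict: project the target state ahead in time
        t.x = A @ t.x
        t.P = A @ t.P @ A.T + Q
        if z is None:
            continue                                # no measurement: keep prediction
        # correct: adjust the prediction with the measurement
        K = t.P @ H.T @ np.linalg.inv(H @ t.P @ H.T + R)
        t.x = t.x + K @ (z - H @ t.x)
        t.P = (np.eye(len(t.x)) - K @ H) @ t.P
    return targets
```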

19

Robust tracking of objects

• Measurement: z_k = (x, y)^T

• State vector: x_k = (x, y, x', y')^T

• State equation: z_k = H x_k + v_k, with
  H = \begin{pmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \end{pmatrix}

• Source: M. Kohler: Using the Kalman Filter to Track Human Interactive Motion, Research Report No. 629/Feb 1997, University of Dortmund, Germany
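To make the dimensions concrete, a small sketch of this measurement model (numeric values are illustrative only):

```python
import numpy as np

# State: position and velocity in the image plane, x_k = (x, y, x', y')^T
x_k = np.array([120.0, 80.0, 2.0, -1.0])

# H keeps only the measurable part of the state (the position)
H = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

z_k = H @ x_k        # noise-free measurement: (120, 80)
print(z_k)
```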

20

Robust tracking of objects

• Measurement noise error covariance:
  R_k = \begin{pmatrix} e_x^2 & 0 \\ 0 & e_y^2 \end{pmatrix}

• Temporal matrix:
  A = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

• Process noise error covariance:
  Q_k = a \begin{pmatrix} (\Delta t^3/3)\, I & (\Delta t^2/2)\, I \\ (\Delta t^2/2)\, I & \Delta t\, I \end{pmatrix}

• The parameter a affects the computation speed (a large a increases the uncertainty and therefore the size of the search regions).
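A sketch constructing these matrices for a time step dt, using the constant-velocity block form of Q shown above; dt, a, e_x and e_y are placeholder parameters:

```python
import numpy as np

def tracking_matrices(dt, a, e_x, e_y):
    """Build A, Q, R for the constant-velocity tracker (placeholder parameters)."""
    I2 = np.eye(2)
    # temporal matrix A: position advances by velocity * dt, velocity is constant
    A = np.block([[I2, dt * I2],
                  [np.zeros((2, 2)), I2]])
    # process noise covariance Q, scaled by a (larger a = larger search regions)
    Q = a * np.block([[dt**3 / 3 * I2, dt**2 / 2 * I2],
                      [dt**2 / 2 * I2, dt * I2]])
    # measurement noise covariance R from the measurement errors e_x, e_y
    R = np.diag([e_x**2, e_y**2])
    return A, Q, R
```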

21

Form of the temporal matrix A

• Matrix A relates the a posteriori state estimate \hat{x}_{k-1} to the a priori state estimate \hat{x}_k^-.

• The new a priori state estimate requires the temporal derivative
  \hat{x}'_k = \frac{d\hat{x}_k}{dk} = \lim_{\Delta k \to 0} \frac{\hat{x}_k - \hat{x}_{k-\Delta k}}{\Delta k}

• According to a Taylor series we can write
  \hat{x}_k = \hat{x}_{k-1} + \Delta k\, \hat{x}'_{k-1} + r_k
  which, for the state (\hat{x}, \hat{y}, \hat{x}', \hat{y}')^T, gives
  \hat{x}_k^- = A \hat{x}_{k-1}, \quad \text{with } A = \begin{pmatrix} 1 & 0 & \Delta t & 0 \\ 0 & 1 & 0 & \Delta t \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix}

22

Kalman filter notations

A priori state estimate: \hat{x}_k^-

A posteriori state estimate: \hat{x}_k

A priori estimate error: e_k^- = x_k - \hat{x}_k^-

A posteriori estimate error: e_k = x_k - \hat{x}_k

A priori estimate error covariance: P_k^- = E\{ e_k^- (e_k^-)^T \}

A posteriori estimate error covariance: P_k = E\{ e_k e_k^T \}

23

Kalman filter

• Time update equations (predict)
  – Project the state and covariance estimates forward in time:
    \hat{x}_k^- = A \hat{x}_{k-1}
    P_k^- = A P_{k-1} A^T + Q

• Measurement update equations (correct)
  – Compute the Kalman gain:
    K_k = P_k^- H^T (H P_k^- H^T + R)^{-1}
  – Measure the process: z_k
  – Compute the a posteriori estimate:
    \hat{x}_k = \hat{x}_k^- + K_k (z_k - H \hat{x}_k^-)
  – Compute the a posteriori error covariance estimate:
    P_k = (I - K_k H) P_k^-

• Initial estimates for \hat{x}_{k-1} and P_{k-1}

24

Example

• A 1D point moves with a certain speed on a continuous scale

• We have a sensor that gives only integer values

• Construct a Kalman filter for this process.
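A sketch of this exercise (the motion, the rounding sensor and the noise values are my assumptions, chosen to mirror the numbers on the next slide):

```python
import numpy as np

dt = 1.0
A = np.array([[1.0, dt], [0.0, 1.0]])   # constant-speed motion of the 1D point
H = np.array([[1.0, 0.0]])              # the sensor sees only the position
Q = 0.01 * np.eye(2)                    # small process noise (assumed)
R = np.array([[1.0 / 12.0]])            # variance of rounding to the nearest integer

x_est = np.array([2.0, 0.0])            # initial state estimate (position, speed)
P = np.eye(2)                           # initial error covariance

true_x = np.array([2.2, 4.2])           # true position and speed
for k in range(4):
    true_x = A @ true_x                 # true motion (noise-free here)
    z = np.round(H @ true_x)            # integer-valued sensor reading
    # predict
    x_pred = A @ x_est
    P_pred = A @ P @ A.T + Q
    # correct
    K = P_pred @ H.T @ np.linalg.inv(H @ P_pred @ H.T + R)
    x_est = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(2) - K @ H) @ P_pred
    print(f"k={k + 1}: z={z[0]:.0f}, estimate={x_est}")
```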

25

Example results

p     2.2   6.4   10.6   14.8   19     true position
p'    4.2   4.2   4.2    4.2           true speed
z     2     6     11     15     19     measured position
z'    4     5     4      4             measured gradient
^x    2     5.6   10.9   15.1          estimated position
^x'   0     4.2   5.3    4.2           estimated gradient

K and P converge quickly
