
Estimation of Fast-rate Dynamics using Slow-rate Image Sensor Measurements

Jacopo Tani, Sandipan Mishra, John T. Wen
Rensselaer Polytechnic Institute

Troy, NY 12180

Abstract—Image-based control or visual servoing is gaining popularity as cameras become key feedback measurement mechanisms. State estimation of motion systems through these image sensors is typically performed by studying the evolution of the centroid of an image feature in time. Any blur in the image feature is treated as undesirable noise and is discarded. We here propose an estimation scheme for fast-rate systems with slow-rate image sensor measurements that provides fast-rate estimates of the states by exploiting the additional dynamical information encoded in the image blur. Using the information from the blur for estimation provides two advantages: (1) faster estimate convergence, and (2) robustness to noise in the camera (such as stray light). By modeling the image sensor as a nonlinear integral time-domain to pixel-domain functional, we recast the estimation problem as a multi-rate estimation problem from nonlinear output measurements. Based on this formulation we discuss a progression of estimation schemes, from moment based estimators (MBEs) to the image blur based estimator (IBE). We analyze the convergence properties of MBEs and extend them to the IBE algorithm. Experimental results on a fast-rate beam steering mirror and a slow-rate image sensor verify that using the integrative sensor model and exploiting its structure for state estimation results in (1) faster convergence of the estimation error and (2) lower estimation errors in the presence of stray light.

I. INTRODUCTION

In typical image-based control (or visual servoing), when an image sensor is used to estimate the trajectory of a moving object, the most intuitive approach is acquiring a sequence of sharp images of the target and inferring the motion from the "sampled" position evolution in time, as is typically performed in optical flow methods [1], [2]. This process works better if image measurements are faster, so that motions with higher bandwidths can be estimated, and sharper, so that position estimates can be obtained more accurately. But image sensors are integrative sensors, i.e., they deliver temporally integrated measurements over an exposure period. This results in motion blur when observing a target that is moving fast compared to the exposure window duration [3]. Moreover, a key limitation of image sensors is their frame update rate, constrained by the hardware readout time. Qualitatively speaking, image measurements tend to be slow and blurred.

Image blur makes position estimates ambiguous; therefore the centroid, or first moment of the intensity distribution (image), is used to extract the desired information from the measurement, which is conventionally interpreted as the

Jacopo Tani ([email protected]) and Sandipan Mishra ([email protected]) are with the Mechanical, Aerospace, and Nuclear Eng. Dept. John Wen ([email protected]) is with the Electrical, Computer, and Systems Eng. Dept.

time-average of the position of the feature of interest. Image moments condense the dynamical information contained in the motion blur, and while this information is sufficient to provide estimates of the observed motion, a significant amount of extra information, i.e., the blur distribution itself, is discarded. However, by modeling the motion blur formation process and considering the full blur intensity distribution, we show how to achieve (1) better convergence rates and (2) better robustness to image sensor noise, such as stray light, compared to moment based estimators.

In [4], the blur formation process was modeled as a nonlinear integral transform, i.e., the image sensor transforms temporal information about the motion of the object being imaged into a spatial intensity distribution. This property of the image sensor can be used to extract the output time-history and hence reconstruct motion (output dynamics) during the exposure time.

There is substantial literature on motion blur and motion extraction from image sensor measurements. Deblurring and extraction of motion from blur are ill-posed inverse problems [5], [6]. As an example, the image trace of an object moving left to right is indistinguishable from that of the object moving right to left. All algorithms that have been proposed for extracting motion from blur thus require some regularizing assumptions to eliminate this ill-posedness (such as assuming a known motion profile, constant velocity, constant acceleration, etc.).

In the image processing community, motion blur is traditionally considered an undesirable effect and several deblurring algorithms [5], [7]–[9] have been proposed to sharpen natural images. While these algorithms are effective for image restoration, they are inadequate for accurate dynamics reconstruction since they focus primarily on determining the deblurred image and not the motion field.

Motion is instead typically inferred from image sensor measurements by means of optical flow methods, which "estimate the apparent motion as either instantaneous image velocities or discrete image displacements" [2]. While these methods efficiently estimate the motion field, they usually require a sequence of (sharp) frames and only deliver linear approximations of motion limited by the rate of image acquisition [10]. In [11] optical flow is computed from a single blurred image and in [12] an approach that bears similarity to optical flow is used to deliver linear approximations of motion fields. In [13], an alternative approach to motion field estimation is provided that recognizes the motion information enclosed in image blur, and deals with the ill-posedness of the problem by enclosing


the blurred measurements between two sharp ones.

In [4], blurred image measurements are used to perform motion reconstruction, and regularization is performed by assuming a known signal model (i.e., the output of an LTI system) to perform system state estimation. An extended Kalman filter was used for multi-rate estimation using the first and second moments of a blurred image. While this gives a performance enhancement over the simple first moment estimation schemes used for AO systems [14], [15], there is still a substantial amount of information unused and discarded from the intensity distribution.

The fundamental idea underlying this research is exploiting the natural integrative characteristics of image sensors to infer the dynamics of systems with a camera as a feedback mechanism (i.e., visual servoing). This approach was used for system identification in [16] and for determining state trajectories in [17]. Specifically, in this paper, we consider the general problem of state estimation of a multi-rate system with fast system dynamics and a slow-rate image sensor. We then pose the multi-rate nonlinear estimation problem by using an integrative-sensor model of the camera. To illustrate the approach and experimentally validate our results, we use a setup consisting of a tip-tilt fast steering mirror and a CCD image sensor [16]. The proposed algorithm, based on our preliminary results published in [17], takes advantage of the full intensity distribution provided by an image measurement, i.e., the image blur, as opposed to a typical position metric of the observed moving feature, such as the centroid. The state estimate is obtained by minimizing a cost function on the error between the predicted image and the measured image in the pixel domain. Our regularization assumptions use the dynamic model of the underlying system to remove the ill-posedness. Therefore, we propose and experimentally validate a multi-rate algorithm that uses a model of the underlying system to estimate the state at the fast system dynamics rate, from noisy measurements available at much slower rates.

Extraction of time-history at a fast rate from the slow-rate integrative sensor promises to break the barrier of control bandwidths limited to frame update rates of the image sensor in applications that rely on image sensor feedback.

II. THE INTEGRATIVE IMAGE SENSOR MODEL

We here briefly present the model of an integrative image sensor. We refer to [16] for a more detailed discussion.

A. Image Sensor Model

We let $\eta \in \mathcal{N} := \{(\eta_x,\eta_y)\,|\,\eta \in [(0,0),(\eta_{x_{max}},\eta_{y_{max}})],\ \eta \in \Re^2\}$ parametrize the spatial dimension (pixel domain) $\mathcal{N}$ of the image sensor array. We define $Y^y(\cdot) \in \mathcal{N}$ to be the output of the image sensor, a 2D piecewise continuous intensity map (image), having non-zero values only in a finite region of the $\eta$ plane, generated by $y(\cdot)$, the output path of the system during the exposure window of the camera. Specifically, $y(\cdot)$ is defined over $t \in \mathcal{T} := [T_a, T_a + T_e]$, with $y(\cdot) \in L_2(\mathcal{T})$, where $T_a$ and $T_e$ are the image sensor activation and exposure times respectively.

Remark 1: $y(\cdot)$ here is the system output as it is seen by the image sensor, i.e., it is defined in the pixel domain by including a scaling factor, which scales the image plane motion to the object plane motion.

The image corresponding to the output path $y$, $Y^y(\cdot)$, is formed by integrating the image kernel $\Psi(\cdot)$, corresponding to the image $Y^{y\equiv 0}(\cdot)$, over the exposure period $T_e$ with the acquisition command issued at $T_a$:
$$Y^y(\cdot) = \mathcal{C}^y_\Psi(\cdot) + n(\cdot) \quad (1)$$

where $n(\cdot)$ includes the read and shot noise of the image sensor. The camera transformation $\mathcal{C}^{(\cdot)}_\Psi : L_2(\mathcal{T}) \to L_2(\mathcal{N})$ maps from the time domain to the pixel domain and provides the captured image:
$$\mathcal{C}^y_{\Psi,T_e,T_a,T_s}(\eta) := \int_{T_a}^{T_a+T_e} \Psi(\eta - y(t))\,dt. \quad (2)$$

To simplify notation, we will hereafter include all or part of the subscripts of $\mathcal{C}$ and superscripts of $Y$ as necessary. Note that $\Psi(\eta - y(t))$ is simply the image kernel centered at $y(t)$ and it is assumed to be space invariant, as it depends on the difference $\eta - y(t)$. In the ideal case the image kernel can be assumed to be a point centered at the origin, $\Psi(\cdot) = \delta(\cdot)$. Fig. 1 shows an example of the relationship between the time-domain signal $y(\cdot)$ and the pixel domain image $\mathcal{C}^y_\Psi(\cdot)$.

Fig. 1. Top left: time domain signals. Top right: the corresponding 2D intensity mapping in the pixel domain of the image sensor. Bottom left shows the image kernel $\Psi(\cdot)$ and bottom right the 3D representation of $\mathcal{C}_\Psi(y)$.

B. Image Kernel

In the case of an image generated by a focused laser source or a guide star (in adaptive optics applications), the image kernel, $\Psi$, is simply the point spread function (PSF). In the ideal case, the image of the point source is also a point and $\Psi$ is then a delta function. In the case of a laser light source, the intensity profile is typically a 2D Gaussian. The optical aberration of a point source may also be approximated by a 2D Gaussian profile, which is a good approximation when close to the optical axis. The image kernel model could be defined to account for the aberration when far from the optical axis. To a first approximation, though, the image kernel $\Psi$ can be considered as:

$$\Psi(\eta) = a\,e^{-\frac{1}{2}(\eta-\eta_0)^T \Sigma^{-1}(\eta-\eta_0)} \quad (3)$$

where $\eta_0 = (\eta_{x_0}, \eta_{y_0})$ is the center of the Gaussian, $a$ is the maximum intensity at the peak, and $\Sigma$ is the $2\times 2$ covariance matrix. The image kernel may be determined experimentally by holding $y(t)$ at zero and obtaining $\hat\Psi(\eta) = Y^{y\equiv 0}(\eta)/T_e$. The effect of noise may be reduced by averaging over multiple exposures. $\Psi$ is then the best fit of (3) to the experimental $\hat\Psi$.
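As an illustration, a minimal numerical sketch of evaluating the kernel model (3) and fitting it to a calibration image $Y^{y\equiv 0}/T_e$ is given below. A Python/NumPy environment is assumed; the names `gaussian_kernel` and `fit_kernel_from_calibration`, and the sample-statistics fit, are our own illustration and not part of the original work.

```python
import numpy as np

def gaussian_kernel(eta_x, eta_y, a, eta0, Sigma):
    """Evaluate the Gaussian image kernel of (3) on a pixel grid.

    eta_x, eta_y: 2D arrays from np.meshgrid; a: peak intensity;
    eta0: (2,) center; Sigma: (2, 2) covariance matrix."""
    d = np.stack([eta_x - eta0[0], eta_y - eta0[1]], axis=-1)      # (..., 2)
    Sinv = np.linalg.inv(Sigma)
    quad = np.einsum('...i,ij,...j->...', d, Sinv, d)              # (eta-eta0)^T Sigma^-1 (eta-eta0)
    return a * np.exp(-0.5 * quad)

def fit_kernel_from_calibration(Y_still, T_e):
    """Fit (a, eta0, Sigma) to Psi_hat(eta) = Y^{y=0}(eta) / T_e using the
    sample mean and covariance of the normalized intensity (illustrative fit)."""
    Psi = Y_still / T_e
    ny, nx = Psi.shape
    ex, ey = np.meshgrid(np.arange(nx), np.arange(ny))
    w = Psi / Psi.sum()
    eta0 = np.array([np.sum(w * ex), np.sum(w * ey)])
    dx, dy = ex - eta0[0], ey - eta0[1]
    Sigma = np.array([[np.sum(w * dx * dx), np.sum(w * dx * dy)],
                      [np.sum(w * dx * dy), np.sum(w * dy * dy)]])
    return Psi.max(), eta0, Sigma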

Remark 2: In the image processing community, motion blur is considered an undesirable artifact and is removed through deblurring algorithms, e.g., [3], [18]. Assuming a translational motion in the image sensor plane, a blurred image $B^y(\cdot)$ can be expressed as the pixel domain convolution between a sharp image $S(\cdot)$ and the blur kernel $Y^y(\cdot)$:

$$B^y(\eta) = (S * Y^y)(\eta). \quad (4)$$

While the purpose of the blur kernel in these algorithms is to obtain the sharp image, our approach uses it to extract the instantaneous motion field within the exposure window of the image sensor, as qualitatively shown in Fig. 2.

Fig. 2. In the general case a (blind) deblurring algorithm uses the iteratively determined blur kernel ($Y^y(\cdot)$) to produce a sharp image ($S(\cdot)$) from the blurred measurement ($B^y(\cdot)$). The blur kernel, however, contains motion information on the system output dynamics that can be estimated with the proposed formulation. Images in this figure are taken from [19].

III. PROBLEM FORMULATION

We pose the state estimation problem for systems with fast-rate dynamics and slow-rate measurements from a camera (an integrative sensor).

Let the following SISO system sampled with fast period $T_f$ be the fast rate system, (F.S.), that captures the system dynamics:
$$x(k+1) = A x(k) + B u(k) + B_w w(k)$$
$$y(k) = C x(k) + D u(k), \quad \text{(F.S.)}$$

with $x \in \Re^n$, $y \in \Re$ and $u \in \Re$. The fast rate output $y$ is not directly measured at the fast rate; instead it is to be inferred

Fig. 3. Block diagram of the image blur based state estimation problem.

from slow-rate integrative sensor measurements, $Y_j$, obtained at a rate $T_s$:
$$Y_j^{y_j}(\cdot) = \mathcal{C}_{jT_s}(y_j)(\cdot) + n_j(\cdot), \quad j = 0, 1, \ldots$$
where $y_j = y(T_a + jN + l)$, $l \in \{1, 2, \ldots, N_e\}$, i.e., the output of the fast system within the $j$th exposure window. The camera transformation $\mathcal{C}$ is as defined in (2) and $N, N_e$ are the number of fast time steps in a slow step and in an exposure window respectively, defined in (5). The general estimation problem posed in this paper is to estimate $x(k)$ (or equivalently $x(0)$) from the set of measurements $\{Y_j\}_{j=0}^{J}$, for some $J = \lfloor k/N \rfloor$.

Fig. 3 shows a block diagram of the problem while Fig. 4 shows an example of the image acquisition process.

Fig. 4. A stream of image measurements $\{Y_j\}_{j=0}^{J}$ is acquired as the underlying fast system evolves. In this example the (F.S.) output $y(t) = [y_u(t)\ y_v(t)]$, where the $(u,v)$ directions correspond to $(\eta_x,\eta_y)$ respectively. The long exposure window of the image sensor produces blurred images, which are a measurement of the system output evolution within the exposure windows ($y_j$).

IV. MULTI-RATE SYSTEMS

A multi-rate system is a system in which two or more different sampling times are used (e.g., actuators and sensors have different sampling rates). The problem defined in Sec. III is a typical "dual-rate" system where the inherent dynamics of the (F.S.) are at a fast rate $T_f$ while the measurements are available at a slow rate $T_s$:
$$T_s = N T_f, \quad T_e = N_e T_f \quad (5)$$
such that there are $N$ fast steps in a slow step and $N_e$ fast steps in an exposure window. Clearly $N > N_e$ and $N, N_e \in \mathbb{N}$. We first introduce a notation to take into account the slow and fast rates in a common framework and then the image sensor predictor model cast in this framework. Let the time instant $t$ be expressed as $t = T_a + jT_s + kT_f$, $k = 1, \ldots, N-1$ and $j = 1, \ldots, J$.

Fig. 5. Time parameters of the multi-rate system defined in Section III: the fast sampling rate $T_f$, the slow sampling rate $T_s$, the exposure window $T_e$ and the readout time $T_r$.

Let the notation $v(j,k)$ indicate the evaluation of any variable $v$ at the time instant corresponding to the $k$th fast step within the $j$th slow step. The (F.S.) can then be rewritten as:
$$x(j,k+1) = A x(j,k) + B u(j,k) + W w(j,k)$$
$$y(j,k) = C x(j,k) + D u(j,k) \quad (6)$$

Remark 3:
1) $x(j, N+k) = x(j+1, k)$ ($\Rightarrow x(j+1,0) = x(j,N)$), $\forall j, k$
2) $x(j,0)$ represents the slow system state evolution.

A. Multi-Rate Lifting

Lifting a system involves writing it out in the slow rate time reference. By applying (6) iteratively for $k$ fast steps it is possible to write:
$$x(j,k) = A_{s,k}\, x(j,0) + B_{s,k}\, u_{s,k}(j) + W_{s,k}\, w_{s,k}(j) \quad (7)$$
where:
$$A_{s,k} = A^k, \quad B_{s,k} = \begin{bmatrix} A^{k-1}B & A^{k-2}B & \ldots & B \end{bmatrix}, \quad W_{s,k} = \begin{bmatrix} A^{k-1}W & A^{k-2}W & \ldots & W \end{bmatrix}.$$

We can therefore define the slow system (S.S.), when $k = N$, as:
$$x(j+1,0) = A_s\, x(j,0) + B_s\, u_s(j) + W_s\, w_s(j)$$
$$y(j+1,0) = C\, x(j+1,0) + D\, u(j+1,0), \quad \text{(S.S.)}$$
with $A_{s,N} = A_s$, $B_{s,N} = B_s$:
$$A_s = A^N, \quad B_s = \begin{bmatrix} A^{N-1}B & A^{N-2}B & \ldots & B \end{bmatrix}.$$

Similarly, the slow input $u_{s,k}$ is the stacked version of $k$ fast rate values (as is $w_{s,k}$):
$$u_{s,k}(j) = [u(j,0)\ u(j,1)\ \ldots\ u(j,k-1)]^T \quad (8)$$
$$u_s(j) = u_{s,N}(j) = [u(j,0)\ u(j,1)\ \ldots\ u(j,N-1)]^T. \quad (9)$$
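As a sketch of how the lifted matrices of (7)-(9) could be assembled numerically (Python/NumPy assumed; `lift` is a hypothetical helper, not from the paper):

```python
import numpy as np

def lift(A, B, k):
    """Return A_{s,k} = A^k and B_{s,k} = [A^{k-1}B, A^{k-2}B, ..., B] from (7),
    so that x(j,k) = A_{s,k} x(j,0) + B_{s,k} u_{s,k}(j) in the noise-free case."""
    A_sk = np.linalg.matrix_power(A, k)
    B_sk = np.hstack([np.linalg.matrix_power(A, k - 1 - i) @ B for i in range(k)])
    return A_sk, B_sk

# Example: the slow system (S.S.) matrices are obtained with k = N.
# A_s, B_s = lift(A, B, N)
```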

B. Discretized Camera Model

The camera transformation defined in (2) can be discretized to yield an approximated captured image:
$$\mathcal{C}^{y_j}(\eta) \simeq \sum_{i=0}^{N_e-1} \Psi(\eta - y(j,i))\, T_f. \quad (10)$$

Remark 4:
1) Expression (10) above is used to formulate an image predictor once a model of the image kernel $\Psi(\cdot)$ has been obtained, say by fitting a Gaussian to the experimental $\hat\Psi(\cdot)$ obtained as described in Sec. II.
2) We define the $j$th predicted image $\hat{\mathcal{C}}_j$ as the image generated by the discretized camera model (10) moving with predicted output path $\hat y_j$, or:
$$\hat{\mathcal{C}}^{\hat y}_j(\eta) = \int_{jT_s}^{jT_s+T_e} \hat\Psi(\eta - \hat y_j(t))\,dt \simeq \sum_{i=0}^{N_e-1} \hat\Psi(\eta - \hat y(j,i))\, T_f \quad (11)$$
3) Unless specified otherwise, we will assume $\hat\Psi(\cdot) = \Psi(\cdot)$.
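A minimal one-dimensional sketch of the image predictor (10)-(11) follows, assuming Python/NumPy and a hypothetical callable `psi` for the fitted kernel; the function name `predict_image` is illustrative and not from the paper.

```python
import numpy as np

def predict_image(y_samples, psi, eta, T_f):
    """Discretized camera model (10): sum the (estimated) image kernel
    recentered at each fast-rate output sample within the exposure window.

    y_samples: array of N_e predicted outputs y_hat(j, i) (1D motion);
    psi:       callable, psi(eta) evaluates the kernel on the pixel axis;
    eta:       1D array of pixel coordinates;
    T_f:       fast sampling period."""
    C_hat = np.zeros_like(eta, dtype=float)
    for y_i in y_samples:
        C_hat += psi(eta - y_i) * T_f          # Psi(eta - y(j, i)) * T_f
    return C_hat
```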

V. IMAGE MOMENTS AND MOMENT BASED ESTIMATORS (MBEs)

In this section, we present estimation schemes that use image moments for estimation. Image moments were first introduced in [20] and have been frequently used in image deblurring algorithms [3], [21]. Image moments condense the information on the dynamics of a moving feature, which is encoded within the image blur. In the following subsections, we first recall the definitions of image moments (Sec. V-A) and then the definitions of the estimators [4] that use the first (Sec. V-B) and higher (Sec. V-C) order moments to determine the fast rate evolution of the system state variables while receiving image measurements at the slow rate of the integrative image sensor. We then proceed to investigate their convergence properties in Sec. VI. For clarity of development, the state estimation schemes will be presented for the univariate intensity distribution case. The case of a typical 2D intensity sensor (such as a CCD array) can be developed from a direct extension of these results.

A. Moments of the Intensity Distribution

Letting $y(t) = [y_u(t)\ y_v(t)]$, where $y_u(\cdot)$ and $y_v(\cdot)$ are the components of the motion in the $\eta_x, \eta_y$ directions respectively, the $pq$th moment $m^{\mathcal{C}}_{pq}$ of an intensity distribution $\mathcal{C}(\cdot) \in \mathcal{N}$ is defined as:
$$m^{\mathcal{C}}_{pq} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \eta_x^p\, \eta_y^q\, \mathcal{C}(\eta_x,\eta_y)\, d\eta_x\, d\eta_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \mathcal{C}(\eta_x,\eta_y)\, d\eta_x\, d\eta_y}. \quad (12)$$

For $\Psi(\cdot) = \delta(\cdot)$, the Dirac delta, this expression reduces to:
$$m^{\delta}_{pq} = \frac{1}{T_e}\int_{\tau=T_a}^{T_a+T_e} y_u^p(\tau)\, y_v^q(\tau)\, d\tau. \quad (13)$$

Therefore, for this special case, higher spatial moments of the intensity distribution are time-averages of the product of the powers of the output within the exposure window. In the more general case, given an arbitrary image kernel $\Psi$, the $pq$th moment of $\mathcal{C}^y_\Psi$ can be expressed as:

$$m^{\mathcal{C}}_{pq} = \sum_{r=0}^{p}\sum_{s=0}^{q} \binom{p}{r}\binom{q}{s}\, m^{\Psi}_{p-r,q-s}\, m^{\delta}_{rs} \quad (14)$$

where the $m^{\Psi}_{pq}$ moments of the image kernel $\Psi$ are given by:
$$m^{\Psi}_{pq} = \frac{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \eta_x^p\, \eta_y^q\, \Psi(\eta_x,\eta_y)\, d\eta_x\, d\eta_y}{\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} \Psi(\eta_x,\eta_y)\, d\eta_x\, d\eta_y} \quad (15)$$

and the $m^{\delta}_{pq}$ moments are given by (13). When the motion is one-dimensional in, e.g., the $\eta_x$ direction, (12)-(15) above can be particularized by considering $y_v(t) \equiv \text{const.}$ and $q = 0$, yielding:

$$m_p = \frac{\int_{-\infty}^{\infty} \eta_x^p\, \mathcal{C}(\eta)\, d\eta}{\int_{-\infty}^{\infty} \mathcal{C}(\eta)\, d\eta}, \quad (16)$$

$$m^{\delta}_p = \frac{1}{T_e}\int_{\tau=T_a}^{T_a+T_e} y_u^p(\tau)\, d\tau, \quad (17)$$

$$m^{\mathcal{C}}_p = \sum_{r=0}^{p} \binom{p}{r}\, m^{\Psi}_{p-r}\, m^{\delta}_r, \quad (18)$$

and
$$m^{\Psi}_p = \frac{\int_{-\infty}^{\infty} \eta_x^p\, \Psi(\eta)\, d\eta}{\int_{-\infty}^{\infty} \Psi(\eta)\, d\eta} := \Psi_p. \quad (19)$$

In (16)-(19) above, the $q = 0$ subscript and the double integral have been dropped for economy of notation. Since the estimation algorithms are designed in discrete time, we may approximate the moment equation (18) as a finite sum with a sampling time $T_f$ (we are assuming here that the acquisition time $T_a$ is zero):

$$m_p(j) \simeq \sum_{r=0}^{p} \binom{p}{r} \frac{\sum_{n=0}^{\eta_{x_{max}}} n^{p-r}\, \Psi(n)}{\sum_{s=0}^{\eta_{x_{max}}} \Psi(s)} \cdot \frac{1}{N_e} \sum_{m=0}^{N_e-1} (y(j,m))^r. \quad (20)$$

It is interesting to note that the spatial moments are projections of the intensity profile onto the polynomial basis set $\mathcal{B} = \{b_n : b_n(\eta) = \eta^n, n \in \mathbb{Z}^+\}$. Alternative basis functions may be designed to improve computational tractability based on the image feature of interest (i.e., the image kernel) and/or the nature of the information to be extracted.
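The discrete moment computations of (16)-(20) can be sketched as follows (Python/NumPy assumed; `image_moment` and `predicted_moment` are illustrative names, not from the paper):

```python
import numpy as np
from math import comb

def image_moment(C, p):
    """Discrete counterpart of (16): p-th spatial moment of a univariate
    intensity distribution C sampled on pixels eta = 0, 1, ..., eta_max."""
    eta = np.arange(C.size)
    return np.sum(eta**p * C) / np.sum(C)

def predicted_moment(psi_vals, y_samples, p):
    """Discretized moment predictor (20) for the j-th exposure window:
    combines the kernel moments (19) with time-averages of powers of the
    output (17), following the binomial expansion (18)."""
    eta = np.arange(psi_vals.size)
    psi_mom = lambda k: np.sum(eta**k * psi_vals) / np.sum(psi_vals)   # (19)
    m_delta = lambda r: np.mean(np.asarray(y_samples, dtype=float)**r) # (17), discretized
    return sum(comb(p, r) * psi_mom(p - r) * m_delta(r) for r in range(p + 1))
```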

B. First moment based estimator (MBE1)

Since the first moment is effectively a linear output at the slow rate, the first moment based observer (MBE1) is a Luenberger observer that uses the difference of the measured and predicted first moments ($m_1 - \hat m_1$) to perform state estimate updates at the slow rate. Letting $\hat x(j,k)$ be the state estimate at the $k$th fast step within the $j$th slow one, the state estimate delivered by the MBE1 is:
$$\hat x(j+1,0) = \hat x(j,N) + L\,(m_1(j) - \hat m_1(j)), \quad (21)$$
where $L$ is a constant weight and $\hat x(j,N)$ is the forward propagation ($k = N$) of:
$$\hat x(j,k) = A_{s,k}\,\hat x(j,0) + B_{s,k}\, u_{s,k}(j).$$

From (20) (with $p = 1$) the discretized approximation of the first moment of the $j$th image is:
$$m_1 \simeq \Psi_1 + \frac{1}{N_e}\sum_{i=0}^{N_e-1} y(j,i). \quad (22)$$

By substituting (6) in (22) and assuming no process noise ($W = 0$), the first moment becomes:
$$m_1 = \Psi_1 + C_s\, x(j,0) + D_s\, u_s(j) \quad (23)$$
where:
$$C_s = \begin{bmatrix} C + CA + \ldots + CA^{N_e-1} \end{bmatrix}, \quad \bar D = \begin{bmatrix} \sum_{i=0}^{N_e-1} CA^iB & \sum_{i=0}^{N_e-2} CA^iB & \ldots & CB \end{bmatrix}, \quad D_s = \begin{bmatrix} \bar D & 0_{1\times(N-N_e)} \end{bmatrix} \quad (24)$$

By defining $e(j,0) = x(j,0) - \hat x(j,0)$ and recalling (S.S.), (21) and (23), it is easy to show that:
$$e(j+1,0) = (A_s - L\, C_s)\, e(j,0). \quad (25)$$
Therefore, if the pair $(A_s, C_s)$ is observable, and it is if the fast rate system $(A, C)$ is observable [4], the convergence rate of the error to zero can be guaranteed by choosing $L$ so as to place the poles of $(A_s - L\, C_s)$.

C. Higher moments based estimators (MBEp)

Using the first moment (center of mass) of the intensity distribution only provides a fraction of the information available in the image. For example, by using the second moment of the intensity distribution, we have an additional output equation:
$$y^{(2)}(j) = h(x(j,i)) + n_2(j)$$
where $h(x(j,i))$ is (20) with $p = 2$. Thus, we now have a nonlinear output ($m_2$) in addition to the linear output ($m_1$). In [4], an extended Kalman filter was designed based on the first and second moments of the intensity measurement. Following a similar procedure to obtain the time-varying observer gains $L_p(j)$, higher moments of the intensity distribution, $m_p$, may be used as additional outputs for better state estimation:
$$\hat x_s(j+1) = A_s\, \hat x_s(j) + B_s\, u_s(j) + \begin{bmatrix} L_1(j) \\ L_2(j) \\ \vdots \\ L_p(j) \end{bmatrix}^T \begin{bmatrix} m_1(j) - \hat m_1(j) \\ m_2(j) - \hat m_2(j) \\ \vdots \\ m_p(j) - \hat m_p(j) \end{bmatrix}$$


VI. LOCAL CONVERGENCE PROPERTIES OF MBEp

In order to gain a better understanding of the conditions underlying the successful utilization of an MBE, we now analyze the convergence properties of the MBE$_p$ introduced in Sec. V-C, specifically in relation to what conditions allow for a unique estimate of the system states when evaluating $p$ moments of a single image (Proposition 1). We determine, interestingly, that sufficient conditions for local observability encompass notions of "diversity of measurements" and "quantitative sufficiency of measurement", as well as the usual observability condition of the (F.S.). The general sense of these findings is that as the (F.S.) order $n$ grows, longer exposure windows ($N_e$) (quantitative sufficiency of measurement) and higher moment orders ($p$) are necessary in order to solve the estimation problem through a single image measurement. Additionally, the measured path of the output, i.e., the output within the exposure window $y_j$, should not "wrap" too much during the exposure window (diversity of measurements), as this would impair the estimation capabilities of the MBE$_p$. We then move beyond the single image, studying the convergence conditions when computing $p$ moments for a succession of $J$ images, determining an additive property.

Proposition 1 (Necessary and sufficient conditions for MBE$_p$ local convergence, using a single image): Let $(A,B,C,0)$ characterize a discrete LTI system of order $n$ with initial conditions $x_0$ and time step $T_f$. Let moreover $\mathcal{C}^y_{\Psi,T_e,T_a,T_s}(\eta) = \int_{T_a}^{T_a+T_e}\Psi(\eta - y(t))\,dt$ be a blurred image generated by an ideal image kernel, $\Psi(\cdot) = \delta(\cdot)$, with $N_e = \frac{T_e}{T_f} \in \mathbb{N}$. The necessary condition to ensure local convergence to $x_0$ for a moment based estimator using $p$ moments (MBE$_p$) and a single intensity measurement (image) is:
• $\mathrm{rank}\left(\mathcal{O}\, O_{N_e}\right) \geq n$, (N.1.1)

where:
$$\mathcal{O} = \begin{bmatrix} \mathcal{C} \\ \mathcal{C}\mathcal{A} \\ \vdots \\ \mathcal{C}\mathcal{A}^{p-1} \end{bmatrix}, \quad O_{N_e} = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{N_e-1} \end{bmatrix}, \quad (26)$$
$$\mathcal{C} = \begin{bmatrix} 1 & 1 & \ldots & 1 \end{bmatrix} \in \Re^{1\times N_e} \quad (27)$$
$$\mathcal{A}^{p-1} = \frac{p}{N_e}\,\mathrm{diag}\{Cx_0,\ CAx_0,\ \ldots,\ CA^{N_e-1}x_0\}^{p-1} = \frac{p}{N_e}\,\mathrm{diag}\{y(0),\ y(1),\ \ldots,\ y(N_e-1)\}^{p-1} \in \Re^{N_e\times N_e} \quad (28)$$

Sufficient conditions instead are:
• $(A,C)$ is observable; (S.1.1)
• $\min\{p,q\} \geq n$; (S.1.2)
where $q$ is the number of distinct eigenvalues of $\mathcal{A}$.

Proof Refer to Appendix A.

It is interesting to observe that extending the exposure window, and thus increasing the amount of blur, enriches the measurement, increasing the potential for making the fast rate system locally observable. On the other hand, output wrapping, i.e., the number of repetitions of the eigenvalues of $\mathcal{A}$ (which are the $y(j,i)$, $i = \{0,\ldots,N_e-1\}$), reduces such potential, as the nullity of $\mathcal{O}$ increases.
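The necessary condition (N.1.1) can be checked numerically, e.g., with the following sketch (Python/NumPy assumed; `mbep_rank_condition` is a hypothetical helper built directly from (26)-(28) and the gradient formula (41)):

```python
import numpy as np

def mbep_rank_condition(A, C, x0, p, Ne):
    """Check the necessary condition (N.1.1): rank(O * O_Ne) >= n, with
    O_Ne from (26) and the rows of O given by the moment gradients (41),
    i.e., ((k+1)/Ne) * [y(0)^k, ..., y(Ne-1)^k] for k = 0, ..., p-1."""
    n = A.shape[0]
    O_Ne = np.vstack([C @ np.linalg.matrix_power(A, m) for m in range(Ne)])  # Ne x n
    y = (O_Ne @ x0).ravel()                       # noise-free outputs y(0), ..., y(Ne-1)
    rows = [((k + 1) / Ne) * y**k for k in range(p)]
    O = np.vstack(rows)                           # p x Ne, cf. (27)-(28)
    return np.linalg.matrix_rank(O @ O_Ne) >= n
```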

Proposition 2 (Necessary and sufficient conditions for MBE$_p$ local convergence, using a sequence of multiple images): Let the hypotheses of Proposition 1 hold, but with $J$ intensity measurements available. By letting
$$\mathcal{A}_j^{p-1} = \frac{p}{N_e}\,\mathrm{diag}\{y(j,0),\ y(j,1),\ \ldots,\ y(j,N_e-1)\}^{p-1}, \quad (29)$$
the necessary condition for local convergence becomes:
• $\mathrm{rank}\left(\mathcal{O}_{i\to j}\, O_{i\to j, N_e}\right) \geq n$, (N.2.1)

C =

[C 00 C

], Ai→ j =

[Ai 00 A j

], Oi→ j,Ne =

[ONeAiTr

ONe A jTr

],

(30)

and:

Oi→ j =

C

CAi→ j...

CAp−1i→ j

. (31)

Sufficient conditions for local convergence are, instead:
• $(A,C)$ is observable; (S.2.1)
• $\sum_{j=1}^{J} \min\{p, q_j\} \geq n$, (S.2.2)
where $q_j$ is the number of distinct eigenvalues of $\mathcal{A}_j$.

Proof Refer to Appendix B.

Proposition 2 highlights that the sufficiency conditions determined in Proposition 1 must hold over all the image measurements considered, and not for each image.

VII. IMAGE BLUR BASED ESTIMATOR (IBE)

In this section we introduce the Image Blur based Estimator (IBE) and extend to it the convergence properties determined in Sec. VI for the moment based estimators, through Hu's uniqueness theorem [20]. As shown in Sec. VI for moments, extending the exposure window of the image sensor provides an enriched measurement in the form of image blur and therefore increased potential for estimation. In the most general case of an intensity distribution, it is logical to use the entire intensity profile for obtaining the best possible state estimate instead of spatial moments, which compress the dynamic information of the system output $y$ into a sequence of numbers. Hu's uniqueness theorem in [20] provides the link we need to extend the properties discussed in Sec. VI to full intensity distributions. We report here Hu's theorem, simplifying it to the one-dimensional case and adjusting the notation to that used in this paper:

Theorem 7.1 (Hu's Uniqueness Theorem): Let $\mathcal{C}^y(\eta)$ be a piecewise continuous, therefore bounded, function that can have nonzero values only in a finite part of the $\eta$ plane; then moments of all orders exist and the following uniqueness theorem can be proved: the moment sequence $\{m_p\}_{p=0}^{\infty}$ is uniquely determined by $\mathcal{C}^y(\eta)$; and conversely, $\mathcal{C}^y(\eta)$ is uniquely determined by $\{m_p\}_{p=0}^{\infty}$.

Remark 5: From Hu's theorem we understand that considering the difference in the intensity distributions is equivalent to considering the difference in the countably infinite sequence of moments, i.e.:
$$\big(Y_j(\cdot) - \hat{\mathcal{C}}^{\hat y_j}(\cdot)\big) = \lim_{p\to\infty}\left(\begin{bmatrix} m_1(x_0) \\ m_2(x_0) \\ \vdots \\ m_p(x_0)\end{bmatrix} - \begin{bmatrix} m_1(\hat x_0) \\ m_2(\hat x_0) \\ \vdots \\ m_p(\hat x_0)\end{bmatrix}\right).$$

But from the analysis of the MBE properties we concluded that a finite number ($p$) of moments is sufficient for achieving local observability (Sec. VI); therefore the IBE estimators will converge to the correct estimate when the appropriate conditions expressed in Propositions 1 and 2 are satisfied.

A. IBE

An image blur based estimation algorithm can now be devised that uses the error between the measured and predicted intensity profiles, i.e., $\big(Y_j(\cdot) - \hat{\mathcal{C}}^{\hat y_j}(\cdot)\big)$, to drive the estimation of the state. Note that $Y_j(\cdot)$ is the measured image at the $j$th slow step and $\hat{\mathcal{C}}^{\hat y_j}(\cdot)$ is the predicted one using the estimated state $\hat x(j,0)$. Adopting the notation $[\cdot](j,0) = [\cdot]_0(j)$, we let the fast rate estimate $\hat x(j,k)$ be the forward propagation of the estimated state at the start of each slow step: $\hat x_0^{opt}(j) = \hat x_0(j) + \delta\hat x_0^{opt}(j)$, where $\hat x_0(0)$ is an initial guess of the state and:
$$\delta\hat x_0^{opt}(j) = \arg\min_{\delta\hat x_0(j)\in\Re^n} \left\| Y_j(\cdot) - \hat{\mathcal{C}}^{\hat y_j}(\cdot)\right\|^2$$
subject to:
$$\hat y_j(i) = C A_{s,i}\big(\hat x_0(j) + \delta\hat x_0(j)\big) + B_{s,i}\, u_{s,i}(j), \quad i = \{0,\ldots,N_e-1\},\ \text{for any } j. \quad (32)$$

where $\|\cdot\|$ is an appropriate norm and $\hat y_j(i)$ is the output of the (F.S.) within the $j$th exposure window. The predicted images $\hat{\mathcal{C}}^{\hat y_j}$ are obtained through the image predictor defined in (10). The initial estimate at the next slow step is then taken as $\hat x(j+1,0) = \hat x^{opt}(j,N) = A_s\, \hat x_0^{opt}(j) + B_s\, u_s(j)$. Fig. 6 shows a block diagram of the IBE.

This estimation scheme is posed as finding the states at the beginning of each slow step, $\hat x_0(j)$, that match the blur kernel prediction to the measurement at each slow step. In a noiseless scenario, solving the above minimization problem for any $j$th measurement, if the conditions introduced in Proposition 2 are met, will grant deadbeat convergence to the real state.

Fig. 6. Block diagram of the image blur based estimator.

VIII. EXPERIMENTAL APPARATUS AND PROCEDURE

To validate the proposed estimation scheme we have constructed the experimental setup shown in Fig. 7. A laser beam is bounced off a fast steering mirror (FSM) with the resulting image captured by a CCD camera. The mirror is also equipped with a high bandwidth position sensing diode (PSD) allowing comparison of the image based techniques proposed in this paper with a high quality reference, i.e., the PSD readings will be assumed to be the "true" fast-rate system positions and used for validation. The FSM is actuated at a fast rate $T_f = 1$ ms while image measurements are available at a slow rate of $T_s = 400$ ms.

Fig. 7. Experimental apparatus: (a) a CCD camera, a laser source (bottom left), a fast steering mirror (right) and a beam splitter; (b) shows the optical path representation. The PSDs are built into the FSM.

For the sake of brevity we refer to [16] for a more detailed description of the experimental apparatus and procedure, where the same setup was used for demonstrating system identification using slow rate image sensors.

IX. EXPERIMENTAL RESULTS

In this section we first briefly describe the algorithm, a Newton method with line search, that was used to solve the optimization problem (32). We then present the results of its application to a synthetic image (Sec. IX-B) in order to validate the IBE. Finally, we present experimental results showing convergence in Sec. IX-C, and robustness to synthetic stray light in Sec. IX-D.

A. IBE minimization problem solver algorithm

The IBE minimization problem (32) is solved through an iterative Newton algorithm with line search, i.e., for every $i$th iteration $\hat x_0^{(i+1)} = \hat x_0^{(i)} + \delta\hat x_0^{(i)}$, with:
$$\delta\hat x_0^{(i)} = \alpha\left(\nabla_{\hat x_0^{(i)}}\hat{\mathcal{C}}(\hat x_0^{(i)})^T\, \nabla_{\hat x_0^{(i)}}\hat{\mathcal{C}}(\hat x_0^{(i)})\right)^{-1} \nabla_{\hat x_0^{(i)}}\hat{\mathcal{C}}(\hat x_0^{(i)})^T\, E(\hat x_0^{(i)})$$

where $\nabla_{\hat x_0^{(i)}}\hat{\mathcal{C}}(\hat x_0^{(i)})$ is obtained by stacking up the evaluation of the gradient of the predicted image with respect to the estimated initial conditions at every pixel:
$$\nabla_{\hat x_0^{(i)}}\hat{\mathcal{C}}(\hat x_0^{(i)}) = \begin{bmatrix} \frac{\partial\hat{\mathcal{C}}(\hat x_0^{(i)})}{\partial\hat x_{0_1}}(1) & \frac{\partial\hat{\mathcal{C}}(\hat x_0^{(i)})}{\partial\hat x_{0_2}}(1) \\ \vdots & \vdots \\ \frac{\partial\hat{\mathcal{C}}(\hat x_0^{(i)})}{\partial\hat x_{0_1}}(\eta_{x_{max}}) & \frac{\partial\hat{\mathcal{C}}(\hat x_0^{(i)})}{\partial\hat x_{0_2}}(\eta_{x_{max}}) \end{bmatrix}.$$
Similarly, $E(\hat x_0^{(i)})$ is the stacked vector of the errors $e_{\mathcal{C}}(\eta) = \mathcal{C}(x_0)(\eta) - \hat{\mathcal{C}}(\hat x_0)(\eta)$ for $\eta = 1,\ldots,\eta_{x_{max}}$. At every $i$th iteration, $\alpha\in[0,1]$ is determined so as to minimize the cost function at that iteration.

B. IBE validation

For this simulation and the following experimental results, the following timing parameters were used: $T_f = 1$ ms, $T_a = 1$ s, $N_e = 100$, $N = 400$. The (F.S.) is a second order system ($n = 2$), with its continuous time version characterized by:
$$A = \begin{bmatrix} 0 & -400 \\ 1 & -0.6 \end{bmatrix},\quad B = \begin{bmatrix} 1 \\ 0 \end{bmatrix},\quad C = \begin{bmatrix} 0 & 900 \end{bmatrix},\quad D = 0.$$

Fig. 8 shows the evolution of the cost function $J(i) = \|\mathcal{C}(x_0) - \hat{\mathcal{C}}(\hat x_0^{(i)})\|_2^2$ and the respective state estimation errors $e_m(i) = |x_{0,m} - \hat x_{0,m}^{(i)}|$, $m = 1,2$, as a function of the solver iterations. The zeroth iteration represents the initial conditions. In this ideal scenario perfect convergence is achieved.

C. Convergence Results

We here provide experimental results of the application of the estimation algorithms discussed in this manuscript. We compare the performances of: the MBE1, the first order moment based estimator described in Sec. V-B; the MBE2, the moment based estimator described in Sec. V-C ($p = 2$) that uses the first and second moments of the intensity distributions to yield the state estimate; and the IBE, the image blur based estimator (Sec. VII-A) that uses the full intensity distribution to determine the state estimate. We distinguish between the IBE$_{apriori}$ estimator, being the a priori state estimate using the whole intensity distribution of the image measurement, which is delayed by one slow step with respect to the measurements on which the estimate is based, and the IBE$_{aposteriori}$, being the a posteriori state estimate. The relevant parameters employed are defined in Sec. IX-B.

Since during these experiments $y(t) = \begin{bmatrix} y_u(t) \\ \eta_{y_0}\end{bmatrix}$, we consider as image sensor measurements only the univariate intensity distributions $\{Y^{y_u}(\eta_x, \eta_{y_0})\}_{j=0}^{J}$, with $\eta_{y_0}$ being the $\eta_y$ coordinate of the peak of the Gaussian image kernel $\Psi$; the total number of slow steps considered is $J = 30$.

Fig. 9 shows an overview of the first 15 experimental output measurements and a detailed highlight of the output motion within the second slow step ($j = 2$), which is when the first estimate update is implemented for all the estimators.

Fig. 8. Convergence of the cost function $J = \|\mathcal{C}(x_0) - \hat{\mathcal{C}}(\hat x_0^{(i)})\|_2^2$ and tracking errors $e_m(i) = |x_{0,m} - \hat x_{0,m}^{(i)}|$, $m = 1,2$, for a single image measurement, applying the IBE estimator (Sec. VII-A) in the ideal scenario, i.e., assuming perfect knowledge of the system model and of the image sensor. This figure shows that perfect convergence of both state variables to the true values is achieved under ideal conditions with a single image.

We do not include the IBE$_{aposteriori}$ output estimate for clarity of representation, as it would be indistinguishable from the "true" output motion $y_{PSD}$.

The evolution of the cost function $J(j) = \|Y^{y_u,j}(\eta_x,\eta_{y_0}) - \hat{\mathcal{C}}^{\hat y_u,j}(\eta_x,\eta_{y_0})\|_2^2$ for the observers as a function of the slow steps is instead shown in Fig. 10. It is noted that while the IBE observers provide their estimate based on the minimization of this cost function, the MBEs do not, and $J(j)$ is computed after the estimate update for the sake of comparison only. These results highlight that the IBE provides a deadbeat estimate, while the MBEs will always display a transient in the convergence, regardless of the choice of the $L_p$ gains.

Figure 11 reports the evolution of the state estimation errors for the different estimators, normalized with respect to the initial error. The estimation error is defined as the two-norm of the difference of the fast rate evolutions, i.e.:
$$e_i(j) = \frac{\|x_{i_{PSD}}(j,k) - \hat x_{i_{(\cdot)}}(j,k)\|_2}{\|x_{i_{PSD}}(0,k) - \hat x_{i_{(\cdot)}}(0,k)\|_2}, \quad i = 1,2. \quad (33)$$

The second state $x_2$ corresponds to the system output, which is measured by the PSD sensors embedded in the fast steering mirror and assumed to be the true position of the (F.S.), as more thoroughly explained in [16].

Fig. 9. System output PSD measurements ($y_{PSD}$) and estimator outputs ($\hat y_{(\cdot)}$) are shown in the top figure. A detail of the second slow step (highlighted in the box), when the first correction is performed based on the information gathered at the first measurement, is shown in the bottom figure. The performance of the IBE$_{aposteriori}$ is not shown for clarity of representation, as it yields a deadbeat estimate.

Fig. 10. Convergence of the cost function $J(j) = \|Y^{y_u,j}(\eta_x,\eta_{y_0}) - \hat{\mathcal{C}}^{\hat y_u,j}(\eta_x,\eta_{y_0})\|_2^2$ for the different estimators. Consistently with the previous results (Fig. 11), this figure highlights the significantly lower cost function residual of the IBE compared to the MBE1,2 after even only a single image measurement has been received.

Since we do not have a velocity sensor available in our hardware setup, we numerically differentiate and filter $y_{PSD}$ to obtain a reference measurement $x_{1,PSD}$.

D. Robustness Results

An additional advantage of the IBE is its increased robustness to disturbances, such as stray light on the image sensor, as compared to the MBE$_p$'s. We show this by adding a synthetic intensity source $Y_d(\eta)$ to the experimental $\{Y_j(\eta)\}_{j=1}^{J}$. We model the stray light disturbance as a Gaussian intensity source:
$$Y_d(\eta) = d\, e^{-\frac{1}{2}(\eta-\eta_d)^T \Sigma_d^{-1}(\eta-\eta_d)} \quad (34)$$

Fig. 11. Evolution of the state estimation errors (33) for the different estimators. These results highlight the superior transient performance of the IBE over the MBEs.

where the center is placed at $\eta_d = (0, \eta_{y_0})$ and $\Sigma_d = \mathrm{diag}(\sigma_d)$. We recall that $\eta_{y_0}$ is the $\eta_y$ coordinate of the center of the image kernel $\Psi$, as introduced in Sec. II-B. Two scenarios are studied: increasing the disturbance intensity $d$ while keeping $\sigma_d$ constant, and vice versa.
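A univariate sketch of adding the synthetic stray light source (34) to a measured intensity distribution (Python/NumPy assumed; `add_stray_light` is an illustrative helper, not from the paper):

```python
import numpy as np

def add_stray_light(Y, d, eta_d, sigma_d):
    """Add the synthetic stray light of (34), restricted to the univariate slice
    Y(eta_x, eta_y0): a Gaussian of peak d centered at eta_d with spread sigma_d."""
    eta = np.arange(Y.size)
    return Y + d * np.exp(-0.5 * ((eta - eta_d) / sigma_d) ** 2)
```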

Figure 12 shows the first intensity measurement $Y_1(\cdot, \eta_{y_0})$ in the presence of an increasingly intense stray light source. Stray light causes a systematic bias in the image moment calculations and hence greatly reduces the efficacy of the MBEs, even with a dim disturbance ($d = 4$), as shown in Fig. 13. The dotted line in Fig. 12 represents the initial predicted distribution $\hat{\mathcal{C}}^{\hat y_1}_1(\cdot, \eta_{y_0})$ based on the state estimate guess $\hat x_0(0)$. It is observed that the disturbance distribution $Y_d(\cdot, \eta_{y_0})$ does not superimpose on the initial guess $\hat{\mathcal{C}}^{\hat y_1}_1(\cdot, \eta_{y_0})$; therefore the IBE estimates are not affected by it.

Fig. 12. First intensity measurement with added stray light, $Y_{1,d}(\eta_x, \eta_{y_0}) = Y_1(\eta_x, \eta_{y_0}) + Y_d(\eta_x, \eta_{y_0})$, as a function of the disturbance intensity $d$ ($d = 5$ to $1500$). The dotted line shows the initial intensity distribution prediction $\hat{\mathcal{C}}^{\hat y_1}$. In these cases the disturbance distribution does not superimpose on the motion blur measurement.

As the intensity of the disturbance source increases, the MBE estimates rapidly deteriorate while the IBE state estimate is substantially unaffected, as shown in Fig. 14, where the means of the normalized steady state estimation errors are reported for the different estimators, along with their standard deviations. The steady state error means $e_{ss,(\cdot)}$ are obtained by averaging the $e_i(j)$ in (33) for $j = 3, \ldots, 30$.

A different scenario unfolds when the disturbance intensity $d$ is kept constant ($d = 500$) while the distribution is changed through $\sigma_d$.

Fig. 13. Normalized state estimation errors (33) for the different observers in the presence of stray light with $d = 4$. The moment based observers designed in Sec. IX-C provide poorer estimates than the IBE, which is not significantly affected by the disturbance.

Fig. 14. Mean steady state estimation errors with standard deviations for the different observers, as a function of the increasing stray light peak amplitude $d$. It is observed that the image blur based estimator is unaffected by the increasing disturbance while the MBE estimates deteriorate quickly. The curves are slightly offset for clarity of visualization.

Fig. 15 shows the resulting first image sensor measurement for $\sigma_d = 100, 350, 600, 850$. As the disturbance increasingly modifies the motion blur intensity distribution, the IBE estimates clearly deteriorate, as shown in Fig. 16 for three cases of $\sigma_d$. Fig. 17 reports the mean steady state estimation errors (33) with their standard deviations.

X. CONCLUSIONS AND FUTURE WORK

In this paper, local convergence conditions of $p$th order moment based estimators (MBE$_p$) are first formalized. An image blur based estimator (IBE) is then proposed that inherits the convergence properties of the MBE$_p$'s, providing deadbeat state estimation by means of minimization of the spatial (pixel domain) error between the intensity distributions of image predictions and measurements. Experimental results are provided comparing the estimation performances of a first order moment based estimator (MBE1), a second order estimator (MBE2) using the first and second moments of the intensity distributions, and an IBE, demonstrating the enhanced performance of the latter.

Fig. 15. Intensity measurement $Y(\eta_x, \eta_{y_0}) = Y_1(\eta_x, \eta_{y_0}) + Y_d(\eta_x, \eta_{y_0})$ as a function of $\sigma_d$.

Fig. 16. Example of IBE estimation errors as a function of the slow steps for different stray light distributions ($\sigma_d = 350, 600, 850$).

Fig. 17. IBE steady state estimation errors in the presence of superimposing stray light.

We conclude from the above analysis and experiments that exploiting the motion information encoded in image blur provides enhanced estimation performance compared to estimators based on first or higher order moments of the measured intensity distributions. The most significant drawback of the proposed IBE is clearly the intense computational effort required to completely solve the minimization problem, which allows only for an offline implementation. The efficient solution of such a minimization problem was beyond the scope of this manuscript. In future work, an iterative version of the IBE, inspired by moving horizon estimation techniques [22], will be studied, allowing for online implementation and use of the provided estimate for feedback control purposes.

APPENDIX A
MBE$_p$ LOCAL CONVERGENCE FOR A SINGLE IMAGE: PROOF

Proof: Let us consider, without loss of generality, $u(t) \equiv 0$ and $T_a = 0$. The $p$th moment of an intensity distribution can be expressed, from (17), as:
$$m^{\delta}_p(x_0) = \frac{1}{T_e}\int_{\tau=0}^{T_e} y^p(\tau)\,d\tau \simeq \frac{1}{N_e}\sum_{m=0}^{N_e-1}(y(m))^p \quad (35)$$
$$= \frac{1}{N_e}\sum_{m=0}^{N_e-1}(CA^m x_0)^p, \quad (36)$$

where the dependency on the initial condition has been explicitly added to the moment notation. We recall that $x \in \Re^n$ and $y \in \Re$. Moments higher than the first are clearly nonlinear in the initial state $x_0$; we therefore proceed to linearize these nonlinear outputs, localizing our study through a Taylor series approximation. The first order variation of the $p$th moment with respect to the initial state $x_0$ can be expressed as:

$$m^{\delta}_p(x_0 + \delta x_0) = m^{\delta}_p(x_0) + \nabla_{x_0}m^{\delta}_p(x_0)\,\delta x_0 + o(\delta x_0^2) \quad (37)$$

where the gradient operator $\nabla_{x_0}(\cdot) : \Re \to \Re^n$ is the row vector:
$$\nabla_{x_0}(\cdot) = \begin{bmatrix} \frac{\partial(\cdot)}{\partial x_{0,1}} & \frac{\partial(\cdot)}{\partial x_{0,2}} & \ldots & \frac{\partial(\cdot)}{\partial x_{0,n}} \end{bmatrix}. \quad (38)$$

The gradient of the $p$th moment of an intensity distribution is therefore:
$$\nabla_{x_0}m^{\delta}_p(x_0) = \frac{p}{N_e}\sum_{m=0}^{N_e-1}(CA^m x_0)^{p-1}\, CA^m \quad (39)$$
$$= \frac{p}{N_e}\sum_{m=0}^{N_e-1}(y(m))^{p-1}\, CA^m, \quad (40)$$
and by using (26) and (28), it can be re-written as:
$$\nabla_{x_0}m^{\delta}_p(x_0) = \mathcal{C}\mathcal{A}^{p-1}O_{N_e}. \quad (41)$$

Evaluating a collection of moments from an image then yields, neglecting the $o(\delta x_0^2)$ terms (the subscript $[\cdot]_{x_0}$ is added to highlight the dependency of $[\cdot]$ on $x_0$):
$$\begin{bmatrix} m^{\delta}_1(x_0+\delta x_0) \\ m^{\delta}_2(x_0+\delta x_0) \\ \vdots \\ m^{\delta}_p(x_0+\delta x_0) \end{bmatrix} \simeq \begin{bmatrix} m^{\delta}_1(x_0) \\ m^{\delta}_2(x_0) \\ \vdots \\ m^{\delta}_p(x_0) \end{bmatrix} + \begin{bmatrix} \mathcal{C} \\ \mathcal{C}\mathcal{A} \\ \vdots \\ \mathcal{C}\mathcal{A}^{p-1} \end{bmatrix}_{x_0} \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{N_e-1} \end{bmatrix}\delta x_0.$$

Then the problem of local observability finally depends on the rank of the "extended" observability matrix:
$$\mathcal{O}\,O_{N_e} := \underbrace{\begin{bmatrix} \mathcal{C} \\ \mathcal{C}\mathcal{A} \\ \vdots \\ \mathcal{C}\mathcal{A}^{p-1} \end{bmatrix}_{x_0}}_{p\times N_e} \underbrace{\begin{bmatrix} C \\ CA \\ \vdots \\ CA^{N_e-1} \end{bmatrix}}_{N_e\times n} \quad (42)$$

from which the necessity condition (N.1.1) follows.

In order to derive the sufficient conditions, we first observe that if (S.1.1) is verified then $\mathrm{rank}(O_{N_e}) = n$. Therefore, if $\mathrm{rank}(\mathcal{O}) \geq n$, condition (N.1.1) would be satisfied. The rank of $\mathcal{O}$ is the dimension of the observable subspace of the pseudo-system $\dot{\bar x} = \mathcal{A}\bar x$, $\bar y = \mathcal{C}\bar x$. Letting $r$ be the dimension of the unobservable subspace, it is always possible to find a state transformation matrix $T \in \Re^{N_e\times N_e}$ to separate the observable subsystem from the unobservable one. $T$ is such that:

$$T^{-1} = \begin{bmatrix} w_1 & \ldots & w_{N_e-r} & w_{N_e-r+1} & \ldots & w_{N_e} \end{bmatrix} \quad (43)$$

where $\{w_{N_e-r+1}, \ldots, w_{N_e}\}$ are a basis for $\mathcal{I} = \ker(\mathcal{O})$ and $\{w_1, \ldots, w_{N_e-r}\}$ an arbitrary complementary basis such that $T^{-1}$ is invertible. Moreover, $w_i \in \Re^{N_e\times 1}$, $\forall i = 1, \ldots, N_e$. The application of this transformation yields $\bar{\mathcal{A}} = T\mathcal{A}T^{-1}$ and $\bar{\mathcal{C}} = \mathcal{C}T^{-1}$. It is easy to show that:

$$\underbrace{\bar{\mathcal{A}}}_{N_e\times N_e} = \begin{bmatrix} \underbrace{\mathcal{A}_{11}}_{(N_e-r)\times(N_e-r)} & \underbrace{0}_{(N_e-r)\times r} \\ \underbrace{\mathcal{A}_{21}}_{r\times(N_e-r)} & \underbrace{\mathcal{A}_{22}}_{r\times r} \end{bmatrix}, \quad \underbrace{\bar{\mathcal{C}}}_{1\times N_e} = \begin{bmatrix} \underbrace{\mathcal{C}_{11}}_{1\times(N_e-r)} & \underbrace{0}_{1\times r} \end{bmatrix}$$

and therefore:
$$\underbrace{\mathcal{O}}_{p\times N_e} = \mathcal{O}T^{-1}\underbrace{T}_{N_e\times N_e} = \begin{bmatrix} \mathcal{C}T^{-1} \\ \mathcal{C}\mathcal{A}T^{-1} \\ \vdots \\ \mathcal{C}\mathcal{A}^{p-1}T^{-1} \end{bmatrix}T = \begin{bmatrix} \bar{\mathcal{C}} \\ \bar{\mathcal{C}}\bar{\mathcal{A}} \\ \vdots \\ \bar{\mathcal{C}}\bar{\mathcal{A}}^{p-1} \end{bmatrix}T \quad (44)$$
$$= \begin{bmatrix} \mathcal{C}_{11} & 0 \\ \mathcal{C}_{11}\mathcal{A}_{11} & 0 \\ \vdots & \vdots \\ \mathcal{C}_{11}\mathcal{A}_{11}^{p-1} & \underbrace{0}_{1\times r} \end{bmatrix}T \quad (45)$$
$$= \begin{bmatrix} \underbrace{\bar{\mathcal{O}}}_{p\times(N_e-r)} & \underbrace{0}_{p\times r} \end{bmatrix}\underbrace{T}_{N_e\times N_e} \quad (46)$$
$$= \underbrace{\bar{\mathcal{O}}}_{p\times(N_e-r)}\underbrace{\bar T}_{(N_e-r)\times N_e}, \quad (47)$$

The "extended" observability matrix (42) therefore becomes:
$$\mathcal{O}\,O_{N_e} = \underbrace{\bar{\mathcal{O}}}_{p\times(N_e-r)}\underbrace{\bar T}_{(N_e-r)\times N_e}\underbrace{O_{N_e}}_{N_e\times n} \quad (48)$$

where $O_{N_e}$ has full column rank $n$ by hypothesis (S.1.1). $\bar T$ is a flat matrix obtained by partitioning the first $(N_e-r)$ rows of the inverse of $T^{-1}$ and has full row rank $N_e-r$, because $T$ is invertible and therefore all its rows are linearly independent. Furthermore, $(\mathcal{C}_{11}, \mathcal{A}_{11})$ is observable by definition and $\mathrm{rank}(\bar{\mathcal{O}}) = \min\{p, N_e-r\}$. Therefore $\min\{p, N_e-r\} \geq n$ implies $\mathrm{rank}(\mathcal{O}\,O_{N_e}) = n$, which yields satisfaction of the necessary condition (N.1.1).


We finally note that $r$ is the number of eigenvalue repetitions of $\mathcal{A}$ and can therefore be expressed as $r = N_e - q$, with $q$ being the number of distinct eigenvalues of $\mathcal{A}$. Since:

$$\begin{bmatrix} \mathcal{C} \\ \mathcal{C}\mathcal{A} \\ \vdots \\ \mathcal{C}\mathcal{A}^{p-1} \end{bmatrix} = \begin{bmatrix} 1 & 1 & \ldots & 1 \\ y(j,0) & y(j,1) & \ldots & y(j,N_e-1) \\ y^2(j,0) & y^2(j,1) & \ldots & y^2(j,N_e-1) \\ \vdots & \vdots & & \vdots \\ y^{p-1}(j,0) & y^{p-1}(j,1) & \ldots & y^{p-1}(j,N_e-1) \end{bmatrix},$$

two of its columns will be equal whenever the output has the same value at different time instants ($y(j,i) = y(j,k)$, $i,k \in \{0,\ldots,N_e-1\}$). But the $y(j,m)$, $m = \{0,\ldots,N_e-1\}$, are the eigenvalues of $\mathcal{A}$, because it is a diagonal matrix with the $y(j,m)$'s on the diagonal. So $r = N_e - q$, where $q$ is the number of distinct eigenvalues of $\mathcal{A}$. It finally follows that $\mathrm{rank}\,\bar{\mathcal{O}} = \min\{p, N_e - (N_e - q)\} = \min\{p, q\}$, i.e., condition (S.1.2). $\blacksquare$

APPENDIX B
MBE$_p$ LOCAL CONVERGENCE FOR MULTIPLE IMAGES: PROOF

Proof: Let us consider, without loss of generality, $u(t) \equiv 0$ and $T_a = 0$. The $p$th moment of the $j$th ($j = 0, \ldots, J-1$) image is:
$$m^{\delta}_{p,j}(x_0) = \frac{1}{N_e}\sum_{m=0}^{N_e-1}(y(j,m))^p.$$

The gradient of the moments with respect to $x_0$, similarly to (41), can be expressed as:
$$\nabla_{x_0}m^{\delta}_{p,j}(x_0) = \mathcal{C}\mathcal{A}_j^{p-1}\, O_{N_e}\, A^{jT_r},$$

where $\mathcal{A}_j$ is defined in (29). For economy of notation we will hereafter elaborate on the case of only two images, but the extension to more images is straightforward. Moreover, we will assume that the number of moments considered for each image is the same ($p$), as well as the length of the exposure window ($N_e$). Letting $\delta m^{\delta}_{p,j} = m^{\delta}_{p,j}(x_0+\delta x_0) - m^{\delta}_{p,j}(x_0)$, we can stack the $p$th moment of the two images and consider:

$$\delta m^{\delta}_p = \begin{bmatrix} \delta m^{\delta}_{p,i} \\ \delta m^{\delta}_{p,j} \end{bmatrix} \simeq \begin{bmatrix} \mathcal{C}\mathcal{A}_i^{p-1}\, O_{N_e}\, A^{iT_r} \\ \mathcal{C}\mathcal{A}_j^{p-1}\, O_{N_e}\, A^{jT_r} \end{bmatrix}\delta x_0 = \mathcal{C}\mathcal{A}_{i\to j}^{p-1}\, O_{i\to j,N_e}\,\delta x_0$$

where the higher order $o(\delta x^2)$ terms have been neglected, and:
$$\mathcal{C} = \begin{bmatrix} \mathcal{C} & 0 \\ 0 & \mathcal{C} \end{bmatrix}, \quad \mathcal{A}_{i\to j} = \begin{bmatrix} \mathcal{A}_i & 0 \\ 0 & \mathcal{A}_j \end{bmatrix}, \quad O_{i\to j,N_e} = \begin{bmatrix} O_{N_e}\, A^{iT_r} \\ O_{N_e}\, A^{jT_r} \end{bmatrix}.$$

Then the evaluation of a collection of $p$ moments becomes:
$$\begin{bmatrix} \delta m^{\delta}_1 \\ \delta m^{\delta}_2 \\ \vdots \\ \delta m^{\delta}_p \end{bmatrix} \simeq \begin{bmatrix} \mathcal{C} \\ \mathcal{C}\mathcal{A}_{i\to j} \\ \vdots \\ \mathcal{C}\mathcal{A}_{i\to j}^{p-1} \end{bmatrix} O_{i\to j,N_e}\,\delta x_0 = \mathcal{O}_{i\to j}\, O_{i\to j,N_e}\,\delta x_0,$$

from which the necessary condition (N.2.1) stems. It is easy to verify, though, that when (S.2.1) is true then $\mathrm{rank}\, O_{i\to j,N_e} = n$. Moreover, $\mathrm{rank}(\mathcal{O}_{i\to j}) = \mathrm{rank}(\mathcal{O}_i) + \mathrm{rank}(\mathcal{O}_j)$, where $\mathrm{rank}(\mathcal{O}_k) = \min\{p, q_k\}$, $k = i,j$, from which the thesis follows. $\blacksquare$

ACKNOWLEDGMENTS

This work was supported in part by the National Science Foundation grant CMMI-1130231, and the Smart Lighting Engineering Research Center, grant EEC-0812056. This work was also supported in part by the Center for Automation Technologies and Systems (CATS) under a block grant from the New York State Empire State Development Division of Science, Technology and Innovation (NYSTAR).

REFERENCES

[1] H.-H. Nagel, "On the estimation of optical flow: Relations between different approaches and some new results," Artificial Intelligence, vol. 33, no. 3, pp. 299–324, 1987.

[2] S. Beauchemin and J. Barron, "The computation of optical flow," ACM Computing Surveys, vol. 27, no. 3, pp. 433–466, 1995.

[3] W.-G. Chen, N. Nandhakumar, and W. N. Martin, "Image motion estimation from motion smear - a new computational model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18, pp. 412–425, 1996.

[4] S. Mishra and J. T. Wen, "Extracting dynamics from blur," 50th IEEE Conference on Decision and Control and European Control Conference, pp. 5995–6000, Dec. 2011.

[5] J. Biemond, R. Lagendijk, and R. Mersereau, "Iterative methods for image deblurring," Proceedings of the IEEE, vol. 78, no. 5, pp. 856–883, May 1990.

[6] M. Bertero, T. A. Poggio, and V. Torre, "Ill-posed problems in early vision," Proceedings of the IEEE, vol. 76, pp. 869–889, 1988.

[7] A. Levin, Y. Weiss, F. Durand, and W. T. Freeman, "Understanding and evaluating blind deconvolution algorithms," IEEE Conference on Computer Vision and Pattern Recognition (CVPR), pp. 1964–1971, 2009.

[8] N. He, K. Lu, B. K. Bao, L. L. Zhang, and J. B. Wang, "Single-image motion deblurring using an adaptive image prior," Information Sciences, 2014.

[9] T. Jiang, F. Yang, Y. Fan, and D. Evans, "A parallel genetic algorithm for cell image segmentation," Electronic Notes in Theoretical Computer Science, vol. 46, no. 2, pp. 138–149, 2001.

[10] J. L. Barron, D. J. Fleet, and S. S. Beauchemin, "Systems and experiment - performance of optical flow techniques," International Journal of Computer Vision, vol. 12, no. 1, pp. 43–77, 1994.

[11] I. M. Rekleitis, "Optical flow recognition from the power spectrum of a single blurred image," Proceedings of the International Conference on Image Processing, vol. 3, pp. 791–794, 1996.

[12] S. Dai and Y. Wu, "Motion from blur," in Computer Vision and Pattern Recognition, 2008. CVPR 2008. IEEE Conference on, 2008, pp. 1–8.

[13] A. Sellent, M. Eisemann, B. Goldlucke, D. Cremers, and M. Magnor, "Motion field estimation from alternate exposure images," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 8, pp. 1577–1589, 2011.

[14] A. Glindemann, S. Hippler, T. Berkefeld, and W. Hackenburg, "Adaptive optics on large telescopes," Kluwer Academic Publishers.

[15] B. C. Platt and R. Shack, "History and principles of Shack-Hartmann wavefront sensing," Journal of Refractive Surgery, vol. 17, no. 5, pp. 573–577, 2001.

[16] J. Tani, S. Mishra, and J. T. Wen, "Identification of fast-rate systems using slow-rate image sensor measurements," IEEE/ASME Transactions on Mechatronics, vol. 19, no. 4, pp. 1343–1351, 2014.

[17] ——, "State estimation of fast-rate systems using slow-rate image sensors," American Control Conference (ACC), pp. 6193–6198, 2013.

[18] Q. Shan, J. Jia, and A. Agarwala, "High-quality motion deblurring from a single image," ACM Transactions on Graphics – Proceedings of ACM, vol. 27, no. 73, 2008.

[19] L. Yuan, J. Sun, L. Quan, and H. Shum, "Image deblurring with blurred/noisy image pairs," ACM Trans. Graph., vol. 26, no. 3, Jul. 2007.

[20] M. K. Hu, "Visual pattern recognition by moment invariants," IRE Trans. Info. Theory, vol. IT-8, pp. 179–187, 1962.

[21] Y. Zhang, C. Wen, and Y. Zhang, "Estimation of motion parameters from blurred images," Pattern Recognition Letters, vol. 21, no. 5, pp. 425–433, 2000.

[22] P. Kuhl, M. Diehl, T. Kraus, J. P. Schloder, and H. G. Bock, "A real-time algorithm for moving horizon state and parameter estimation," Computers and Chemical Engineering, vol. 35, pp. 71–83, 2011.