
Dead Reckoning of a Fixed-Wing UAV with Inertial Navigation Aided by Optical Flow

Lorenzo Fusini, Tor A. Johansen and Thor I. Fossen
Norwegian University of Science and Technology
Centre for Autonomous Operations and Systems

Department of Engineering Cybernetics
Trondheim, Norway

E-mail: {lorenzo.fusini, tor.arne.johansen, thor.fossen}@ntnu.no

Abstract—This paper provides experimental results for dead reckoning of a fixed-wing UAV using a non-linear observer (NLO) and a more recent tool called eXogenous Kalman Filter (XKF), which uses the NLO itself as a first-stage filter. The sensors used are an IMU (accelerometers, inclinometers, and rate gyros), a camera, and an altimeter; the observed states are position, velocity, and attitude. A machine vision system provides the body-fixed velocity of the UAV. Although the calculated velocity is affected by a bias, it is necessary both for estimating the attitude and for bounding the rate of divergence of the position during dead reckoning. Gyro, accelerometer, and optical flow (OF) velocity biases are estimated, but only as long as GNSS is available. When dead reckoning begins, they are frozen at their last calculated value. The experimental results show that the position error grows at a bounded rate with the proposed estimators.

I. INTRODUCTION

In the field of navigation, the most popular methods for estimating position, velocity, and attitude of a vehicle have been the Kalman filter and its variants [1]–[3]: they are characterized by good performance at the cost of a relatively challenging tuning process, a heavy computational footprint, and unclear stability properties in the non-linear case. More recently, non-linear observers have been developed to overcome the drawbacks of solutions based on the Kalman filter, offering computationally light algorithms with well-defined stability properties, often global, that make them robust to disturbances and uncertain initialization [4]–[10]. A recent addition to this set of tools is the exogenous Kalman filter [11]–[13], a two-stage filter made of a non-linear observer in cascade with a linearized time-varying Kalman filter that combines the advantages of each, while also overcoming their individual weaknesses.

When direct information about the current position of the vehicle is not available, one way to obtain an estimate of it is through dead reckoning. At its core, the method requires integrating a measured acceleration (usually available via an IMU) twice, and/or a measured velocity once: in either case, the method suffers from inaccuracies caused by the integration of noisy and biased measurements and by the approximations of the calculations themselves, leading to inevitable drifts in the calculated position. Moreover, the IMU measurements must be rotated from the body to the Earth-fixed frame, which requires attitude estimates. For these reasons it is important to have a good dead reckoning algorithm, in order to reduce as much as possible the displacement between two successive position fixes.

A camera can provide invaluable help in this context. It makes it possible to obtain an estimate of the motion of the vehicle, a process known as visual odometry. There are many contributions on this topic, especially in the field of vision-aided navigation [14]–[17]. Machine vision can be used to calculate the OF from recorded images and to integrate it with other sensors to obtain the body-fixed velocity [9], [18].

The authors have previously proposed a globally exponentially stable (GES) non-linear observer for vision-aided UAV dead reckoning [19], demonstrating that the use of velocity aiding helps to bound the rate of divergence of the calculated position.

A. Contribution of this paper

In this paper an NLO and an XKF for dead reckoning are tested on experimental data and their results compared. Both estimators use the same set of sensors, described in Sec. II-A. The NLO is the GES observer proposed in [19]. Estimators for accelerometer bias and body-fixed velocity bias, together with compensation for the rotation of Earth, are included in the design to increase the accuracy of the estimates and reduce the position drift. The XKF is designed using the same NLO as the first-stage filter. The results help understand the problems related to each estimator in this context, and how to possibly overcome them. The method is experimentally validated by 10 minutes of dead reckoning of a fixed-wing UAV.

II. NOTATION AND PRELIMINARIES

Vectors and matrices are represented by lowercase and uppercase letters, respectively. $X^{-1}$, $X^+$, and $\mathrm{tr}(X)$ denote the inverse, pseudo-inverse, and trace of a matrix, respectively, and $X^T$ and $x^T$ the transpose of a matrix and vector, respectively. An estimated value of $x$ is represented as $\hat{x}$ and the estimation error is defined as $\tilde{x} = x - \hat{x}$; similarly, for a matrix, $\tilde{X} = X - \hat{X}$. The operator $\|\cdot\|$ denotes the Euclidean norm for vectors, and $I_n$ is the identity matrix of order $n$. The function $\mathrm{sat}(\cdot)$ performs a component-wise saturation of its vector or matrix argument to the interval $[-1, 1]$. The operator $S(x)$ transforms the vector $x$ into the corresponding skew-symmetric matrix, such that $S(x)y = x \times y$. The inverse operation is denoted as $\mathrm{vex}(\cdot)$, such that $\mathrm{vex}(S(x)) = x$. For a square matrix $A$, its skew-symmetric part is represented by $P_a(A) = \frac{1}{2}(A - A^T)$.

The reference frames considered in the paper are the body-fixed frame {B} and the North-East-Down (NED) frame {N} (Earth-fixed, considered inertial). The rotation from frame {B} to {N} is represented by the matrix $R^n_b \equiv R \in SO(3)$, where $SO(3)$ represents the Special Orthogonal group.

A vector decomposed in {B} and {N} has superscript $b$ and $n$, respectively. The gravity vector is defined as $g^n = [0, 0, g]$, with $g$ being the local gravitational acceleration. The position vector decomposed in {N} is $p^n = [p^n_N, p^n_E, p^n_D]^T$. The Greek letters $\phi$, $\theta$, and $\psi$ represent the roll, pitch, and yaw angles respectively, defined according to the zyx convention for principal rotations [20]. Subscript $m$ indicates a quantity measured by a sensor. The symbol $b_y$ identifies the bias on the measurement $y$. The Earth's rotation rate is $\omega_e = 7.292115 \cdot 10^{-5}$ rad/s; decomposed in the NED frame it assumes a value that depends on the latitude $\mu$, that is $\omega^n_e = [\cos\mu, 0, -\sin\mu]^T \omega_e$.
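For readers implementing the observers, the following is a minimal Python/NumPy sketch of the operators defined above ($S(\cdot)$, $\mathrm{vex}(\cdot)$, $P_a(\cdot)$, $\mathrm{sat}(\cdot)$) and of the NED decomposition of the Earth rotation rate; the function names are illustrative and not taken from the paper.

```python
import numpy as np

OMEGA_E = 7.292115e-5  # Earth rotation rate (rad/s)

def S(x):
    """Skew-symmetric matrix such that S(x) @ y == np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def vex(A):
    """Inverse of S(.): vex(S(x)) == x for a skew-symmetric A."""
    return np.array([A[2, 1], A[0, 2], A[1, 0]])

def Pa(A):
    """Skew-symmetric part of a square matrix."""
    return 0.5 * (A - A.T)

def sat(X):
    """Component-wise saturation to the interval [-1, 1]."""
    return np.clip(X, -1.0, 1.0)

def earth_rate_ned(mu):
    """Earth rotation rate decomposed in NED at latitude mu (radians)."""
    return OMEGA_E * np.array([np.cos(mu), 0.0, -np.sin(mu)])
```

For example, `earth_rate_ned(np.deg2rad(60.0))` gives the decomposition at a latitude of roughly 60° N.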

A. Measurements and Sensors

The sensor system consists of an IMU (accelerometers, inclinometers, and rate gyros), a camera, an altimeter, and a GNSS receiver, providing the following information:

• IMU: biased angular velocity $\omega^b_m = \omega^b + b_\omega$; biased specific force $f^b_m = f^b + b_f$, which includes the effect of gravity; roll $\phi$ and pitch $\theta$ angles;

• altimeter: altitude $p^n_D$;

• camera: biased body-fixed velocity $v^b_m = v^b + b_v$ (depending on the machine vision system implemented, additional sensors might be required to be able to calculate $v^b_m$ without bias);

• GNSS receiver: NED position $p^n$ and velocity $v^n$.

Further information on the actual sensors used in the experiment is presented in Section VI.

B. Problem formulation

The kinematic system to observe is

$\dot{R} = R S(\omega^b_m - b_\omega - R^T\omega^n_e)$ (1a)
$\dot{b}_\omega = 0$ (1b)
$\dot{p}^n = v^n$ (1c)
$\dot{v}^n = R(f^b_m - b_f) + g^n - 2S(\omega^n_e)v^n$ (1d)
$\dot{b}_f = 0$ (1e)


with the output map

$y_1 = p^n_D$ (2a)
$y_2 = R^T v^n + b_v$ (2b)
$y_3 = -R^T g^n + (\omega^b - b_\omega - R^T\omega^n_e) \times v^b$ (2c)
$y_4 = p^n$ (2d)
$y_5 = v^n$ (2e)

where $y_1$ is the measurement from the altimeter, $y_2$ the body-fixed velocity from machine vision, $y_3$ the specific force from the IMU, $y_4$ the NED position from the GNSS, and $y_5$ the NED velocity from the GNSS. Although not explicitly defined in the equations, all sensors are assumed to be affected by Gaussian white noise, which is nevertheless considered when designing and tuning the XKF. The presence of the cross-product term in $y_3$ is justified in [13]. The OF velocity bias $b_v$ is not included as a state, but is obtained as described in Sec. IV. The GNSS is used in an initial phase to let the estimators converge, after which the GNSS is disabled, $y_4$ and $y_5$ are removed from the output map, all biases are frozen at their last known value and not estimated any more, and the only available position fix is on the Down component $p^n_D$ from the altimeter, such that dead reckoning becomes necessary and the North and East components have to be calculated by direct integration of the estimated linear velocity.
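As an illustration of how (1) can be propagated in simulation, here is a hedged NumPy sketch of the right-hand sides of the kinematic model; all variable names are illustrative, and the skew-symmetric helper repeats the one sketched earlier so that the block is self-contained.

```python
import numpy as np

def S(x):
    """Skew-symmetric matrix, S(x) @ y == np.cross(x, y)."""
    return np.array([[0.0, -x[2], x[1]],
                     [x[2], 0.0, -x[0]],
                     [-x[1], x[0], 0.0]])

def kinematics_rhs(R, v_n, w_bm, f_bm, b_w, b_f, g_n, w_n_e):
    """Right-hand sides of (1) for simulating the true kinematics.

    R: rotation {B}->{N}; v_n: NED velocity; w_bm, f_bm: measured angular
    rate and specific force; b_w, b_f: constant biases (cf. (1b), (1e)).
    """
    R_dot = R @ S(w_bm - b_w - R.T @ w_n_e)                  # (1a)
    p_dot = v_n                                              # (1c)
    v_dot = R @ (f_bm - b_f) + g_n - 2.0 * S(w_n_e) @ v_n    # (1d)
    return R_dot, p_dot, v_dot                               # bias derivatives are zero
```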

III. OPTICAL FLOW

The method to obtain the optical flow velocity from camera images was already presented in [9]; only a summary is reported here.

There exist several methods for computing the OF. The ones chosen for the experiment presented in Section VI are SIFT [21] and region matching [22], for they can jointly provide a sufficient amount of reliable OF vectors. For the OF computations to be useful in the estimators, a transformation to body-fixed velocity is necessary. The transformation is motivated by [23], and the pinhole camera model is used [24]. The camera-fixed coordinate system is related to the body-fixed coordinate system as illustrated in Fig. 1, where $O_{rs}$ is the centre of the image plane, and $O_b$ the origin of {B}. The optical axis of the downward-looking camera is aligned with the body z-axis, and the focal point of the camera is assumed to coincide with the origin of {B}.

Fig. 1. Pinhole camera model. The camera is oriented downwards, while $x^b$ is the direction of flight. $O_{rs}$ and $O_b$ are the origins of the image plane and body frame, respectively.
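As a minimal reminder of the pinhole model assumed here (focal length $l$, focal point at the origin of {B}), a point in front of the camera projects onto the image plane roughly as follows; the axis and sign conventions in this sketch are illustrative and may differ from those of Fig. 1 and of (5) below.

```python
def pinhole_project(l, x_b, y_b, z_b):
    """Project a point (camera-frame coordinates) onto the image plane
    of a pinhole camera with focal length l."""
    if z_b <= 0:
        raise ValueError("point must be in front of the camera (z_b > 0)")
    r = l * x_b / z_b
    s = l * y_b / z_b
    return r, s
```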

All features tracked by the camera are assumed to be stationary with respect to {N}, hence the UAV's linear and angular velocities, $v^b_m$ and $\omega^b_F$ (the subscript $F$ is used to distinguish it from the $\omega^b_m$ measured by the rate gyros), relative to a feature tracked by the OF algorithm, will be equal for every tracked feature at a given instant in time. Furthermore, it is assumed that the terrain is flat, such that every feature is located at the same altitude: this assumption can be restrictive for applications that cover an uneven surface, but it can be satisfactory for localized flights, depending on the elevation profile over which the UAV operates; it does, however, greatly simplify the analysis and calculations, so it is worth keeping it and evaluating the performance in experiments, in Sec. VI.

The angular and linear velocities can be computed by

$\begin{bmatrix} v^b_m \\ \omega^b_F \end{bmatrix} = -M^+(l, r_{1..k}, s_{1..k}, \phi, \theta, c^n_z) \begin{bmatrix} \dot{r}_1 \\ \dot{s}_1 \\ \vdots \\ \dot{r}_k \\ \dot{s}_k \end{bmatrix}$ (3)

where each pair $[r_j, s_j]$, $j = 1...k$, represents an image feature, $[\dot{r}_j, \dot{s}_j]$ its corresponding OF, $l$ is the focal length of the camera, $c^n_z$ is the distance of $O_b$ from the ground, and $M \in \mathbb{R}^{2k \times 6}$ is a rectangular matrix. The matrix $M$ is built by stacking $k$ sub-matrices $M_j$, $j = 1...k$ [9], where

$M_j = \dfrac{l}{z^b_j} \begin{bmatrix} 0 & 1 & -\dfrac{y^b_j}{z^b_j} & -\dfrac{(y^b_j)^2}{z^b_j} - z^b_j & \dfrac{y^b_j x^b_j}{z^b_j} & x^b_j \\ -1 & 0 & \dfrac{x^b_j}{z^b_j} & \dfrac{x^b_j y^b_j}{z^b_j} & -\dfrac{(x^b_j)^2}{z^b_j} - z^b_j & y^b_j \end{bmatrix}$ (4)

whose elements depend on

$\begin{bmatrix} x^b_j \\ y^b_j \\ z^b_j \end{bmatrix} = \dfrac{c^n_z}{s_j \sin(\theta) + \cos(\theta)\,(l\cos(\phi) + r_j\sin(\phi))} \begin{bmatrix} s_j \\ -r_j \\ -l \end{bmatrix}$ (5)

Each point $[x^b_j, y^b_j, z^b_j]^T$ is the representation in body-fixed coordinates of the corresponding point in the world. The solution to (3) exists only if $M^T M$ has full rank, such that it can be expressed as $M^+ = (M^T M)^{-1} M^T$. This can only happen if the number of flow vectors is greater than or equal to three. Since rate gyros are part of the sensor payload and can provide measurements of $\omega^b$ more accurately than the OF method, only $v^b_m$ will be considered from now on.

This method can calculate the body-fixed velocity of a vehicle, but presents additional challenges. The matrix $M$ is often ill-conditioned, and its pseudo-inversion produces computational errors that will be included in $v^b_m$. Moreover, the accuracy of the computation of OF vectors is strongly dependent on the frame rate of the camera used: the higher the frame rate, the more accurate the results. For the present work, the maximum frame rate was limited to 10 Hz because of the need to have good synchronization between all the sensors. This proved not to be high enough for accurate results and led to a large bias $b_v$ in the measured body-fixed velocity, which the estimators presented next must estimate.
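To make the least-squares structure of (3)–(5) concrete, here is a hedged NumPy sketch that stacks the per-feature sub-matrices $M_j$, guards against the ill-conditioning mentioned above, and solves for the body-fixed velocities. The helpers `backproject`, `build_Mj`, and `flow_to_velocity` follow (5), (4), and (3) under the flat-terrain assumption; the names and the condition-number threshold are illustrative, not the authors' implementation.

```python
import numpy as np

def backproject(l, r, s, phi, theta, c_nz):
    """Body-fixed coordinates of a ground feature from its image
    coordinates (r, s), following (5) (flat-terrain assumption)."""
    den = s * np.sin(theta) + np.cos(theta) * (l * np.cos(phi) + r * np.sin(phi))
    return np.array([s, -r, -l]) * c_nz / den

def build_Mj(l, p_b):
    """Per-feature 2x6 sub-matrix of (4), with p_b = [x, y, z] in {B}."""
    x, y, z = p_b
    return (l / z) * np.array([
        [0.0,  1.0, -y / z, -y**2 / z - z,  x * y / z,     x],
        [-1.0, 0.0,  x / z,  x * y / z,    -x**2 / z - z,  y]])

def flow_to_velocity(l, feats, flows, phi, theta, c_nz, cond_max=1e6):
    """Solve (3) in a least-squares sense for [v_b; omega_F].

    feats: k x 2 array of image features (r_j, s_j);
    flows: k x 2 array of their optical-flow rates (r_dot_j, s_dot_j).
    Returns (v_b, omega_F), or None if M is too ill-conditioned.
    """
    M = np.vstack([build_Mj(l, backproject(l, r, s, phi, theta, c_nz))
                   for r, s in feats])
    if np.linalg.cond(M) > cond_max:   # guard against the ill-conditioning
        return None                    # discussed in the text
    sol, *_ = np.linalg.lstsq(M, -flows.reshape(-1), rcond=None)
    return sol[:3], sol[3:]
```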

IV. NON-LINEAR OBSERVERS

The NLO considered when GNSS is active is an extension of the one in [9], described by

$\dot{\hat{R}} = \hat{R} S(\omega^b_m - \hat{b}_\omega - \hat{R}^T\omega^n_e) + \sigma K_P J$ (6a)
$\dot{\hat{b}}_\omega = \mathrm{Proj}(\hat{b}_\omega, -k_I\,\mathrm{vex}(P_a(\hat{R}_s^T K_P J)))$ (6b)

$\dot{\hat{p}}^n = \hat{v}^n + K_{pp}(p^n - \hat{p}^n) + K_{pv}(v^n - \hat{v}^n)$ (7a)
$\dot{\hat{v}}^n = \hat{f}^n + g^n + K_{vp}(p^n - \hat{p}^n) + K_{vv}(v^n - \hat{v}^n) - 2S(\omega^n_e)\hat{v}^n$ (7b)
$\dot{\xi} = -\sigma K_P J(f^b_m - \hat{b}_f) + K_{\xi p}(p^n - \hat{p}^n)$ (7c)
$\qquad + K_{\xi v}(v^n - \hat{v}^n)$ (7d)
$\hat{f}^n = \hat{R}(f^b_m - \hat{b}_f) + \xi$ (7e)
$\dot{\hat{b}}_{f1} = K_f \hat{R}^T(p^n - \hat{p}^n)$ (7f)
$\hat{b}_f = b_{f0} + \hat{b}_{f1}$ (7g)

where the injection term $J$ is

$J := A_n A_b^T - \hat{R} A_b A_b^T$ (8a)

$A_b := \left[\dfrac{f^b}{\|f^b\|},\ \dfrac{f^b \times v^b}{\|f^b \times v^b\|},\ \dfrac{f^b \times (f^b \times v^b)}{\|f^b \times (f^b \times v^b)\|}\right]$ (8b)

$A_n := \left[\dfrac{f^n}{\|f^n\|},\ \dfrac{f^n \times v^n}{\|f^n \times v^n\|},\ \dfrac{f^n \times (f^n \times v^n)}{\|f^n \times (f^n \times v^n)\|}\right]$ (8c)

with the vectors $f^b = f^b_m - \hat{b}_f$ and $v^b = v^b_m - \hat{b}_v$. $K_P$ is a symmetric positive definite gain matrix, $\hat{R}_s = \mathrm{sat}(\hat{R})$, $\sigma \geq 1$ is a scaling factor tuned to achieve stability, $k_I > 0$ is a scalar gain, and $\mathrm{Proj}(\cdot,\cdot)$ represents a parameter projection that ensures that $\|\hat{b}_\omega\|$ does not exceed a design constant $L_{\hat{b}_\omega} > L_{b_\omega}$ (see Appendix). The estimator for the accelerometer bias $\hat{b}_f$ is (7f)–(7g) and is taken from [25]: $b_{f0}$ is a static value for the bias obtained by reading the measurements of the accelerometers at rest, and $\dot{\hat{b}}_{f1}$ represents how the bias varies around this value. The OF velocity bias $\hat{b}_v$ is obtained by calculating $\hat{v}^n - \hat{R} v^b_m$ and filtering it with a low-pass filter. It could also be added to the NLO as a state and estimated, but that is not considered in the present work.
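The injection term (8) is a TRIAD-like construction built from the pair of vectors $(f, v)$ expressed in body and NED coordinates. The following is a hedged NumPy sketch of that construction; the names are illustrative and `normalize` is a hypothetical helper.

```python
import numpy as np

def normalize(x):
    """Unit vector in the direction of x (assumes ||x|| > 0)."""
    return x / np.linalg.norm(x)

def triad_frame(f, v):
    """Orthonormal 3x3 frame from two non-collinear vectors, as in (8b)/(8c)."""
    return np.column_stack([
        normalize(f),
        normalize(np.cross(f, v)),
        normalize(np.cross(f, np.cross(f, v)))])

def injection_term(f_b, v_b, f_n, v_n, R_hat):
    """Injection term J of (8a), used in (6a)-(6b)."""
    A_b = triad_frame(f_b, v_b)
    A_n = triad_frame(f_n, v_n)
    return A_n @ A_b.T - R_hat @ A_b @ A_b.T
```

Here `f_b` and `v_b` stand for the bias-corrected measurements $f^b_m - \hat{b}_f$ and $v^b_m - \hat{b}_v$, and `f_n`, `v_n` for the corresponding NED-frame vectors used in (8c).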

When the GNSS is disabled and dead reckoning begins, some parts of the observer change slightly and become

$\dot{\hat{p}}^n = \hat{v}^n + K'_{pp}(p^n_D - \hat{p}^n_D)$ (9a)
$\dot{\hat{v}}^n = \hat{f}^n + g^n + K'_{vp}(p^n_D - \hat{p}^n_D) + K'_{vv}(\hat{R}(v^b_m - \hat{b}_v) - \hat{v}^n) - 2S(\omega^n_e)\hat{v}^n$ (9b)
$\dot{\xi} = -\sigma K_P J(f^b_m - \hat{b}_f) + K'_{\xi p}\hat{R}(p^n_D - \hat{p}^n_D)$ (9c)
$\qquad + K'_{\xi v}(\hat{R}(v^b_m - \hat{b}_v) - \hat{v}^n)$ (9d)
$\hat{f}^n = \hat{R}(f^b_m - \hat{b}_f) + \xi$ (9e)

The three biases are not estimated any more, since the reduced set of measurements is not sufficient to make them observable. Consequently, the values used during dead reckoning are constant and correspond to the last values estimated when GNSS was available. Only the forward and lateral components of $v^b_m$ are used in (9), since the vertical component is typically significantly less accurate than the other two. This also explains why (9a) has no velocity correction term.
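As a rough illustration of the switch to dead reckoning, the sketch below freezes the bias estimate and forms the two correction signals used in (9): an altimeter-only position error and a body-fixed-velocity error restricted to the horizontal components (the zero entries of $K'_{vv}$ and $K'_{\xi v}$ in Section VI play the same role). All names are illustrative.

```python
import numpy as np

def dead_reckoning_corrections(p_D_meas, p_hat_n, v_hat_n,
                               v_bm, R_hat, b_v_frozen):
    """Correction signals entering (9) once GNSS is dropped.

    p_D_meas: altimeter Down position; p_hat_n, v_hat_n: NED estimates;
    v_bm: OF body-fixed velocity; b_v_frozen: last estimated OF bias.
    """
    e_pD = p_D_meas - p_hat_n[2]                  # altimeter (Down) error only
    e_v = R_hat @ (v_bm - b_v_frozen) - v_hat_n   # OF velocity error in NED
    e_v[2] = 0.0    # drop the vertical OF component (least accurate)
    return e_pD, e_v
```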

A. Assumptions

Three conditions are sufficient to ensure correct functioning of both versions of the NLO.

Assumption 1: the machine vision system is able to provide $v^b_m$.

Assumption 2: the biases $b_\omega$, $b_f$, and $b_v$ are constant.

Assumption 3: there exists a constant $c_{obs} > 0$ such that, $\forall t \geq 0$, $\|f^b \times v^b\| \geq c_{obs}$.

Assumption 1 can be satisfied by implementing one of the many methods to obtain body-fixed velocity from camera images, for example [9]. This assumption is also necessary. Assumption 2 can be relaxed to slowly time-varying, in particular when GNSS is available. Assumption 3 is a condition of non-collinearity for the vectors $v^b$ and $f^b$, i.e., the angle between them is non-zero and neither of them can be identically zero (see, e.g., [26], [5]). For a fixed-wing UAV this means that the observer cannot work while the vehicle stands still on the ground, but it presents no problems during flight.
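In practice Assumption 3 can be monitored online; a hedged sketch of such a check follows, with an illustrative threshold value.

```python
import numpy as np

def assumption3_holds(f_b, v_b, c_obs=0.1):
    """Check the non-collinearity condition ||f^b x v^b|| >= c_obs.

    Returns False, e.g., when the UAV stands still (v^b ~ 0) or when the
    specific force and velocity vectors are nearly parallel."""
    return np.linalg.norm(np.cross(f_b, v_b)) >= c_obs
```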

As the focus of this paper is the experimental application of the proposed estimators, proofs of stability are not presented here. The NLOs, however, are built from [9], [19] by adding the biases and the compensation for the rotation of Earth, about which a few arguments can be advanced. The accelerometer bias is estimated in a way that was already proved to be GES [20], [25], so it certainly improves the NLO. The velocity bias depends on $\hat{R}$: the NLO with uncorrected body-fixed velocity could first be allowed to converge, to avoid possible instabilities caused by uncertain initialization of $\hat{R}$, and only afterwards include the bias for improved performance. The same can be said about the compensation for Earth's rotation, which depends on $\hat{v}^n$ and $\hat{R}$. Both NLOs are therefore guaranteed to converge (clearly with the exception of $p^n_N$ and $p^n_E$ in dead reckoning).

Fig. 2. Block diagram showing the cascade interconnection between an NLO and an LKF, compactly called XKF [12].

V. EXOGENOUS KALMAN FILTER

The XKF is a novel tool that combines the advantages of non-linear observers and Kalman filters while overcoming their individual shortcomings [12]. The first-stage filter of the XKF is either of the NLOs described in Section IV, depending on whether GNSS is available or not, whereas the second-stage filter is a linear time-varying Kalman filter (LKF) based on a linear approximation of the state and output equations about the trajectory generated by the NLO. In order to reduce the number of states, the kinematic system considered for this second-stage filter is represented differently from (1), as

$\dot{\Theta} = T(\Theta)(\omega^b_m - b_\omega + n_\omega)$ (10a)
$\dot{p}^n = v^n$ (10b)
$\dot{v}^n = R(f^b_m - b_f + n_f) + g^n - 2S(\omega^n_e)v^n$ (10c)


The attitude is now represented by the vector of Euler angles $\Theta = [\phi, \theta, \psi]^T$ with its dynamic equation, where $T(\Theta) \in \mathbb{R}^{3\times 3}$ is the state-dependent transformation matrix [20]. $n_\omega$ and $n_f$ are noise on the rate gyros and accelerometers, respectively. The output map is the same as (2), with Gaussian white noise added to all measurements. A block diagram of a generic XKF is represented in Fig. 2. The steps necessary to obtain the XKF equations are omitted, as they were already presented in [12], [13].
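A hedged sketch of the two-stage structure of Fig. 2 follows: the NLO supplies an exogenous trajectory, the model (10), (2) is linearized about it (here via numerical Jacobians), and a standard time-varying Kalman filter runs on the linearized system. The Jacobian helper and the filter routine are illustrative placeholders, not the authors' implementation.

```python
import numpy as np

def numerical_jacobian(fun, x0, eps=1e-6):
    """Finite-difference Jacobian of fun at x0 (illustrative helper)."""
    f0 = np.atleast_1d(fun(x0))
    J = np.zeros((f0.size, x0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(fun(x0 + dx)) - f0) / eps
    return J

def xkf_step(x_hat, P, x_nlo, u, y, f, h, Q, R_cov, dt):
    """One step of the second-stage time-varying KF, linearized about
    the exogenous NLO estimate x_nlo (the 'X' in XKF)."""
    Fc = numerical_jacobian(lambda x: f(x, u), x_nlo)   # state Jacobian at x_nlo
    H = numerical_jacobian(h, x_nlo)                    # output Jacobian at x_nlo
    # Predict along the model linearized about the NLO trajectory.
    x_pred = x_hat + dt * (f(x_nlo, u) + Fc @ (x_hat - x_nlo))
    F = np.eye(x_hat.size) + dt * Fc                    # discretized transition
    P_pred = F @ P @ F.T + Q
    # Update with the measurement y, again using the linearization at x_nlo.
    y_pred = h(x_nlo) + H @ (x_pred - x_nlo)
    S_k = H @ P_pred @ H.T + R_cov
    K = P_pred @ H.T @ np.linalg.inv(S_k)
    x_new = x_pred + K @ (y - y_pred)
    P_new = (np.eye(x_hat.size) - K @ H) @ P_pred
    return x_new, P_new
```

Because the linearization point comes from the NLO rather than from the filter's own estimate, the covariance and gain computations do not feed back into the linearization, which is what allows the XKF to inherit the stability properties of the first stage.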

The following assumptions are standard conditions that ensure that the XKF inherits the GES property of the NLO [27], [28].

Assumption 4: The LKF tunable parameters (process noise covariance matrix, measurement noise covariance matrix, and initial estimation error covariance matrix) are positive definite and symmetric.

Assumption 5: The system (10), (2) linearized about the trajectory generated by the NLO is uniformly completely observable and controllable.

Assumption 4 can be satisfied by design. The requirements of Assumption 5 are hard to verify analytically a priori. However, it is possible to monitor the LKF covariance matrix, and since it remains bounded it can be inferred that Assumption 5 is satisfied.
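One simple way to implement the monitoring mentioned above is to track a scalar summary of the LKF covariance over the run; a hedged sketch, with an illustrative bound:

```python
import numpy as np

def covariance_bounded(P_history, trace_max=1e6):
    """Return True if the trace of the LKF covariance stayed below a
    chosen bound over the whole run, as an empirical indication that
    Assumption 5 was not violated."""
    return all(np.trace(P) < trace_max for P in P_history)
```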

VI. EXPERIMENTAL SETUP

The UAV employed is a UAV Factory Penguin-B, equipped with a custom payload that includes all the necessary sensors. The IMU is a Sensonor STIM300, a low-weight, tactical grade, high-performance sensor that includes gyroscopes, accelerometers, and inclinometers, all recorded at a frequency of 300 Hz. The chosen GPS receiver is a uBlox LEA-6T, which gives measurements at 5 Hz. The video camera is an IDS GigE uEye 5250CP provided with an 8 mm lens. The camera is configured for hardware-triggered capture at 10 Hz: the uBlox sends a digital pulse-per-second signal whose rising edge is accurately synchronized with the time of validity of the recorded GPS position, which guarantees that the image capture is synchronized with the position measurements. The experiment was carried out on 6 February 2015 at the Eggemoen Aviation and Technology Park, Norway, on a sunny day with good visibility, very little wind, and an air temperature of about -8°C. The terrain is covered with snow and flat enough to let all features be considered as lying at zero altitude relative to altitude measurements.

The NLOs are implemented using forward Euler discretization with a time-varying step depending on the interval of data acquisition of the fastest sensor, namely the STIM300, which is typically around 0.003 seconds. The various parameters and gains are chosen as $L_{b_\omega} = 2^\circ/\mathrm{s}$, $L_{\hat{b}_\omega} = 2.1^\circ/\mathrm{s}$, $\sigma = 1$, $K_P = \mathrm{diag}[0.08, 0.04, 0.06]$, $k_I = 0.02$, $K_{pp} = 30I_3$, $K_{pv} = 2I_3$, $K_{vp} = 0.01I_3$, $K_{vv} = 20I_3$, $K_{\xi p} = I_3$, and $K_{\xi v} = 50I_3$. During dead reckoning some gains are different, namely $K'_{pp} = \mathrm{diag}(0, 0, 300)$, $K'_{vp} = (0, 0, 200)^T$, $K'_{vv} = \mathrm{diag}(30, 30, 0)$, $K'_{\xi p} = (0, 0, 200)^T$, and $K'_{\xi v} = \mathrm{diag}(100, 100, 0)$. The gains for the NLO with GNSS were obtained by running the observer several times and correcting the gains until a satisfactory performance was achieved. For dead reckoning, the gain relative to the altimeter is high because it is a very accurate sensor. The OF velocity is less accurate than the GNSS velocity for the reasons explained in Sec. III, so it would be reasonable to use lower gains for it, but the presence of some residual bias requires a higher gain instead. This provides some additional filtering of the OF velocity, at the cost of worse noise rejection in the estimated position and velocity. The covariance matrices for the XKF are first tuned based on previous experience with the same sensors and system model, and then more finely tuned by trial and error. For both the NLO and XKF, the gyro bias and accelerometer bias estimates are initialized with the standstill values, and the other states with zero.
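The discretization described above can be sketched as follows: the observer state is advanced with a forward Euler step whose length follows the time between consecutive IMU samples. Here `nlo_rhs` stands for the right-hand side of (6)–(7) (or (9) in dead reckoning) and is a placeholder, not the authors' code.

```python
import numpy as np

def run_observer(t_imu, measurements, x0, nlo_rhs):
    """Forward Euler integration of the NLO with a time-varying step.

    t_imu: array of IMU timestamps (fastest sensor, ~300 Hz, dt ~ 0.003 s);
    measurements[k]: all sensor data valid at t_imu[k];
    nlo_rhs(x, meas): time derivative of the observer state.
    """
    x = np.array(x0, dtype=float)
    trajectory = [x.copy()]
    for k in range(1, len(t_imu)):
        dt = t_imu[k] - t_imu[k - 1]          # step follows the IMU rate
        x = x + dt * nlo_rhs(x, measurements[k - 1])
        trajectory.append(x.copy())
    return np.array(trajectory)
```

In practice the attitude part of the state would typically also be re-orthonormalized (or suitably parametrized) after each step; that detail is omitted from the sketch.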

The reference provided for the position, velocity, and attitude is the output of the EKF of the autopilot mounted on the Penguin-B; the autopilot uses a different set of sensors than the one presented here. References for the gyro bias and accelerometer bias are not available, but approximations of the real values are calculated by averaging the measurements of the rate gyros and accelerometers at standstill before and after the flight. It is not possible to obtain even an approximation of a reference for the OF velocity bias, hence the corresponding results show the actual estimates, not the estimation error.

Fig. 3. Euler angles estimation error of the two methods with respect to the autopilot EKF.

The time on the x-axes of the figures is the time elapsed since the computer onboard the UAV was turned on. For all tests, the GNSS is used by the estimators until t = 1300 s, after which it is dropped and the estimators run in dead reckoning, with all biases frozen at their last estimated value.

A. Results

The results are presented in Fig. 3–9. The yaw angle estimated by the XKF is more stable than the one from the NLO; the pitch angle from the XKF seems less accurate, but the difference between NLO and XKF is between 0° and 2°, so it is legitimate to wonder whether the reference value is really the closest to the real value or not. The gyro, accelerometer, and OF velocity biases are shown in Fig. 4–6. The plots end at t = 1300 s because at that point dead reckoning begins and the biases are frozen at their last estimated values. The XKF provides slightly better filtering for the OF velocity bias, but it is evident that there are inconsistencies between the outputs of the two estimators for the gyro and accelerometer biases. This is most probably owing to inaccurate tuning, and better tuning is expected to lead to better bias estimates that would be reflected in even better position and velocity estimates.

Fig. 4. Gyro bias estimation error with respect to the value measured at standstill. The estimates are used to pre-condition the measured angular velocity $\omega^b_m$.

Fig. 5. Accelerometer bias estimation error with respect to the value measured at standstill. The estimates are used to pre-condition the measured specific force $f^b_m$.

In Fig. 7 the position errors are represented. The NLO error shows an improvement with respect to the results in [19] from the same authors, although the oscillations are still present: these are caused by the bias in the body-fixed velocity combined with the circular motion of the UAV, such that when the error increases in one direction, it decreases in the opposite direction. Since the estimated bias is kept constant during dead reckoning, a residual bias will still be present and affect the results, but evidently to a lesser degree than in [19]. The XKF position error, on the other hand, presents significantly smaller variations and is perhaps more suitable than the NLO in a dead reckoning scenario: if the UAV flew in a straight line instead of following a circular pattern, the position error of the NLO would be expected to increase faster than the one of the XKF. The differences in position error are reflected in the differences in velocity error of Fig. 9: the velocity estimated by the NLO is more erratic and noisier than the one from the XKF, which in turn seems to manifest just a systematic error that explains the lack of oscillations in the position error.

Fig. 6. OF velocity bias. This is not a plot of the estimation error, for a reference is not available. The estimates are used to pre-condition the measured OF velocity $v^b_m$.

Fig. 7. Position estimation error of the two methods compared with the one from the autopilot EKF.

Fig. 8. Trajectories on the North-East plane estimated by the NLO and XKF compared with the one from the autopilot EKF.

Fig. 9. Velocities on the North-East plane estimated by the NLO and XKF compared with the one from the autopilot EKF.

VII. CONCLUSIONS

In this paper an NLO and an XKF for dead reckoning have been tested on experimental data obtained with a UAV equipped with a custom payload of sensors. A machine vision system has been employed to calculate the body-fixed linear velocity of the UAV using optical flow. This velocity has been used both as a vector for the injection term of the NLO, and as a correction term, combined with the estimated attitude, for the estimated NED velocity. In comparison to previous work, estimators for accelerometer bias and body-fixed velocity bias have been proposed, together with a term that compensates for the rotation of Earth. The horizontal position was not estimated, but calculated via direct integration of the estimated velocity. The results show that the inclusion of compensation for the additional biases and for the rotation of Earth helps reduce the position error of the NLO, and that the XKF can reduce the error even further by providing a better estimate of the velocity that is less dependent than the NLO on the optical flow velocity bias.

ACKNOWLEDGEMENTS

This work has been carried out at the NTNU Centre for Autonomous Marine Operations and Systems (NTNU AMOS), supported by the Research Council of Norway, grants no. 223254 and 221666. The authors are grateful for the assistance provided by the UAV engineers at NTNU and Maritime Robotics AS, in particular Lars Semb and Carl Erik Stephansen. Significant contributions were made to the construction of the UAV payload by the rest of the navigation team at NTNU, in particular Sigurd M. Albrektsen, Jakob M. Hansen and Kasper T. Borup, and to the development of machine vision by Jesper Hosen and Hakon H. Helgesen.

REFERENCES

[1] M. D. Shuster and S. D. Oh, "Three-axis attitude determination from vector observations," Journal of Guidance, Control and Dynamics, vol. 4, no. 1, pp. 70–77, 1981.

[2] E. Lefferts, F. Markley, and M. Shuster, "Kalman filtering for spacecraft attitude," Journal of Guidance, Control and Dynamics, vol. 5, no. 5, pp. 417–429, 1982.

[3] J. L. Crassidis, F. L. Markley, and Y. Cheng, "Survey of nonlinear attitude estimation methods," Journal of Guidance, Control, and Dynamics, vol. 30, no. 1, pp. 12–28, 2007.

[4] S. Salcudean, "A globally convergent angular velocity observer for rigid body motion," IEEE Transactions on Automatic Control, vol. 36, no. 12, pp. 1493–1497, 1991.

[5] R. Mahony, T. Hamel, and J. M. Pflimlin, "Nonlinear complementary filters on the special orthogonal group," IEEE Transactions on Automatic Control, vol. 53, no. 5, pp. 1203–1218, 2008.

[6] P. Batista, C. Silvestre, and P. Oliveira, "A GES attitude observer with single vector observations," Automatica, vol. 48, no. 2, pp. 388–395, 2012.

[7] M. D. Hua, P. Martin, and T. Hamel, "Stability analysis of velocity-aided attitude observers for accelerated vehicles," Automatica, vol. 63, pp. 11–15, 2016.

[8] H. F. Grip, T. I. Fossen, T. A. Johansen, and A. Saberi, "Globally exponentially stable attitude and gyro bias estimation with application to GNSS/INS integration," Automatica, vol. 51, pp. 158–166, 2015.

[9] L. Fusini, T. A. Johansen, and T. I. Fossen, "Experimental validation of a uniformly semi-globally exponentially stable non-linear observer for GNSS- and camera-aided inertial navigation for fixed-wing UAVs," Proc. Int. Conf. Unmanned Aircraft Syst., 2015, doi: 10.1109/ICUAS.2015.7152371.

[10] P. Martin, I. Sarras, M. Hua, and T. Hamel, "A global exponential observer for velocity-aided attitude estimation," Proc. IEEE Conf. Decision and Control, pp. 2047–2053, 2016.

[11] T. Johansen and T. Fossen, "Nonlinear filtering with exogenous Kalman filter and double Kalman filter," Proc. European Control Conf., 2016.

[12] ——, "The eXogenous Kalman filter (XKF)," Int. J. Control, vol. 60, pp. 161–167, 2017, doi: 10.1080/00207179.2016.1172390.

[13] L. Fusini, T. I. Fossen, and T. A. Johansen, "Nonlinear camera- and GNSS-aided INS for fixed-wing UAV using the eXogenous Kalman Filter," in Sensing and Control for Autonomous Vehicles: Applications to Land, Water and Air Vehicles. Springer (T. I. Fossen, K. Y. Pettersen and H. Nijmeijer, Eds.), 2017, ch. 1.2.

[14] D. Scaramuzza and F. Fraundorfer, "Visual odometry – part I: The first 30 years and fundamentals," IEEE Robotics & Automation Magazine, vol. 18, no. 4, pp. 80–92, 2011.

[15] J. Zhang and S. Singh, "INS assisted monocular visual odometry for aerial vehicles," 9th International Conference on Field and Service Robotics (FSR), 2013.

[16] S. Zhao, F. Lin, K. Peng, X. Dong, B. M. Chen, and T. H. Lee, "Vision-aided estimation of attitude, velocity, and inertial measurement bias for UAV stabilization," Journal of Intelligent and Robotic Systems, 2015.

[17] D. Dusha and L. Mejias, "Error analysis and attitude observability of a monocular GPS/visual odometry integrated navigation filter," International Journal of Robotics Research, vol. 31, no. 6, pp. 714–737, 2012.

[18] M. Mammarella, G. Campa, M. L. Fravolini, and M. R. Napolitano, "Comparing optical flow algorithms using 6-dof motion of real-world rigid objects," IEEE Trans. on Systems, Man, and Cybernetics, Part C: Applications and Reviews, vol. 42, no. 6, pp. 1752–1762, 2012.

[19] L. Fusini, T. A. Johansen, and T. I. Fossen, "A globally exponentially stable non-linear velocity observer for vision-aided UAV dead reckoning," Proc. 2016 IEEE Aerospace Conf., 2016, doi: 10.1109/AERO.2016.7500606.

[20] T. I. Fossen, Handbook of Marine Craft Hydrodynamics and Motion Control. Chichester, UK: John Wiley & Sons, Ltd, 2011.

[21] D. G. Lowe, "Object recognition from local scale-invariant features," Proc. Int. Conf. Computer Vision, pp. 1150–1157, 1999.

[22] C. S. Fuh and P. Maragos, "Region-based optical flow estimation," IEEE Computer Society Conference on Computer Vision and Pattern Recognition, Proceedings CVPR, pp. 130–135, 1989.

[23] L. Fusini, T. I. Fossen, and T. A. Johansen, "A uniformly semiglobally exponentially stable nonlinear observer for GNSS- and camera-aided inertial navigation," 22nd Mediterranean Conf. Control and Automation, pp. 1031–1036, 2014, doi: 10.1109/MED.2014.6961510.

[24] S. Hutchinson, G. Hager, and P. Corke, "A tutorial on visual servo control," IEEE Transactions on Robotics and Automation, vol. 12, no. 5, pp. 651–670, 1996.

[25] R. Rogne, T. Bryne, T. Fossen, and T. Johansen, "MEMS-based inertial navigation on dynamically positioned ships: Dead reckoning," IFAC Conference on Control Applications in Marine Systems, pp. 139–146, 2016.

[26] M. D. Hua, "Attitude estimation for accelerated vehicles using GPS/INS measurements," Control Engineering Practice, vol. 18, no. 7, pp. 723–732, 2010.

[27] R. Kalman and R. Bucy, "New results in linear filtering and prediction theory," Trans. ASME, Ser. D, J. Basic Eng., pp. 95–109, 1961.

[28] B. Anderson, "Stability properties of Kalman-Bucy filters," J. Franklin Institute, vol. 291, pp. 137–144, 1971.

[29] M. Krstic, I. Kanellakopoulos, and P. V. Kokotovic, Nonlinear and Adaptive Control Design. New York: Wiley, 1995.

APPENDIX

The parameter projection Proj(·, ·) is defined as:

$\mathrm{Proj}(\hat{b}, \tau) = \begin{cases} \left(I - \dfrac{c(\hat{b})}{\|\hat{b}\|^2}\,\hat{b}\hat{b}^T\right)\tau, & \|\hat{b}\| \geq L_b,\ \hat{b}^T\tau > 0 \\ \tau, & \text{otherwise} \end{cases}$

where $c(\hat{b}) = \min\{1, (\|\hat{b}\|^2 - L_b^2)/(L_{\hat{b}}^2 - L_b^2)\}$. This operator is a special case of that from Appendix E of [29].
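A hedged NumPy sketch of this projection operator (names illustrative):

```python
import numpy as np

def proj(b_hat, tau, L_b, L_b_hat):
    """Parameter projection keeping ||b_hat|| from exceeding L_b_hat.

    Scales down the outward component of tau along b_hat once
    ||b_hat|| >= L_b and b_hat^T tau > 0, as in the Appendix."""
    nb = np.linalg.norm(b_hat)
    if nb >= L_b and b_hat @ tau > 0:
        c = min(1.0, (nb**2 - L_b**2) / (L_b_hat**2 - L_b**2))
        return (np.eye(b_hat.size) - (c / nb**2) * np.outer(b_hat, b_hat)) @ tau
    return tau
```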