
Linköping studies in science and technology. Thesis. No. 1489

Topics in Localization and Mapping

Jonas Callmer


Division of Automatic Control
Department of Electrical Engineering

Linköping University, SE-581 83 Linköping, Sweden
http://www.control.isy.liu.se

[email protected]

Linköping 2011


This is a Swedish Licentiate’s Thesis.

Swedish postgraduate education leads to a Doctor's degree and/or a Licentiate's degree. A Doctor's degree comprises 240 ECTS credits (4 years of full-time studies).

A Licentiate's degree comprises 120 ECTS credits, of which at least 60 ECTS credits constitute a Licentiate's thesis.

Linköping studies in science and technology. Thesis. No. 1489

Topics in Localization and Mapping

Jonas Callmer

[email protected]

Department of Electrical Engineering
Linköping University
SE-581 83 Linköping

Sweden

ISBN 978-91-7393-152-6 ISSN 0280-7971 LiU-TEK-LIC-2011:28

Copyright © 2011 Jonas Callmer

Printed by LiU-Tryck, Linköping, Sweden 2011


To my family


Abstract

The need to determine one's position is common and arises in many different situations. Tracking soldiers or a robot moving through a building, or aiding a tourist exploring a new city, all share the questions "where is the unit?" and "where is the unit going?". This is known as the localization problem.

In particular, the problem of determining one's position in a map while building that map at the same time, commonly known as the simultaneous localization and mapping (SLAM) problem, has been widely studied. It has been performed in cities using different land-bound vehicles, in rural environments using autonomous aerial vehicles, and underwater for coral reef exploration. This thesis studies how radar signals can be used both to position a naval surface vessel and to simultaneously construct a map of the surrounding archipelago. The experimental data used was collected using a high-speed naval patrol boat and covers roughly 32 km. A very accurate map was created using nothing but consecutive radar images.

A second contribution covers an entirely different problem, but one whose solution is very similar to the first. Underwater sensors sensitive to magnetic field disturbances can be used to track ships. In this thesis, the sensor positions themselves are considered unknown and are estimated by tracking a friendly surface vessel with a known magnetic signature. Since each sensor can track the vessel, the sensor positions can be determined by relating them to the vessel trajectory. Simulations show that if the vessel is equipped with a global navigation satellite system, the sensor positions can be determined accurately.

There is a desire to localize firefighters while they are searching through a burning building. Knowing where they are would make their work more efficient and significantly safer. In this thesis, a positioning system based on foot-mounted inertial measurement units has been studied. When such a sensor is foot-mounted, the available information increases dramatically, since the foot stances can be detected and incorporated in the position estimate. The focus in this work has therefore been on the problem of stand still detection, and a probabilistic framework for this has been developed. The system has been extensively investigated to determine its applicability for different movements and boot types. All in all, the stand still detection system works well, but problems emerge when a very rigid boot is used or when the subject is crawling. The stand still detection framework was then included in a positioning framework that uses the detected stand stills to introduce zero velocity updates. The system was evaluated using localization experiments for which there was very accurate ground truth. It showed that the system provides good position estimates, but that the estimated heading can be wrong, especially after quick sharp turns.
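The stand still detection idea can be sketched as follows. This is not the thesis framework, which derives the test statistics probabilistically and smooths the decisions with a hidden Markov model; it is a minimal thresholded window test, with all names, noise levels and thresholds invented for illustration:

```python
import numpy as np

G = 9.81  # gravity magnitude

def still_statistic(acc, gyro, sigma_a=0.05, sigma_g=0.01):
    """Normalized deviation from the stand-still hypothesis over one
    window: the accelerometer norm equals g and the angular rates are
    zero. Small values support the stand-still hypothesis."""
    t_acc = np.mean((np.linalg.norm(acc, axis=1) - G) ** 2) / sigma_a**2
    t_gyro = np.mean(np.sum(gyro**2, axis=1)) / sigma_g**2
    return t_acc + t_gyro

def detect_still(acc, gyro, win=20, threshold=10.0):
    """Boolean array over the samples, True where the foot is judged
    to be stationary by at least one accepted sliding window."""
    flags = np.zeros(len(acc), dtype=bool)
    for i in range(len(acc) - win + 1):
        if still_statistic(acc[i:i + win], gyro[i:i + win]) < threshold:
            flags[i:i + win] = True
    return flags
```

A zero velocity update would then be applied at every sample flagged as stationary.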



Popular Science Summary

The problem of determining one's position arises in many different contexts. The localization problem, as it is called, fundamentally concerns two questions: "where is the unit?" and "where is the unit going?". The solution is problem specific and is based on sensors and on knowledge about the nature of the problem.

The problem appears in many different settings. In several cases, such as positioning a car or a ship in an archipelago, a global satellite-based positioning system is available, which makes the problem trivial to solve. In other cases it is not as easy. A robot that is to search a sewer system, map a coral reef or explore an unknown building cannot determine its position using satellites, since their signals are too weak to penetrate buildings or water. For these cases, self-contained solutions must be developed.

In many settings one wants not only to position the unit but also to simultaneously build a map of the surroundings in which it is being positioned. This is called simultaneous localization and mapping (SLAM) and is a very popular problem within autonomous robotics. SLAM has been performed in cities, at sea and in the air, among other places, and is today considered a well-explored problem. This thesis first studies the problem of estimating a ship's position, heading and velocity, as well as the surrounding archipelago, using sequences of radar signals. By matching radar image against radar image, one can estimate how the ship moves over time. An experimental data set with radar sweeps from a Stridsbåt 90 is used to evaluate the system. The experiment is roughly 32 km long and shows that the positioning system works and that the generated map corresponds very well to reality.

A second contribution concerns a problem that at first glance seems far from SLAM, but whose solution is very similar. The problem is to position a number of sensors deployed on the seabed. The purpose of the sensor network is to watch for foreign vessels by detecting the magnetic field disturbances and the sounds they give rise to. The problem is that the exact sensor positions are unknown, since currents and other factors can move the sensors while they sink. The alternative, anchoring them at a known spot on the seabed, is expensive and time consuming. A solution has therefore been developed that positions the sensors by letting them track a vessel with a known magnetic signature, for example the very vessel that deployed them. Using the magnetic field disturbance the vessel gives rise to as it travels around the area, the sensor positions can be determined. Simulations show that the system can position the sensors with high accuracy, especially when the vessel's route is logged with a satellite positioning system.

In some circles there is a desire to be able to position people moving around inside a building. Among professional users such as the police or fire brigades, it is believed that a system showing where everyone is would greatly improve the safety of those involved. For example, it is much easier to find an injured firefighter if one knows what the building looks like and where he is than if one does not. The idea is to sew sensors into their uniforms and use these to follow how the persons move through the building. In some cases one can assume that maps are available (hospitals, schools, etc.); in other cases they are not (apartments and houses). The positioning system must work for all types of buildings and scenarios, and must not rely on unrealistic assumptions about, for example, how the firefighters move.

This thesis studies positioning systems for firefighters based on foot-mounted sensors. These sensors measure the accelerations and angular velocities of the boot, which can be used to estimate its position. If, in addition, one can detect every time the foot is standing still, the positioning framework can be updated with the information that the velocity is zero. This improves the positioning performance considerably. A probabilistic framework that detects that the boot is stationary by studying the accelerometer and gyro signals has therefore been developed. The system has undergone extensive tests to evaluate the detection performance for different movements, different boots and different sensor positions. One conclusion is that a very rigid boot makes it harder to detect when the boot is still on the ground during fast movements such as running. The reason is that the boot is so rigid that it does not deform against the surface but rolls against it, and is therefore never still. It also appears that a foot-mounted sensor is not sufficient when tracking a crawling firefighter. Otherwise, the system works well.

The stand still detection has also been used in a positioning system that is updated with zero velocity every time the foot touches the ground. The system has been evaluated using accurate measurements of the true position of the boot. The experiments show that the position can be estimated well, even though the direction of movement can sometimes be estimated incorrectly, especially when large, fast turns are made. In the future, the system will be improved by taking more information into account, such as compass heading.


Acknowledgments

First of all I would like to thank my supervisor Professor Fredrik Gustafsson for the excellent guidance and the inspiring projects you throw at me. Your insights, knowledge and efficiency never cease to amaze me. My co-supervisor Dr David Törnqvist deserves special thanks for always being so enthusiastic, never being too busy, and for putting that bit of pressure on me that I need to perform well. I can only hope that our collaborations continue to improve and become even more fruitful during the coming years. Also, thank you for proofreading the thesis.

Professor Lennart Ljung and Fredrik had the benevolence of inviting me to join the Automatic Control group. It has been a very interesting journey and I thank you for that. The group is skillfully headed by Professor Svante Gustafsson and Ninna Stensgård, and before that by Lennart, Åsa Karmelind and Ulla Salaneck. You have managed to create an excellent group with supreme working conditions and I salute you for that.

My first contact with the world of academic research was the dynamic duo of Dr Juan Nieto and Dr Fabio Ramos of the ACFR in Sydney, Australia. You gave a fabulous introduction to the wonders of academia with your fine wines and barbecues, and let us not forget the nomihodai karaoke. I can only hope our paths in life will cross again in the near future.

I would also like to thank the rest of my colleagues in the Automatic Control group for our excellent working environment. I would especially like to point out the very collaborative atmosphere among the PhD students. We are all in this together, and it makes everything go so much smoother. The makers of this LaTeX template, Dr Henrik Tidefelt and Dr Gustaf Hendeby, deserve special thanks for keeping the group's theses so beautiful.

Claes Norell and the rest of the Lambohov Fire Brigade have been kind enough to let me in on the secrets of firefighting. The input I get is invaluable, since it allows me to always stay on the right track when developing their future positioning system. I thank you for this and hope that we can deepen our collaboration in the future.

Since life should not be all work and no play, union boss Lic Christian Lundquist, Lic Karl Granström, MSc Hanna Fager and MSc Martin Skoglund deserve a special mention. Our friendship, never-ending discussions, fika breaks, travels, and revolutions big and small make the times so much more fun. May it continue for a long time.

AFU (away from university), Krav Maga Nordic Linköping provides a great physical workout and mental relaxation, combined with a very cozy atmosphere. Think about anything else and you get a brand new face; there is no better motivation for focusing on what is right in front of you. Thank you Jens Berglund and Mårten Szymanowski for teaching me all those things I did not know could be done.

My old friends from the university, Lidingö, around the globe and Broderskapet Jonas, I love that we take the time to bridge the geographical distances for never-ending fun, travels and get-togethers. I hope we can keep it up well into the future, since it means the world to me.

I would also like to thank my parents and my sister for your never-ending love, support and encouragement, despite my difficulties in explaining to you what I am actually doing. I would also like to take this opportunity to welcome the little one who will brighten our lives any day now. Being an uncle is going to be so much fun!

Last but not least, this work could never have been done without the financial support from the Strategic Research Center MOVIII, funded by the Swedish Foundation for Strategic Research (SSF), and CADICS, a Linnaeus center funded by the Swedish Research Council. I would also like to acknowledge the research school Forum Securitatis in which I am participating.

Linköping, April 2011
Jonas Callmer


Contents

Notation xv

I Background

1 Introduction 3
   1.1 Problem Formulation 4
       1.1.1 Indoor Localization 4
       1.1.2 Surface Localization 4
       1.1.3 Underwater Localization 5
   1.2 Contributions 6
       1.2.1 Additional Publications 6
   1.3 Thesis Outline 7

2 Sensor Fusion 9
   2.1 Sensors 9
       2.1.1 Inertial Measurement Unit 10
       2.1.2 RADAR 11
       2.1.3 Global Navigation Satellite System 12
       2.1.4 VICON 13
   2.2 Models 13
       2.2.1 Continuous Models 13
       2.2.2 Discrete Time Models 14
   2.3 Estimation Theory 16
       2.3.1 Kalman Filter 16
       2.3.2 Extended Kalman Filter 17
       2.3.3 Nonlinear Filtering Overview 18

3 Comparison between the Full and the Reduced Model 21
   3.1 Model Structures 21
       3.1.1 Full Model Structure 22
       3.1.2 Reduced Model Structure 23
   3.2 Transfer Function Derivations 24
       3.2.1 Transfer Function of Full Model 24
       3.2.2 Transfer Function of Reduced Model 25
   3.3 Bode Plots Evaluations 26
   3.4 Discussion 28

4 Indoor Localization 31
   4.1 Stand Still Detection 33
       4.1.1 Related Work 33
       4.1.2 Test Statistics Derivation 34
       4.1.3 Test Statistic Distribution Validation 37
       4.1.4 Hidden Markov Model 37
       4.1.5 Experimental Results 41
       4.1.6 Conclusions and Future Work 43
   4.2 Stand Still Detection Performance for Different IMU Positions 44
       4.2.1 Experimental Results 45
       4.2.2 Discussion 48
   4.3 Localization Experiments 50
       4.3.1 Measurements 50
       4.3.2 Models 52
       4.3.3 Filter 53
       4.3.4 Experimental Results 54
       4.3.5 Discussion and Future Work 55

5 Conclusions and Future Work 59
   5.1 Conclusions 59
       5.1.1 Indoor Localization 59
       5.1.2 RADAR SLAM 60
       5.1.3 Underwater Sensor Positioning 60
   5.2 Future Work 60

A Quaternion Properties 63
   A.1 Operations and Properties 63
   A.2 Describing a Rotation using Quaternions 64
   A.3 Rotation Matrix 64
   A.4 Quaternion Dynamics 65

Bibliography 67

II Publications

A RADAR SLAM using Visual Features 73
   1 Introduction 75
   2 Background and Relation to SLAM 78
   3 Theoretical Framework 79
       3.1 Detection Model 79
       3.2 Measurement Model 80
       3.3 Motion Model 81
       3.4 Multi-Rate Issues 82
       3.5 Alternative Landmark Free Odometric Framework 83
   4 SIFT Performance on RADAR Images 85
       4.1 Matching for Movement Estimation 86
       4.2 Loop Closure Matching 86
   5 Experimental Results 87
       5.1 Results 87
       5.2 Map Estimate 90
   6 Conclusions 90
   Bibliography 92

B Silent Localization of Underwater Sensors using Magnetometers 95
   1 Introduction 97
   2 Methodology 99
       2.1 System Description 99
       2.2 State Estimation 101
       2.3 Cramér-Rao Lower Bound 103
   3 Simulation Results 104
       3.1 Magnetometers Only 104
       3.2 Magnetometers and GNSS 105
       3.3 Trajectory Evaluation using CRLB 107
       3.4 Sensitivity Analysis, Magnetic Dipole 107
       3.5 Sensitivity Analysis, Sensor Orientation 108
   4 Conclusions 110
   Bibliography 112


Notation

Abbreviations

Abbreviation  Meaning

CRLB    Cramér-Rao Lower Bound
CRM     Corrosion Related Magnetism
EKF     Extended Kalman Filter
ESDF    Exactly Sparse Delayed-state Filter
FIM     Fisher Information Matrix
GNSS    Global Navigation Satellite System
GPS     Global Positioning System
HMM     Hidden Markov Model
IMU     Inertial Measurement Unit
PF      Particle Filter
RADAR   RAdio Detection And Ranging
RMSE    Root Mean Square Error
SIFT    Scale-Invariant Feature Transform
SLAM    Simultaneous Localization And Mapping
ZUPT    Zero Velocity Update




Estimation

Notation  Meaning

x_t        State at time t
y_t        Measurement at time t
T          Sampling time
x_{t+T|t}  State estimate at time t+T given measurements up to and including time t
P_{t+T|t}  Covariance of the state estimate at time t+T given measurements up to and including time t
p_t        Position state at time t
v_t        Velocity state at time t
a_t        Acceleration state at time t
q_t        Quaternion state at time t
ω_t        Angular velocity state at time t
w_t        Process noise at time t
r_t        Measurement noise at time t
Q          Process noise covariance
R          Measurement noise covariance
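In this notation, the Kalman filter alternates a time update, mapping (x_{t|t}, P_{t|t}) to (x_{t+T|t}, P_{t+T|t}), with a measurement update driven by y_t. The following sketch uses a constant-velocity model with position measurements; the model and all numerical values are illustrative assumptions, not taken from the thesis:

```python
import numpy as np

T = 1.0                                  # sampling time
F = np.array([[1.0, T], [0.0, 1.0]])     # x_t = [p_t, v_t], constant velocity
H = np.array([[1.0, 0.0]])               # y_t = p_t + r_t
Q = 0.1 * np.eye(2)                      # process noise covariance
R = np.array([[0.5]])                    # measurement noise covariance

def time_update(x, P):
    """(x_{t|t}, P_{t|t}) -> (x_{t+T|t}, P_{t+T|t})."""
    return F @ x, F @ P @ F.T + Q

def measurement_update(x, P, y):
    """(x_{t|t-T}, P_{t|t-T}) and y_t -> (x_{t|t}, P_{t|t})."""
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + K @ (y - H @ x)              # correct the state with the innovation
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P
```

Running the two updates alternately on position measurements of a target moving at constant speed makes the state estimate converge to the true position and velocity.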


Part I

Background


1 Introduction

Localization is the problem of determining one's position. The problem ranges from robots to humans, from indoors to the oceans, and from underground to the sky and further on into space. Estimating the position of a robot exploring a sewer system, determining the position of a ship in an archipelago or tracking a firefighter searching through a burning building: all of these are localization.

If one is outdoors and a Global Navigation Satellite System (GNSS) like the Global Positioning System (GPS) is available, the positioning problem becomes trivial, provided that the positioning accuracy is sufficient for the application. Measurements of one's position are then readily available, making position estimation straightforward.

Unfortunately, there are many environments where GNSS signals are not available. Indoors, underground or underwater, the GNSS signals are too weak to be detected. Even outdoors, the GNSS signals can be corrupted. This is commonly caused by reflections against houses, or by the lines of sight to the satellites being blocked by trees or high-rise buildings. Lately, intentional or unintentional jamming of the GPS signals has also emerged as a potentially major problem. All this makes systems that depend entirely on GNSS vulnerable.

Different, redundant means of positioning are therefore required, ones that are tailored for each specific problem. The solutions must be reliable and use all other available information to get the best possible position estimate. That is the problem of localization.




1.1 Problem Formulation

Three problems have been studied in this thesis. Even though they may seem different at first, they all share the problem of localization. The first is indoor localization for firefighters; the other two cover maritime localization problems.

1.1.1 Indoor Localization

The problem of indoor localization for professional users has received a lot of attention lately, since it is a fundamental problem in a multitude of situations; see Beauregard (2007); Feliz et al. (2009); Foxlin (2005); Ojeda and Borenstein (2007); Godha et al. (2006); Woodman and Harle (2009); Grzonka et al. (2010); Aggarwal et al. (2011); Jiménez et al. (2010); Robertson et al. (2009); Widyawan et al. (2008). Being able to track the position of each individual firefighter, police officer or soldier moving around in a building is the dream of every operational command. In case something urgent happens, knowing exactly where all personnel are and where they have been enables swift and accurate cooperation to solve the problem. Having a positioning system would therefore greatly enhance the safety of the personnel. The problem of positioning firefighters in a smoke-filled house, and the envisioned position presentation, is shown in Figure 1.1.

There is a tendency to view all three users (firefighters, police officers and soldiers) as having the same needs, facing the same problem and therefore requiring the same solution. We do not fully see it that way. The levels of stress are different, and the tactics for searching a building are entirely different. This makes the working conditions different, so some solutions work in some situations but not in others. To achieve acceptable positioning, all of this must be taken into consideration and exploited, including tactics.

It is the purpose of this thesis, and of our future work within the area, to find a solution to the indoor localization problem for firefighters entering a building of limited size, such as a residential house or a small office, for which there is no previous knowledge about the layout. The solution should be as simple as possible, use as few sensors as possible, and rest on as few assumptions as possible about user movements and the environment.

1.1.2 Surface Localization

Modern maritime navigation is very GPS-centered. The system is used not only for positioning but often also as a compass, for tracking communications satellites, or as a very accurate time reference. Since the GPS signals are easily jammed, a backup system is needed when navigating in critical environments. We present a solution based entirely on the measurements from the ship's radar, where the scans are used to estimate the position, velocity and heading of the vessel.
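As a rough illustration of how consecutive radar images can yield ego-motion: the thesis work matches SIFT features between scans, but a hypothetical minimal variant estimates a pure translation via the peak of an FFT-based cross-correlation (rotation is ignored, and the scene and numbers below are made up):

```python
import numpy as np

def estimate_shift(prev_scan, curr_scan):
    """Estimate the (row, col) shift d such that curr_scan is roughly
    prev_scan translated by d, from the peak of the cross-correlation."""
    spectrum = np.conj(np.fft.fft2(prev_scan)) * np.fft.fft2(curr_scan)
    corr = np.fft.ifft2(spectrum).real
    idx = np.unravel_index(np.argmax(corr), corr.shape)
    # Indices above the midpoint wrap around to negative shifts.
    return tuple(int(i) if i <= n // 2 else int(i) - n
                 for i, n in zip(idx, corr.shape))

# Synthetic example: one strong radar echo moves 3 rows and 5 columns
# between scans, i.e. the vessel moved by the opposite amount.
prev_scan = np.zeros((64, 64)); prev_scan[20, 30] = 1.0
curr_scan = np.zeros((64, 64)); curr_scan[23, 35] = 1.0
print(estimate_shift(prev_scan, curr_scan))  # (3, 5)
```

Chaining such relative displacements over a scan sequence gives a dead-reckoned trajectory; the feature-based approach in the thesis additionally supports loop closure.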



(a) Firefighters getting prepared to enter a burning building.

(b) Presentation of the current and previous positions of the firefighters with uncertainties.

Figure 1.1: When firefighters enter a burning building, left, they are equipped with sensors. Their estimated positions are presented to the operation management, right.

1.1.3 Underwater Localization

A passive surveillance sensor network can track a passing vessel using underwater sensors that sense the magnetic field disturbances and noises caused by the vessel. The problem is that the exact sensor positions are seldom known, unless a large amount of time and money has been spent on determining them. Currents also cause the sensors to move while sinking, making rapid sensor deployment difficult. Without correct sensor positions, accurate tracking of enemy vessels becomes impossible. In this thesis, we have studied the problem of determining the positions of the sensors from their readings as a friendly vessel with a known magnetic signature passes by. By knowing where the vessel has been, the positions of the sensors can be accurately determined. Once the true sensor positions are known, the network can start performing its original task: searching for naval intruders.
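A toy sketch of the positioning idea, assuming the friendly vessel's trajectory is known (e.g. GNSS-logged). The thesis work uses a proper magnetic dipole model and an EKF; here the field magnitude is crudely approximated as isotropic, |B| = C / r^3, so every reading acts as a range measurement to a known trajectory point, and the sensor position is found by damped Gauss-Newton on log-domain residuals. All numbers are invented:

```python
import numpy as np

C = 5.0e3                                  # assumed known signature strength
sensor_true = np.array([40.0, -25.0])      # true seabed position (unknown to us)

# Friendly vessel trajectory (curved, so the geometry is unambiguous)
# and the simulated field-magnitude readings the sensor would collect.
ts = np.linspace(0.0, 100.0, 25)
traj = np.column_stack([ts, 30.0 * np.sin(ts / 20.0)])
rng = np.random.default_rng(0)
d_true = np.linalg.norm(traj - sensor_true, axis=1)
y = C / d_true**3 * (1.0 + 1e-3 * rng.standard_normal(d_true.size))

# Damped Gauss-Newton on r_i(p) = log(C / |p - traj_i|^3) - log(y_i).
p = np.array([20.0, -20.0])                # start at the nominal drop position
for _ in range(50):
    diff = p - traj
    d = np.linalg.norm(diff, axis=1)
    r = np.log(C / d**3) - np.log(y)
    J = -3.0 * diff / d[:, None]**2        # Jacobian of the residuals w.r.t. p
    p = p + np.linalg.solve(J.T @ J + 1e-9 * np.eye(2), -J.T @ r)

print(p)  # converges close to sensor_true
```

Working in the log domain keeps the steep 1/r^3 falloff well conditioned; each deployed sensor could be solved for independently with the same scheme.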



1.2 Contributions

The main contributions in this thesis are

• A probabilistic framework for stand still detection for a foot-mounted IMU.

• A radar-based backup system for positioning a surface vessel in case of GPS outage.

• A sensor positioning framework where the movements of a friendly vessel are used by the sensor network to determine the individual sensor positions.

Published work of relevance to this thesis is listed below.

J. Callmer, D. Törnqvist, H. Svensson, P. Carlbom, and F. Gustafsson. Radar SLAM using visual features. EURASIP Journal on Advances in Signal Processing, 2010c. Under revision.

J. Callmer, M. Skoglund, and F. Gustafsson. Silent localization of underwater sensors using magnetometers. EURASIP Journal on Advances in Signal Processing, 2010a.

J. Callmer, D. Törnqvist, and F. Gustafsson. Probabilistic stand still detection using foot mounted IMU. In Proceedings of the International Conference on Information Fusion (FUSION), 2010b.

J. Rantakokko, P. Strömbäck, J. Rydell, J. Callmer, D. Törnqvist, F. Gustafsson, P. Händel, M. Jobs, and M. Grudén. Accurate and reliable soldier and first responder indoor positioning: Multi-sensor systems and cooperative localization. IEEE Wireless Communications Magazine, 2011. Accepted for publication.

1.2.1 Additional Publications

Other publications to which the author has contributed, but which are not covered in this thesis, are listed and briefly described below.

Two publications are about using a three axis magnetometer to track vehicles using the magnetic disturbances induced by the vehicles. The publications are based on a master thesis project undertaken by Niklas Wahlström that was supervised by the author.

N. Wahlström, J. Callmer, and F. Gustafsson. Single target tracking using vector magnetometers. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011.

N. Wahlström, J. Callmer, and F. Gustafsson. Magnetometers for tracking metallic targets. In Proceedings of the International Conference on Information Fusion (FUSION), 2010.

As a backup for gnss, Lindsten et al. (2010) covered the problem of using preexisting maps and environmental classification to create a measurement of the global


position of an Unmanned Aerial Vehicle (uav). Photos from a downwards facing camera on the uav were classified into grass, houses, roads etc., which were matched to a map of the area. The main contribution of the author was within the classification parts.

F. Lindsten, J. Callmer, H. Ohlsson, D. Törnqvist, T. B. Schön, and F. Gustafsson. Geo-referencing for UAV navigation using environmental classification. In Proceedings of the 2010 International Conference on Robotics and Automation (ICRA), 2010.

The last two publications are spinoffs from the master thesis project undertaken by Karl Granström and the author in 2007/08. The papers are about loop closure detection in urban SLAM, by matching laser scans in Granström et al. (2009) and photos in Callmer et al. (2008). The aim was to find reliable loop closure detection methods that work in large scale problems.

K. Granström, J. Callmer, F. Ramos, and J. Nieto. Learning to detect loop closure from range data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009.

J. Callmer, K. Granström, J. Nieto, and F. Ramos. Tree of words for visual loop closure detection in urban SLAM. In Proceedings of the 2008 Australasian Conference on Robotics and Automation (ACRA), 2008.

1.3 Thesis Outline

The thesis is divided into three parts. The first part, starting with Chapter 2, is a brief introduction to the sensor fusion tools of modelling, sensors and estimation theory that are necessary in a localization framework. Chapter 3 dives deeper into the modelling problem with a discussion about the differences between two modelling approaches. The second part is about indoor navigation for professional users and is covered in Chapter 4. A framework for stand still detection using a foot mounted inertial measurement unit is presented and thoroughly evaluated. The framework is thereafter included in localization experiments with zero velocity updates. The experiments were performed in the vicon lab, providing extremely accurate ground truth data, which is very unusual, if not unique, for these types of experiments. Chapter 5 summarizes this part of the thesis with conclusions and a discussion about future work.

The third part consists of the publications. Paper A covers Simultaneous Localization and Mapping (slam) in an archipelago using the radar sensor of a high speed naval patrol boat. The second publication, Paper B, is about determining the positions of a large number of underwater sensors equipped with magnetometers and microphones. By tracking the movements of a friendly vessel with a known magnetic signature, the sensor positions can be determined.


2 Sensor Fusion

Sensor fusion is the problem of estimating some properties x_t of a unit using sensors that provide measurements y_t that depend on those properties. In order to do this, models of how y_t is related to x_t and of how x_t changes over time are used.

The properties x_t are called states and can represent any sought system property. The states can for example be related to the sensor platform, representing the position or orientation of the unit; they can be unknown properties of the unit that are needed, such as its weight; or they can be related to the surrounding environment, such as the positions of environmental landmarks. In the problem of localization, the sought after states are commonly position, velocity and orientation, answering the fundamental questions "where is the unit?" and "where is the unit going?".

The states are estimated using a filter that fuses the information from the sensors and the models. Besides the estimated states, the filter also provides the uncertainties of those state estimates. A schematic overview of a sensor fusion framework is given in Figure 2.1.

2.1 Sensors

The sensors used in localization are usually of two kinds: either they measure the movements of the unit, or they measure some aspect of the surroundings. The former could for example be accelerometers measuring the unit accelerations, gyros measuring its angular velocities or, if the unit is a robot, wheel encoders that measure how much the wheels rotate. The latter could be cameras filming the surroundings, radars or laser range finders measuring the distances to objects


[Figure 2.1 shows a block diagram: the sensors and the system models feed the state estimation block, which produces the state estimates.]

Figure 2.1: Overview of a sensor fusion framework

around the unit, or magnetometers measuring the surrounding magnetic field, among others.

The sensors used in this thesis are an inertial measurement unit in the indoor localization experiments and a radar sensor in the maritime radar localization experiments. The underwater sensor positioning was performed using only simulated magnetometer and microphone data, which is why no physical sensors were used.

2.1.1 Inertial Measurement Unit

An inertial measurement unit (imu) contains an accelerometer and a gyroscope measuring the accelerations and angular velocities of the unit. These are measured in three dimensions, making it theoretically possible to track the orientation of the user by integrating the angular velocity measurements. If the orientation is known, the direction of down is known, making it possible to remove the gravity component from the acceleration measurements. Double integrating the remaining user accelerations gives the position of the sensor, in theory that is. In practice, the noisiness of the signals makes it possible to accurately track the sensor position and orientation only over a short time interval.
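The drift from double integration can be illustrated numerically. The sketch below is an illustration only; the bias and noise figures are assumed round numbers, not specifications of any particular sensor. It integrates a stationary accelerometer signal twice and shows how even a small constant bias produces tens of meters of position error within a minute.

```python
import numpy as np

rng = np.random.default_rng(0)
T = 0.01                  # sampling interval: 100 Hz
n = 6000                  # 60 s of data from a sensor lying still
bias, sigma = 0.02, 0.05  # assumed accelerometer bias and noise std (m/s^2)

# The true acceleration is zero; the sensor reports bias plus noise.
acc = bias + sigma * rng.standard_normal(n)

# Dead reckoning by rectangle-rule integration:
# v_{t+T} = v_t + T a_t,  p_{t+T} = p_t + T v_t
vel = np.cumsum(T * acc)
pos = np.cumsum(T * vel)

# A constant bias b alone gives p(t) ≈ b t^2 / 2, i.e. about 36 m after 60 s.
```

The quadratic error growth is why a pure inertial solution is useful only over short time intervals, and why aiding information such as zero velocity updates is needed.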

The sensor unit is commonly also equipped with a magnetometer, measuring the magnetic field around the sensor. Outdoors, the earth magnetic field is commonly free from magnetic interference. Indoors, steel structures, electrical wiring and cabinets produce severe magnetic disturbances, making the magnetometer much less reliable.

When the imu is at rest, a reading of the uncorrupted earth magnetic field and the accelerometer can be used to determine the orientation of the sensor unit. The accelerometer gives the direction of up, and the magnetometer effectively is a compass giving the direction of north. The orientation can thereafter be tracked during movements over a short time interval using the gyros.
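This standstill initialization can be sketched in a few lines. The snippet assumes standard aerospace-style axis conventions and an undisturbed magnetic field; the function names are illustrative, not taken from any particular library.

```python
import numpy as np

def tilt_from_accel(acc):
    """Roll and pitch (rad) from a static accelerometer reading.

    At rest the accelerometer measures only the reaction to gravity,
    so the direction of 'up' in sensor coordinates gives the tilt.
    """
    ax, ay, az = acc
    roll = np.arctan2(ay, az)
    pitch = np.arctan2(-ax, np.sqrt(ay**2 + az**2))
    return roll, pitch

def heading_from_mag(mag, roll, pitch):
    """Yaw (rad) from a magnetometer reading, after tilt compensation.

    The horizontal component of the (undisturbed) earth magnetic field
    acts as a compass needle pointing north.
    """
    mx, my, mz = mag
    # rotate the measured field back to the horizontal plane
    mxh = mx * np.cos(pitch) + mz * np.sin(pitch)
    myh = (mx * np.sin(roll) * np.sin(pitch) + my * np.cos(roll)
           - mz * np.sin(roll) * np.cos(pitch))
    return np.arctan2(-myh, mxh)
```

Together the two functions give a full (roll, pitch, yaw) initialization, after which the gyros take over the orientation tracking.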

The imu used in the indoor localization related experiments in Chapter 4 is an Xsens MT motion sensor, Figure 2.2, equipped with a three axis accelerometer, a three axis gyro and a three axis magnetometer. The sensor is also equipped with


Figure 2.2: An Xsens MT motion sensor, courtesy of Xsens Technologies B.V.

a thermometer to compensate for the temperature dependency of the different components. The signals were sampled at 100 Hz using a 16 bit A/D converter.

The gyroscope and accelerometer are based on micro-machined electromechanical systems (mems) technology, making them small, light, cheap and low on power consumption. The main drawback is their quite poor performance, even when the sensors have been thoroughly calibrated. Gyro biases, scaling errors and temperature dependent disturbances often remain.

2.1.2 RADAR

A pulse radar (RAdio Detection And Ranging) sends out radio waves in different directions, which are reflected or scattered when they hit an object. The reflected signals are picked up by a receiver, usually at the same place as the transmitter, and the time of flight of the signal is calculated. This time is proportional to the distance to the object that reflected the signal, and the heading of the sensor when the signal was transmitted gives the direction to the object. The strength of the reflected signal can also provide some information about the properties of the object.

Two measurements are provided by each reflecting object: the range and the angle from the sensor to the object,

r_t = \sqrt{(s_{x,t} - p_{x,t})^2 + (s_{y,t} - p_{y,t})^2} \quad (2.1)

\alpha_t = \arctan \frac{s_{y,t} - p_{y,t}}{s_{x,t} - p_{x,t}} \quad (2.2)

where s = (s_{x,t}, s_{y,t}) is the position of the reflecting structure and p = (p_{x,t}, p_{y,t}) is the position of the radar sensor at time instant t. Since the uncertainties in angle and range are independent, the total measurement uncertainty will be banana shaped.
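As a minimal sketch, the range and bearing equations (2.1)-(2.2) and their inverse can be written directly in code. Note that `arctan2` is used instead of a plain arctan so that the bearing covers all four quadrants; the function names are illustrative.

```python
import numpy as np

def radar_measurement(s, p):
    """Range and bearing (2.1)-(2.2) from sensor position p to reflector s."""
    r = np.hypot(s[0] - p[0], s[1] - p[1])
    alpha = np.arctan2(s[1] - p[1], s[0] - p[0])
    return r, alpha

def to_cartesian(r, alpha, p):
    """Invert the measurement: reflector position from range and bearing."""
    return p[0] + r * np.cos(alpha), p[1] + r * np.sin(alpha)
```

Mapping a Gaussian range uncertainty and a Gaussian bearing uncertainty through `to_cartesian` is what produces the banana shaped uncertainty region mentioned above.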

A naval radar commonly rotates with a constant speed, transmitting and receiving in one direction at a time. The reflections are plotted in the current direction


when they are received. This gives a circular image of the surrounding islands and vessels that is updated one degree at a time. By saving one 360° radar sweep as an image, a view of the surroundings is provided.

The radar sensor used in Paper A was a military one, making the characteristics of that sensor secret. What we do know is that it had a range of roughly 5 km and a range resolution of about 5 meters. It rotates one revolution in 1.5 seconds, giving measurements in roughly 2000 directions.

One way of using a radar in localization is to take some strong reflections in a full 360° radar scan and try to detect them again in the next scan. The objects creating these reflections are called landmarks and are assumed stationary. By measuring the distance and heading to the landmarks and observing how these change over time, an estimate of how the radar equipped unit is moving can be made. If some landmarks move in a manner that is inconsistent with the other landmarks, they probably belong to a different unit and those reflections should not be used for localization.

2.1.3 Global Navigation Satellite System

gnss systems use multiple satellites and triangulation to determine the position of the user. The most well known system, gps, provides a positioning accuracy of about 10 meters. Besides location, it also gives accurate time information, making it useful also in applications where only accurate time, and not position, is needed, for example in cellphone base station synchronization. The system consists of 30 satellites, and free line of sight to at least 4 of them is required for the positioning to work.

Other systems exist or are planned. The Russian glonass system mostly covers the northern hemisphere, in particular Russia, and is today short of the 24 satellites needed to cover the whole planet. The European Galileo system will use 30 satellites to cover the entire planet and is scheduled to be operational around 2014. A Chinese system, compass, using 35 satellites to cover the planet, will also be deployed in the future. As of today, a smaller system covering only China and its immediate surroundings is in place. A future gnss receiver using signals from all systems will almost always have free line of sight to at least 4 satellites. This will give accurate positioning also in places that are difficult to cover today, such as urban canyons.

One shortcoming of gnss systems is the weakness of the signals. The signals are weaker than the background noise, and only because the receivers know what to look for can the signals be found at all. This makes the system sensitive to signal disturbances due to intentional or unintentional jamming. Today, gps jammers that can easily knock out all gps reception in an area of many square kilometers are available at a low cost; Economist (2011); Grant et al. (2009). This problem, and a suggested solution for maritime vessels, is discussed in more detail in Paper A.


2.1.4 VICON

The vicon1 lab at LiU has been used to collect ground truth data for the positioning experiments. The vicon system uses 10 infrared cameras and infrared lamps to track reflective balls that are attached to the object of interest. In the indoor positioning experiments, the balls were attached to the boot, making the true movements of the imu trackable in real time. The positioning precision provided by the system is on the millimeter level. The size of the active area in the lab is about 3 by 5 meters, forcing the experiments to traverse the same area multiple times in order to make them large enough.

The author would like to thank Piotr Rudol and the rest of the UAS Technologies Lab, Artificial Intelligence and Integrated Computer Systems Division (AIICS)2 at the Department of Computer and Information Science (IDA) at LiU for their assistance during these experiments.

2.2 Models

To compute how a system behaves, a computer needs mathematical models that describe the properties of the system. Mathematical models are most commonly described using differential equations, but to make the models more useful for computers, approximate time discrete difference models are needed.

2.2.1 Continuous Models

The models are commonly on a state space form, where a state vector x_t describes the system properties at time t. The states are related to the dynamical properties of the system and also to the measurements y_t. The fundamental time continuous model is

\dot{x}_t = f(x_t, u_t, w_t) \quad (2.3)

y_t = h(x_t, u_t, \nu_t) \quad (2.4)

where u_t is a known input signal and w and \nu are noise terms representing the model errors. f(\cdot) and h(\cdot) are in general nonlinear functions relating the dynamic properties and the measurements to the system states.

These two types of models are fundamental in a sensor fusion framework. Dynamical models (2.3) describe how the unit states change over time. They are used both to restrict the type of possible movements of the unit and to physically relate the states to each other. The former can for example be a vehicle model stating that a vehicle moves forward or backward, not sideways. The latter is, among other things, that velocity estimates over time translate into changes in estimated position. Since each physical model is a simplification of the true system, the dynamical model is also associated with a process noise w that reflects the uncertainties in the model and the fundamental simplifications of the system which have been imposed.

1 http://www.vicon.com
2 http://www.ida.liu.se/divisions/aiics/aiicssite/index.en.shtml

Measurement models (2.4) relate the sensor measurements to the unit states. These can for example be range and angle measurements that are related to position states in the state vector. Each measurement is also associated with a measurement noise \nu, representing the ever present sensor errors.

2.2.2 Discrete Time Models

Since continuous models cannot be used by a computer, they must be discretized. In most cases discretization is a complex task. One exception is a linear system

\dot{x}_t = Ax_t + Bw_t
y_t = Cx_t + \nu_t \quad (2.5)

where the process noise w_t is assumed piecewise constant over the sampling interval T. The matrices of the sampled system can then be computed as

F = e^{AT} \quad (2.6)

G = \int_0^T e^{A\tau} \, d\tau \, B \quad (2.7)

giving the time discrete linear system

x_{t+T} = Fx_t + Gw_t
y_t = Cx_t + D\nu_t. \quad (2.8)

For most other cases, sampling a continuous system is quite challenging. For details, see Gustafsson (2010).
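The sampling formulas (2.6)-(2.7) can be evaluated numerically with the standard augmented-matrix trick, exp([[A, B], [0, 0]] T) = [[F, G], [0, I]]. The sketch below uses a truncated power series for the matrix exponential (adequate for small ||AT||; a library routine such as `scipy.linalg.expm` would normally be preferred) and checks it on a double integrator.

```python
import numpy as np

def expm(M, terms=30):
    """Matrix exponential via a truncated power series (fine for small ||M||)."""
    out = np.eye(M.shape[0])
    term = np.eye(M.shape[0])
    for k in range(1, terms):
        term = term @ M / k
        out = out + term
    return out

def discretize(A, B, T):
    """Sample the continuous system xdot = Ax + Bw at interval T, (2.6)-(2.7),
    using exp([[A, B], [0, 0]] T) = [[F, G], [0, I]]."""
    n, m = B.shape
    M = np.zeros((n + m, n + m))
    M[:n, :n] = A
    M[:n, n:] = B
    Phi = expm(M * T)
    return Phi[:n, :n], Phi[:n, n:]

# Double integrator driven by acceleration noise:
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
F, G = discretize(A, B, 0.1)
# Expected: F = [[1, T], [0, 1]] and G = [[T^2/2], [T]]
```

For this nilpotent A the series terminates exactly, recovering the familiar constant velocity matrices used later in this thesis.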

A general description of a physical system as a state space model in discrete time is

x_{t+1} = f(x_t, w_t)
y_t = h(x_t, \nu_t). \quad (2.9)

An important special case is when the process and measurement noises are modeled as additive,

x_{t+1} = f(x_t) + w_t
y_t = h(x_t) + \nu_t. \quad (2.10)

It is an intuitively straightforward model, with a deterministic part utilizing basic physical properties and a random part representing everything that affects the system but cannot be predicted. An example of a system with partially nonlinear dynamics and measurements is given in Example 2.1.

The linear system (2.8) is an important special case of modelling, since it allows Kalman filter theory to be applied when solving the problem if the noises are assumed Gaussian.


Example 2.1
Consider a model of an inertial navigation system estimating the sensor position using an accelerometer and a gyro, and also global measurements of position from, for example, a gps receiver.

The states are position p, velocity v and acceleration a, all in global coordinates, followed by the orientation in quaternions q and the angular velocity \omega in the local coordinate system. The measurements are the acceleration y_a and the angular velocity y_\omega, both measured in the local coordinate system, and the global position y_p from the gps receiver. All measurements have additive noise r. The model assumes constant acceleration and angular velocity. Since this is not true, the model is slightly wrong. Process noise w has therefore been added to represent the model errors that are introduced by these assumptions.

Quaternion dynamics and properties are described in Appendix A, but a very brief explanation is given here. S'(q_t)\omega_t describes how local angular velocities translate into changes in quaternions. R(q_t) is the rotation matrix from the global to the local coordinate system, based on the quaternions q_t.

\begin{pmatrix} p_{t+1} \\ v_{t+1} \\ a_{t+1} \\ q_{t+1} \\ \omega_{t+1} \end{pmatrix} =
\begin{pmatrix}
I & TI & \frac{T^2}{2}I & 0 & 0 \\
0 & I & TI & 0 & 0 \\
0 & 0 & I & 0 & 0 \\
0 & 0 & 0 & I & \frac{T}{2}S'(q_t) \\
0 & 0 & 0 & 0 & I
\end{pmatrix}
\begin{pmatrix} p_t \\ v_t \\ a_t \\ q_t \\ \omega_t \end{pmatrix}
+
\begin{pmatrix}
\frac{T^3}{6}I & 0 \\
\frac{T^2}{2}I & 0 \\
TI & 0 \\
0 & \frac{T}{2}S'(q_t) \\
0 & TI
\end{pmatrix}
\begin{pmatrix} w_a \\ w_\omega \end{pmatrix} \quad (2.11)

\begin{pmatrix} y_{a,t} \\ y_{\omega,t} \\ y_{p,t} \end{pmatrix} =
\begin{pmatrix}
0 & 0 & R(q_t) & 0 & 0 \\
0 & 0 & 0 & 0 & I \\
I & 0 & 0 & 0 & 0
\end{pmatrix}
\begin{pmatrix} p_t \\ v_t \\ a_t \\ q_t \\ \omega_t \end{pmatrix}
+
\begin{pmatrix} R(q_t)g \\ 0 \\ 0 \end{pmatrix}
+
\begin{pmatrix} r_a \\ r_\omega \\ r_p \end{pmatrix} \quad (2.12)

The acceleration measurements y_a contain both the user accelerations and the (inverse) gravity component g, which is why g must be included in the measurement model.

As an alternative, a reduced model form is often used. The acceleration and gyro measurements are then used as input signals in the dynamical model, removing the need for the states a_t and \omega_t. That leaves only the position measurement in the measurement model.

\begin{pmatrix} p_{t+1} \\ v_{t+1} \\ q_{t+1} \end{pmatrix} =
\begin{pmatrix} I & TI & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}
\begin{pmatrix} p_t \\ v_t \\ q_t \end{pmatrix}
+
\begin{pmatrix} \frac{T^2}{2}I & 0 \\ TI & 0 \\ 0 & \frac{T}{2}S'(q_t) \end{pmatrix}
\begin{pmatrix} R^T(q_t)y_{a,t} - g + R^T(q_t)r_a + w_a \\ y_{\omega,t} + r_\omega + w_\omega \end{pmatrix} \quad (2.13)

y_p = p_t + r_p \quad (2.14)

What are the practical differences between these two approaches? This will be studied in Chapter 3.


2.3 Estimation Theory

The estimation problem is the problem of estimating the posterior distribution of the states given the measurements, p(x_t | y_{1:t}). The states are often intricately related to the measurements, making them difficult to estimate. With the use of Bayes' theorem

p(x|y) = \frac{p(y|x)\,p(x)}{p(y)} \quad (2.15)

the problem can be reformulated into three straightforward parts: p(y|x) is the likelihood of receiving the measurement y given that the states x are true, p(x) is the prior probability of the states, incorporating all our previous knowledge about the states, and p(y), the probability of the measurement, normalizes the state probabilities.

Estimation theory can be divided into two cases: linear and nonlinear estimation. Linear estimation is straightforward and the results are trustworthy, but the problem is rare in reality. Nonlinear estimation is more difficult and the solutions are prone to diverge, but unfortunately the problem is very common.

In this section we present the Kalman filter, used for linear estimation problems, and the extended Kalman filter, used for mildly nonlinear problems, and then briefly discuss some additional nonlinear filtering methods that are not used in this thesis.

2.3.1 Kalman Filter

The linear estimation problem, where the linear system (2.8) is assumed to have Gaussian noise, is optimally solved using the Kalman filter, Kalman (1960). Since the problem is Gaussian, estimating the mean and the covariance of the states provides the entire solution to p(x_t | y_{1:t}).

The Kalman filter works in a two step procedure with a time update and a measurement update. The time update predicts the future states \hat{x}_{t+1|t} using the dynamic model. Since the model is not perfect, the process noise covariance Q is added to the state covariance P to reflect the increase in uncertainty,

Q = \mathrm{Cov}(w_t). \quad (2.16)

The measurement update uses the difference between the measurement and the predicted measurement, the innovation, to update the states. How much a new measurement should affect the states is decided by the Kalman gain K. K depends on Q and on a measurement noise term R, which describes how trustworthy the measurements are. The relation between these two parameters determines the filter performance. If Q is small in relation to R, the model is deemed more reliable than the measurements, and vice versa. The ratio between Q and R affects the state estimates, while the magnitudes of Q and R determine the size of the state covariances.

The equations defining the Kalman filter are shown in Algorithm 1.


Algorithm 1 Kalman Filter

Require: Signal model (2.8), initial state estimate \hat{x}_{0|0} and covariance P_{0|0}.

1: Time Update
\hat{x}_{t+1|t} = A\hat{x}_{t|t}
P_{t+1|t} = AP_{t|t}A^T + Q \quad (2.17)

2: Measurement Update
K_{t+1} = P_{t+1|t}C^T \left( CP_{t+1|t}C^T + R \right)^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} \left( y_{t+1} - C\hat{x}_{t+1|t} \right)
P_{t+1|t+1} = P_{t+1|t} - K_{t+1}CP_{t+1|t} \quad (2.18)
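A direct transcription of Algorithm 1 into code is short. The sketch below is a generic textbook implementation, not the one used in the thesis experiments; the toy usage estimates a constant scalar state from repeated direct observations, with all numbers chosen for illustration.

```python
import numpy as np

def kalman_step(x, P, y, A, C, Q, R):
    """One Kalman filter iteration: time update (2.17), then measurement update (2.18)."""
    # Time update: propagate the estimate and inflate the covariance
    x = A @ x
    P = A @ P @ A.T + Q
    # Measurement update: correct with the innovation, weighted by the Kalman gain
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    x = x + K @ (y - C @ x)
    P = P - K @ C @ P
    return x, P

# Toy usage: track a constant scalar state observed directly.
A = C = np.eye(1)
Q, R = 0.01 * np.eye(1), np.eye(1)
x, P = np.zeros(1), np.eye(1)
for _ in range(100):
    x, P = kalman_step(x, P, np.array([5.0]), A, C, Q, R)
```

With Q small relative to R the gain settles at a small value, so the estimate approaches the measurement level slowly but with a small stationary covariance, illustrating the Q/R trade-off described above.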

2.3.2 Extended Kalman Filter

If the problem is of the form (2.9) or (2.10) and only mildly nonlinear, the extended Kalman filter (ekf) can be applied. It approximates the nonlinearities using a first order Taylor approximation around the latest state estimate and then applies the Kalman filter to the linearized problem. Convergence cannot be guaranteed, particularly since the ekf gives the optimal solution to the wrong problem. That is, the solution to the linearized problem is optimal; unfortunately, the linearized problem is not the true problem. Despite this, the ekf most often works well.

Starting with the nonlinear model (2.10), a first order Taylor expansion of the measurement function h(\cdot) around the linearization point \bar{x}_t is

h(x_t) \approx h(\bar{x}_t) + h'_x(\bar{x}_t)(x_t - \bar{x}_t) \quad (2.19)

where h'_x(\bar{x}_t) is the Jacobian

h'_x(\bar{x}_t) = \left. \frac{\partial h(x_t)}{\partial x_t} \right|_{x_t = \bar{x}_t}. \quad (2.20)

The measurement model can now be reformulated according to

y_t = h(x_t) + \nu_t \quad \Leftrightarrow
y_t - h(\bar{x}_t) + h'_x(\bar{x}_t)\bar{x}_t = h'_x(\bar{x}_t)x_t + \nu_t \quad \Leftrightarrow
\bar{y}_t = h'_x(\bar{x}_t)x_t + \nu_t. \quad (2.21)

Correspondingly, the dynamical model can be expanded around \bar{x}_t, giving the new model

x_{t+1} = f(x_t) + w_t \quad \Leftrightarrow
x_{t+1} - f(\bar{x}_t) + f'_x(\bar{x}_t)\bar{x}_t = f'_x(\bar{x}_t)x_t + w_t \quad \Leftrightarrow
\bar{x}_{t+1} = f'_x(\bar{x}_t)x_t + w_t \quad (2.22)


where

f'_x(\bar{x}_t) = \left. \frac{\partial f(x_t)}{\partial x_t} \right|_{x_t = \bar{x}_t}. \quad (2.23)

If the signal model is (2.9), the Jacobians with respect to the noise parameters w and \nu are also needed,

f'_w(\bar{x}_t) = \left. \frac{\partial f(x_t, w_t)}{\partial w_t} \right|_{x_t = \bar{x}_t} \qquad h'_\nu(\bar{x}_t) = \left. \frac{\partial h(x_t, \nu_t)}{\partial \nu_t} \right|_{x_t = \bar{x}_t}. \quad (2.24)

The extended Kalman filter is summarized in Algorithm 2.

Algorithm 2 Extended Kalman Filter

Require: Signal model (2.9), initial state estimate \hat{x}_{0|0} and covariance P_{0|0}.

1: Time Update
\hat{x}_{t+1|t} = f(\hat{x}_{t|t})
P_{t+1|t} = f'_x(\hat{x}_{t|t}) P_{t|t} f'_x(\hat{x}_{t|t})^T + f'_w(\hat{x}_{t|t}) Q f'_w(\hat{x}_{t|t})^T \quad (2.25)

2: Measurement Update
S_{t+1} = h'_x(\hat{x}_{t+1|t}) P_{t+1|t} h'_x(\hat{x}_{t+1|t})^T + h'_\nu(\hat{x}_{t+1|t}) R h'_\nu(\hat{x}_{t+1|t})^T
K_{t+1} = P_{t+1|t} h'_x(\hat{x}_{t+1|t})^T S_{t+1}^{-1}
\hat{x}_{t+1|t+1} = \hat{x}_{t+1|t} + K_{t+1} \left( y_{t+1} - h(\hat{x}_{t+1|t}) \right)
P_{t+1|t+1} = P_{t+1|t} - P_{t+1|t} h'_x(\hat{x}_{t+1|t})^T S_{t+1}^{-1} h'_x(\hat{x}_{t+1|t}) P_{t+1|t} \quad (2.26)
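Algorithm 2 can likewise be sketched in code. In this illustrative implementation (not the thesis code) the Jacobians are computed numerically by finite differences rather than derived by hand, and the noises are taken as additive as in (2.10), so the noise Jacobians reduce to identity matrices.

```python
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Numerical Jacobian of func at x by forward differences."""
    fx = func(x)
    J = np.zeros((fx.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (func(x + dx) - fx) / eps
    return J

def ekf_step(x, P, y, f, h, Q, R):
    """One EKF iteration (2.25)-(2.26) for additive noise, where f'_w = h'_nu = I."""
    F = jacobian(f, x)
    x = f(x)                       # time update
    P = F @ P @ F.T + Q
    H = jacobian(h, x)             # linearize around the predicted state
    S = H @ P @ H.T + R            # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (y - h(x))         # measurement update
    P = P - K @ H @ P
    return x, P
```

For a linear f and h the numerical Jacobians equal the system matrices and the recursion reduces exactly to the Kalman filter of Algorithm 1.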

2.3.3 Nonlinear Filtering Overview

For severely nonlinear filtering problems, the ekf cannot be used. Other solutions exist, each with its pros and cons. Below follows a brief description of some of the most common nonlinear filters.

The particle filter does not try to model the entire pdf of the posterior distribution of x_t, Gordon et al. (1993). Instead it approximates the distribution using N samples, so called particles, each having a weight, \{\gamma_t^i\}_{i=1}^N. N has to be large to make the approximation valid. The resulting approximation of the posterior distribution is p(x_t | y_{1:t}) \approx \sum_{i=1}^N \gamma_t^i \delta(x_t - x_t^i), where \delta(\cdot) is Dirac's delta function. The particle filter can be used for severely nonlinear estimation problems, but it is not guaranteed to converge. Still, the filter has become very popular since it can handle difficult nonlinear non-Gaussian problems, is easy to implement and is quite intuitive.
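A bootstrap particle filter in the spirit of Gordon et al. (1993) can be sketched in a few lines. The scenario below, a scalar state observed in Gaussian noise with all numbers made up for illustration, propagates the particles, weights them by the measurement likelihood, and resamples.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 2000                          # number of particles
x_true = 2.0                      # true (here constant) state
sigma_w, sigma_v = 0.1, 0.5       # assumed process and measurement noise std

particles = rng.normal(0.0, 5.0, N)   # samples from a broad prior p(x)
for _ in range(50):
    # time update: propagate each particle through the dynamics
    particles = particles + sigma_w * rng.standard_normal(N)
    # measurement update: weight particles by the likelihood p(y | x^i)
    y = x_true + sigma_v * rng.standard_normal()
    w = np.exp(-0.5 * ((y - particles) / sigma_v) ** 2)
    w = w / w.sum()
    # resample with replacement to avoid weight degeneracy
    particles = particles[rng.choice(N, size=N, p=w)]

estimate = particles.mean()       # approximates the posterior mean
```

Nothing here assumes linearity or Gaussianity: any dynamics can be used in the propagation step and any likelihood in the weighting step, which is the appeal of the method.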

If the dimension of the problem is low, the point mass filter can be used. It grids the state space and computes the posterior distribution in all the grid points. It can be applied to any nonlinear and non-Gaussian model. Its drawback is that the number of grid points grows exponentially with the dimension of the state vector, making the filter applicable only to low dimensional problems.


Further, the unscented Kalman filter, Julier et al. (1995), deterministically samples so called sigma points around the mean of the state distribution. These sigma points are propagated through the nonlinear models and are used to reconstruct the mean and covariance of the state estimates. This describes the true distribution of the states more accurately than the ekf, and requires no time consuming Jacobian computations.

Other nonlinear filtering methods, such as the Gaussian sum Kalman filter, also exist. It models the state pdf using multiple Gaussians, making it applicable to multimodal problems.


3 Comparison between the Full and the Reduced Model

A common situation in navigation systems is that we want to estimate a state but only have the derivative of the state available as a measurement. One example is when the heading is estimated using a gyro; another is when velocity or position is estimated using an accelerometer. The situation can also be as in Example 2.1 in Section 2.2.2, where both position and acceleration measurements are available.

When we choose our dynamical model, the question is whether we should model the measurements as outputs, or reduce the state space by treating the measurements as inputs. In the former, all measurements have a corresponding state; in the case of an acceleration measurement we have an acceleration state as well as a velocity state and a position state. This was the form described first in Example 2.1. In the latter, the derivative measurements are used as input signals during the time update of the integral states. In this case the acceleration measurements are used only as time update inputs for the velocity and position states. This was the second model form described in Example 2.1. The reduced model form is also known as a Luenberger observer, Rugh (1996).

Since the reduced input model has fewer states than the full state model, it requires fewer computations, but what are the other differences between these model structures? That will be studied in this chapter.

3.1 Model Structures

One way to study the differences between the model structures is to compare the transfer functions of the systems. Unfortunately, it is not possible to calculate the transfer function from an acceleration measurement to a position estimate if no position measurement is available, since the estimation uncertainty will grow indefinitely with time. This is natural, since the problem is not observable. If both position and acceleration measurements are available, the position can be estimated using a stationary Kalman filter. Since the filter is stationary, the transfer function can be calculated. By choosing the measurement noise of the position measurement extremely large, we are in practice back to the structure of estimating a position using only acceleration measurements.

For the analysis, the following model states and measurements are used: x_p, x_v and x_a represent the position, velocity and acceleration estimates, respectively, and y_p, y_v and y_a represent the position, velocity and acceleration measurements, respectively. Two transfer functions are interesting to investigate: acceleration measurement to velocity estimate, and acceleration measurement to position estimate. Note that the former could equally well be seen as any integration, for example from angular velocity measurements to heading estimates.

3.1.1 Full Model Structure

In the full model all states, including the acceleration, are estimated. This gives the state and measurement vectors

x_t = \begin{pmatrix} x_{p,t} & x_{v,t} & x_{a,t} \end{pmatrix}^T \qquad y_t = \begin{pmatrix} y_{p,t} & y_{v,t} & y_{a,t} \end{pmatrix}^T.

The basic full model structure can be written as

x_{t+T} = Ax_t + Bw_t
y_t = Cx_t + r_t \quad (3.1)

where r_t is the measurement noise and w_t the process noise.

The full model is in this case

\begin{pmatrix} x_{p,t+T} \\ x_{v,t+T} \\ x_{a,t+T} \end{pmatrix} =
\begin{pmatrix} 1 & T & \frac{T^2}{2} \\ 0 & 1 & T \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_{p,t} \\ x_{v,t} \\ x_{a,t} \end{pmatrix}
+
\begin{pmatrix} \frac{T^3}{6} \\ \frac{T^2}{2} \\ T \end{pmatrix} w_t

\begin{pmatrix} y_{p,t} \\ y_{v,t} \\ y_{a,t} \end{pmatrix} =
\begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
\begin{pmatrix} x_{p,t} \\ x_{v,t} \\ x_{a,t} \end{pmatrix}
+
\begin{pmatrix} r_{p,t} \\ r_{v,t} \\ r_{a,t} \end{pmatrix} \quad (3.2)

where the process noise w \sim \mathcal{N}(0, \sigma_w^2), and r_p \sim \mathcal{N}(0, \sigma_p^2), r_v \sim \mathcal{N}(0, \sigma_v^2) and r_a \sim \mathcal{N}(0, \sigma_a^2) are the noises of the position, velocity and acceleration measurements, respectively. By choosing \sigma_p^2 and \sigma_v^2 very large, these measurements are deemed insignificant by the filter and only the acceleration measurements are used to estimate the velocity and position.

One fundamental property of the full model structure is that it assumes that theacceleration is constant over the sampling interval.
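The full model above can be written down directly in code. The following is an illustrative sketch (not from the thesis; the function names are my own) that builds the matrices of (3.2) for a given sampling interval T:

```python
# Sketch of the discretized constant-acceleration (full) model of (3.2).

def full_model(T):
    """State x = (position, velocity, acceleration)."""
    A = [[1.0, T, T**2 / 2],
         [0.0, 1.0, T],
         [0.0, 0.0, 1.0]]
    B = [T**3 / 6, T**2 / 2, T]   # process noise (jerk) enters here
    C = [[1.0, 0.0, 0.0],         # all three states are measured
         [0.0, 1.0, 0.0],
         [0.0, 0.0, 1.0]]
    return A, B, C

def time_update(A, x):
    """One noise-free time update, x_{t+T} = A x_t."""
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]
```

Starting from x = (0, 0, 1), i.e. a unit acceleration, one noise-free time update gives the position T^2/2 and the velocity T, as expected for a constant-acceleration step.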


3.1 Model Structures 23

3.1.2 Reduced Model Structure

In the reduced model the acceleration measurements are used as direct inputs in the time updates of the velocity and position estimates. This model also assumes that the acceleration is constant over the sampling interval.

Again, measurements of position and velocity are needed to get a stationary Kalman filter. The state and measurement vectors used are

\[ x^u_t = \begin{pmatrix} x_{p,t} & x_{v,t} \end{pmatrix}^T, \quad y^u_t = \begin{pmatrix} y_{p,t} & y_{v,t} \end{pmatrix}^T. \quad (3.3) \]

The superscript u will be used throughout the chapter to indicate that we mean the reduced model.

The basic model structure is in this case

\[ x^u_{t+T} = A^u x^u_t + B^u_1 y_{a,t} + B^u_1 r_{a,t} + B^u_2 w_t, \quad y^u_t = C^u x^u_t + r^u_t \quad (3.4) \]

since the measurement noise of the acceleration now must be taken into consideration as a process noise. The model can be rewritten as

\[ x^u_{t+T} = A^u x^u_t + B^u_1 y_{a,t} + \begin{pmatrix} B^u_1 & B^u_2 \end{pmatrix} \begin{pmatrix} r_{a,t} \\ w_t \end{pmatrix}, \quad y^u_t = C^u x^u_t + r^u_t. \quad (3.5) \]

Since r_a and w are independent, the resulting process noise covariance is

\[ Q^u = \begin{pmatrix} B^u_1 & B^u_2 \end{pmatrix} \begin{pmatrix} \sigma_a^2 & 0 \\ 0 & \sigma_w^2 \end{pmatrix} \begin{pmatrix} B^u_1 & B^u_2 \end{pmatrix}^T. \quad (3.6) \]

Since all measurements y^u in (3.5) should be deemed insignificant, the measurement covariances of r^u will be very large. To make the model comparison straightforward, the same noise parameters are used in both models. Therefore, the two noise parameters in (3.6) have not been replaced by a single parameter.

The reduced model becomes

\[ \begin{pmatrix} x_{p,t+T} \\ x_{v,t+T} \end{pmatrix} = \begin{pmatrix} 1 & T \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_{p,t} \\ x_{v,t} \end{pmatrix} + \begin{pmatrix} \frac{T^2}{2} \\ T \end{pmatrix} y_{a,t} + \begin{pmatrix} \frac{T^2}{2} & \frac{T^3}{6} \\ T & \frac{T^2}{2} \end{pmatrix} \begin{pmatrix} r_{a,t} \\ w_t \end{pmatrix} \]

\[ \begin{pmatrix} y_{p,t} \\ y_{v,t} \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix} \begin{pmatrix} x_{p,t} \\ x_{v,t} \end{pmatrix} + \begin{pmatrix} r_{p,t} \\ r_{v,t} \end{pmatrix}. \quad (3.7) \]

In this model, the time update step will in practice provide all new information since σ_p^2 and σ_v^2 are very large.
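As a companion sketch (again illustrative, with hypothetical function names), the reduced model matrices of (3.7) and the process noise covariance Q^u of (3.6) can be formed as:

```python
# Sketch of the reduced model (3.7) and its process noise covariance (3.6).

def reduced_model(T):
    """State x^u = (position, velocity); the acceleration measurement is an input."""
    Au = [[1.0, T],
          [0.0, 1.0]]
    B1 = [T**2 / 2, T]         # gain of the acceleration input y_a
    B2 = [T**3 / 6, T**2 / 2]  # gain of the process (jerk) noise w
    return Au, B1, B2

def Qu(T, var_a, var_w):
    """Q^u = (B1 B2) diag(var_a, var_w) (B1 B2)^T, as in (3.6)."""
    _, B1, B2 = reduced_model(T)
    return [[B1[i] * var_a * B1[j] + B2[i] * var_w * B2[j] for j in range(2)]
            for i in range(2)]
```

Note how the acceleration measurement noise σ_a^2 enters Q^u through B^u_1, exactly as the text states: the measurement noise is turned into a process noise.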


3.2 Transfer Function Derivations

To compare the models, the transfer functions from the measurements to the estimates are derived. They are calculated using the stationary Kalman filter.

3.2.1 Transfer Function of Full Model

For the full model the transfer function is straightforwardly calculated using the time and measurement update equations of the Kalman filter.

The time update of the full state model is

\[ x_{t+T|t} = A x_{t|t} \quad (3.8) \]

while the measurement update is

\[ x_{t+T|t+T} = x_{t+T|t} + K_{t+T}\left(y_{t+T} - C x_{t+T|t}\right). \quad (3.9) \]

For a stationary Kalman filter, the Kalman gain K_{t+T} = K. K is calculated by first solving the Riccati equation

\[ P = A P A^T + Q - A P C^T (C P C^T + R)^{-1} C P A^T \quad (3.10) \]

to calculate P, and then determining K as

\[ K = P C^T \left(C P C^T + R\right)^{-1}. \quad (3.11) \]

The covariance matrices R and Q in (3.10) are

\[ R = \begin{pmatrix} \sigma_p^2 & 0 & 0 \\ 0 & \sigma_v^2 & 0 \\ 0 & 0 & \sigma_a^2 \end{pmatrix}, \quad Q = B B^T \sigma_w^2. \]

Unfortunately, there is no compact solution P to (3.10) as a function of T, σ_p^2, σ_v^2, σ_a^2 and σ_w^2, so K cannot be expressed as a function of these variables. Numerical solutions to (3.10) are, however, readily available for a given set of noise parameters and sampling time.
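A minimal numerical sketch of this procedure (illustrative only, not thesis code; moderate noise values are used here instead of 10^15 so that the fixed-point iteration converges quickly):

```python
# Sketch: solve the stationary Riccati equation (3.10) by fixed-point
# iteration, then form the stationary Kalman gain K of (3.11).
import numpy as np

def stationary_gain(A, B, C, R, var_w, iters=5000):
    """Iterate (3.10) to a fixed point, then compute K from (3.11)."""
    Q = np.outer(B, B) * var_w               # Q = B B^T sigma_w^2
    P = np.eye(A.shape[0])
    for _ in range(iters):
        S = C @ P @ C.T + R
        P = A @ P @ A.T + Q - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T)
        P = 0.5 * (P + P.T)                  # keep P numerically symmetric
    K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)
    return K, P

T = 0.01
A = np.array([[1.0, T, T**2 / 2], [0.0, 1.0, T], [0.0, 0.0, 1.0]])
B = np.array([T**3 / 6, T**2 / 2, T])
C = np.eye(3)
R = np.diag([1.0, 1.0, 0.1])  # moderate values, chosen for fast convergence
K, P = stationary_gain(A, B, C, R, var_w=1.0)
```

At the fixed point, plugging P back into the right hand side of (3.10) reproduces P, which is how convergence can be checked.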

Introducing the q-operator, q x_t = x_{t+T} and q^{-1} x_t = x_{t-T}, and using (3.8), (3.9) and K, we can now derive the transfer functions H:

\[ x_{t+T|t+T} = A x_{t|t} + K\left(y_{t+T} - C A x_{t|t}\right) \Leftrightarrow \]
\[ (qI - A + KCA)\, x_{t|t} = K q\, y_t \Leftrightarrow \]
\[ x_{t|t} = \underbrace{(qI - A + KCA)^{-1} K q}_{H(q)}\; y_t \quad (3.12) \]

where I is the identity matrix.

H(q) is the set of transfer functions H_ij(q), which should be read as transfer functions from measurement j to state i, where i, j ∈ {p, v, a}:

\[ x_p = H_{pp}(q) y_p + H_{pv}(q) y_v + H_{pa}(q) y_a \]
\[ x_v = H_{vp}(q) y_p + H_{vv}(q) y_v + H_{va}(q) y_a \]
\[ x_a = H_{ap}(q) y_p + H_{av}(q) y_v + H_{aa}(q) y_a. \quad (3.13) \]

The two most interesting transfer functions to study are H_pa(q) and H_va(q), since they reveal how the position and velocity estimates depend on the acceleration measurements. They will be compared to the corresponding transfer functions of the reduced model.
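A sketch of how H(q) can be evaluated numerically on the unit circle, q = e^{jωT} (illustrative, not thesis code; the noise values are chosen for numerical convenience rather than to reproduce the figures):

```python
# Sketch: evaluate H(q) = (qI - A + KCA)^{-1} K q of (3.12) at q = exp(j*w*T)
# and inspect H_va, the acceleration-measurement-to-velocity-estimate channel.
import numpy as np

T = 0.01
A = np.array([[1.0, T, T**2 / 2], [0.0, 1.0, T], [0.0, 0.0, 1.0]])
B = np.array([T**3 / 6, T**2 / 2, T])
C = np.eye(3)
R = np.diag([1e4, 1e4, 0.1])  # large (but not 1e15) pos/vel noise, for numerics

P = np.eye(3)
for _ in range(20000):        # stationary Riccati iteration, eq. (3.10)
    S = C @ P @ C.T + R
    P = A @ P @ A.T + np.outer(B, B) - A @ P @ C.T @ np.linalg.solve(S, C @ P @ A.T)
K = P @ C.T @ np.linalg.inv(C @ P @ C.T + R)  # eq. (3.11)

def H(w):
    """H(q) of (3.12) evaluated on the unit circle, q = exp(j*w*T)."""
    q = np.exp(1j * w * T)
    return np.linalg.solve(q * np.eye(3) - A + K @ C @ A, K) * q

# H_va is row v (index 1), column a (index 2); in the integration band its
# gain should fall off roughly as 1/w, i.e. a -1 slope in the Bode plot.
g_lo, g_hi = abs(H(0.5)[1, 2]), abs(H(2.0)[1, 2])
```

Between ω = 0.5 and ω = 2 rad/s the gain ratio g_lo/g_hi should be close to the factor 4 of a pure integrator, up to the corner effects discussed in Section 3.3.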

3.2.2 Transfer Function of Reduced Model

The time update of the reduced model uses the acceleration measurement to estimate the future states:

\[ x^u_{t+T|t} = A^u x^u_{t|t} + B^u_1 y_{a,t}. \quad (3.14) \]

The measurement update uses the measurements y^u_{t+T}, which the filter has absolutely no confidence in, so they will only affect the state estimates in a minuscule way. Still, the update is needed to compute the transfer functions:

\[ x^u_{t+T|t+T} = x^u_{t+T|t} + K^u_{t+T}\left(y^u_{t+T} - C^u x^u_{t+T|t}\right). \quad (3.15) \]

Correspondingly to the full state model, K^u_{t+T} = K^u for a stationary filter, and it is calculated using P^u, acquired from the solution of the Riccati equation as in Section 3.2.1. The necessary process noise Q^u was described in (3.6) and the measurement noise is

\[ R^u = \begin{pmatrix} \sigma_p^2 & 0 \\ 0 & \sigma_v^2 \end{pmatrix}. \quad (3.16) \]

Now, inserting (3.14) into (3.15) and using the q-operator and K^u gives

\[ x^u_{t+T|t+T} = A^u x^u_{t|t} + B^u_1 y_{a,t} + K^u\left(y^u_{t+T} - C^u A^u x^u_{t|t} - C^u B^u_1 y_{a,t}\right) \Leftrightarrow \]
\[ (qI - A^u + K^u C^u A^u)\, x^u_{t|t} = (I - K^u C^u) B^u_1 y_{a,t} + K^u q\, y^u_t \Leftrightarrow \]
\[ x^u_{t|t} = \underbrace{(qI - A^u + K^u C^u A^u)^{-1} K^u q}_{H^u(q)}\; y^u_t + \underbrace{(qI - A^u + K^u C^u A^u)^{-1} (I - K^u C^u) B^u_1}_{H^u_a(q)}\; y_{a,t}. \quad (3.17) \]

The transfer functions can be written as

\[ x_p = H^u_{pp}(q) y_p + H^u_{pv}(q) y_v + H^u_{pa}(q) y_a \]
\[ x_v = H^u_{vp}(q) y_p + H^u_{vv}(q) y_v + H^u_{va}(q) y_a \quad (3.18) \]


where H^u_pa(q) and H^u_va(q) stem from H^u_a(q), while the other four originate from H^u(q).

H^u_pa(q) and H^u_va(q) should be compared to H_pa(q) and H_va(q) to determine how the reduced model and the full state model differ. This will be done by studying the Bode plots of the transfer functions for a given parameter setting.

3.3 Bode Plots Evaluations

To evaluate the transfer functions, the unknown parameters must be set. As explained in Section 3.1, σ_p^2 and σ_v^2 were chosen very large to ensure that their corresponding measurements do not affect the state estimates. They were set to σ_p^2 = σ_v^2 = 10^15. The sampling frequency was set to 100 Hz since it is common in, for example, imus. This gives T = 0.01.

Two parameters are now left: the process noise covariance σ_w^2 and the acceleration measurement noise covariance σ_a^2. As long as they are kept much smaller than σ_p^2 and σ_v^2, only the ratio between the two determines the filter estimates. One can therefore be fixed while the other is changed to see if and how the transfer functions change. Since the amplitude is of no importance in this case, the process noise covariance is fixed to σ_w^2 = 1 for convenience. Note that for the reduced system, the total process noise of the system is a linear combination of σ_w^2 and σ_a^2, so the ratio is much less important. Therefore, the transfer functions for the reduced system hardly change at all for different noise covariance ratios.

First, the measurement noise covariance is set to σ_a^2 = 0.1. The resulting Bode plots of the transfer functions from acceleration measurement to velocity estimate, H_va and H^u_va, are shown in Figure 3.1. The gain curve has slope −1 for most frequencies, which means a pure integration. For high frequencies the full model loses gain, indicating that it does not trust inputs at these frequencies as much. The reduced model, on the other hand, integrates the acceleration measurements for all high frequencies. The gain loss for low frequencies experienced by the reduced model is related to the position and velocity measurements y_p and y_v and is therefore irrelevant, since those measurements normally do not exist. The larger σ_p^2 and σ_v^2 are chosen, the lower the frequency at which the input gain loss appears. Unfortunately, the Riccati equation cannot be solved for arbitrarily large σ_p^2 and σ_v^2 in Matlab.

The transfer functions from acceleration to position estimate, H^u_pa and H_pa, are shown in Figure 3.2. The gain has a −2 slope, indicating a double integration. Again, the full model has a gain loss for high frequency measurements, since the model finds these less trustworthy, while the reduced model double integrates almost all frequency components. The low frequency plateau in the reduced model is related to the position and velocity measurements.

The knee frequency, i.e. the frequency where the full model breaks off from its −1 slope in Figure 3.1 or its −2 slope in Figure 3.2, is related to the signal-to-noise ratio (snr) σ_w^2/σ_a^2. A first indication of this is shown in Figure 3.3 where σ_a^2 = 0.001,


[Figure 3.1: Bode plot of transfer functions H^u_va and H_va for σ_a^2 = 0.1.]

[Figure 3.2: Bode plot of transfer functions H^u_pa and H_pa for σ_a^2 = 0.1.]


[Figure 3.3: Bode plot of transfer functions H^u_pa and H_pa for σ_a^2 = 0.001.]

giving an snr a hundred times larger than in Figure 3.2. The knee frequency is here significantly higher than in Figure 3.2, roughly 35 rad/s compared to 4.5 rad/s before.

When the knee frequency is plotted against the snr, a clear connection is visible, see Figure 3.4. The relation between the 10-logarithm of the knee frequency β and the 10-logarithm of the snr is almost perfectly linear. The exact frequency to choose as β is quite ad hoc though, especially for the low snr cases. The linear function is estimated to be

\[ \log_{10}(\beta) = 0.5 \log_{10}(\mathrm{SNR}) + 0.05. \quad (3.19) \]

This knee frequency is related to the positions of the system's poles. There are always two poles very close to 1, resulting in a double integration. A third pole is found on the real axis inside the unit circle. For lower snr, the third pole wanders closer to 1, causing the gain curve to drop at lower frequencies. The pole placement of H_pa is shown using a root locus in Figure 3.5.
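The estimated relation (3.19) is straightforward to evaluate; a small sketch (the helper name is my own):

```python
# Sketch of the fitted knee-frequency relation (3.19),
# log10(beta) = 0.5*log10(snr) + 0.05, where snr = sigma_w^2 / sigma_a^2.
import math

def knee_frequency(snr):
    """Knee frequency beta in rad/s; equivalent to beta = 10**0.05 * sqrt(snr)."""
    return 10 ** (0.5 * math.log10(snr) + 0.05)
```

For σ_a^2 = 0.001 and σ_w^2 = 1, i.e. snr = 1000, this gives β ≈ 35.5 rad/s, consistent with the roughly 35 rad/s read off Figure 3.3.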

3.4 Discussion

The main drawback of using the reduced model instead of the full model is that the position and velocity estimates become more sensitive to high frequency noise in the acceleration measurement. Everything is accepted as true user acceleration and is integrated straight into the states. The full model has an additional


[Figure 3.4: The knee frequency of H_pa and H_va related to snr.]

step that prefilters the acceleration measurement before it enters the state estimates, making them less noisy. The full filter estimates are therefore a tradeoff between the trustworthiness of the model and the measurements, while the reduced model has no such tweaking abilities. The only thing one can influence in the reduced model is the size of the state covariances, not the estimates themselves.

One might get the idea of low pass filtering the acceleration measurements before they are fed to the reduced model, to compensate for the high frequency gain difference. Unfortunately, this will only result in the measurement noise r_a being frequency dependent. This would in turn have to be compensated for in the reduced filter, so the low pass filter only solves the problem by introducing a new one.

Another option is to use a different sampling technique, like first-order hold or the bilinear transformation. This might smooth out the acceleration measurements and thereby dampen the high frequency effects of the zero-order hold sampling that is used.

Since the main advantage of the reduced model is its simpler implementation, the application must specify which model structure to use. If computational resources are scarce, the reduced model might be preferable despite its larger sensitivity, especially if the reduced structure can be applied in multiple dimensions.


[Figure 3.5: Root locus of the poles of H_pa for different snr. For lower snr, the pole on the real axis moves closer to 1. Two poles are always very close to 1.]


4 Indoor Localization

There is a growing desire in some fields to track people moving around in a building. When firefighters, police officers or soldiers search through a building, accurately knowing the true position of the personnel would greatly enhance their safety. Finding a trapped firefighter in a burning building is far easier if his or her position is available to the rescue team, with or without a map.

The problem of indoor localization has received an increasing amount of attention in the last couple of years; Beauregard (2007); Feliz et al. (2009); Foxlin (2005); Ojeda and Borenstein (2007); Godha et al. (2006); Woodman and Harle (2009); Grzonka et al. (2010); Aggarwal et al. (2011); Jiménez et al. (2010); Robertson et al. (2009); Widyawan et al. (2008). Since gps signals cannot be used indoors due to the extreme weakness of the signals, new positioning techniques must be developed. The drive is to put sensors in the first responders' uniforms and use these to track the movements and thereby the position of the user. This has led to a transition from robot sensor platforms to human ones.

In robotics, localization spans all types of robots, which can be equipped with a wide variety of sensors and operate in a multitude of environments. Particularly the slam problem, estimating a map while positioning oneself in the same map, has been well studied. The land based robot commonly used for this type of problem is equipped with wheel encoders, laser sensors, cameras, accelerometers, etc. A robot is a neat platform since the structural relationships between the sensors are constant and the sensors are almost always observing the surroundings from the same height and tilt.

The human sensor platform provides new challenges due to the non-sturdiness of the human body compared to a robot. If the person is equipped with multiple sensors on different positions on the body, the distances between these sensors


will not be fixed. Thus, the mutual positions of all sensors can vary, but only in restricted ways. For example, the distance from the head to the feet can be shorter than the full length of the user if he or she is crouching, but there are restrictions on how tall an uninjured user can be.

To track a person, a variety of sensors can be used. An imu with accelerometers and gyros, and maybe also magnetometers, is simple and cheap and is therefore a very common sensor. Unfortunately, a positioning system that only uses accelerometers and gyros is prone to be quite wrong quite soon, so it is usually supported by a range measuring radio device such as WiFi or Ultra Wide Band, Woodman and Harle (2009), or is fused with preexisting maps for enhanced tracking precision, Woodman and Harle (2009); Aggarwal et al. (2011); Widyawan et al. (2008).

The imus used for indoor localization are small and cheap and consequently perform quite poorly. There is commonly a drift in the gyros, causing the orientation estimate to be incorrect after a short period of time. Since the orientation is wrong, the direction of down is wrong, and a part of the gravitational acceleration will instead be believed to originate from the user movements. This error is double integrated to estimate the sensor position, resulting in a positioning error that grows cubically in time. This rapidly causes very large positioning errors.

By mounting the imu on the foot, the positioning errors can be greatly reduced. If we can safely detect that the foot is on the ground, we know that it is at a stand still, which we can use in the filter. Since the foot is stationary during the stance, virtual measurements of zero velocity can be fed to the positioning framework. This is known as zero velocity update (zupt) and reduces the positioning error from being cubical to linear in time, Foxlin (2005).
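A zupt is simply a Kalman measurement update with the pseudo-measurement "velocity = 0". A minimal scalar sketch (illustrative; the variable names and the pseudo-measurement noise value are my own, not from the thesis):

```python
# Sketch: a zero velocity update (zupt) as a Kalman measurement update
# with the pseudo-measurement v = 0, shown for a scalar velocity state.

def zupt(v_est, p_var, r_zupt=1e-4):
    """Update velocity estimate v_est (variance p_var) with the measurement 0."""
    k = p_var / (p_var + r_zupt)       # Kalman gain for the pseudo-measurement
    v_new = v_est + k * (0.0 - v_est)  # innovation is 0 - v_est
    p_new = (1.0 - k) * p_var
    return v_new, p_new
```

A drifted velocity estimate, say 0.3 m/s with variance 0.05, is pulled very close to zero by one such update, which is what keeps the position error from growing cubically between stance phases.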

The problem of false positives, i.e. calling a stand still when the foot is in midair, is a serious one. If the foot is not at a stand still when one is called, the filter will be updated with false virtual measurements, which can ruin the position estimates. False negatives, i.e. missing one or two true stand stills, are not an equally big problem. The stand still detection framework should therefore be designed with the focus on minimizing the false positives.

This chapter consists of three parts. In the first section, Section 4.1, a probabilistic framework for stand still detection is derived. The second section, Section 4.2, covers an extensive investigation of how the position of the imu on the boot affects the possibility to detect stand stills for a number of user movements. In the final section, Section 4.3, the stand still detection framework is incorporated in a localization framework that uses zupt to position a user. The positioning experiments were performed in the vicon lab, which provides accurate ground truth of the movements.


4.1 Stand Still Detection

The key to making a positioning system with a foot mounted imu work is to safely detect that the foot is at a stand still. True detected stand stills used for zupt will greatly enhance the localization performance. False stand stills used for zupt will, however, risk causing large positioning errors. It is therefore worse to detect false positives than to miss true positives. In this section we will look at a probabilistic framework for stand still detection.

To detect that the foot is touching the ground, one obvious option would be to use pressure sensors mounted under the soles. There are, however, some drawbacks with this solution. First, pressure sensors usually have some sort of moving parts, which are prone to be worn out by usage. Second, there are cases when the foot is at a stand still even though the sole is not touching the ground, for example when the subject is crawling, so the stationary phases would not be detected by the pressure sensors. And third, if the foot is turning while on the ground, it would be regarded as a stand still by the pressure sensors even though it is moving. The drive is therefore to use the data from the foot mounted imu itself to detect that the sensor is at a stand still.

Previously, stand still detection has mostly been done by comparing the accelerometer or gyro signal to a threshold. Here we put the stand still detection in a probabilistic framework using test statistics with known distributions and a Hidden Markov Model. The result is an estimated probability of stand still that is consecutively calculated at every time instant. This can be used for zupt in a filtering framework. This section is based on the work presented in Callmer et al. (2010b) and Rantakokko et al. (2011).

4.1.1 Related Work

Most solutions to the stand still detection problem use an averaged accelerometer or gyro measurement and compare it to a threshold; Beauregard (2007); Feliz et al. (2009); Foxlin (2005); Ojeda and Borenstein (2007). The threshold is chosen ad hoc and is normally quite restrictive to minimize false positives. Another approach is the moving variance used in Godha et al. (2006), where the variance computed over a sliding window is compared to a threshold. One of the problems with this approach is that in order to make it work properly, the time interval during which the boot is still must be quite long. Therefore it might work well for detecting stand stills during walking but not during running.

Probabilistic zero velocity detection has previously been proposed in Skog (2009), who used a hypothesis test to determine if the foot was stationary or moving. The hypothesis test was performed using a test statistic based on a Generalized Likelihood Ratio Test (glrt). The pdf of the acceleration and/or the angular velocity during the swing phase of the step was approximated with an unnormalized uniform distribution. The pdf during stand still was based on the exponential of the norm of the acceleration and/or the angular velocity, which has an unknown distribution. The resulting test statistic was a moving average of the norm of the


acceleration measurements and/or the angular velocity measurements. This was compared to a threshold to determine if the foot was to be rendered stationary. Since the test statistic has an unknown distribution, the threshold was chosen ad hoc, making the framework in practice similar to the ones in Beauregard (2007); Feliz et al. (2009); Foxlin (2005); Ojeda and Borenstein (2007).

The test statistics used in Skog (2009) are similar to the ones used in this work, since we both evaluate one acceleration based and one angular velocity based test statistic. The acceleration based test statistic differs, though, in that we have chosen one which has a known distribution. Skog (2009) also has a third test statistic based on a combination of acceleration and angular velocity, while we choose to combine the measurements by first evaluating the signals separately and then fusing the estimated probabilities.

4.1.2 Test Statistics Derivation

The test statistics are based on the accelerometer and gyro signals from the imu. The sensor was mounted on the boot under the shoe laces. An example of a measured walking sequence with a shoe lace mounted imu is shown in Figure 4.1. The foot is stationary around time instants 550, 660, 770, 870 and 980. During these phases the norm of the accelerometer signal is the gravitational constant with some added noise. At the same time instants the norm of the angular velocity signal is zero with some additive noise. These will be the key characteristics of the test statistics.

Sensor Models

The signal model is

\[ y_t = \begin{pmatrix} y^a_t(\theta) \\ y^\omega_t(\theta) \end{pmatrix} + \begin{pmatrix} v^a_t \\ v^\omega_t \end{pmatrix} \quad (4.1) \]

where y^a_t and y^ω_t denote the acceleration measurement vector and the angular velocity measurement vector, respectively. Further, θ denotes the model dependence on the phase of the human step sequence. Naturally, the model differs significantly between when the foot is at stand still and when it is moving.

The measurements are assumed to have additive independent identically distributed Gaussian noise v^a ∼ N(0, σ_a^2 I) and v^ω ∼ N(0, σ_ω^2 I), where I is the 3 × 3 identity matrix.

During stand still the sensor model is

\[ \begin{pmatrix} y^a_t \\ y^\omega_t \end{pmatrix} = \begin{pmatrix} g u_t \\ 0 \end{pmatrix} + \begin{pmatrix} v^a_t \\ v^\omega_t \end{pmatrix}, \quad (4.2) \]

where u_t is the unknown gravitational direction vector and g is the gravitational constant 9.81. Since the orientation of the boot changes over time, so does u_t. When the foot is moving, the sensor model changes to

\[ \begin{pmatrix} y^a_t \\ y^\omega_t \end{pmatrix} = \begin{pmatrix} g u_t + a_t \\ \omega_t \end{pmatrix} + \begin{pmatrix} v^a_t \\ v^\omega_t \end{pmatrix} \quad (4.3) \]


[Figure 4.1: Example of accelerometer data (x, y and z solid, dashed and dashdotted) and gyro data (ω_x, ω_y and ω_z solid, dashed and dashdotted, where ω_i is the angular rotation rate around axis i) during a walking sequence. The foot is stationary around time instants 550, 660, 770, 870 and 980.]

where a_t and ω_t are induced by human movements and therefore have unknown distributions.

All stand still detection frameworks are roughly based on the assumption that a_t and ω_t are not both zero at the same time instants if the foot is not stationary.

Test Statistics

To be able to differentiate between the two modes, test statistics with known distributions are computed. Two different ones are evaluated here: one using only the accelerometer data, T^a, and one using only the angular velocity data, T^ω.

The test statistic of the accelerometer magnitude detector is computed as

\[ T^a_t = \frac{\|y^a_t\|^2}{\sigma_a^2} \quad (4.4) \]

where T^a ∼ χ²(3, λ) during stand still. It has a noncentral chi-square distribution since y^a_t has nonzero mean when the foot is stationary. Its noncentrality parameter is λ = g²/σ_a^2 and 3 is the number of degrees of freedom.


The angular velocity test statistic is

\[ T^\omega_t = \frac{\|y^\omega_t\|^2}{\sigma_\omega^2} \quad (4.5) \]

where T^ω ∼ χ²(3) during stand still, since y^ω has zero mean when the foot is stationary. T^ω has 3 degrees of freedom.

Test Statistic Appearance during Walking Sequence

The two test statistics are computed for the walking sequence in Figure 4.1 and are shown plotted in log scale in Figure 4.2. The means of the stand still distributions are marked with dashed lines. The stand still events occurring around time instants 550, 660, 770, 870 and 980 are clearly visible. The test statistic T^a has a movement distribution

[Figure 4.2: Logarithmic plot of the test statistics with the mean of the stand still distribution marked with a dashed line; T^a_t at the top and T^ω_t at the bottom. The foot is stationary around time instants 550, 660, 770, 870 and 980.]

that has a significant overlap with the stand still distribution, causing the test statistic to cross the mean of the stand still distribution during the stride. This is shown around time instants 530, 615, 630, 710, 750, 825 and 935. Simply calling a stand still when T^a is close to the mean of the stand still distribution will therefore cause a lot of false positives.

T^ω has a distribution during movement that does not really overlap the stand still distribution, so T^ω may be a safer test statistic than T^a to use for


stand still detection. Figure 4.2 also shows that the stand still phases of T^ω appear shorter than indicated by T^a. Apparently, the angular velocity changes before the acceleration does when a step is started.

4.1.3 Test Statistic Distribution Validation

The test statistics must be validated to ensure that their experimental stand still distributions are similar to the theoretical ones. We also estimate the distributions of the test statistics under experimental movements and plot how they relate to the stand still distributions. The empirical movement distributions are shown as histograms and are plotted with their respective approximations. The histograms were created using a large amount of experimental data. The same movement distributions were used for all types of movements, since the differences between them were quite small, especially in the most important regions close to the stand still distributions.

Acceleration Magnitude Detector

The distributions of the acceleration magnitude test statistic T^a are shown in Figure 4.3. The theoretical and the empirical stand still distributions have similar means but slightly different covariances. One of the reasons why the empirical density has a smaller covariance than the theoretical one could be that we have been a bit too meticulous when selecting the stand still data. Note also the significant overlap of the probability distributions of stand still and movement. That makes it difficult to safely identify stand stills by only looking at T^a.

The movement distribution was approximated using two Gaussians as

\[ p_m(T^a) = 0.94 \cdot N(500, 1000) + 0.06 \cdot N(390, 10). \]

Angular Rate Magnitude Detector

The distributions of T^ω are shown in Figure 4.4. Clearly, the theoretical stand still distribution is very similar to the empirical one, estimated from experimental data. The movement distribution was approximated using two Gaussians as

\[ p_m(T^\omega) = 0.25 \cdot N(30, 25) + 0.75 \cdot N(5000, 3000). \]

Also note the much smaller overlap between the stand still and movement distributions, bottom left in Figure 4.4. This should enable more robust stand still detection when using T^ω instead of T^a.

4.1.4 Hidden Markov Model

To determine the probability of stand still, a Hidden Markov Model (hmm) is used. It is based on modes, in this case two modes: stand still and movement, and it estimates the probability of being in each mode at every time instant in a recursive fashion. To do that, it uses a predefined probability of switching between the modes, the distributions of the test statistic under each mode and the measured value of the test statistic.


[Figure 4.3: Top left: theoretical stand still distribution of T^a (solid) and empirical histogram (dashed). Top right: empirical movement histogram (dashed) and approximation (solid). Bottom left: empirical stand still (dashed) and movement (solid) distributions. Note the significant overlap between the stand still and movement distributions.]

One question might arise at this point: why can we not just formulate a simple hypothesis test to figure out whether the foot is stationary? First of all, a hypothesis test only provides information if the H0 hypothesis is rejected; otherwise we do not know anything. That means that in order to detect stand stills, the H0 hypothesis must be that the foot is moving. This in turn means that we must reject the H0 hypothesis based on the test statistic being below or above a certain threshold. This threshold would have to be based on the movement distribution, which is unknown. Since we cannot determine a statistically derived threshold, a hypothesis test cannot be formulated.

The hmm has, as stated above, two modes: mode 1 when the foot is at a stand still and mode 2 when the foot is moving. The mode transition probability matrix states the probability of a mode switch, which introduces some dynamics into the probability estimation. A lower mode transition probability requires a measurement with a higher likelihood for a mode switch to occur.


Figure 4.4: Top left: theoretical stand still distribution of T^ω (solid) and empirical histogram (dashed). Top right: empirical movement histogram (dashed) and approximation (solid). Bottom left: empirical stand still (dashed) and movement (solid) distributions. Note the difference in magnitude between the stand still and movement distributions (bottom left).

The mode transition probability matrix used in the experiment is

Π = [ 0.95  0.05
      0.05  0.95 ]    (4.6)

which states that the probability of going from stand still to moving, or vice versa, is 5%. During normal walking the right foot takes about one step per second, which results in roughly 2 mode transitions every 100 measurements. The transition probabilities were chosen slightly higher to also accommodate faster movements.

The mode probabilities at time t are calculated using the recursion

μ^i_t = P(r_t = i | y_t) ∝ p(y_t | r_t = i) P(r_t = i | y_{t−1}) = p(T_t | r_t = i) Σ_{j=1}^{N_r} Π_{ji} μ^j_{t−1},    (4.7)


where r_t is the mode state. Hence we have

μ^i_t = p(T_t | r_t = i) Σ_{j=1}^{N_r} Π_{ji} μ^j_{t−1} / ( Σ_{l=1}^{N_r} p(T_t | r_t = l) Σ_{j=1}^{N_r} Π_{jl} μ^j_{t−1} ).    (4.8)

The probability density functions of movement used in the hmm are approximations chosen to resemble the empirical movement density functions in Figures 4.3 and 4.4. The hmm framework thus consecutively estimates the probability of movement and stand still at each time instant.
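The recursion (4.7)–(4.8) is compact enough to write down directly. A minimal sketch (illustrative names; the transition convention Π[j, i] = P(r_t = i | r_{t−1} = j) follows the sum in (4.7)):

```python
import numpy as np

def hmm_update(mu_prev, likelihoods, Pi):
    """One step of the mode probability recursion (4.7)-(4.8).

    mu_prev:     mode probabilities mu_{t-1}^j, shape (Nr,)
    likelihoods: p(T_t | r_t = i) for each mode i, shape (Nr,)
    Pi:          transition matrix with Pi[j, i] = P(r_t = i | r_{t-1} = j)
    """
    predicted = Pi.T @ mu_prev        # sum_j Pi_ji * mu_{t-1}^j
    unnorm = likelihoods * predicted  # p(T_t | r_t = i) * predicted_i
    return unnorm / unnorm.sum()      # normalization as in (4.8)

# Example with the transition matrix (4.6): a measurement twice as likely
# under stand still pulls the probability towards that mode.
Pi = np.array([[0.95, 0.05],
               [0.05, 0.95]])
mu = hmm_update(np.array([0.5, 0.5]), np.array([2.0, 1.0]), Pi)
```

Because the transition matrix (4.6) is symmetric, the prediction step leaves a uniform prior unchanged and the posterior is simply the normalized likelihoods in this example.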

If the probability transition matrix is chosen with the same probability of change between all modes, i.e.

Π^c = [ α … α
        ⋮ ⋱ ⋮
        α … α ]    (4.9)

the probability of each mode depends only on the latest measurement:

μ^i_t = p(T_t | r_t = i) Σ_{j=1}^{N_r} Π^c_{ji} μ^j_{t−1} / ( Σ_{l=1}^{N_r} p(T_t | r_t = l) Σ_{j=1}^{N_r} Π^c_{jl} μ^j_{t−1} )
     = p(T_t | r_t = i) α Σ_{j=1}^{N_r} μ^j_{t−1} / ( Σ_{l=1}^{N_r} p(T_t | r_t = l) α Σ_{j=1}^{N_r} μ^j_{t−1} )
     = p(T_t | r_t = i) / Σ_{l=1}^{N_r} p(T_t | r_t = l).    (4.10)

All dynamics are then removed, making the detector prone to give a false positive from a single troublesome measurement. This is the same as saying that a stand still can occur at any time and that each measurement is independent of all previous measurements, which is not true.
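That a uniform transition matrix makes the filter memoryless can be checked numerically. A small sketch (same recursion as (4.8), arbitrary likelihood values):

```python
import numpy as np

def hmm_update(mu_prev, likelihoods, Pi):
    # Mode probability recursion (4.8).
    unnorm = likelihoods * (Pi.T @ mu_prev)
    return unnorm / unnorm.sum()

lik = np.array([0.3, 0.9])    # arbitrary mode likelihoods p(T_t | r_t = i)
Pi_c = np.full((2, 2), 0.5)   # uniform transition matrix, eq. (4.9)

# Two very different priors give identical posteriors: the history is
# ignored and the result equals the normalized likelihoods, as in (4.10).
out1 = hmm_update(np.array([0.99, 0.01]), lik, Pi_c)
out2 = hmm_update(np.array([0.10, 0.90]), lik, Pi_c)
```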

For the two-mode case of stand still and movement, the two step algorithm can now be summarized as Algorithm 3.

Combined Test Statistic

We now have one mode probability based on the accelerometer test statistic and one based on the gyro test statistic. Instead of looking at just one of these, one can fuse the mode probabilities to get a stand still probability based on both acceleration and angular velocity. Theoretically, this should reduce the number of false positives since there is only a small chance that both will be wrong at the same time. On the other hand, if one of the test statistics fails to indicate the true positives for some reason, all true positives will be missed.


Algorithm 3 Stand Still Detection Framework

Require: Stand still distribution p_s and movement distribution p_m of test statistic T. Measurements y with noise parameter σ. Mode transition probability matrix Π.

1: for t = t_start, …, t_final do
2:   Compute the test statistic
        T_t = ‖y_t‖² / σ².    (4.11)
3:   Estimate the stand still and movement probabilities, μ^s_t and μ^m_t:
        μ^s_t = P_{s,t} / (P_{s,t} + P_{m,t}),   μ^m_t = P_{m,t} / (P_{s,t} + P_{m,t})
     where
        P_{s,t} = p_s(T_t) (Π_{ss} μ^s_{t−1} + Π_{sm} μ^m_{t−1}),
        P_{m,t} = p_m(T_t) (Π_{ms} μ^s_{t−1} + Π_{mm} μ^m_{t−1}).    (4.12)
4:   Declare stand still if μ^s_t ≥ α.
5: end for
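Algorithm 3 can be sketched in a few lines of code. This is a minimal illustration for one test statistic (function and variable names are illustrative, and the toy densities p_s and p_m stand in for the fitted distributions):

```python
import numpy as np

def detect_stand_stills(y, sigma, p_s, p_m, Pi, alpha=0.99):
    """Sketch of Algorithm 3 for a single three-axis sensor.

    y:        (N, 3) array of measurements
    sigma:    measurement noise standard deviation
    p_s, p_m: stand still / movement densities of the test statistic
    Pi:       2x2 transition matrix [[P_ss, P_sm], [P_ms, P_mm]]
    alpha:    stand still declaration threshold
    """
    mu_s, mu_m = 0.5, 0.5                  # uninformative initial mode probabilities
    still = np.zeros(len(y), dtype=bool)
    for t, yt in enumerate(y):
        T = np.dot(yt, yt) / sigma**2      # test statistic, eq. (4.11)
        Ps = p_s(T) * (Pi[0, 0] * mu_s + Pi[0, 1] * mu_m)  # eq. (4.12)
        Pm = p_m(T) * (Pi[1, 0] * mu_s + Pi[1, 1] * mu_m)
        mu_s, mu_m = Ps / (Ps + Pm), Pm / (Ps + Pm)
        still[t] = mu_s >= alpha           # step 4: declare stand still
    return still
```

With well separated densities, a run of small-magnitude samples is flagged as stand still and a run of large-magnitude samples is not.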

The combined stand still probabilities are calculated as

μ^s_t = μ^s_t(T^a) · μ^s_t(T^ω),    (4.13)

making the movement probabilities

μ^m_t = 1 − μ^s_t.    (4.14)

In the experiments, this third option will be evaluated alongside the first two.

4.1.5 Experimental Results

The mode probabilities provided by the hmm for the data sequence in Figure 4.1 are shown in Figure 4.5. The top two plots show the probabilities indicated by T^a and T^ω and illustrate the difference in stand still detection performance. The bottom plot shows the combination, which only indicates a stand still when both test statistics do.

In the experiments, α in Algorithm 3 was chosen as α = 0.99. That means that we defined a false positive as the system calculating a stand still probability of at least 99% while the foot was not at rest. If the foot actually was at rest, the stand still was counted as detected. The measurement noises were set to σ_a = 0.5 and σ_ω = 0.08, estimated from data when the sensor was at rest.
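One simple way to estimate such noise parameters from rest data is the sample standard deviation of a demeaned rest segment. A sketch under that assumption (the thesis does not specify the exact estimator used):

```python
import numpy as np

def estimate_noise_std(rest_samples):
    """Estimate a scalar measurement noise standard deviation from a
    segment recorded while the sensor is known to be at rest."""
    rest = np.asarray(rest_samples, dtype=float)
    rest = rest - rest.mean(axis=0)  # remove per-axis bias (incl. gravity component)
    return rest.std()

# Synthetic rest segment with known noise level for illustration.
rng = np.random.default_rng(0)
sigma_hat = estimate_noise_std(rng.normal(0.0, 0.5, size=(1000, 3)))
```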

It is clear that the acceleration based test statistic T^a comes close to indicating false positives around some of the troublesome time instants mentioned in Section 4.1.2: 500, 615 and 825. The framework does not declare a stand still after the first measurement of T^a that is close to the stand still mean, but after a couple of close measurements the hmm starts to assume that the foot is at rest.


Figure 4.5: From top to bottom: mode probabilities P(still) and P(walking) for T^a, T^ω and the combination, evaluated on the data set in Figure 4.1. The foot is stationary around time instants 550, 660, 770, 870 and 980.

The angular velocity based test statistic T^ω gives a distinct detection of every foot stance. The stationary moment is rather short but is often followed by a shorter second stationary moment. Figure 4.2 shows that this is because there is commonly a slight angular movement halfway through the deemed stationary part. This second stationary moment provides no new information and only the first detection is necessary to perform zupt. No false positives occur during the stride phase of the step.

When both T^a and T^ω have to indicate that the foot is at a stand still, fewer stand stills are detected. In Figure 4.2, we can see that this gives pretty much the same result as T^ω alone. The combination does not seem necessary when walking, but it might be if certain movements result in false positives using only the gyro.

Further walking experiments reveal the stand still detection performance shown in Table 4.1. All 190 true stationary phases were detected, but so were some false positives.

Clearly, the acceleration based test statistic performs the worst and the combination performs the best. The results are, though, only approximate. One of the big problems when evaluating indoor navigation systems is how to acquire reliable ground truth data. You can count the number of steps you take, but that only works when you are walking straight ahead, which is the least interesting case since it is the simplest. A data set consisting of 100 steps during which you close and open doors, move around furniture and so on is much more difficult to analyze afterwards, and it is also much more prone to result in false positives. This problem comes back over and over again when dealing with indoor navigation systems: what is the ground truth?

Walking, 190 steps       T^a   T^ω   both
Stand stills detected    190   190   190
False positives          128    12     5

Table 4.1: True positives and false positives while detecting stand still phases. 190 steps were taken in 4 different experiments.

4.1.6 Conclusions and Future Work

Two test statistics with known distributions have been evaluated for stand still detection. In conjunction with a Hidden Markov Model, the mode probabilities are readily calculated and can be used for zero velocity updates.

The test statistic based on accelerometer data alone has been shown to result in plenty of false positives. This is natural since there is a significant overlap between the test statistic pdf during stand still and the pdf during movement. The gyro based statistic has been shown to provide good stand still detection capabilities during walking, even though some false positives do occur, mainly when small movements are made. Combining the estimated stand still probabilities seems to be the safest stand still detector, since the foot may sometimes look like it is still when only one signal is considered, while looking at both signals reveals that it is not.

One can also consider extending the test statistics by incorporating a number ofmeasurements over time.

T^ω_t = Σ_{i=t−k}^{t} ‖y^ω_i‖² / σ²_ω    (4.15)

During stand still the measurements will be independent. A short time interval, k ≈ 3, might remove false positives resulting from just one or two troublesome measurements.
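The windowed statistic (4.15) can be sketched as follows (illustrative names; the start of the sequence is padded by summing the samples that exist):

```python
import numpy as np

def windowed_gyro_statistic(y_omega, sigma_omega, k=3):
    """Windowed test statistic (4.15): sum of ||y_i||^2 / sigma^2 over
    the last k+1 gyro samples."""
    T = np.array([np.dot(y, y) for y in y_omega]) / sigma_omega**2
    out = np.empty_like(T)
    for t in range(len(T)):
        # Sum from i = t-k to t, truncated at the start of the sequence.
        out[t] = T[max(0, t - k):t + 1].sum()
    return out
```

For unit gyro samples and σ_ω = 1 each per-sample term equals 3, so once the window is full the statistic equals (k + 1) · 3.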


4.2 Stand Still Detection Performance for Different IMU Positions

For a localization framework with a foot mounted imu, the sensor must be mounted on the shoe. Where to attach the imu on the boot might seem like a straightforward problem: put it on the shoe, of course! It is, though, not that straightforward. Different sections of the shoe are stationary during different parts of the stance. Also, different movements cause different sections of the boot to be stationary. A placement that is good for walking is not necessarily good for running or crawling. More motions than walking have to be taken into consideration, since the end users of a professional indoor positioning system, i.e. firefighters, police officers and soldiers among others, have a tendency to do a lot more than walking. In this section we will investigate how the sensor position and shoe type affect the stand still detection performance using four different movements.

The first sensor position is by the heel, Figure 4.6a. In a real end user application, one idea is to hide the sensor in the thick sole of the heel to protect it. Another option is to put the sensor by the toes since they are involved in almost all movements, Figure 4.6b. Intuitively, the toes should be stationary during pretty much all conceivable standing motion sequences. Since the sensors are getting smaller and smaller, the problem of hiding the sensor in the thinner sole by the toe could soon be solved. A third option is to put the sensor by the shoelaces, Figure 4.6c. This position could be common if the sensor is not integrated with the shoe, but is instead strapped onto a preexisting boot.

Two different types of boots have also been used: one softer winter boot, Figure 4.6b, and one rigid hiking boot, Figure 4.6a. The reason is to illustrate not only the importance of the sensor position, but also of the softness of the shoe. A softer shoe tends to be reshaped more by the ground during the stance, making a larger section of the boot stationary at a given stance. A more rigid boot does not get reshaped during the stance, making it more likely to roll against the ground. Therefore a rigid boot is often not really stationary during a stance. The likely end users of a foot mounted positioning system probably have boots with a rigidness somewhere in between the winter and the hiking boot. All three sensor positions will be evaluated on both boots using walking, running, duck walking and duck-drag. The last movement is common among firefighters and means that he or she moves while standing on one knee and one foot. The pose is shown by firefighter 55 in Figure 4.7. The sensor was assumed attached to the boot that touches the ground with its sole. Duck walking is walking in a squatted position.

The main problem with evaluating even more realistic firefighter movements like crawling is that it is very difficult to obtain reliable ground truth data for such experiments. The experiment has to take place in, for example, a vicon lab, Section 2.1.4, but not even these experiments will be completely like crawling around furniture and such in an apartment. The data is somewhat periodic, but still the stand still phases are not straightforward to pinpoint. The more distinct and repetitive the movement is, the easier it is to differentiate between true and false positives using nothing but the sensor data. Walking and also running experiments are therefore easy to evaluate, while crawling or duck-drag are more difficult, especially the former.

Figure 4.6: Sensor positions and boots used during the stand still detection experiments. All combinations were evaluated. (a) Hiking boot with the sensor by the heel. (b) Winter boot with the sensor by the toe. (c) Hiking boot with the sensor by the shoe laces.

For all sensor positions and movements we will also compare the stand still detection performance of the gyro based test statistic and the combined one.

4.2.1 Experimental Results

All results are plotted as detection rate versus false positive rate in Figure 4.8. The top row shows the walking experiments, the second running, the third duck walking and the last duck-dragging. The left column shows the results using the softer winter boot and the right column the results using the hiking boot. The stand still detection results using the angular velocity test statistic and the results using the combination are shown in the same plot. The same symbol is used for the same movement and the test statistics are color coded.

Figure 4.7: Firefighter 55 demonstrating the duck-drag position.

A stand still is assumed detected if the estimated stand still probability is above 99% during a true stand still. If no stand still occurred, it is counted as a false positive. Figure 4.8 is therefore a roc curve with only one threshold sample (99%).

The best result is a high stand still detection ability combined with few false positives, which corresponds to a marker plotted in the top left corner. The further to the right, the more false detections; the further down, the fewer stand still detections. All in all, the best combination seems to be a toe mounted sensor with a test statistic using a combination of angular velocity and acceleration measurements.

The complete results are also shown in table form at the end of the chapter. In each table, the results using the softer winter boot are shown at the top and the results from the hiking boot experiments at the bottom.

Walking

The first movement to be evaluated is walking. This is the most common movement used in localization experiments, Beauregard (2007); Feliz et al. (2009); Foxlin (2005); Ojeda and Borenstein (2007); Godha et al. (2006); Skog (2009); Woodman and Harle (2009). This is probably because walking is easy to perform, the data is straightforward to analyze and the stand still phases are long and stable, making them easy to detect.

The results from the walking sessions are shown in the top row of Figure 4.8 and in Table 4.2. It is clear that almost all stand still phases are detected no matter the boot or the sensor position. What is not shown in Figure 4.8 and Table 4.2 are the characteristics of the stand still phases. With a heel mounted sensor, the stand still phases were always quite short, making them more difficult to detect. For a toe mounted sensor the stand still phases were long, but sometimes the toes can have zero angular velocity when the boot is in mid air, causing false positives. It might therefore be necessary to support the gyro based test statistic with the accelerometer based one to achieve truly safe stand still detection.

Running

The second movement is running, which is a bit more difficult to analyze. It is periodic like walking, which is nice, but the stand still sequences are significantly shorter, making the true stand still moments more difficult to distinguish. A higher sampling frequency could help, but it would not necessarily solve all the problems.

The second row of Figure 4.8 and Table 4.3 show the stand still detection performance during running, with clear differences between the settings. The stand still phases are pretty much non-existent when a rigid hiking boot is used. If only accelerometer data was considered, some stand stills could still have been detected, especially for a toe mounted sensor, in which case almost all would have been detected. When looking at the data, the reason for this is clear: angular velocity wise, the boot is almost never really still while running. During each stance the rigid boot rolls against the ground, which is why the sensor is always moving. A stand still is therefore impossible to detect since it in reality does not happen. This is probably true also for higher sampling frequencies; further studies are needed to establish this.

Figure 4.8: Stand still detection results using the winter boot (left column) and the hiking boot (right column) for the 99% probability threshold. Top row is walking experiments, second is running, third is duck walking, last is duck-dragging. A heel mounted sensor is marked with squares, a toe mounted one with circles and crosses mark a lace mounted sensor. The best corner to be in is the top left, combining a good stand still detection ability with few false positives. Overall, the toe mounted sensor using the combination of angular velocity and acceleration measurements seems to provide the most reliable results.

The heel and lace mounted sensors have significant problems detecting stand stills even when the softer winter boot is used. Not even the accelerometer provides reliable results, indicating that those parts of the boot are rarely stationary while running.

Duck Walking

The third movement, duck walking, is walking in a squatted position. The heels never touch the ground, making the results open for discussion. A heel mounted sensor should therefore never be stationary during a duck walking sequence. Sometimes, though, the foot is still enough to indicate a stand still even though the heel is not touching the ground. We have chosen to list these as true positives. The same reasoning also applies to the lace mounted sensor.

The stand still performance during duck walking is shown in the third row of Figure 4.8 and in Table 4.4. The toe mounted sensor is the most reliable, which is natural since the toes are the only part that actually touches the ground. The rigidness of the hiking boot also makes it easier to detect stand stills using the lace mounted sensor compared to the winter boot.

Duck-drag

Duck-drag is a movement where the whole foot with the sensor is flat on the ground while the other knee is on the ground. The movement is common among firefighters searching a room with poor visibility.

The last row of Figure 4.8 and Table 4.5 show the stand still detection results for a duck-drag sequence. Detecting the stand stills is straightforward no matter the sensor position or boot: the whole foot is flat on the ground during each stance, which is why they are readily detected. The problem is the false positives. Using only the gyro results in many false positives if the softer boot is used. Supporting the gyro measurements with the accelerometer signals significantly reduces the false positives in this case.

Using a rigid boot, the heel mounted sensor performs the best. The toe mounted sensor misses some true positives but has no false positives. The lace mounted sensor causes a couple of false positives.

4.2.2 Discussion

The problem of reliable stand still detection using a foot mounted imu is not a straightforward one. We have in this section shown that both the type of boot and the position of the sensor greatly affect the stance detection performance.


The worst sensor position seems to be by the heel. During many movements the heel hardly ever touches the ground, making the stand stills nearly impossible to detect. In some cases the stand stills are impossible to detect since they do not even exist, because the heel is always moving. Simply falling for the temptation to hide the sensor in the heel just because it appears easy seems to be a poor choice.

The lace mounted sensor performs quite well during most movements. The most problematic movement is running, since the laces are very rarely still while running. It is better to put the sensor by the laces than by the heel, but it does not seem to be the ultimate sensor position.

Putting the sensor by the toes appears to be the best position. Stand stills have been shown to be safely detected for almost all movement and boot combinations. It seems, though, that the gyro based stand still detection needs some support from either the accelerometer or some logical rules to reduce the number of false positives. A very rigid boot can still make the stand stills difficult to detect during running, which must be noted if a positioning system is to be built for police officers or soldiers.

Walking sequence

Winter boot, 46 steps                Heel   Toe   Lace
Stand stills detected, T^ω             44    46     46
False positives                         0     1      0
Stand stills detected, T^a and T^ω     44    46     46
False positives                         0     0      0

Hiking boot, 50 steps                Heel   Toe   Lace
Stand stills detected, T^ω             50    50     50
False positives                         0     1      0
Stand stills detected, T^a and T^ω     50    50     50
False positives                         0     0      0

Table 4.2: Experimental results of the stand still detection performance for a walking sequence using different sensor positions on two different boots.


Running sequence

Winter boot, 31 steps                Heel   Toe   Lace
Stand stills detected, T^ω              0    31      5
False positives                         0     0      0
Stand stills detected, T^a and T^ω      0    15      0
False positives                         0     0      0

Hiking boot, 40 steps                Heel   Toe   Lace
Stand stills detected, T^ω              0     3      1
False positives                         1     2      0
Stand stills detected, T^a and T^ω      0     1      1
False positives                         0     0      0

Table 4.3: Stand still performance during a running sequence using different sensor positions.

Duck walking sequence

Winter boot, 30 steps                Heel   Toe   Lace
Stand stills detected, T^ω             16    30     19
False positives                         5     1      2
Stand stills detected, T^a and T^ω      5    30      0
False positives                         3     0      0

Hiking boot, 29 steps                Heel   Toe   Lace
Stand stills detected, T^ω             20    29     25
False positives                        10     0      1
Stand stills detected, T^a and T^ω     15    29     25
False positives                         0     0      0

Table 4.4: Stand still detection performance during a duck walking sequence. The toe mounted sensor is the most reliable.

4.3 Localization Experiments

Localization experiments, in which the position of a walking subject was estimated, have been performed. The system uses the foot mounted imu and zero velocity updates based on the stand still detection framework described in Section 4.1.

4.3.1 Measurements

Three types of measurements are used in the localization experiments. First, imu data is used to estimate the movements of the boot. Second, virtual measurements of zero velocity are created when a stand still event has been detected. Third, the experiments were performed in the LiU vicon lab, which provides accurate position measurements that are used as ground truth to evaluate the system performance.


Duck-drag sequence

Winter boot, 29 steps                Heel   Toe   Lace
Stand stills detected, T^ω             25    29     29
False positives                         4    19      7
Stand stills detected, T^a and T^ω     24    29     29
False positives                         1     0      0

Hiking boot, 37 steps                Heel   Toe   Lace
Stand stills detected, T^ω             37    37     37
False positives                         0     0      3
Stand stills detected, T^a and T^ω     37    29     37
False positives                         0     0      1

Table 4.5: Stand still detection performance during a duck-drag sequence. The softer winter boot gives plenty of false positives when only gyro data is used to detect the stand stills.

IMU data

The imu provides acceleration and gyro measurements which are used to calculate how the boot moves and to detect stand still phases. The acceleration measurements y_{a,t} are three dimensional and contain both user movements and earth gravity. To remove the gravity component from the accelerometer data, the orientation of the sensor must be known. A small error in orientation will result in gravity components being misinterpreted as user accelerations.

Angular velocity is measured using the three dimensional gyro measurements y_{ω,t}. They are used to keep track of the orientation of the boot. By simply adding up the gyro measurements, the orientation can be estimated. Unfortunately, a long term heading drift will be introduced since only relative heading measurements are used.

The sensor also provides measurements of the magnetic field. If only the earth magnetic field were measured, it could be used as a compass. Unfortunately, indoors the magnetic field is disturbed by steel structures and electrical wiring, making the measurements unreliable. These disturbances are commonly stronger close to the floor, which is bad news for a foot mounted magnetometer. We have started to study methods to verify that the magnetometer is reliable based on gyro studies, but they have not been used in these experiments. Consequently, no magnetometer is used in the localization experiments.

Virtual Stand Still Measurements

Each time a stand still is detected, a virtual measurement of zero velocity is used to update the filter estimates. A small noise is added to represent the uncertainty in the stand still detection.


Position Ground Truth

The positioning experiments were performed in the vicon lab at LiU to obtain a ground truth to compare the estimated position with. The active area in the lab is about 3 by 5 meters, making the experiments quite limited in size. The user therefore has to walk back and forth in the area to produce meaningful experiments.

4.3.2 Models

To estimate the position of a user based on accelerations, knowledge of the orientation of the sensor is crucial for computing the user movements. The orientation is represented using quaternions, q_t, which are four dimensional. These are updated by the gyro measurements, used as direct inputs in the time update.

The position must also be represented by states since that is what we truly want to estimate. Position is tracked in three dimensions, giving the three states p_t. Since the filtering framework uses virtual zero velocity measurements, velocity states are also needed so that the measurements can be incorporated correctly, giving three more states v_t. Both velocity and position are updated using the acceleration measurements as direct inputs in the time update.

The ten dimensional state vector is therefore

x_t = (p_t  v_t  q_t)^T.    (4.16)

Dynamical Model

The dynamical model used is, as mentioned above, on input form, where the accelerometer and gyro measurements are used as direct inputs in the model:

p_{t+T} = p_t + T v_t + (T²/2) ( R^T(q_t)(y_{a,t} + r_{a,t}) + w_{a,t} )
v_{t+T} = v_t + T ( R^T(q_t)(y_{a,t} + r_{a,t}) + w_{a,t} )
q_{t+T} = q_t + (T/2) S′(q_t) ( y_{ω,t} + r_{ω,t} + w_{ω,t} )    (4.17)

where I is a 3×3 identity matrix. R^T(q_t) rotates the local acceleration measurements into global acceleration measurements using the quaternion estimates. S′(q_t) converts the gyro measurements into changes in orientation. These conversions are described more thoroughly in Appendix A.
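For concreteness, R(q) and S′(q) can be written out for one common scalar-first quaternion convention. This is an illustrative sketch; the thesis defines its exact conventions in Appendix A:

```python
import numpy as np

def quat_to_rot(q):
    """Rotation matrix R(q) for a unit quaternion q = (q0, q1, q2, q3),
    scalar part first (one common convention; an assumption here)."""
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 - q0*q3),             2*(q1*q3 + q0*q2)],
        [2*(q1*q2 + q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 - q0*q1)],
        [2*(q1*q3 - q0*q2),             2*(q2*q3 + q0*q1),             q0**2 - q1**2 - q2**2 + q3**2],
    ])

def S_prime(q):
    """S'(q): maps an angular rate omega to a quaternion increment,
    as in the (T/2) S'(q_t) y_omega term of (4.17)."""
    q0, q1, q2, q3 = q
    return np.array([
        [-q1, -q2, -q3],
        [ q0, -q3,  q2],
        [ q3,  q0, -q1],
        [-q2,  q1,  q0],
    ])
```

A useful property of S′(q) is that the increment it produces is orthogonal to q itself, so to first order the quaternion norm is preserved by the update.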

There are four parameters representing the uncertainties of the model. r_a and r_ω are measurement noise from the accelerometer and gyro, respectively. Added to these are w_a and w_ω, which are process noises added to the position and velocity covariances and the orientation covariances, respectively. These measurement and process noise terms do not represent the same uncertainties. In the acceleration case, the measurement noise represents the uncertainty in the acceleration measurement used as input, while the process noise term represents the model errors introduced by assuming that the acceleration is constant over the time interval. The two noise terms can of course be added to create a third new noise term replacing the two old ones. Correspondingly, w_ω represents the errors introduced by assuming constant angular velocity. It must therefore be converted from three-dimensional uncertainties in orientation to four-dimensional quaternion covariances.

Measurement Model

Since no measurements of position, velocity or orientation exist, the only measurements available are the virtual measurements of zero velocity whenever a stand still has been detected:

\[
0 = v_t + r_{0,t}.
\tag{4.18}
\]

A slight noise r_{0,t} has been added to represent the uncertainty in the stand still detection framework.

4.3.3 Filter

Since a nonlinearity is present due to the rotation matrices, an extended Kalman filter is used to estimate the states, see Algorithm 2 in Section 2.3.2. It mainly consists of a time update step, since the measurement update only enters when a stand still has been detected.

Time Update

The time update of the state estimates uses y_a and y_ω according to
\[
\begin{pmatrix} p_{t+T|t} \\ v_{t+T|t} \\ q_{t+T|t} \end{pmatrix}
=
\begin{pmatrix} I & TI & 0 \\ 0 & I & 0 \\ 0 & 0 & I \end{pmatrix}
\begin{pmatrix} p_{t|t} \\ v_{t|t} \\ q_{t|t} \end{pmatrix}
+
\begin{pmatrix} \frac{T^2}{2} I \\ TI \\ 0 \end{pmatrix}
R^T(q_{t|t})\, y_{a,t}
+
\frac{T}{2}
\begin{pmatrix} 0 \\ 0 \\ S'(q_{t|t}) \end{pmatrix}
y_{\omega,t}
\tag{4.19}
\]
to update the states. The quaternion estimates must thereafter be normalized,
\[
q_{t+T|t} := \frac{q_{t+T|t}}{\lVert q_{t+T|t} \rVert},
\tag{4.20}
\]
to ensure that they are still unit quaternions. The time update of the state covariance P depends on the Jacobians of the dynamical model f, where f'_x is the Jacobian of (4.17) with respect to the states and f'_ν is the Jacobian of (4.17) with respect to the noise:
\[
P_{t+T|t} = f'_x(x)\, P_{t|t}\, f'_x(x)^T + f'_\nu(x)\, Q_\nu\, f'_\nu(x)^T
\tag{4.21}
\]
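As an illustration, the state time update (4.19) with the normalization (4.20) can be sketched in code. This is a hedged example, not the implementation used in the experiments: the helper names `R` and `Sprime` implement R(q) from (A.12) and S'(q) from Appendix A, and gravity compensation of the accelerometer reading is omitted for brevity.

```python
import numpy as np

def R(q):
    # Rotation matrix R(q) from (A.12); R(q).T rotates local accelerometer
    # readings into the global frame.
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0**2 - q1**2 - q2**2 + q3**2]])

def Sprime(q):
    # 4x3 matrix S'(q) mapping angular rates to quaternion increments.
    q0, q1, q2, q3 = q
    return np.array([[-q1, -q2, -q3],
                     [ q0, -q3,  q2],
                     [ q3,  q0, -q1],
                     [-q2,  q1,  q0]])

def time_update(p, v, q, y_a, y_w, T):
    """One step of (4.19): accelerometer y_a and gyro y_w act as inputs."""
    a_glob = R(q).T @ y_a                     # local acceleration -> global frame
    p_new = p + T * v + T**2 / 2 * a_glob
    v_new = v + T * a_glob
    q_new = q + T / 2 * Sprime(q) @ y_w
    return p_new, v_new, q_new / np.linalg.norm(q_new)   # normalization (4.20)
```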

where the state Jacobian is
\[
f'_x(x) =
\begin{pmatrix}
I & TI & \frac{T^2}{2} \frac{\partial R^T(q_{t|t})}{\partial q_{t|t}}\, y_{a,t} \\
0 & I & T \frac{\partial R^T(q_{t|t})}{\partial q_{t|t}}\, y_{a,t} \\
0 & 0 & I + \frac{T}{2} \frac{\partial S'(q_{t|t})}{\partial q_{t|t}}\, y_{\omega,t}
\end{pmatrix}
\tag{4.22}
\]


and the noise Jacobian is
\[
f'_\nu(x) =
\begin{pmatrix}
\frac{T^2}{2} R^T(q_{t|t}) & \frac{T^2}{2} I & 0 & 0 \\
T R^T(q_{t|t}) & TI & 0 & 0 \\
0 & 0 & \frac{T}{2} S'(q_{t|t}) & \frac{T}{2} S'(q_{t|t})
\end{pmatrix}.
\tag{4.23}
\]

The process noise matrix is
\[
Q_\nu =
\begin{pmatrix}
\sigma_a^2 I & 0 & 0 & 0 \\
0 & \sigma_{w_a}^2 I & 0 & 0 \\
0 & 0 & \sigma_\omega^2 I & 0 \\
0 & 0 & 0 & \sigma_{w_\omega}^2 I
\end{pmatrix}
\tag{4.24}
\]
where σ²_a, σ²_{w_a}, σ²_ω and σ²_{w_ω} are the noise covariances of the acceleration measurement noise, the acceleration process noise, the gyro measurement noise and the angular velocity process noise, respectively.
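A sketch of how Q_ν from (4.24) and the covariance propagation (4.21) could look in code. The function names are illustrative assumptions, and the Jacobians f'_x and f'_ν are assumed to be evaluated elsewhere according to (4.22) and (4.23).

```python
import numpy as np

def process_noise(sig_a, sig_wa, sig_w, sig_ww):
    # Block diagonal Q_nu from (4.24): accelerometer measurement noise,
    # acceleration process noise, gyro measurement noise and angular
    # velocity process noise, each isotropic over three axes.
    Q = np.zeros((12, 12))
    for i, sig in enumerate([sig_a, sig_wa, sig_w, sig_ww]):
        Q[3*i:3*i + 3, 3*i:3*i + 3] = sig**2 * np.eye(3)
    return Q

def covariance_time_update(P, Fx, Fnu, Q):
    # EKF covariance propagation (4.21) with state Jacobian Fx (4.22)
    # and noise Jacobian Fnu (4.23).
    return Fx @ P @ Fx.T + Fnu @ Q @ Fnu.T
```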

Measurement Update

The measurement update occurs only when a stand still has been detected. For the state and covariance updates, the Jacobians of the measurement function with respect to the states, h'_x, and the noise, h'_{r_0}, are needed:
\[
h'_x = \begin{pmatrix} 0 & I & 0 \end{pmatrix}
\tag{4.25}
\]
\[
h'_{r_0} = I
\tag{4.26}
\]

The Kalman gain K can now be calculated as
\[
S = h'_x P_{t+T|t} h'^T_x + h'_{r_0} R\, h'^T_{r_0}, \qquad
K = P_{t+T|t} h'^T_x S^{-1},
\tag{4.27}
\]
where R = r_0 I. The state estimates and covariances are now updated according to

\[
x_{t+T|t+T} = x_{t+T|t} + K\bigl(0 - v_{t+T|t}\bigr), \qquad
P_{t+T|t+T} = P_{t+T|t} - K h'_x P_{t+T|t}.
\tag{4.28}
\]

After the measurement update, the quaternion estimates must once again be normalized.
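A minimal sketch of the complete zero velocity measurement update (4.25) to (4.28), assuming the state ordering x = (p, v, q) and a scalar noise level r0. This is illustrative code, not the implementation used in the experiments.

```python
import numpy as np

def zupt_update(x, P, r0):
    """Zero velocity EKF update (4.25)-(4.28) for the state x = (p, v, q),
    with P a 10x10 covariance and r0 the virtual measurement noise level."""
    Hx = np.zeros((3, 10))
    Hx[:, 3:6] = np.eye(3)                      # h'_x = (0 I 0) selects velocity
    Rm = r0 * np.eye(3)                         # R = r0 * I
    S = Hx @ P @ Hx.T + Rm
    K = P @ Hx.T @ np.linalg.inv(S)             # Kalman gain (4.27)
    x_new = x + K @ (np.zeros(3) - x[3:6])      # innovation 0 - v, cf. (4.28)
    P_new = P - K @ Hx @ P
    x_new[6:10] /= np.linalg.norm(x_new[6:10])  # renormalize the quaternion
    return x_new, P_new
```

With a small r0, the update drives the velocity estimate toward zero while leaving position and orientation to be corrected only through their cross covariances with the velocity.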

4.3.4 Experimental Results

A couple of walking experiments have been performed for which ground truth is available. Since the experiments had to be performed in quite a limited area due to the vicon system, the area was traversed several times to make the experiments large enough. The downside is that when position estimate and ground truth are plotted together, the plots become quite messy.


Experiment 1

The first experiment covers three laps of the area, totalling roughly 36 meters during 35 seconds, Figure 4.9. The accumulated error in heading is surprisingly small and the uncertainty estimates cover the ground truth within two standard deviations. The x-, y- and z-components with two standard deviations are also shown individually in Figure 4.10.

Experiment 2

The second experiment is slightly longer, about 42 meters, and covers multiple sharp corners, Figure 4.11. These 180° turns are underestimated, resulting in a heading misalignment that causes the estimates to drift off. The estimated uncertainties are nevertheless accurate and keep the ground truth within two standard deviations. Again, the estimated x-, y- and z-positions are plotted with ground truth and two standard deviations in Figure 4.12.

4.3.5 Discussion and Future Work

The positioning framework described above suffers from two shortcomings. First of all, there are no heading measurements, making it impossible to recover from accumulated heading errors. The gyros are not perfect, which is why these errors are prone to occur. One solution could be to incorporate a magnetometer in the measurements and thereby receive heading estimates. The problem is, as described before, that the magnetometer measurements are unreliable due to plenty of corrupting disturbances. We have started to look at using the gyro measurements to decide when the magnetometer is reliable and when it is not. By detecting that the magnetometer measurements are reliable and only incorporating these into the estimate, the heading errors could be removed. As of today, the reliance test works quite well in the one-dimensional case.

The second problem is the lack of position measurements or relations between position estimates. If some sort of loop closure measurements were available, the performance of the positioning would be greatly improved. We will in the near future look into using for example magnetometer disturbances or some sort of active markers as a means to create loop closure measurements.


[Plot: "Estimated position with uncertainties and true position"; axes x-pos [m] and y-pos [m]; legend: Est pos, True pos.]

Figure 4.9: First positioning experiment with estimate (solid line) and ground truth (dashed line). The experiment started and ended in the origin.

[Plots: "X position with uncertainties", "Y position with uncertainties" and "Z position with uncertainties" versus time [s]; legend: Est Pos, True Pos.]

Figure 4.10: x-, y- and z-position estimates and ground truth of the first positioning experiment. The 2σ estimate uncertainties are shown as thin dashed lines.


[Plot: "Estimated position with uncertainties and true position"; axes x-pos [m] and y-pos [m]; legend: Est pos, True pos.]

Figure 4.11: Second positioning experiment with estimate (solid line) and ground truth (dashed line). The experiment started and ended in the origin.

[Plots: "X position with uncertainties", "Y position with uncertainties" and "Z position with uncertainties" versus time [s]; legend: Est Pos, True Pos.]

Figure 4.12: x-, y- and z-position estimates with ground truth for experiment 2. The 2σ estimate uncertainties are shown as thin dashed lines.


5 Conclusions and Future Work

5.1 Conclusions

The conclusions have previously been drawn in their respective sections, but some will be repeated here for completeness.

5.1.1 Indoor Localization

The work on indoor localization was divided into two parts. A stand still detection framework was presented, which was evaluated using different boots and sensor positions. The stand still detection framework was incorporated in a localization experiment with very accurate ground truth.

The stand still detection framework is based on either gyro signals only or a combination of gyro and accelerometer signals. The stand still distributions of the signals are used in conjunction with a Hidden Markov Model to estimate the probabilities of stand still and movement. Using both the measured accelerations and angular velocities to detect that the foot is at rest has been shown to provide the safest stand still detections. Still, some false positives do occur, and a basic requirement of forcing the boot to be still for at least k measurements, k ∼ {2, 3}, would probably enhance the performance.
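The suggested k-measurement requirement can be realized with a simple debounce on the binary detector output. The following is a hypothetical sketch; the function name and interface are assumptions, not part of the presented framework.

```python
def debounce(detections, k=3):
    """Accept a stand still only after k consecutive raw detections.

    `detections` is a sequence of booleans, e.g. thresholded stand still
    probabilities from the HMM. A single false positive can then never
    trigger a virtual zero velocity measurement on its own.
    """
    out, run = [], 0
    for d in detections:
        run = run + 1 if d else 0   # length of the current still streak
        out.append(run >= k)
    return out
```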

The imu should preferably be mounted near the toes of the boot. This area is almost always stationary during stand still, regardless of the movement, which is why it seems to be a better sensor position than the heel. The more rigid the boot, the more difficult it is to detect the stand stills, most notably because the sensor is never really at rest since the boot rolls over the ground due to its sturdiness.

The localization framework suffers from the absence of heading and position measurements. The zupt experiments provide fairly good results when compared to ground truth, but large turns often tend to be underestimated, causing the position estimate to drift off.

5.1.2 RADAR SLAM

A navigation framework for surface vessels using radar measurements only was presented in Paper A. The system is intended to be used during gps failure and is shown to provide accurate navigation results during an extensive experiment. The navigation framework treats the radar scans like images and uses the sift algorithm to track features between scans. These provide information about how the vessel is moving and turning, which is used to estimate the vessel position.

5.1.3 Underwater Sensor Positioning

A passive underwater sensor localization scheme is presented in Paper B, where triaxial magnetometers and a friendly vessel with known magnetic characteristics are used to determine the sensor positions. Simulations indicate that if the vessel is equipped with a gnss, a positioning error of 12.9% can be achieved. The simulations also indicate that our positioning scheme is quite insensitive to minor errors in sensor orientation and magnetic signature, when gnss is used throughout the trajectory.

5.2 Future Work

The future work will primarily be undertaken in indoor localization. That said, the other work presented in this thesis could be quite useful in this task as well. If the outcome of the future work described below is that one or two imus are not enough to do accurate firefighter positioning, other sensors will be needed too. Range measurements from a sonar or possibly even a radar sensor might be needed to assist the imu positioning framework. Another possibility is to map the magnetic characteristics of the environment in a slam fashion and incorporate this into the positioning using loop closures.

The first steps will, however, involve only imus. Can we expand the magnetometer reliance test mentioned earlier to also work in three dimensions? Incorporating this would greatly enhance the localization performance, since the heading error is the most troubling one.

The problem of tracking crawling firefighters is not solved just because walking ones can be tracked. A second imu probably needs to be put on the knee to detect when it touches the ground. The foot mounted sensor will most likely not be still enough while crawling, which is why a knee mounted sensor is probably needed. This will be looked into soon.

Finally, as mentioned earlier, incorporating some sort of loop closure measurements, preferably ones based on the way firefighters search a building, would allow great positioning improvements. The system should be designed using the fundamental difference between the human sensor platform and the robotic platform: a thinking person. By exploiting the human in the system and designing the positioning solution appropriately and carefully, the human sensor platform can be just as informative as the robotic one, if not more. We have ideas for how this can be done and they will be implemented in the near future.


A Quaternion Properties

Quaternions were discovered by Hamilton as an extension of the imaginary numbers into three dimensions, Hamilton (1844). Later, the unit quaternion started to be used for angle representation in rotations, providing singularity free rotations. For a thorough description of quaternions, see Kuipers (1999).

A.1 Operations and Properties

A quaternion is a four-tuple of real numbers, denoted by q = (q_0, q_1, q_2, q_3). Alternatively it can be described as consisting of a scalar part q_0 and the vector part \mathbf{q},
\[
q = \begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix}
= \begin{pmatrix} q_0 \\ \mathbf{q} \end{pmatrix}.
\tag{A.1}
\]

Quaternion multiplication is denoted by ⊙ and defined as
\[
p \odot q =
\begin{pmatrix} p_0 \\ \mathbf{p} \end{pmatrix} \odot \begin{pmatrix} q_0 \\ \mathbf{q} \end{pmatrix}
=
\begin{pmatrix} p_0 q_0 - \mathbf{p} \cdot \mathbf{q} \\ p_0 \mathbf{q} + q_0 \mathbf{p} + \mathbf{p} \times \mathbf{q} \end{pmatrix}.
\tag{A.2}
\]


Some quaternion properties are:
\[
p \odot q \neq q \odot p
\tag{A.3}
\]
\[
\mathrm{norm}(q) = \sqrt{\sum_{i=0}^{3} q_i^2}
\tag{A.4}
\]
\[
q^{-1} = \begin{pmatrix} q_0 \\ \mathbf{q} \end{pmatrix}^{-1}
= \begin{pmatrix} q_0 \\ -\mathbf{q} \end{pmatrix}
\tag{A.5}
\]

The unit quaternion used for rotation operations also fulfills norm(q) = 1.
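The operations above translate directly into code. The following is a small sketch with assumed helper names `qmult` and `qinv`:

```python
import numpy as np

def qmult(p, q):
    # Quaternion product (A.2); p[0], q[0] are the scalar parts and
    # p[1:], q[1:] the vector parts.
    p0, pv, q0, qv = p[0], p[1:], q[0], q[1:]
    return np.concatenate(([p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)))

def qinv(q):
    # Inverse of a unit quaternion (A.5): negate the vector part.
    return np.concatenate(([q[0]], -q[1:]))
```

Multiplying a unit quaternion by its inverse yields the identity quaternion (1, 0, 0, 0), while swapping the factors in `qmult` generally changes the result, in line with (A.3).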

A.2 Describing a Rotation using Quaternions

The quaternion
\[
q = \begin{pmatrix} \cos\delta \\ \sin\delta\, \mathbf{n} \end{pmatrix}
\tag{A.6}
\]

describes a rotation around the vector n with the angle 2δ. There are two ways of depicting a rotation: either the coordinate frame is rotated and the vector is fixed, or the vector is rotated and the coordinate frame is fixed. The difference lies in the sign of the rotation. In this work the vector is assumed constant and the coordinate system is rotated.

Given the vectors
\[
v = \begin{pmatrix} 0 \\ \mathbf{v} \end{pmatrix}
\quad \text{and} \quad
u = \begin{pmatrix} 0 \\ \mathbf{u} \end{pmatrix},
\tag{A.7}
\]
a rotation of v around n can be written as
\[
u = q^{-1} \odot v \odot q,
\tag{A.8}
\]

which is the assumed standard rotation in this work. The resulting rotation is
\[
v =
\begin{pmatrix}
(\mathbf{q} \cdot \mathbf{u})\, q_0 - (q_0 \mathbf{u} - \mathbf{q} \times \mathbf{u}) \cdot \mathbf{q} \\
(\mathbf{q} \cdot \mathbf{u})\, \mathbf{q} + q_0 (q_0 \mathbf{u} - \mathbf{q} \times \mathbf{u}) + (q_0 \mathbf{u} - \mathbf{q} \times \mathbf{u}) \times \mathbf{q}
\end{pmatrix}
\tag{A.9}
\]
which simplifies to
\[
v =
\begin{pmatrix}
0 \\
2 (\mathbf{q} \cdot \mathbf{u})\, \mathbf{q} + (q_0^2 - \mathbf{q} \cdot \mathbf{q})\, \mathbf{u} - 2 q_0\, \mathbf{q} \times \mathbf{u}
\end{pmatrix}.
\tag{A.10}
\]

A.3 Rotation Matrix

The quaternion rotation (A.10) can be rewritten as a matrix multiplication
\[
\mathbf{v} = R(q)\, \mathbf{u}
\tag{A.11}
\]


where
\[
R(q) =
\begin{pmatrix}
q_0^2 + q_1^2 - q_2^2 - q_3^2 & 2(q_1 q_2 + q_0 q_3) & 2(q_1 q_3 - q_0 q_2) \\
2(q_1 q_2 - q_0 q_3) & q_0^2 - q_1^2 + q_2^2 - q_3^2 & 2(q_2 q_3 + q_0 q_1) \\
2(q_1 q_3 + q_0 q_2) & 2(q_2 q_3 - q_0 q_1) & q_0^2 - q_1^2 - q_2^2 + q_3^2
\end{pmatrix}.
\tag{A.12}
\]
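As a numerical sanity check of the conventions, the rotation matrix (A.12) should agree with the quaternion rotation (A.8). The following sketch uses assumed helper names and is illustrative only:

```python
import numpy as np

def qmult(p, q):
    # Quaternion product (A.2).
    p0, pv, q0, qv = p[0], p[1:], q[0], q[1:]
    return np.concatenate(([p0 * q0 - pv @ qv],
                           p0 * qv + q0 * pv + np.cross(pv, qv)))

def rotmat(q):
    # R(q) from (A.12).
    q0, q1, q2, q3 = q
    return np.array([
        [q0**2 + q1**2 - q2**2 - q3**2, 2*(q1*q2 + q0*q3),             2*(q1*q3 - q0*q2)],
        [2*(q1*q2 - q0*q3),             q0**2 - q1**2 + q2**2 - q3**2, 2*(q2*q3 + q0*q1)],
        [2*(q1*q3 + q0*q2),             2*(q2*q3 - q0*q1),             q0**2 - q1**2 - q2**2 + q3**2]])

def rotate(q, u):
    # Quaternion rotation q^{-1} (.) (0, u) (.) q as in (A.8); the scalar
    # part of the result is zero, so only the vector part is returned.
    qi = np.concatenate(([q[0]], -q[1:]))
    return qmult(qmult(qi, np.concatenate(([0.0], u))), q)[1:]
```

For any unit quaternion, `rotmat(q) @ u` and `rotate(q, u)` should coincide, and R(q) should be orthogonal.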

A.4 Quaternion Dynamics

In the case of the quaternions describing a rotation between a global coordinate system and a local one attached to a moving sensor, the description of the quaternions will contain some dynamics. The full derivation of the quaternion dynamics can be studied in Törnqvist (2008), but parts will be recited here.

Let the quaternion q_{lg} represent the rotation of the local coordinate system in the global one. The angular velocity of the sensor unit in the local coordinate system is ω^l_{lg}, which can be written as
\[
\omega^l_{lg} =
\begin{pmatrix} 0 \\ \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
= \begin{pmatrix} 0 \\ \boldsymbol{\omega} \end{pmatrix}
\tag{A.13}
\]
while q_{lg} is
\[
q_{lg} =
\begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix}
= \begin{pmatrix} q_0 \\ \mathbf{q} \end{pmatrix}.
\tag{A.14}
\]

The quaternion derivative can now be written as
\[
\dot{q}_{lg} = \frac{1}{2}\, q_{lg} \odot \omega^l_{lg}
= \frac{1}{2}
\begin{pmatrix} -\mathbf{q} \cdot \boldsymbol{\omega} \\ q_0 \boldsymbol{\omega} + \mathbf{q} \times \boldsymbol{\omega} \end{pmatrix}
= \frac{1}{2}
\begin{pmatrix}
-(q_1 \omega_x + q_2 \omega_y + q_3 \omega_z) \\
q_0 \boldsymbol{\omega} -
\begin{pmatrix} 0 & -\omega_z & \omega_y \\ \omega_z & 0 & -\omega_x \\ -\omega_y & \omega_x & 0 \end{pmatrix}
\begin{pmatrix} q_1 \\ q_2 \\ q_3 \end{pmatrix}
\end{pmatrix}
\]
\[
= \frac{1}{2}
\underbrace{\begin{pmatrix}
0 & -\omega_x & -\omega_y & -\omega_z \\
\omega_x & 0 & \omega_z & -\omega_y \\
\omega_y & -\omega_z & 0 & \omega_x \\
\omega_z & \omega_y & -\omega_x & 0
\end{pmatrix}}_{S(\omega^l_{lg})}
\begin{pmatrix} q_0 \\ q_1 \\ q_2 \\ q_3 \end{pmatrix}
= \frac{1}{2}
\underbrace{\begin{pmatrix}
-q_1 & -q_2 & -q_3 \\
q_0 & -q_3 & q_2 \\
q_3 & q_0 & -q_1 \\
-q_2 & q_1 & q_0
\end{pmatrix}}_{S'(q_{lg})}
\begin{pmatrix} \omega_x \\ \omega_y \\ \omega_z \end{pmatrix}
\]


If the angular velocity ω_t is assumed constant over the sampling interval, the noise free discrete time model is
\[
q_{t+1} = e^{\frac{1}{2} S(\omega_t) T} q_t
\tag{A.15}
\]

where a Taylor series expansion gives
\[
e^{\frac{1}{2} S(\omega_t) T}
= \sum_{n=0}^{\infty} \frac{\left(\frac{1}{2} S(\omega_t) T\right)^n}{n!}
= \sum_{n=0}^{\infty} \left(\frac{T}{2}\right)^n \frac{1}{n!}\, S(\omega_t)^n.
\tag{A.16}
\]

If the sampling time T is short, the expansion can be approximated with the first two terms,
\[
e^{\frac{1}{2} S(\omega_t) T} \approx I + \frac{T}{2} S(\omega_t),
\tag{A.17}
\]

giving the discrete time model
\[
q_{t+1} = q_t + \frac{T}{2} S(\omega_t)\, q_t
= q_t + \frac{T}{2} S'(q_t)\, \omega_t.
\tag{A.18}
\]

If an angular velocity noise term ν_ω is included, the discrete model becomes
\[
q_{t+1} = q_t + \frac{T}{2} S'(q_t)\, \omega_t + \frac{T}{2} S'(q_t)\, \nu_{\omega,t}.
\tag{A.19}
\]
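Since S(ω)² = −|ω|² I, the matrix exponential in (A.15) also has the closed form cos(|ω|T/2) I + (sin(|ω|T/2)/|ω|) S(ω), which the following sketch uses to compare the exact discretization with the first-order model (A.18). Function names are illustrative assumptions.

```python
import numpy as np

def S_omega(w):
    # The 4x4 matrix S(omega) from the derivation above.
    wx, wy, wz = w
    return np.array([[0.0, -wx, -wy, -wz],
                     [ wx, 0.0,  wz, -wy],
                     [ wy, -wz, 0.0,  wx],
                     [ wz,  wy, -wx, 0.0]])

def q_exact(q, w, T):
    # Exact discretization (A.15); since S(w)^2 = -|w|^2 I, the exponential
    # equals cos(th) I + sin(th)/|w| S(w) with th = |w| T / 2.
    wn = np.linalg.norm(w)
    if wn * T < 1e-12:
        return q
    th = wn * T / 2
    return (np.cos(th) * np.eye(4) + np.sin(th) / wn * S_omega(w)) @ q

def q_approx(q, w, T):
    # First order model (A.17)-(A.18), followed by renormalization.
    qn = q + T / 2 * S_omega(w) @ q
    return qn / np.linalg.norm(qn)
```

The exact map preserves the unit norm by construction, while the first-order model only does so approximately, which is why the normalization step is needed in the filter.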


Bibliography

P. Aggarwal, D. Thomas, L. Ojeda, and J. Borenstein. Map matching and heuristic elimination of gyro drift for personal navigation systems in GPS-denied conditions. Measurement Science and Technology, 22(2):025205, 2011.

S. Beauregard. Omnidirectional pedestrian navigation for first responders. In Proc. of the 4th Workshop on Positioning, Navigation and Communication (WPNC), Hannover, Germany, 2007.

J. Callmer, K. Granström, J. Nieto, and F. Ramos. Tree of words for visual loop closure detection in urban SLAM. In Proceedings of the 2008 Australasian Conference on Robotics and Automation (ACRA), 2008.

J. Callmer, M. Skoglund, and F. Gustafsson. Silent localization of underwater sensors using magnetometers. EURASIP Journal on Advances in Signal Processing, 2010a.

J. Callmer, D. Törnqvist, and F. Gustafsson. Probabilistic stand still detection using foot mounted IMU. In Proceedings of the International Conference on Information Fusion (FUSION), 2010b.

J. Callmer, D. Törnqvist, H. Svensson, P. Carlbom, and F. Gustafsson. Radar SLAM using visual features. EURASIP Journal on Advances in Signal Processing, 2010c. Under revision.

The Economist. No jam tomorrow. The Economist Technology Quarterly, 398(8724):20–21, 2011.

R. Feliz, E. Zalama, and J. G. Garcia-Bermejo. Pedestrian tracking using inertial sensors. Journal of Physical Agents, 3(1):35–43, 2009.

E. Foxlin. Pedestrian tracking with shoe-mounted inertial sensors. IEEE Computer Graphics and Applications, 25(6):38–46, 2005.

S. Godha, G. Lachapelle, and M. E. Cannon. Integrated GPS/INS system for pedestrian navigation in signal degraded environment. In Proc. of ION GNSS, 2006.


N. J. Gordon, D. J. Salmond, and A. F. M. Smith. A novel approach to nonlinear/non-Gaussian Bayesian state estimation. IEE Proceedings on Radar and Signal Processing, 140:107–113, 1993.

K. Granström, J. Callmer, F. Ramos, and J. Nieto. Learning to detect loop closure from range data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009.

A. Grant, P. Williams, N. Ward, and S. Basker. GPS jamming and the impact on maritime navigation. Journal of Navigation, 62(2):173–187, 2009.

S. Grzonka, F. Dijoux, A. Karwath, and W. Burgard. Mapping indoor environments based on human activity. In Proc. IEEE International Conference on Robotics and Automation (ICRA), Anchorage, Alaska, 2010.

F. Gustafsson. Statistical Sensor Fusion. Studentlitteratur, 2010.

S. W. R. Hamilton. On quaternions; or on a new system of imaginaries in algebra. Philosophical Magazine, xxv:10–13, 1844.

A. R. Jiménez, F. Seco, J. C. Prieto, and J. Guevara. Indoor pedestrian navigation using an INS/EKF framework for yaw drift reduction and a foot-mounted IMU. In Proc. of the 7th Workshop on Positioning, Navigation and Communication (WPNC), pages 135–143, March 2010.

S. J. Julier, J. K. Uhlmann, and H. F. Durrant-Whyte. A new approach for filtering nonlinear systems. In Proceedings of the American Control Conference, volume 3, pages 1628–1632, June 1995.

R. E. Kalman. A new approach to linear filtering and prediction problems. Transactions of the ASME, Journal of Basic Engineering, 82:35–45, 1960.

J. B. Kuipers. Quaternions and Rotation Sequences. Princeton University Press, 1999.

F. Lindsten, J. Callmer, H. Ohlsson, D. Törnqvist, T. B. Schön, and F. Gustafsson. Geo-referencing for UAV navigation using environmental classification. In Proceedings of the 2010 International Conference on Robotics and Automation (ICRA), 2010.

L. Ojeda and J. Borenstein. Non-GPS navigation for security personnel and first responders. Journal of Navigation, 60(3):391–407, 2007.

J. Rantakokko, P. Strömbäck, J. Rydell, J. Callmer, D. Törnqvist, F. Gustafsson, P. Händel, M. Jobs, and M. Grudén. Accurate and reliable soldier and first responder indoor positioning: Multi-sensor systems and cooperative localization. IEEE Wireless Communications Magazine, 2011. Accepted for publication.

P. Robertson, M. Angermann, and B. Krach. Simultaneous localization and mapping for pedestrians using only foot-mounted inertial sensors. In Proceedings of the 11th International Conference on Ubiquitous Computing (Ubicomp '09), pages 93–96, 2009.


W. J. Rugh. Linear System Theory. Prentice-Hall, Englewood Cliffs, NJ, 2nd edition, 1996.

I. Skog. Low-cost Navigation Systems. PhD thesis, KTH, Stockholm, Sweden, 2009.

D. Törnqvist. Estimation and Detection with Applications to Navigation. PhD thesis, Linköping University, Linköping, Sweden, 2008.

N. Wahlström, J. Callmer, and F. Gustafsson. Magnetometers for tracking metallic targets. In Proceedings of the International Conference on Information Fusion (FUSION), 2010.

N. Wahlström, J. Callmer, and F. Gustafsson. Single target tracking using vector magnetometers. In Proceedings of the International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2011.

Widyawan, M. Klepal, and S. Beauregard. A backtracking particle filter for fusing building plans with PDR displacement estimates. In Proc. of the 5th Workshop on Positioning, Navigation and Communication (WPNC), pages 207–212, March 2008.

O. Woodman and R. Harle. RF-based initialisation for inertial pedestrian tracking. In Proceedings of the 7th International Conference on Pervasive Computing, 2009.


Part II

Publications


Paper A
RADAR SLAM using Visual Features

Authors: Jonas Callmer, David Törnqvist, Henrik Svensson, Pelle Carlbom and Fredrik Gustafsson

Edited version of the paper:

J. Callmer, D. Törnqvist, H. Svensson, P. Carlbom, and F. Gustafsson. Radar SLAM using visual features. EURASIP Journal on Advances in Signal Processing, 2010c. Under revision.


RADAR SLAM using Visual Features

Jonas Callmer∗, David Törnqvist∗, Henrik Svensson∗∗, Pelle Carlbom† and Fredrik Gustafsson∗

∗Dept. of Electrical Engineering, Linköping University,
SE-581 83 Linköping, Sweden
[email protected]

∗∗Nira Dynamics, Linköping, Sweden

†Saab Dynamics, Linköping, Sweden

Abstract

A vessel navigating in a critical environment such as an archipelago requires very accurate movement estimates. Intentional or unintentional jamming makes gps unreliable as the only source of information, and an additional independent navigation system should be used. In this paper we suggest estimating the vessel movements using a sequence of radar images from the preexisting body-fixed radar. Island landmarks in the radar scans are tracked between multiple scans using visual features. This provides information not only about the position of the vessel but also about its course and velocity. We present here a complete navigation framework that requires no hardware in addition to the already existing naval radar sensor. Experiments show that visual radar features can be used to accurately estimate the vessel trajectory over an extensive data set.

1 Introduction

Almost all outdoor navigation systems today rely on global navigation satellite systems (gnss) such as the global positioning system (gps). This applies to road-bound vehicles, aircraft, surface vessels as well as personal navigation systems. However, the total reliance on gps today comes with obvious risks. For security and safety critical navigation applications, gps cannot be the only positioning sensor. In military applications, an independent backup sensor insensitive to gps jamming is often used, but also in civilian applications the robustness against jamming may constitute one of the main design issues in the future. For marine applications, Volpe (2001); EMRF (2001); Grant et al. (2009) discuss the problem of intentional or unintentional gps jamming, and alternative backup systems are strongly recommended. Basically, the required redundancy can be achieved either with an existing wireless network, with a dedicated network of sensors or beacons, or with support from a geographical information system (gis) where landmarks of opportunity act as natural beacons.


Figure 1: The high speed patrol boat type used for the data acquisition. Note the backwash created by the jet propulsion system. Courtesy of Dockstavarvet AB.

We will in this contribution focus on surface ships, and investigate a completely novel approach requiring only a radar as information source. No infrastructure such as gnss, radio beacons or gis is required. The solution is applicable to all 60 000 commercially registered ships around the world, all of the safety, security and military fleets, and also to a large fraction of the larger private boats.

The idea is to use the radar as an imagery sensor, and apply computer vision algorithms to detect landmarks of opportunity. Landmarks that occur during at least two radar scans are used for visual odometry that gives speed, relative position and relative course. The main motivation for using visual features to match radar scans, instead of trying to align the scans directly, is that visual features are easily matched despite large translational and rotational differences, which is very difficult using other scan matching techniques. The landmarks can optionally be saved in a map format which can be used to recognize areas that have been visited before. That is, a by-product of the robust navigation solution is a mapping and exploration system.

Our application example is based on a military patrol boat, Figure 1, that often maneuvers close to the shore at high speeds, at night, without visual aid, in situations where gps jamming or spoofing cannot be excluded. As the results will show, we are able to navigate in a complex archipelago using only the radar, and get a map that is very close to ground truth.

A similar robust surface navigation system was proposed in Dalin and Mahl (2007); Karlsson and Gustafsson (2006), but there it was assumed that an accurate sea chart is available. The idea was to apply map matching between the radar image and the sea chart, and a particle filter was used for this matching. Unfortunately, commercial sea charts still contain rather large absolute errors of the shore, see Volpe (2001); EMRF (2001), which makes them less useful in blind navigation with critical maneuvers without visual feedback.


Figure 2: Typical radar image showing the islands surrounding the vessel. The radar disturbances close to the vessel are caused by the vessel itself and the waves. Behind the vessel (lower part of the image), the stripe shaped disturbances are the result of backwashes reflecting the radar pulses.

The radar used in these experiments measures the distances to land areas using 1024 samples in each direction, and a full revolution is comprised of roughly 2000 directions. Each scan has a radius of about 5 km, giving a range resolution of roughly 5 meters. These measurements are used to create a radar image by translating the range and bearing measurements into Cartesian coordinates. An example of the resulting image is shown in Figure 2.
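The range-and-bearing to Cartesian conversion can be sketched as follows. This is an illustrative implementation, not the code used in the paper, and the scan dimensions and image size in the sketch are assumptions rather than the exact system parameters.

```python
import numpy as np

def scan_to_image(scan, max_range=5000.0, img_size=512):
    """Map a polar radar scan, scan[i, j] = echo at bearing i and range bin j,
    to a Cartesian image with the vessel at the center."""
    n_bear, n_range = scan.shape
    img = np.zeros((img_size, img_size))
    ang = 2 * np.pi * np.arange(n_bear) / n_bear            # bearing of each sweep
    rng = max_range * (np.arange(n_range) + 0.5) / n_range  # range of each bin
    for i in range(n_bear):
        x = rng * np.sin(ang[i])                            # east
        y = rng * np.cos(ang[i])                            # north
        col = ((x + max_range) / (2 * max_range) * (img_size - 1)).astype(int)
        row = ((max_range - y) / (2 * max_range) * (img_size - 1)).astype(int)
        img[row, col] = np.maximum(img[row, col], scan[i])  # keep strongest echo
    return img
```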

The radar image gives a bird's eye view of the surrounding islands, and by tracking these islands, information about how the vessel is moving is obtained. We use the Scale-Invariant Feature Transform (sift), Lowe (1999), to extract trackable features from the radar image, which are subsequently matched with features from later scans. These features are shown to be distinct and stable enough to be used for island tracking over an extended period of time. When these features are tracked using a filter, an estimate of the vessel movements is obtained that over time gives an accurate trajectory estimate.

The outline is as follows: Section 2 gives an overview of the related work, followed by the theoretical filtering framework in Section 3. In Section 4, the performance of sift on radar images is evaluated, and the trajectory estimation performance on experimental data is given in Section 5. The paper ends in Section 6 with conclusions and suggested future work.


78 Paper A RADAR SLAM using Visual Features

2 Background and Relation to SLAM

The problem addressed in this contribution is known as Simultaneous Localization And Mapping (slam). Today, slam is a fairly well studied problem with solutions that are reaching some level of maturity, Durrant-Whyte and Bailey (2006); Bailey and Durrant-Whyte (2006). slam has been performed in a wide variety of environments, such as indoors, Newman and Ho (2005); in urban, Bosse and Zlot (2008); Granström et al. (2009); Rouveure et al. (2009); Checchin et al. (2009), and rural, Ramos et al. (2007b); Rouveure et al. (2009), areas; underwater, Eustice et al. (2005); Mahon et al. (2008); and in the air, Bryson and Sukkarieh (2005). The platform is usually equipped with a multitude of sensors such as lasers, cameras, inertial measurement units, wheel encoders, etc. In this work we use only the radar sensor of a naval vessel to perform slam in a maritime environment. The data used was recorded in the Stockholm archipelago by Saab Bofors Dynamics, Carlbom (2005).

Radars have long been used to estimate movements, for example in the early experiments by Clark and Durrant-Whyte (1998), where radar reflecting beacons in known positions were tracked using a millimeter wave radar, which was shown to improve the movement estimates. Shortly after, Clark and Dissanayake (1999) extended the work by tracking natural features instead of beacons.

Thereafter, laser range sensors became more popular since they are more reliable, giving a range measurement in all directions without being overly sensitive to the properties of the reflective surface. The problem of estimating the vehicle movements became a problem of range scan alignment. This problem was studied, among others, in Lu and Milios (1994, 1997); Chen and Medioni (1992); Ramos et al. (2007a).

The advantages of radar, such as its ability to function in all weather conditions, have nevertheless resulted in it making a comeback. Lately, microwave radars have been used in slam experiments, but now using a landmark free approach. In Rouveure et al. (2009), slam was performed in both urban and rural areas by aligning the latest radar scan with a radar map using 3D correlation to estimate the relative movements of the vehicle. The radar map was constructed by consecutively adding the latest aligned radar scan to the previous scans. Checchin et al. (2009) performed slam in an urban scenario by estimating the rotation and translation of the robot over a sequence of scans using the Fourier-Mellin transform.

The combination of radar and sift has previously been explored in Li et al. (2008), where Synthetic Aperture radar measurements were co-registered using matched sift features. To the best of the authors' knowledge, this is the first time sift features have been extracted from radar images to track landmarks for slam.



Figure 3: The global (X, Y) and the local boat fixed (x, y) coordinate systems, the course ψ and the crab angle φ giving the difference between course and velocity vector. The vessel and an island landmark m are also depicted, as are the measured bearing θ and range r of the landmark relative to the vessel.

3 Theoretical Framework

All vessel movements are estimated relative to a global position. The positions of the tracked landmarks are not measured globally but relative to the vessel. Therefore, two coordinate systems are used: one global, for positioning the vessel and all the landmarks, and one local, relating the measured feature positions to the vessel. Figure 3 shows the local and global coordinate systems, the vessel and a landmark m.

The variables needed for visual odometry are summarized in Table 1.

3.1 Detection Model

If a landmark is detected at time $t$, the radar provides a range, $r_t$, and bearing, $\theta_t$, measurement to the island landmark $i$ as

$$ y_t^i = \begin{pmatrix} r_t^i \\ \theta_t^i \end{pmatrix} + e_t^i \qquad (1) $$

where $e_t^i$ is independent Gaussian noise. These echoes are transformed into a radar image using a polar to rectangular coordinate conversion, and the result can be seen in Figure 2. Figures 1 and 2 also show that the forward and sideways facing parts of the scans are the most useful ones for feature tracking. This is due to the significant backwash created by the jet propulsion system of the vessel, which is observed along the vessel trajectory in Figure 1. This backwash disturbs the radar measurements by reflecting the radar pulse, resulting in the stripe shaped disturbances behind the vessel in Figure 2.



Table 1: Summary of notation

Parameter          Description
(X, Y)             Global position
(x, y)             Local coordinate system with x aligned with the stem
v                  Velocity
ψ                  Course, defined as the angle between the global X axis and the local x axis, as would be shown by a compass
ω                  Turn rate, defined as the time derivative of ψ
φ                  Difference between course and velocity vector ("crab" angle), which is mainly due to streams and wind
(m^i_X, m^i_Y)     Global position of landmark i
r^i                Range from the radar to landmark i
θ^i                Bearing from the strap-down body-fixed radar to landmark i

sift is today a well established standard method to extract features from one image and match them to features extracted from a different image covering the same scene. It is a rotation and affine invariant Harris point extractor that uses a difference-of-Gaussians function to determine scale. Harris points are in turn regions in the image where the gradients of the image are large, making them prone to stand out also in other images of the same area. For region description, sift uses gradient histograms in 16 subspaces around the point of interest.

In this work, sift is used to extract and match features from radar images. By tracking the sift features over a sequence of radar images, information about how the vessel is moving is obtained.

3.2 Measurement Model

Once a feature has been matched between two scans, the position of the feature is used as a measurement to update the filter.

Since the features are matched in Cartesian image coordinates, the straightforward way would be to use the pixel coordinates themselves as a measurement. After first converting the pixel coordinates of the landmark to coordinates in the local coordinate system, the Cartesian feature coordinates are related to the vessel states as

$$ y_t^i = \begin{pmatrix} y_{x,t}^i \\ y_{y,t}^i \end{pmatrix} = R(\psi_t) \begin{pmatrix} m_{X,t}^i - X_t \\ m_{Y,t}^i - Y_t \end{pmatrix} + \begin{pmatrix} e_{X,t}^i \\ e_{Y,t}^i \end{pmatrix} \qquad (2) $$

where $y_{x,t}^i$ is the measured x-coordinate of feature $i$ in the local coordinate frame at time $t$ and $R(\psi_t)$ is the rotation matrix between the ship orientation and the global coordinate system. $(X, Y)$ and $(m_X^i, m_Y^i)$ are the global vessel position and the global position of landmark $i$, respectively.


The problem with this approach is that $e_{X,t}$ and $e_{Y,t}$ in (2) are dependent, since they are both mixtures of the range and bearing uncertainties of the radar sensor. These dependencies are also time varying, since the mixtures depend on the bearing of the radar sensor. Simply assuming them to be independent can cause large estimation errors.

A better approach is to convert the Cartesian landmark coordinates back to polar coordinates and use these as a measurement:

$$ y_t^i = \begin{pmatrix} r_t^i \\ \theta_t^i \end{pmatrix} = \begin{pmatrix} \sqrt{(m_{X,t}^i - X_t)^2 + (m_{Y,t}^i - Y_t)^2} \\ \arctan\left(\frac{m_{Y,t}^i - Y_t}{m_{X,t}^i - X_t}\right) - \psi_t \end{pmatrix} + \begin{pmatrix} e_{r,t}^i \\ e_{\theta,t}^i \end{pmatrix}. \qquad (3) $$

This approach results in independent noise parameters $e_r \sim \mathcal{N}(0, \sigma_r^2)$ and $e_\theta \sim \mathcal{N}(0, \sigma_\theta^2)$, which better reflect the true range and bearing uncertainties of the range sensor.
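The measurement function in (3) can be sketched as follows; the function name and the bearing wrapping to $[-\pi, \pi)$ are our additions:

```python
import numpy as np

def polar_measurement(X, Y, psi, mX, mY):
    """Predicted range/bearing measurement of landmark (mX, mY), eq. (3)."""
    dx, dy = mX - X, mY - Y
    r = np.hypot(dx, dy)
    theta = np.arctan2(dy, dx) - psi
    # wrap the bearing to [-pi, pi) so filter innovations stay small
    theta = (theta + np.pi) % (2.0 * np.pi) - np.pi
    return np.array([r, theta])
```

In a filter, the innovation is the difference between the measured and the predicted range/bearing pair, with the diagonal covariance diag(σ_r², σ_θ²) from (3).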

3.3 Motion Model

The system states describing the vessel movements at time instant t are

$$ z_t = \begin{pmatrix} X_t & Y_t & v_t & \psi_t & \omega_t & \phi_t \end{pmatrix}^T \qquad (4) $$

where $v$ is the velocity, $\psi$ is the course, $\omega$ is the angular velocity and $\phi$ is the crab angle, i.e., the wind and stream induced difference between course and velocity vector (normally small). Due to the size and the speed of the vessel, Figure 1, the crab angle is assumed to be very small throughout the experiments. The system states are described more extensively in Table 1 and are also shown in Figure 3. We will be using a coordinated turn model, though there are many plausible motion models available.

When landmarks at unknown positions are tracked to estimate the movements of the vessel, they should be kept in the state vector. If the same landmarks are tracked over a sequence of radar scans, a better estimate of the vessel's movements is acquired than if they are tracked between just two.

The system states are therefore expanded to include all landmarks within the field of view, creating a visual odometry framework. The new state vector becomes

$$ z_t = \begin{pmatrix} X_t & Y_t & v_t & \psi_t & \omega_t & \phi_t & m_{X,t}^k & m_{Y,t}^k & \dots & m_{Y,t}^l \end{pmatrix}^T. \qquad (5) $$

Only the $l - k$ latest landmarks are within the field of view, which is why only these are kept in the state vector. As the vessel travels on, the landmarks will one by one leave the field of view, be removed from the state vector and be replaced by new ones.

When all old landmarks are kept in the state vector even after they have left the field of view, we have a slam framework. If an old landmark that left the field of view long ago is rediscovered, this allows the whole vessel trajectory to be updated. This is called a loop closure and is one of the key features of slam. The slam state vector is therefore

$$ z_t = \begin{pmatrix} X_t & Y_t & v_t & \psi_t & \omega_t & \phi_t & m_{X,t}^1 & m_{Y,t}^1 & \dots \end{pmatrix}^T. \qquad (6) $$

A linearized discretization of the coordinated turn model using the slam landmark augmentation gives

$$
\begin{pmatrix}
X_{t+\Delta t} \\ Y_{t+\Delta t} \\ v_{t+\Delta t} \\ \psi_{t+\Delta t} \\ \omega_{t+\Delta t} \\ \phi_{t+\Delta t} \\ m_{X,t+\Delta t}^1 \\ m_{Y,t+\Delta t}^1 \\ \vdots
\end{pmatrix}
=
\begin{pmatrix}
X_t + \frac{2 v_t}{\omega_t} \sin\!\left(\frac{\omega_t \Delta t}{2}\right) \cos\!\left(\psi_t + \phi_t + \frac{\omega_t \Delta t}{2}\right) \\
Y_t + \frac{2 v_t}{\omega_t} \sin\!\left(\frac{\omega_t \Delta t}{2}\right) \sin\!\left(\psi_t + \phi_t + \frac{\omega_t \Delta t}{2}\right) \\
v_t + \nu_{v,t} \\
\psi_t + \omega_t \Delta t \\
\omega_t + \nu_{\omega,t} \\
\phi_t + \nu_{\phi,t} \\
m_{X,t}^1 \\
m_{Y,t}^1 \\
\vdots
\end{pmatrix}
\qquad (7)
$$

where $\Delta t$ is the difference in acquisition time between the two latest matched features. $\nu_v$, $\nu_\omega$ and $\nu_\phi$ are independent Gaussian process noises reflecting the movement uncertainties of the vessel.
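The deterministic part of the coordinated turn prediction (7) can be sketched as follows; process noise is omitted, and the straight-line limit for $\omega \to 0$ is an implementation detail we add to avoid division by zero:

```python
import numpy as np

def predict_ct(z, dt):
    """Coordinated turn prediction, the deterministic part of eq. (7).

    z = [X, Y, v, psi, omega, phi, m1X, m1Y, ...]; landmarks are static.
    """
    X, Y, v, psi, omega, phi = z[:6]
    if abs(omega) > 1e-9:
        arc = 2.0 * v / omega * np.sin(omega * dt / 2.0)
    else:                       # straight-line limit as omega -> 0
        arc = v * dt
    heading = psi + phi + omega * dt / 2.0
    z_next = z.copy()
    z_next[0] = X + arc * np.cos(heading)
    z_next[1] = Y + arc * np.sin(heading)
    z_next[3] = psi + omega * dt
    return z_next
```

Note that the landmark entries are simply copied, matching the identity rows of (7).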

3.4 Multi-Rate Issues

Having defined a motion model and a measurement model, state estimation is usually straightforward to implement using standard algorithms such as the extended Kalman filter (ekf), unscented Kalman filter (ukf) or the particle filter (pf), see Gustafsson (2010). These filters iterate between a prediction step based on the motion model and a correction step based on the measurement model and the current measurement. The most natural approach is to stack all landmarks from one radar revolution into a large measurement vector and then run the filter with $\Delta t = T = 1.5$ s, where $T$ is the radar revolution time (1.5 s in our application). There are, however, two non-standard problems here.

The first problem is that all matched features should not be used to update the filter at the same time. Since the vessel is moving while the radar is revolving, a shift in the radar image is introduced. A vessel speed of 10 m/s will result in a difference in landmark position of about 15 meters from the beginning to the end of the scan. If the vessel is travelling with a constant velocity and course, then all relative changes in feature positions between two scans are equal, but if the vessel is turning or accelerating, a shift in the relative position change is introduced. This causes different features to have different relative movements, which will result in estimation errors if all measurements are used at the same time. We have found the error caused by this batch approach to be quite large. Therefore, the filter should be updated by each matched feature independently. If independent course and velocity measurements were available, the radar image could be corrected for these delays. The course and velocity estimates of the filter


should, though, not be used for scan correction, since this would create a feedback loop from the estimates to the radar measurements that could cause instability.

Second, if there was one landmark detected in each scan direction, then the filter could be applied with the rate $\Delta t = T/N = 1.5/2000$, where $N$ is the number of measurements per rotation (2000 in our application). This is not the case though, and we are facing a multi-rate problem with irregularly sampled measurements. The measurement model can now conveniently be written as

$$ y_t = \begin{cases} y_t^i & \text{if detection of landmark } i \text{ at time } t \\ \text{NaN} & \text{otherwise.} \end{cases} \qquad (8) $$

Now, any of the filters (ekf, ukf, pf) can be applied at rate $\Delta t = T/N$ using (8), with the understanding that a measurement being NaN simply means that the correction step is skipped.
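The resulting multi-rate scheduling of (8) amounts to predicting at every step and correcting only when a landmark was actually detected. A sketch, where `predict` and `correct` are placeholders for the prediction and correction steps of whichever filter is used:

```python
import numpy as np

def run_filter(z0, measurements, dt, predict, correct):
    """Multi-rate filtering per eq. (8): predict every step, correct only
    when a measurement is present (NaN marks 'no detection')."""
    z = z0
    trajectory = []
    for y in measurements:
        z = predict(z, dt)
        if not np.any(np.isnan(y)):   # NaN: skip the correction step
            z = correct(z, y)
        trajectory.append(z.copy())
    return trajectory
```

This keeps the state propagated at the fast rate T/N while measurements arrive irregularly.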

3.5 Alternative Landmark Free Odometric Framework

The landmark based framework derived above suffers from one apparent shortcoming: the number of features grows very fast. After only a short time period, thousands of potential landmarks will have been found, causing large computational overhead in the implementation. Either a lot of restrictions must be imposed on which new landmarks to track, or a different approach is needed.

If the map is not central, an approach based on differential landmark processing can be taken. Instead of tracking the same feature over a sequence of scans, features are only matched between two scans to compute the relative movement between the sweeps. This will though cause the positioning to perform slightly worse.

Relative Movement Estimation

As described in Section 3.4, all features from the entire scan should not be used to estimate the relative movement at the same time. The delays over the scan will create a skewed scan if the vessel is accelerating or turning, introducing estimation errors. The idea is therefore to use subsets of features, measured over a short time interval, to compute the relative changes in course, $\Delta\psi_t$, and position, $\Delta X_t$ and $\Delta Y_t$, between two scans. These relative movement estimates are calculated multiple times per scan pair.

The relative changes in position can be estimated by comparing the measured landmark positions in the local coordinate frames. These landmark positions are related as

$$ \begin{pmatrix} y_{x,t-T}^i \\ y_{y,t-T}^i \end{pmatrix} = \begin{pmatrix} \cos(\Delta\psi_t) & -\sin(\Delta\psi_t) \\ \sin(\Delta\psi_t) & \cos(\Delta\psi_t) \end{pmatrix} \begin{pmatrix} y_{x,t}^i \\ y_{y,t}^i \end{pmatrix} + \begin{pmatrix} \Delta X_t \\ \Delta Y_t \end{pmatrix} \qquad (9) $$

where $y_{x,t}^i$ is the measured x-coordinate of landmark $i$ at time instant $t$ in the local coordinate system, and $y_{x,t-T}^i$ is the measured x-coordinate in the previous scan.

If the relative change in course between two scans is assumed small, (9) can be


approximated using cos(∆ψt) ≈ 1 and sin(∆ψt) ≈ ∆ψt .

The equation system is now approximately linear, and by stacking multiple landmarks in one equation system,

$$
\begin{pmatrix}
y_{x,t-T}^i - y_{x,t}^i \\
y_{y,t-T}^i - y_{y,t}^i \\
y_{x,t-T}^j - y_{x,t}^j \\
y_{y,t-T}^j - y_{y,t}^j \\
\vdots
\end{pmatrix}
=
\begin{pmatrix}
-y_{y,t}^i & 1 & 0 \\
y_{x,t}^i & 0 & 1 \\
-y_{y,t}^j & 1 & 0 \\
y_{x,t}^j & 0 & 1 \\
\vdots
\end{pmatrix}
\begin{pmatrix} \Delta\psi_t \\ \Delta X_t \\ \Delta Y_t \end{pmatrix},
\qquad (10)
$$

an overdetermined system is acquired, and $\Delta\psi_t$, $\Delta X_t$ and $\Delta Y_t$ can be determined using a least squares solver.

This estimated change in position and course can in turn be used to calculate a velocity and an angular velocity measurement as

$$ y_t = \begin{pmatrix} v_t \\ \omega_t \end{pmatrix} = \begin{pmatrix} \frac{\sqrt{(\Delta X_t)^2 + (\Delta Y_t)^2}}{T} \\ \frac{\Delta\psi_t}{T} \end{pmatrix} + \begin{pmatrix} e_{v,t} \\ e_{\omega,t} \end{pmatrix}. \qquad (11) $$

$e_v$ and $e_\omega$ are assumed to be independent Gaussian noises. This transformation, which provides direct measurements of speed and course change, gives what is usually referred to as odometry.
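Equations (10) and (11) can be sketched together: stack the matched landmark pairs, solve the small-angle least squares problem, and convert the result to odometry. The function and variable names are ours:

```python
import numpy as np

def relative_motion(pts_prev, pts_curr, T):
    """Estimate (dpsi, dX, dY) from matched landmark pairs via eq. (10),
    then convert to odometry (v, omega) as in eq. (11).

    pts_prev, pts_curr: (n, 2) arrays of matched landmark positions in the
    local frame of the previous/current scan; T is the revolution time.
    """
    curr = np.asarray(pts_curr, dtype=float)
    b = (np.asarray(pts_prev, dtype=float) - curr).reshape(-1)
    A = np.zeros((2 * len(curr), 3))
    A[0::2, 0] = -curr[:, 1]   # -y_y * dpsi   (x rows)
    A[1::2, 0] = curr[:, 0]    #  y_x * dpsi   (y rows)
    A[0::2, 1] = 1.0           # dX
    A[1::2, 2] = 1.0           # dY
    dpsi, dX, dY = np.linalg.lstsq(A, b, rcond=None)[0]
    return np.hypot(dX, dY) / T, dpsi / T     # (v, omega)
```

At least two matched landmarks are needed to make the stacked system overdetermined in the three unknowns.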

Although this approach simplifies the implementation a lot, it comes with certain drawbacks. First, the landmarks correctly associated between two images are used only pair-wise, which is sub-optimal since one loses both the averaging effects that occur when the same landmark is detected many times and the correlation structure between landmarks. Second, assuming no cross-correlation between $e_v$ and $e_\omega$ is a simplification, since $v_t$ and $\omega_t$ are based on $\Delta X_t$, $\Delta Y_t$ and $\Delta\psi_t$, which are not calculated independently. The measurements $v_t$ and $\omega_t$ are therefore actually dependent, making the noise independence assumption incorrect. Third, in order to estimate the relative movements, the time interval used to detect the landmarks must be informative enough to calculate $\Delta\psi_t$, $\Delta X_t$ and $\Delta Y_t$, but not long enough to allow a significant scan skew to appear. This tradeoff must be balanced.

ESDF

The simplified odometric model above can still be used for mapping if a trajectory based filtering algorithm is used. One such framework is known in the slam literature as the Exactly Sparse Delayed-state Filter (esdf), Eustice et al. (2006). It has a state vector that consists of augmented vehicle states as

$$ z_{1:t} = \begin{pmatrix} z_{1:t-1} \\ z_t \end{pmatrix}, \qquad (12) $$

where the state $z_t$ is given in (4) and $z_{1:t-1}$ are all previous poses. If no loop closures are detected, then the esdf simply stacks all pose estimates, but once


a loop closure is detected and the relative pose between the two time instances is calculated, the esdf allows the trajectory to be updated using this new information.

Once the trajectory has been estimated, all the radar scans can be mapped to world coordinates. By overlaying the scans on the estimated trajectory, a radar map is obtained. Each pixel then describes how many radar detections have occurred at that coordinate.
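The map overlay can be sketched as follows; the grid size, resolution and the detection format are assumptions for illustration, not values from the paper:

```python
import numpy as np

def build_radar_map(poses, scans, resolution=5.0, size=2000):
    """Accumulate georeferenced radar detections into a count map.

    poses: list of (X, Y, psi) vessel poses; scans: list of (n, 2) arrays
    of detections in the local frame, one array per pose. Each cell counts
    how many detections fell at that world coordinate.
    """
    grid = np.zeros((size, size), dtype=int)
    origin = size // 2
    for (X, Y, psi), pts in zip(poses, scans):
        c, s = np.cos(psi), np.sin(psi)
        R = np.array([[c, -s], [s, c]])
        world = np.asarray(pts) @ R.T + np.array([X, Y])   # local -> global
        ix = (world[:, 0] / resolution + origin).astype(int)
        iy = (world[:, 1] / resolution + origin).astype(int)
        ok = (ix >= 0) & (ix < size) & (iy >= 0) & (iy < size)
        np.add.at(grid, (iy[ok], ix[ok]), 1)               # count detections
    return grid
```

Cells hit by many overlapping scans accumulate high counts, which is what makes consistently detected islands stand out in the resulting map.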

4 SIFT Performance on RADAR Images

sift is used to extract visual island features from the radar images. Figure 4 shows the features that are extracted from the upper right quadrant of a radar scan example. Two types of features are detected: island features and vessel related features. The latter originate from radar disturbances caused by the vessel and the waves, and are visible in the bottom left corner of Figure 4. Unfortunately, this section of the image cannot simply be removed, since the vessel commonly travels very close to land, making island features in the area sometimes crucial for navigation.

The total number of features detected of course depends on the number of islands in the area, but also on where these islands are situated. A large island close to the vessel will block a large section of the radar scan, resulting in few features. In these experiments, an average of 650 features was extracted per full radar scan.

Figure 4: radar image with extracted features marked with arrows. This is a typical example of how a radar image from which sift extracts a large number of features may look.


(a) Plenty of islands makes it easy to find good matching features.

(b) Few islands result in few features to match.

Figure 5: Two pairs of consecutive radar images with detected matches marked with lines.

4.1 Matching for Movement Estimation

The sift features are matched to estimate the relative movement of the vessel between multiple consecutive scans. Figures 5a and 5b show examples of how well these features actually match. In Figure 5a, a dense island distribution results in a lot of matches, which provides a good movement estimate. In Figure 5b there are very few islands, making it difficult to estimate the movements accurately.

There are two situations that can cause few matches: one is when there are few islands, and the other is when a large island very close to the vessel blocks the view of all the other islands. When an island is passed closely at high velocity, the radar scans can also differ quite significantly between two revolutions. This results not only in few features to match, but also in features that can be significantly more difficult to match, causing the relative movement estimates to degrade. On average though, about 100 features are matched in each full scan.

4.2 Loop Closure Matching

radar features can also be used to detect loop closures, which enables the filter to update the entire vessel trajectory. The rotation invariance of the sift features makes radar scans acquired from different headings straightforward to match. Quite a large difference in position is also manageable due to the range of the radar sensor. This of course requires that no island is blocking or disturbing the view. Figure 6a shows the example locations a, b and c that were used to investigate the matching performance of the visual features.

In areas a, Figure 6b, and b, Figure 6c, the features are easy to match despite the rather long translational difference over open water in b. In both cases a 180° difference in course is easily overcome by the visual features. This shows the strength of both the radar sensor and of the visual features. The long range of the sensor makes loop closures over a wide passage of open water possible. These scans would be used in a full scale slam experiment to update the trajectory.

In area c, Figure 6d, only two features are matched correctly and there are also


(a) Example locations used to examine the loop closure detection performance.

(b) Multiple matched features from a loop closure in area a in Figure 6a. A 180° difference in course is easily handled by the sift features.

(c) Location b has both a 180° course difference and a translative change, but a couple of features are still matched.

(d) 2 correct and 2 false matches in area c makes it unsuitable for loop closure updates.

Figure 6: Examples of loop closure experiments using visual sift features.

two false positives. If the scans are compared ocularly, it is quite challenging to find islands that clearly match, mostly due to blurring and to blocking islands. It is also noticeable that the radar reflections from the islands differ due to differences in radar positions, which of course alters the sift features. The poor matching result is therefore natural in this case.

5 Experimental Results

The experimental results in this section come from the master's thesis by Henrik Svensson, Svensson (2009). The implemented framework is the one described in Section 3.5.

5.1 Results

The trajectory used in the slam experiment is shown in bold in Figure 7. The track is about 3000 seconds long (50 minutes) and covers roughly 32 km. The entire round trip was unfortunately never used in one single experiment, since it consisted of multiple data sets.


Figure 7: The whole trajectory, where the studied interval is marked by a bold line.

Figure 8: esdf trajectory estimate with covariance ellipses compared to gps measurements.


Figure 9: Velocity estimate compared to gps measurements. A slight positive bias is present, probably due to the simplifications mentioned in Section 3.5.

Figure 10: Course estimate compared to gps measurements. The course estimate is the sum of all estimated changes in course, causing the error to grow in time.


The estimated trajectory with covariance is compared to the gps data in Figure 8. The first two quarters of the trajectory consist of an island rich environment, see Figure 6a, resulting in a very good estimate. The third quarter covers an area with fewer islands, causing the performance to degrade. This results in an initial misalignment of the final quarter, which makes the estimated trajectory of this last segment seem worse than it actually is.

Both velocity and course estimates, Figures 9 and 10, are quite good when compared to gps data. There is though a positive bias on the velocity estimate, probably due to the simplifications mentioned in Section 3.5. The course error grows in time, since the estimate is the sum of a long sequence of estimated changes in course, see (11), and there are no course measurements available. The final course error is about 30 degrees.

5.2 Map Estimate

A radar map of the area was generated by overlaying the radar scans on the estimated trajectory. Figure 11a shows the estimated radar map, which should be compared to the map created using the gps trajectory, Figure 11b. They are very similar, although small errors in the estimated trajectory are visible in Figure 11a as blurriness. Some islands appear a bit larger in the estimated map because of this. Overall, the map estimate is good.

The estimated map should also be compared to the satellite photo of the area with the true trajectory marked in white, shown in Figure 11c. When compared, many islands in the estimated map are readily identified. This shows that the rather simple approach of using visual features on radar images can provide good mapping results.

6 Conclusions

We have presented a new approach to robust navigation for surface vessels based on radar measurements only. No infrastructure or maps are needed. The basic idea is to treat the radar scans as images and apply the sift algorithm for tracking landmarks of opportunity. We presented two related frameworks, one based on the slam idea, where the trajectory is estimated jointly with a map, and the other based on odometry. We have evaluated the sift performance and the odometric framework on data from a high-speed patrol boat, and obtained a very accurate trajectory and map.

An interesting application of this work would be to apply the method to underwater vessels equipped with synthetic aperture sonar as the imaging sensor, since there are very few low-cost solutions for underwater navigation.


(a) The estimated map created by overlaying the radar scans on the estimated trajectory. The estimated trajectory is marked with a black line.

(b) A radar map created using the gps measurements of the trajectory. The true trajectory is marked with a black line.

(c) A satellite photo of the area. The true trajectory is marked with a white line. Many islands in Figure 11a are readily identified in this photo.

Figure 11: From top to bottom: the estimated radar map of the area, the true radar map generated using gps, and a satellite photo of the archipelago.


Bibliography

T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM): part II. Robotics & Automation Magazine, IEEE, 13(3):108–117, 2006.

M. C. Bosse and R. Zlot. Map matching and data association for large-scale two-dimensional laser scan-based SLAM. International Journal of Robotics Research, 27(6):667–691, 2008.

M. Bryson and S. Sukkarieh. Bearing-only SLAM for an airborne vehicle. In Proceedings of the Australasian Conference on Robotics and Automation (ACRA), 2005.

J. Callmer, D. Törnqvist, H. Svensson, P. Carlbom, and F. Gustafsson. Radar SLAM using visual features. EURASIP Journal on Advances in Signal Processing, 2010. Under revision.

P. Carlbom. Radar map matching. Technical Report, 2005.

P. Checchin, F. Gérossier, C. Blanc, R. Chapuis, and L. Trassoudaine. Radar scan matching SLAM using the Fourier-Mellin transform. In IEEE Intl. Conf. on Field and Service Robotics, 2009.

Y. Chen and G. Medioni. Object modeling by registration of multiple range images. Image and Vision Computing, 10(3):145–155, 1992.

S. Clark and G. Dissanayake. Simultaneous localisation and map building using millimetre wave radar to extract natural features. In Proceedings of the IEEE International Conference on Robotics and Automation, 1999.

S. Clark and H. Durrant-Whyte. Autonomous land vehicle navigation using millimeter wave radar. In Proceedings of the IEEE International Conference on Robotics and Automation, 1998.

M. Dalin and S. Mahl. Radar map matching. Master's thesis, Linköping University, Sweden, 2007.

H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping (SLAM): part I. Robotics & Automation Magazine, IEEE, 13(2):99–110, 2006.

EMRF. GNSS vulnerability and mitigation measures, (rev. 6). Technical report, European Maritime Radionavigation Forum, 2001.

R. Eustice, H. Singh, J. Leonard, M. Walter, and R. Ballard. Visually navigating the RMS Titanic with SLAM information filters. In Proceedings of Robotics: Science and Systems, 2005.

R. M. Eustice, H. Singh, and J. J. Leonard. Exactly sparse delayed-state filters for view-based SLAM. IEEE Transactions on Robotics, 2006.

K. Granström, J. Callmer, F. Ramos, and J. Nieto. Learning to detect loop closure from range data. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2009.

A. Grant, P. Williams, N. Ward, and S. Basker. GPS jamming and the impact on maritime navigation. Journal of Navigation, 62(2):173–187, 2009.

F. Gustafsson. Statistical Sensor Fusion. Studentlitteratur, 2010.

R. Karlsson and F. Gustafsson. Bayesian surface and underwater navigation. IEEE Transactions on Signal Processing, 54(11):4204–4213, November 2006.

F. Li, G. Zhang, and J. Yan. Coregistration based on SIFT algorithm for synthetic aperture radar interferometry. In Proceedings of ISPRS Congress, 2008.

D. G. Lowe. Object recognition from local scale-invariant features. In ICCV '99: Proceedings of the International Conference on Computer Vision, Volume 2, 1999.

F. Lu and E. E. Milios. Robot pose estimation in unknown environments by matching 2D range scans. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), 1994.

F. Lu and E. E. Milios. Globally consistent range scan alignment for environment mapping. Autonomous Robots, 1997.

I. Mahon, S. B. Williams, O. Pizarro, and M. Johnson-Roberson. Efficient view-based SLAM using visual loop closures. IEEE Transactions on Robotics, 24(5):1002–1014, 2008.

P. Newman and K. Ho. SLAM-loop closing with visually salient features. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2005.

F. Ramos, D. Fox, and H. F. Durrant-Whyte. CRF-matching: Conditional random fields for feature based scan matching. In Proceedings of Robotics: Science and Systems, 2007a.

F. Ramos, J. Nieto, and H. F. Durrant-Whyte. Recognising and modelling landmarks to close loops in outdoor SLAM. In Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2007b.

R. Rouveure, M. O. Monod, and P. Faure. High resolution mapping of the environment with a ground-based radar imager. In Proceedings of the International Radar Conference, 2009.

H. Svensson. Simultaneous Localization And Mapping in a Marine Environment using Radar Images. Master's thesis, Linköping University, Sweden, 2009.

J. Volpe. Vulnerability assessment of the transportation infrastructure relying on global positioning system, final report. Technical report, National Transportation Systems Center, 2001.


Paper B

Silent Localization of Underwater Sensors using Magnetometers

Authors: Jonas Callmer, Martin Skoglund and Fredrik Gustafsson

Edited version of the paper:

J. Callmer, M. Skoglund, and F. Gustafsson. Silent localization of underwater sensors using magnetometers. EURASIP Journal on Advances in Signal Processing, 2010a.


Silent Localization of Underwater Sensors using Magnetometers

Jonas Callmer, Martin Skoglund and Fredrik Gustafsson

Dept. of Electrical Engineering, Linköping University,
SE-581 83 Linköping, Sweden
{callmer, ms, fredrik}@isy.liu.se

Abstract

Sensor localization is a central problem for sensor networks. If the sensor positions are uncertain, the target tracking ability of the sensor network is reduced. Sensor localization in underwater environments is traditionally addressed using acoustic range measurements involving known anchor or surface nodes. We explore the usage of triaxial magnetometers and a friendly vessel with known magnetic dipole to silently localize the sensors. The ferromagnetic field created by the dipole is measured by the magnetometers and is used to localize the sensors. The trajectory of the vessel and the sensor positions are estimated simultaneously using an Extended Kalman Filter (ekf). Simulations show that the sensors can be accurately positioned using magnetometers.

1 Introduction

Today, surveillance of ports and other maritime environments is getting increasingly important for naval and customs services. Surface vessels are rather easy to detect and track, unlike submarines and other underwater vessels which pose new threats such as terrorism and smuggling. To detect these vessels, an advanced underwater sensor network is necessary. Such sensors can measure fluctuations in, for example, magnetic and electric fields, pressure changes and acoustics.

Deploying an underwater sensor in its predetermined position can be difficult due to currents, surge and the lack of a Global Navigation Satellite System (gnss) functioning underwater. Sometimes the sensors must be deployed fast, resulting in very uncertain sensor positions. These positions must then be estimated in order to enable the network to accurately track an alien vessel.

Lately, many solutions to the underwater sensor localization problem have been suggested. They can be broadly divided into two major categories: range-based and range-free. In general, range-based schemes provide more accurate positioning than range-free schemes.

Range-based schemes use information about the range or angle between sensors. The problem is thereafter formulated as a multilateration problem. Common methods to measure range or angle include Time of Arrival (ToA), Time Difference of Arrival (TDoA), Angle of Arrival (AoA) and Received Signal Strength Indicator (rssi). These methods usually require active pinging, but silent methods based on TDoA have been suggested Cheng et al. (May 2008). The 3D positioning problem can be transformed into a 2D problem by the use of depth sensors Cheng et al. (2008). The range positioning scheme is often aided by surface nodes, anchor nodes, mobile beacons or autonomous underwater vehicles Zhou et al. (2007); Erol et al. (2008, 2007). Joint sensor localization and time synchronization was performed in Tian et al. (2007).

Range-free schemes generally provide a coarse estimate of a node's location and their main advantage lies in their simplicity. Examples of range-free schemes are Density-aware Hop-count Localization (dhl) Wong et al. (2005) and Area Localization Scheme (als) Chandrasekhar and Seah (2006). A more thorough description of underwater sensor localization solutions can be found in the surveys Akyildiz et al. (2005); Chandrasekhar et al. (2006).

In this paper we propose a method to silently localize underwater sensors equipped with triaxial magnetometers, using a friendly vessel with known static magnetization characteristics. (For methods to estimate the magnetic characteristics, see Nelson and Richards (2007).) Each sensor is further equipped with a pressure sensor and an accelerometer used for depth estimation and sensor orientation estimation, respectively. To enable global positioning of the sensors, the vessel or one sensor must be positioned globally. To the best of the authors' knowledge, this is the first time magnetic dipole tracking is used for sensor localization.

For target tracking in shallow waters, magnetometers are often a more useful sensor than acoustics, since sound scatters significantly in these environments Birsan (2006). Birsan has explored the use of magnetometers and the magnetic dipole of a vessel for target tracking Birsan (2004, 2005). Two sensors with known positions were used to track a vessel while simultaneously estimating the unknown magnetic dipole of the vessel. Tracking and estimation were performed using an unscented Kalman filter Birsan (2004) and an unscented particle filter Birsan (2005). Dalberg et al. (2006) fused electromagnetic and acoustic data to track surface vessels using underwater sensors.

Several studies of the electromagnetic characteristics of the maritime environment have stated that the permeability of the seabed differs considerably from the permeability of air or water. The environment should therefore be modeled as a horizontally stratified model with site-specific permeability and layer thickness for each segment Dalberg et al. (2006); Daya et al. (2005); Birsan (2006). This has not been included in our simulation study but should be considered in field experiments.


In the past 10-15 years, quite a lot of effort has been put into reducing the static magnetic signature of naval vessels by active signature cancelling. This has increased the importance of other sources of magnetic fields such as Corrosion Related Magnetism (crm) Daya et al. (2005); Lundin (2003). crm is generated by currents in the hull, normally induced by corrosion or the propeller. It is therefore very difficult to estimate and subsequently difficult to cancel. This makes crm important in naval target tracking but not so much in sensor localization. In our study, a friendly vessel used for sensor localization can turn off its active signature cancelling, resulting in a magnetic field from the dipole which is considerably stronger than the field induced by crm. We have therefore neglected crm.

An underwater sensor network used for real time surveillance must be silent. Neither sensor localization, surveillance nor data transfer can be allowed to expose the sensor network. Silent communication rules out the use of acoustic modems, which are the common means of wireless underwater data transfer Akyildiz et al. (2005). We therefore assume that the sensors are connected by wire. As a consequence, common problems in underwater sensor networks such as time synchronization, limited bandwidth and limited energy resources will be neglected.

The sensor localization problem is basically reversed Simultaneous Localization and Mapping (slam). In common slam Durrant-Whyte and Bailey (2006); Bailey and Durrant-Whyte (2006), landmarks in the environment are tracked with on-board sensors. The positions of these landmarks and the vehicle trajectory are estimated simultaneously in a filter. In sensor localization, the sensors observe the vessel from the landmark positions. The problem has the same form as slam but with a known number of landmarks and known data association.

The paper outline is as follows: Section 2 describes the system, measurement models and state estimation. Simulations of the performance of the positioning scheme, its sensitivity to different errors and the importance of the appearance of the trajectory are studied in Section 3. The paper ends with conclusions.

2 Methodology

This section describes the nonlinear state estimation problem, here solved with ekf-slam, how the vessel dynamics and sensors are modeled and how different performance measures are computed. All vectors are expressed in a world fixed coordinate system unless otherwise stated.

2.1 System Description

The sensor positioning system is assumed to have the following process and measurement model

x_{k+1} = f(x_k) + w_k   (1)

y_k = h(x_k, u_k, e_{u,k}) + e_k   (2)

Page 116: Topics in Localization and Mapping · in cities using different land bound vehicles, in rural environments using au-tonomous aerial vehicles and underwater for coral reef exploration.

100 Paper B Silent Localization of Underwater Sensors using Magnetometers

where f(·) is a nonlinear state transition function, h(·) is a nonlinear measurement function, x_k the state vector, u_k the inputs, w_k the process noise, e_{u,k} the input noise and e_k the measurement noise. In slam the state vector consists of both the vessel position p_v = [x, y]^T and the landmark (sensor) states s stacked, i.e. x = [p_v^T, s^T]^T.

Process Model

The process model describes the vehicle and sensor dynamics. There are complex vessel models available which include 3D orientation, angular rates, engine speed, rudder angle, waves, hull, etc., see e.g. Fossen and Perez (2004). Since we do not consider any particular vessel or weather condition, a very simple vessel model is used. It is assumed that no substantial movements in the z-coordinate, pitch and roll angles of the vessel are made, hence a nonlinear five-state coordinated turn model is sufficient. The parametrization used is a linearized discretization according to Gustafsson (2001)

x_{k+T} = x_k + (2 v_k / ω_k) sin(ω_k T / 2) cos(h_k + ω_k T / 2)   (3a)
y_{k+T} = y_k + (2 v_k / ω_k) sin(ω_k T / 2) sin(h_k + ω_k T / 2)   (3b)
v_{k+T} = v_k   (3c)
h_{k+T} = h_k + ω_k T   (3d)
ω_{k+T} = ω_k   (3e)
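As a concrete illustration, the propagation step (3) can be written as a short function. This is our own sketch, not code from the paper; the function name and the straight-line fallback for ω_k ≈ 0 (the limit of (3a)-(3b)) are our additions.

```python
import numpy as np

def ct_propagate(state, T):
    """Propagate state = [x, y, v, h, w] one step of length T
    with the coordinated turn model (3)."""
    x, y, v, h, w = state
    if abs(w) > 1e-9:
        x += (2 * v / w) * np.sin(w * T / 2) * np.cos(h + w * T / 2)
        y += (2 * v / w) * np.sin(w * T / 2) * np.sin(h + w * T / 2)
    else:
        # limit of (3a)-(3b) as w -> 0: straight-line motion
        x += v * T * np.cos(h)
        y += v * T * np.sin(h)
    return np.array([x, y, v, h + w * T, w])
```

Note that this is the exact discretization of a constant-speed, constant-turn-rate motion, using the identity sin(a+b) − sin(a) = 2 sin(b/2) cos(a + b/2).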

where T is the sampling interval and (x, y), v, h, ω denote position, speed, heading and angular rate, respectively. Furthermore, it is assumed that the sensors are static and do not move after some time of deployment, hence a process model for the sensors is

s_{x_j,k+T} = s_{x_j,k}   (4a)
s_{y_j,k+T} = s_{y_j,k},   j = 1, . . . , M   (4b)

where M is the number of sensors and s_{x_j} and s_{y_j} are sensor j's x and y positions, respectively.

Measurement Model

Each sensor contains a pressure sensor whose reading is used as an input, d_{j,k}, for the z-component. The sensor also contains accelerometers which are used to determine the direction of the earth's gravitational field. The magnetometers in the sensor can be used to compute the direction of the earth's magnetic inclination if the environment is free from magnetic disturbances such as ships. In most cases the magnetic inclination vector will not be parallel to the gravitational vector (except at the magnetic north and south poles) and the sensor orientation may be readily measured. The sensor orientation is modeled as a static input C_j.

In this paper we only consider the ferromagnetic signature due to the iron in the vessel construction. The ferromagnetic signature stems from the large pieces of metal used to construct a vessel. Each piece has its own magnetic dipole and the sum of these dipoles can roughly be simplified into a single dipole. The magnetic flux density of a dipole diminishes cubically with the distance to the dipole. With vector magnetometers, the dipole orientation can be estimated. Triaxial measurements of the magnetic flux density from a dipole can be modeled as

h(x_k, u_k) = μ_0 / (4π |r_{j,k}|^5) · (3⟨r_{j,k}, m(h_k)⟩ r_{j,k} − |r_{j,k}|^2 m(h_k))   (5)

where m(h_k) = [m_x cos(h_k), m_y sin(h_k), m_z]^T is the magnetic dipole of the vessel and μ_0 is the permeability of the medium. r_{j,k} = C_j [x_k − s_{x_j,k}, y_k − s_{y_j,k}, 0 − d_{j,k}]^T is the vector from each sensor to the vessel, where C_j is the static orientation of sensor j in the global coordinate frame and d_{j,k} is the measured depth of the sensor. Note that d_{j,k} and C_j should be seen as inputs, u_k = {d_{j,k}, C_j}_{j=1}^{M}, since these are measured variables but not part of the state vector. The dipole model without coordinate transformations can be found in e.g. Cheng (1989). In the proximity of the vessel, a possibly better model would be a multiple dipole model Lindin (2007) where the measurement is the sum of several dipoles, but this is out of the scope of this paper. A single dipole is a reasonable approximation if the measurements are made at a large distance compared to the vessel size Nelson and Richards (2007).
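The dipole measurement model (5) is straightforward to evaluate numerically. The sketch below is our own illustration (function names are ours); it separates the heading-dependent dipole m(h) from the field evaluation.

```python
import numpy as np

MU0 = 4 * np.pi * 1e-7  # permeability of free space [Tm/A]

def vessel_dipole(h, m_body):
    """Heading-dependent dipole m(h) = [mx*cos(h), my*sin(h), mz]."""
    mx, my, mz = m_body
    return np.array([mx * np.cos(h), my * np.sin(h), mz])

def dipole_flux(r, m, mu=MU0):
    """Magnetic flux density of a point dipole m at offset r,
    model (5): mu/(4*pi*|r|^5) * (3<r, m> r - |r|^2 m)."""
    r = np.asarray(r, dtype=float)
    m = np.asarray(m, dtype=float)
    rn = np.linalg.norm(r)
    return mu / (4 * np.pi * rn**5) * (3 * np.dot(r, m) * r - rn**2 * m)
```

Doubling the distance to the dipole reduces the field by a factor of eight, the cubic decay mentioned above.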

The magnetic dipole used throughout the simulations was m = [50, −5, 125]^T kAm² (same as in Birsan (2005)). Fig. 1 shows the measured magnetic flux density at sensor 3 in Fig. 2 from a vessel where the dipole has been slightly rotated around the z-axis between each simulation. The upper left panel in Fig. 1 was acquired using the magnetic dipole discussed earlier. Clearly, the direction of the dipole affects the measured magnetic field. This indicates the importance of using an accurate dipole estimate.

2.2 State Estimation

Our approach to the state estimation problem is to use an Extended Kalman Filter (ekf) in the formulation of ekf-slam, for details see e.g. Durrant-Whyte and Bailey (2006). There are some characteristics of this system which do not usually exist in the common slam problem:

• The landmarks (sensors) are naturally associated with the measurements, i.e. data association is solved.

• The sensors' global orientations are known, which in turn makes it possible to estimate the orientation of the trajectory.

• The planar motion assumption and the pressure sensor make it possible to transform sensor positioning into 2D.

A well-known problem with slam is the ever expanding state space that comes with the addition of new landmarks, which will eventually make it intractable to compute a solution. In a sensor network, the number of sensors (landmarks) will normally be known.

Figure 1: Measured magnetic flux density at sensor 3 in Fig. 2 for vessels with slightly rotated dipoles. (Four panels: dipole rotated 0.0, 0.3, 0.6 and 0.9 rad around the z-axis, each showing the components h_x, h_y and h_z of the magnetic flux density [T] over time [s].)

Due to the duality of the estimation problem implied in slam, i.e. estimating a map and simultaneously localizing the vehicle in the map, the question of state observability needs to be answered. Previous observability analyses of the slam problem Kim and Sukkarieh (2004); Bryson and Sukkarieh (2008); Andrade-Cetto and Sanfeliu (2005); Lee et al. (2006); Wang and Dissanayake (2008); Perera et al. (2009) have focused on vehicle fixed range and/or bearing sensors, such as laser and camera. Perera et al. (2009); Andrade-Cetto and Sanfeliu (2005) conclude that only one known landmark needs to be observed in 2D slam for the global map to be locally weakly observable. In our proposed system the sensors are in the actual landmark positions and their measurements are informative in both range and bearing to the dipole, hence the global map is observable if one sensor position is known. Theoretically this means that the sensor positions and the trajectory can be estimated in a global coordinate frame with a global map position error depending only on the error of the known sensor. If no global position of either sensor or vessel is available, the sensors can be positioned locally.

Even if the system is observable there are no guarantees that an ekf will converge, since convergence depends on the linearization error and the initial linearization point. More recent approaches to the slam problem Kaess et al. (2007); Dellaert (2005) consider smoothing instead of filtering. These methods can handle linearization errors better since the whole trajectory and map can be re-linearized. Yet, a good initial linearization point is necessary.
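One ekf-slam iteration for the stacked state of models (1)-(2) can be sketched as below. This is a generic EKF predict/update step, our own illustration rather than the paper's implementation; the function and Jacobian arguments (f, F, h, H) are assumed interfaces.

```python
import numpy as np

def ekf_slam_step(x, P, y, f, F, h, H, Q, R):
    """One EKF predict/update for the stacked state x = [vessel; sensors].
    f, h are the process and measurement functions of (1)-(2); F, H
    return their Jacobians evaluated at the supplied estimate."""
    # time update; the sensor block of f is the identity (static sensors)
    Fk = F(x)
    x = f(x)
    P = Fk @ P @ Fk.T + Q
    # measurement update with the stacked magnetometer readings y
    Hk = H(x)
    S = Hk @ P @ Hk.T + R
    K = P @ Hk.T @ np.linalg.inv(S)
    x = x + K @ (y - h(x))
    P = (np.eye(len(x)) - K @ Hk) @ P
    return x, P
```

Because the sensor states are static, only the vessel block of F differs from the identity, and the cross covariances between vessel and sensors carry the coupling discussed above.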

2.3 Cramer-Rao Lower Bound

Given the trajectory of a vessel, it is interesting to study a lower bound on the covariance of the estimated sensor positions. We have chosen to study the Cramer-Rao lower bound (crlb) due to its simplicity. The crlb is the inverse of the Fisher Information Matrix (fim), I(x), which in the case of Gaussian measurement errors can be calculated as

I(x) = H(x)^T R(x)^{-1} H(x),   (7a)
H(x) = ∇_x h(x)   (7b)

where R(x) is the covariance of the Gaussian measurement noise and H(x) denotes the gradient of h(x) w.r.t. x. The crlb of a sensor position is given by

Cov(s) = E{(s_0 − s)(s_0 − s)^T}   (8a)
        ≥ I(s)^{-1}   (8b)

where s_0 is the true sensor position and s is the corresponding estimate. Since information is additive, the fim of a sensor location for a certain trajectory can be calculated as the sum of the fims from all vessel positions along the trajectory. The lower bound on the covariance of the sensor position estimate is then the inverse of the sum of the fims. A more extensive study of the fundamentals of the crlb can be found in Kay (1993).
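The fim accumulation in (7)-(8) can be sketched as follows. This is our own illustration, with a numerical (central-difference) Jacobian in place of an analytic gradient; the function name and interface are assumptions.

```python
import numpy as np

def crlb_at(sensor_pos, trajectory, meas, R, eps=1e-6):
    """Lower bound (8) on the covariance of a 2D sensor position:
    sum the per-position FIMs (7) along the trajectory and invert.
    meas(s, p) returns the measurement vector at sensor position s
    for vessel position p; H is a central-difference Jacobian w.r.t. s."""
    n_y = len(meas(sensor_pos, trajectory[0]))
    Rinv = np.linalg.inv(R)
    fim = np.zeros((2, 2))
    for p in trajectory:
        H = np.zeros((n_y, 2))
        for j in range(2):
            d = np.zeros(2)
            d[j] = eps
            H[:, j] = (meas(sensor_pos + d, p) - meas(sensor_pos - d, p)) / (2 * eps)
        fim += H.T @ Rinv @ H  # information is additive
    return np.linalg.inv(fim)
```

With a toy range measurement observed from two orthogonal directions the bound is well conditioned, while a single viewing direction leaves the summed fim singular; this is exactly the effect behind the trajectory comparisons in Section 3.3.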


3 Simulation Results

The sensor positioning problem can, depending on which sensors are available, be solved in different ways. If no accurate global position of the vessel or a sensor is available during the experiment (gps is for example easily jammed), the sensors can only be positioned locally. In Section 3.1, magnetometers are used to localize the sensors. If a global vessel position is available throughout the experiment, from gnss or using a radar sensor and a sea chart, it can be used as a measurement of the position of the vessel. This will not only position the sensors globally but also enable a more accurate trajectory estimation. This experimental configuration is simulated in Section 3.2. The parameters used in the simulations are listed in Table 1.

Param.     Covariance (slam/gnss)       Param.     Value
x_0        10/10 m                      m          [50, −5, 125] kAm²
y_0        10/10 m                      μ_0        4π·10^−7 Tm/A
v_0        0/0 m/s                      d_{j,0}    {−5, −15} m
h_0        1/1 rad                      T          0.1 s
ω_0        0/0 rad/s
s_{x_j}    400/400 m
e_GNSS     1 m
e_h        10^−16 T

Table 1: The parameters used in the simulations.

3.1 Magnetometers Only

If there is no reliable global position measurement of the vessel, the trajectory of the vessel must be estimated using the same magnetic fluctuations as are being used to localize the sensors. Simulations show that the sensor network needs to be more dense when no gnss is available. If there is little or no overlap in which two or more sensors observe the vessel simultaneously, the trajectory estimate, and in the end the sensor position estimates, depend more on the vessel model than on observations. Yet, the sensor positions are still coupled through the covariance matrix.

A sensor localization simulation using 7 sensors and a generated trajectory is shown in Fig. 2 and Fig. 3. Fig. 4 shows the Root Mean Square Error (rmse) of each sensor as it develops over time. Since the initial guesses of the sensor positions were generated independently, different sensors have different initial errors. All sensors, though, have the same initial uncertainty covariance (400 m, see Table 1). The initial guesses are meant to represent the prior information of the true sensor positions, acquired during sensor deployment. The limited range of the magnetic fluctuations causes the sensor position estimate to change only when the vessel is sufficiently close. This can be studied in Fig. 4. Sensor 4 in Fig. 2 is too far away from the vessel for accurate positioning, resulting in a large uncertainty ellipse. From Fig. 2, it is clear that errors in the trajectory estimate result in errors in the estimated sensor positions.

200 Monte Carlo simulations using different trajectories and sensor locations show that this configuration results in a positioning error of 26.3% on average. A sensor failing to retain the true sensor position within two standard deviations was considered incorrectly positioned. In Fig. 2, sensor 7 is incorrectly positioned.
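The two-standard-deviation criterion can be made concrete as a Mahalanobis-distance test against the estimated position covariance. This is one plausible reading of the criterion, our own sketch rather than code from the paper.

```python
import numpy as np

def incorrectly_positioned(s_true, s_est, P):
    """A sensor counts as incorrectly positioned if the true position
    lies outside the 2-sigma ellipse of its estimate, i.e. the
    Mahalanobis distance of the error under covariance P exceeds 2."""
    e = np.asarray(s_true, dtype=float) - np.asarray(s_est, dtype=float)
    d = np.sqrt(e @ np.linalg.inv(P) @ e)
    return d > 2.0
```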

Figure 2: Estimated sensor positions with 2σ uncertainty and vessel trajectory, for simulations using magnetometers. (Axes: west-east [m] vs. south-north [m]; legend: true trajectory, estimated trajectory, estimated and true sensor positions.)

3.2 Magnetometers and GNSS

If global position measurements of the vessel are available throughout the trajectory, these measurements are used to improve the trajectory estimate. Each sensor is positioned relative to the trajectory of the vessel and is therefore less dependent on the other sensor positions than in Section 3.1. This is quite natural since the cross correlations will not have such a great impact on the sensor position estimates when the trajectory is known. Simulation results using the same sensor positions and trajectory used in Section 3.1 are shown in Fig. 5. Fig. 6 shows the rmse of each sensor throughout the simulation. The global trajectory measurements result in more accurate sensor position estimates and lower uncertainties than using only magnetometers. Sensor 4 is far away from the trajectory, resulting in a very uncertain position estimate.

200 Monte Carlo simulations using different trajectories and sensor locations show that using magnetometers and gnss results in a sensor positioning error of 12.9% on average.

Figure 3: Estimated vessel trajectory with 2σ uncertainty and sensor positions, for simulations using magnetometers.

Figure 4: Root Mean Square Error of estimated sensor positions throughout the simulation using only magnetometers.

Figure 5: Estimated vessel trajectory and sensor positions with 2σ uncertainty. gnss and magnetometers are used as sensors.

3.3 Trajectory Evaluation using CRLB

The crlb for sensor positions surrounding a couple of trajectories was calculated for the case of gnss and magnetometers. A high crlb indicates that after a simulation, a sensor in that position would still have a high uncertainty. Fig. 7 shows the trajectory used in Sections 3.1 and 3.2. Fig. 8 and Fig. 9 show two other trajectories. It is clear that the crlb becomes low in an area where the vessel can be observed from many directions. In Fig. 8, sensor positions quite close to the end of the trajectory have a high crlb since they only observe the vessel from one direction. In Fig. 9, sensor positions between the start and end point of the trajectory are relatively difficult to estimate since they only observe the vessel from two opposite directions. The simulations suggest that in field experiments the vessel should be maneuvered in a trajectory that allows each sensor to observe the vessel from as many directions as possible.

3.4 Sensitivity Analysis, Magnetic Dipole

The magnetic dipole of the vessel will probably not be accurately measured in a real world experiment. How will the positioning perform if the estimated magnitude of the dipole is for example 102% or 110% of the true magnitude?

Figure 6: rmse of estimated sensor positions throughout the simulation. gnss and magnetometers are used as sensors.

The trajectory previously used has been simulated using an assumed dipole that differs from the true one. A dipole with a magnitude of 98% of the true one is generated and the error is divided over the three components of the dipole. Each dipole error is simulated multiple times using the same trajectory, and each time the error is distributed among the dipole components differently. Again, a sensor failing to retain the true sensor position within two standard deviations is considered incorrectly positioned. Table 2 shows the percentage of incorrectly positioned sensors for different magnitude errors and different simulation settings.

Dipole   80%     90%     95%     98%     99%     100%
slam     44.6%   25.7%   23.4%   23.4%   18.9%   14.3%
gnss     38.3%   9.7%    3.4%    2.9%    0.0%    0.0%

Dipole   101%    102%    105%    110%    120%
slam     14.3%   14.3%   16.6%   34.3%   53.1%
gnss     4.0%    4.6%    8.6%    12.0%   38.3%

Table 2: Sensitivity analysis of error in dipole estimate.

3.5 Sensitivity Analysis, Sensor Orientation

The sensor orientation was assumed measured in the previous experiments since it can be estimated prior to the experiment. We will now study how sensitive the system is to errors in the orientation estimate. The positioning performance when sensor orientation errors are present is evaluated using 25 Monte Carlo simulations for each orientation error, using different trajectories. For each simulation, random orientation errors with the stated covariance are generated. (A covariance of 0.16 rad results in orientation errors up to 0.8 rad or 45°.) Table 3 shows the percentage of incorrectly positioned sensors for different sensor orientation error covariances.

Figure 7: crlb for all sensor positions surrounding the trajectory (in red). Trajectory 1.

Note that the sensor positioning error of a system using gnss and magnetometers was nearly unaffected by the introduction of an orientation covariance of up to 0.04 rad. If the sensor observes the vessel from many different directions, the positioning still succeeds. When only magnetometers are used, there are no trajectory measurements to compensate for the errors in orientation, rendering larger positioning errors.

Ori. cov.   0.0 rad   0.01 rad   0.04 rad   0.16 rad   0.36 rad
slam        26.3%     29.8%      36.9%      54.8%      52.4%
gnss        12.9%     12.5%      11.9%      18.5%      26.8%

Table 3: Sensitivity analysis of error in estimated sensor orientation.


Figure 8: crlb for all sensor positions surrounding the trajectory (in red). Trajectory 2.

4 Conclusions

We have presented a silent underwater sensor localization scheme using triaxial magnetometers and a friendly vessel with known magnetic characteristics. More accurate sensor positions will enhance the detection, tracking and classification ability of the underwater sensor network. Monte Carlo simulations indicate that a sensor positioning error of 26.3% is achievable when only magnetometers are used and of 12.9% when gnss and magnetometers are used. Knowing the magnetic dipole of the vessel is important: a dipole magnitude error of 10% results in a positioning error increase of about 10%. Simulations also show that our positioning scheme is quite insensitive to minor errors in sensor orientation when gnss is used throughout the trajectory.

Acknowledgments

This work was supported by the Strategic Research Center moviii, funded by the Swedish Foundation for Strategic Research, ssf; cadics, a Linnaeus center funded by the Swedish Research Council; and link-sic, an Industry Excellence Center funded by Vinnova.



[Figure omitted: surface plot of the crlb over west–east [m] and south–north [m].]

Figure 9: crlb for all sensor positions surrounding the trajectory (in red). Trajectory 3.



Bibliography

I. F. Akyildiz, D. Pompili, and T. Melodia. Underwater acoustic sensor networks: research challenges. Elsevier Journal of Ad Hoc Networks, 3(3), 2005.

J. Andrade-Cetto and A. Sanfeliu. The Effects of Partial Observability When Building Fully Correlated Maps. Robotics, IEEE Transactions on, 21(4):771–777, Aug. 2005. ISSN 1552-3098. doi: 10.1109/TRO.2004.842342.

T. Bailey and H. Durrant-Whyte. Simultaneous localization and mapping (SLAM): part II. Robotics & Automation Magazine, IEEE, 13(3):108–117, 2006.

M. Birsan. Non-linear Kalman filters for tracking a magnetic dipole. In Proc. of Intl. Conf. on Maritime Electromagnetics, MARELEC, 2004.

M. Birsan. Unscented particle filter for tracking a magnetic dipole target. In Proc. of MTS/IEEE OCEANS, 2005.

M. Birsan. Electromagnetic source localization in shallow waters using Bayesian matched-field inversion. Inverse Problems, 22(1):43–53, 2006.

M. Bryson and S. Sukkarieh. Observability analysis and active control for airborne SLAM. Aerospace and Electronic Systems, IEEE Transactions on, 44(1):261–280, January 2008. ISSN 0018-9251. doi: 10.1109/TAES.2008.4517003.

J. Callmer, M. Skoglund, and F. Gustafsson. Silent localization of underwater sensors using magnetometers. EURASIP Journal on Advances in Signal Processing, 2010.

V. Chandrasekhar and W.K.G. Seah. Area Localization Scheme for Underwater Sensor Networks. In Proc. of the IEEE OCEANS Asia Pacific Conf., 2006.

V. Chandrasekhar, W.K.G. Seah, Y. Sang Choo, and H.V. Ee. Localization in underwater sensor networks: survey and challenges. In ACM Intl. Workshop on UnderWater Networks, WUWNet, 2006.

D.K. Cheng. Field and Wave Electromagnetics. Addison-Wesley, Reading, Massachusetts, 2nd edition, 1989.

W. Cheng, A.Y. Teymorian, L. Ma, X. Cheng, X. Lu, and Z. Lu. Underwater Localization in Sparse 3D Acoustic Sensor Networks. In Proc. of IEEE Conf. on Computer Communications, INFOCOM, 2008.

X. Cheng, H. Shu, Q. Liang, and D.H.C. Du. Silent Positioning in Underwater Acoustic Sensor Networks. Transactions on Vehicular Technology, IEEE, 57(3), May 2008.

E. Dalberg, A. Lauberts, R.K. Lennartsson, M.J. Levonen, and L. Persson. Underwater target tracking by means of acoustic and electromagnetic data fusion. In Proc. of Intl. Conf. on Information Fusion, 2006.



Z.A. Daya, D.L. Hutt, and T.C. Richards. Maritime electromagnetism and DRDC Signature Management research. Technical report, Defence R&D Canada, 2005.

F. Dellaert. Square Root SAM. In Robotics: Science and Systems, pages 177–184, 2005.

H. Durrant-Whyte and T. Bailey. Simultaneous localization and mapping (SLAM): part I. Robotics & Automation Magazine, IEEE, 13(2):99–110, 2006.

M. Erol, L.F.M. Vieira, and M. Gerla. AUV-Aided Localization for Underwater Sensor Networks. In Proc. of Intl. Conf. on Wireless Algorithms, Systems and Applications, WASA, 2007.

M. Erol, L.F.M. Vieira, A. Caruso, F. Paparella, M. Gerla, and S. Oktug. Multi Stage Underwater Sensor Localization using Mobile Beacons. In Proc. of Intl. Conf. on Sensor Technologies and Applications, SENSORCOMM, 2008.

T. I. Fossen and T. Perez. Marine Systems Simulator (MSS), 2004. www.marinecontrol.org.

F. Gustafsson. Adaptive Filtering and Change Detection. John Wiley & Sons, Hoboken, New Jersey, 2nd edition, 2001.

M. Kaess, A. Ranganathan, and F. Dellaert. iSAM: Fast Incremental Smoothing and Mapping with Efficient Data Association. In Robotics and Automation, 2007 IEEE International Conference on, pages 1670–1677, April 2007. doi: 10.1109/ROBOT.2007.363563.

S. M. Kay. Fundamentals of Statistical Signal Processing - Estimation Theory. Prentice Hall, 1993.

J. Kim and S. Sukkarieh. Improving the real-time efficiency of inertial SLAM and understanding its observability. Intelligent Robots and Systems, 2004 (IROS 2004). 2004 IEEE/RSJ International Conference on, 1:21–26, Oct. 2004. doi: 10.1109/IROS.2004.1389323.

K. W. Lee, W.S. Wijesoma, and I.G. Javier. On the Observability and Observability Analysis of SLAM. Intelligent Robots and Systems, 2006 IEEE/RSJ International Conference on, pages 3569–3574, Oct. 2006. doi: 10.1109/IROS.2006.281646.

A. Lindin. Analysis and modelling of magnetic mine sweep for naval purposes. Master's thesis, Linköping University, The Department of Physics, Chemistry and Biology, 2007. [In Swedish].

A. Lundin. Underwater electric signatures. Are they important for a future navy? Master's thesis, Swedish National Defence College, 2003. [In Swedish].

J.B. Nelson and T.C. Richards. Magnetic source parameters of MR OFFSHORE measured during trial MONGOOSE 07. Technical report, Defence R&D - Atlantic, Dartmouth NS (CAN), 2007.



L.D.L. Perera, A. Melkumyan, and E. Nettleton. On the linear and nonlinear observability analysis of the SLAM problem. Mechatronics, 2009. ICM 2009. IEEE International Conference on, pages 1–6, April 2009. doi: 10.1109/ICMECH.2009.4957168.

C. Tian, W. Liu, J. Jin, Y. Wang, and Y. Mo. Localization and synchronization for 3D underwater acoustic sensor network. In Intl. Conf. on Ubiquitous Intelligence and Computing, UIC, 2007.

Z. Wang and G. Dissanayake. Observability analysis of SLAM using Fisher information matrix. Control, Automation, Robotics and Vision, 2008. ICARCV 2008. 10th International Conference on, pages 1242–1247, Dec. 2008. doi: 10.1109/ICARCV.2008.4795699.

S.Y. Wong, J.G. Lim, S.V. Rao, and W.K.G. Seah. Multihop Localization with Density and Path Length Awareness in Non-Uniform Wireless Sensor Networks. In Proc. of IEEE Vehicular Technology Conf., 2005.

Z. Zhou, J.-H. Cui, and S. Zhou. Localization for large-scale underwater sensor network. In Proc. of IFIP Networking, 2007.


Licentiate Theses
Division of Automatic Control

Linköping University

P. Andersson: Adaptive Forgetting through Multiple Models and Adaptive Control of Car Dynamics. Thesis No. 15, 1983.
B. Wahlberg: On Model Simplification in System Identification. Thesis No. 47, 1985.
A. Isaksson: Identification of Time Varying Systems and Applications of System Identification to Signal Processing. Thesis No. 75, 1986.
G. Malmberg: A Study of Adaptive Control Missiles. Thesis No. 76, 1986.
S. Gunnarsson: On the Mean Square Error of Transfer Function Estimates with Applications to Control. Thesis No. 90, 1986.
M. Viberg: On the Adaptive Array Problem. Thesis No. 117, 1987.
K. Ståhl: On the Frequency Domain Analysis of Nonlinear Systems. Thesis No. 137, 1988.
A. Skeppstedt: Construction of Composite Models from Large Data-Sets. Thesis No. 149, 1988.
P. A. J. Nagy: MaMiS: A Programming Environment for Numeric/Symbolic Data Processing. Thesis No. 153, 1988.
K. Forsman: Applications of Constructive Algebra to Control Problems. Thesis No. 231, 1990.
I. Klein: Planning for a Class of Sequential Control Problems. Thesis No. 234, 1990.
F. Gustafsson: Optimal Segmentation of Linear Regression Parameters. Thesis No. 246, 1990.
H. Hjalmarsson: On Estimation of Model Quality in System Identification. Thesis No. 251, 1990.
S. Andersson: Sensor Array Processing; Application to Mobile Communication Systems and Dimension Reduction. Thesis No. 255, 1990.
K. Wang Chen: Observability and Invertibility of Nonlinear Systems: A Differential Algebraic Approach. Thesis No. 282, 1991.
J. Sjöberg: Regularization Issues in Neural Network Models of Dynamical Systems. Thesis No. 366, 1993.
P. Pucar: Segmentation of Laser Range Radar Images Using Hidden Markov Field Models. Thesis No. 403, 1993.
H. Fortell: Volterra and Algebraic Approaches to the Zero Dynamics. Thesis No. 438, 1994.
T. McKelvey: On State-Space Models in System Identification. Thesis No. 447, 1994.
T. Andersson: Concepts and Algorithms for Non-Linear System Identifiability. Thesis No. 448, 1994.
P. Lindskog: Algorithms and Tools for System Identification Using Prior Knowledge. Thesis No. 456, 1994.
J. Plantin: Algebraic Methods for Verification and Control of Discrete Event Dynamic Systems. Thesis No. 501, 1995.
J. Gunnarsson: On Modeling of Discrete Event Dynamic Systems, Using Symbolic Algebraic Methods. Thesis No. 502, 1995.
A. Ericsson: Fast Power Control to Counteract Rayleigh Fading in Cellular Radio Systems. Thesis No. 527, 1995.
M. Jirstrand: Algebraic Methods for Modeling and Design in Control. Thesis No. 540, 1996.
K. Edström: Simulation of Mode Switching Systems Using Switched Bond Graphs. Thesis No. 586, 1996.


J. Palmqvist: On Integrity Monitoring of Integrated Navigation Systems. Thesis No. 600, 1997.
A. Stenman: Just-in-Time Models with Applications to Dynamical Systems. Thesis No. 601, 1997.
M. Andersson: Experimental Design and Updating of Finite Element Models. Thesis No. 611, 1997.
U. Forssell: Properties and Usage of Closed-Loop Identification Methods. Thesis No. 641, 1997.
M. Larsson: On Modeling and Diagnosis of Discrete Event Dynamic Systems. Thesis No. 648, 1997.
N. Bergman: Bayesian Inference in Terrain Navigation. Thesis No. 649, 1997.
V. Einarsson: On Verification of Switched Systems Using Abstractions. Thesis No. 705, 1998.
J. Blom, F. Gunnarsson: Power Control in Cellular Radio Systems. Thesis No. 706, 1998.
P. Spångéus: Hybrid Control using LP and LMI methods – Some Applications. Thesis No. 724, 1998.
M. Norrlöf: On Analysis and Implementation of Iterative Learning Control. Thesis No. 727, 1998.
A. Hagenblad: Aspects of the Identification of Wiener Models. Thesis No. 793, 1999.
F. Tjärnström: Quality Estimation of Approximate Models. Thesis No. 810, 2000.
C. Carlsson: Vehicle Size and Orientation Estimation Using Geometric Fitting. Thesis No. 840, 2000.
J. Löfberg: Linear Model Predictive Control: Stability and Robustness. Thesis No. 866, 2001.
O. Härkegård: Flight Control Design Using Backstepping. Thesis No. 875, 2001.
J. Elbornsson: Equalization of Distortion in A/D Converters. Thesis No. 883, 2001.
J. Roll: Robust Verification and Identification of Piecewise Affine Systems. Thesis No. 899, 2001.
I. Lind: Regressor Selection in System Identification using ANOVA. Thesis No. 921, 2001.
R. Karlsson: Simulation Based Methods for Target Tracking. Thesis No. 930, 2002.
P.-J. Nordlund: Sequential Monte Carlo Filters and Integrated Navigation. Thesis No. 945, 2002.
M. Östring: Identification, Diagnosis, and Control of a Flexible Robot Arm. Thesis No. 948, 2002.
C. Olsson: Active Engine Vibration Isolation using Feedback Control. Thesis No. 968, 2002.
J. Jansson: Tracking and Decision Making for Automotive Collision Avoidance. Thesis No. 965, 2002.
N. Persson: Event Based Sampling with Application to Spectral Estimation. Thesis No. 981, 2002.
D. Lindgren: Subspace Selection Techniques for Classification Problems. Thesis No. 995, 2002.
E. Geijer Lundin: Uplink Load in CDMA Cellular Systems. Thesis No. 1045, 2003.
M. Enqvist: Some Results on Linear Models of Nonlinear Systems. Thesis No. 1046, 2003.
T. Schön: On Computational Methods for Nonlinear Estimation. Thesis No. 1047, 2003.
F. Gunnarsson: On Modeling and Control of Network Queue Dynamics. Thesis No. 1048, 2003.
S. Björklund: A Survey and Comparison of Time-Delay Estimation Methods in Linear Systems. Thesis No. 1061, 2003.


M. Gerdin: Parameter Estimation in Linear Descriptor Systems. Thesis No. 1085, 2004.
A. Eidehall: An Automotive Lane Guidance System. Thesis No. 1122, 2004.
E. Wernholt: On Multivariable and Nonlinear Identification of Industrial Robots. Thesis No. 1131, 2004.
J. Gillberg: Methods for Frequency Domain Estimation of Continuous-Time Models. Thesis No. 1133, 2004.
G. Hendeby: Fundamental Estimation and Detection Limits in Linear Non-Gaussian Systems. Thesis No. 1199, 2005.
D. Axehill: Applications of Integer Quadratic Programming in Control and Communication. Thesis No. 1218, 2005.
J. Sjöberg: Some Results On Optimal Control for Nonlinear Descriptor Systems. Thesis No. 1227, 2006.
D. Törnqvist: Statistical Fault Detection with Applications to IMU Disturbances. Thesis No. 1258, 2006.
H. Tidefelt: Structural algorithms and perturbations in differential-algebraic equations. Thesis No. 1318, 2007.
S. Moberg: On Modeling and Control of Flexible Manipulators. Thesis No. 1336, 2007.
J. Wallén: On Kinematic Modelling and Iterative Learning Control of Industrial Robots. Thesis No. 1343, 2008.
J. Harju Johansson: A Structure Utilizing Inexact Primal-Dual Interior-Point Method for Analysis of Linear Differential Inclusions. Thesis No. 1367, 2008.
J. D. Hol: Pose Estimation and Calibration Algorithms for Vision and Inertial Sensors. Thesis No. 1370, 2008.
H. Ohlsson: Regression on Manifolds with Implications for System Identification. Thesis No. 1382, 2008.
D. Ankelhed: On low order controller synthesis using rational constraints. Thesis No. 1398, 2009.
P. Skoglar: Planning Methods for Aerial Exploration and Ground Target Tracking. Thesis No. 1420, 2009.
C. Lundquist: Automotive Sensor Fusion for Situation Awareness. Thesis No. 1422, 2009.
C. Lyzell: Initialization Methods for System Identification. Thesis No. 1426, 2009.
R. Falkeborn: Structure exploitation in semidefinite programming for control. Thesis No. 1430, 2010.
D. Petersson: Nonlinear Optimization Approaches to H2-Norm Based LPV Modelling and Control. Thesis No. 1453, 2010.
Z. Sjanic: Navigation and SAR Auto-focusing in a Sensor Fusion Framework. Thesis No. 1464, 2011.
K. Granström: Loop detection and extended target tracking using laser data. Thesis No. 1465, 2011.