Simulated SAR with GIS Data
and
Pose Estimation using Affine Projection
Martin Divak
Space Engineering, master's level
2017
Luleå University of Technology
Department of Computer Science, Electrical and Space Engineering
Simulated SAR with GIS Data
and
Pose Estimation using Affine Projection
Author Martin Divak
Thesis supervisor Zoran Sjanic
Examiner George Nikolakopoulos
Co-supervision Christoforos Kanellakis
The work presented in this thesis was conducted at Saab Aeronautics in the Sensor Fusion and Tactical
Control section. The subjects described in this thesis are of interest due to the section's development of
Decision Support Systems for aircraft applications.
Abstract
Pilots and autonomous aircraft need to know where they are in relation to the environment. On board
aircraft there are inertial sensors that are prone to drift, which must be corrected by referencing against a
known object, place, or signal. Satellite data is not always reliable, due to natural degradation or intentional
jamming, so aircraft depend on visual sensors for navigation. Synthetic aperture radar, SAR, is an
interesting candidate as a navigation sensor. SAR is a collection of methods used to generate high-resolution
radar images by using movement to increase the apparent antenna size, or aperture. Radar sensors are not
dependent on daylight, unlike optical sensors. Infrared sensors can see in the dark but are affected by weather
conditions. Radar sensors are active sensors, transmitting pulses and measuring echoes, in the microwave
part of the electromagnetic spectrum, which does not interact strongly with meteorological phenomena.
To use radar images in qualitative and quantitative analysis they must be registered with geographic
information. Position data for an aircraft is not sufficient to determine with certainty what or where one is
looking in a radar image without referencing other images over the same area. Laying one image on top
of another and transforming it so that the image content matches in position is called registration.
One way of georeferencing is to simulate a SAR image and register a real image, from the same view, using
corresponding reference points in both images. The present work demonstrates that a terrain model can be
split up and classified into different types of radar scatterers. Different parts of the terrain yielding different
types of echoes increases the amount of radar-specific characteristics in simulated reference images. Even a
terrain that is relatively flat, having no geometric features, may still be used to create simulated radar images
for image matching.
Computer vision with other types of sensors has a long history compared to radar-based systems, and
corresponding methods in radar have not had the same impact. Among the systems with substantial
underlying development are stereoscopic methods, where several images are taken of the same
area from different views, meaning angles and positions, so that image depth can be extracted from the
stereo images. Stereoscopic methods in radar image analysis have mainly been used to reconstruct objects
or environments seen from known parallel flight and orbital trajectories. The reverse problem, estimating
position and attitude given a known terrain, is not solved. This work presents an interpretation of the imaging
geometry of SAR such that existing methods in computer vision may be used to estimate the position from
which a radar image has been taken. This is direct image matching, without the registration step that is
required by other proposals for SAR-based navigation systems. By determining position continuously
from radar images, aircraft could navigate independently of daylight, weather, and satellite data.
Page i
Sammanfattning
Pilots or autonomous aircraft need to know where they are in relation to their surroundings. On board
aircraft there are inertial sensors that are affected by drift, which must be corrected by referencing against
a known object, place, or signal. Satellite data is not always reliable, due to natural degradation or
intentional jamming, so an aircraft depends on visual sensors for navigation. Synthetic aperture radar,
SAR, is an interesting candidate as a navigation sensor. SAR is a collection of methods used to generate
high-resolution radar images by using motion to increase the apparent antenna size, or aperture. Radar
sensors do not depend on daylight, as optical sensors do. Infrared sensors can see in the dark but are
affected by weather conditions that can block infrared radiation. Radar sensors are active sensors,
transmitting pulses and measuring echoes, in the microwave part of the electromagnetic spectrum, which
does not interact particularly strongly with meteorological effects.
To use radar images for quantitative as well as qualitative analysis, they must be registered with
geographic information. Position data for an aircraft is not sufficient to determine with certainty what or
where one is looking in a radar image without referencing other images over the same area. Laying one
image on top of another and transforming them so that the positions of the image content match is called
registration. One way of doing this is to simulate what a radar image looks like, given that the terrain is
known, from the same view, in order to relate image coordinates to world coordinates. This work
demonstrates that a terrain model can be divided up and classified into different types of radar scatterers.
Different parts of the terrain yielding different echoes increases the amount of radar-specific characteristics
in simulated reference images. Even a terrain that is relatively flat, and thus has no radar-specific
geometric characteristics, can still be used to create simulated radar images for image matching.
Computer vision with other types of sensors has a longer history than radar-based systems, and
corresponding methods in radar have not had the same impact. Among the systems with substantial
underlying development are stereoscopic methods, where several photos are taken over the same area but
from different views, that is, angles and positions, from which image depth can be extracted from the
stereo images. Stereoscopic methods in radar image analysis have mainly been used to reconstruct objects
or environments seen from known parallel flight or orbital trajectories. The reverse problem, estimating
position and attitude given a known terrain, has no solution. This work presents an interpretation of the
imaging geometry such that existing methods in computer vision can be used to estimate the position from
which a radar image has been taken. This is a direct comparison without the image registration that is
required by other proposals for SAR-based navigation systems. By being able to determine position
continuously from radar images, aircraft can navigate independently of daylight, weather, and satellite data.
List of Acronyms
AESA Active Electronically Scanned Array
ATR Automatic Target Recognition
BRDF Bidirectional Reflectance Distribution Function
CAD Computer-aided Design
CDT Constrained Delaunay Triangulation
CP Control Point
CPU Central Processing Unit
CV Computer Vision
DEM Digital Elevation Map
DLR Deutsches Zentrum für Luft- und Raumfahrt
DSM Digital Surface Map
DTM Digital Terrain Map
ESA European Space Agency
FOV Field Of View
GCP Ground Control Point
GIS Geographic Information System
GMTI Ground Moving Target Identification
GNSS Global Navigation Satellite System
INS Inertial Navigation System
InSAR SAR Interferometry
KvD Koenderink and van Doorn
Lidar Light Detection and Ranging
LOS Line-of-Sight
MTI Moving Target Identification
NLOS Non-Line-of-Sight
Radar Radio Detection and Ranging
RCS Radar Cross-section
SAR Synthetic Aperture Radar
SLAM Simultaneous Localisation and Mapping
Sonar Sound Navigation and Ranging
UAV Unmanned Aerial Vehicle
Mentioned throughout the thesis are examples of letter designations of electromagnetic spectral bands. The
letter designations of the electromagnetic spectrum used in this thesis follow IEEE standard nomenclature.1

Letter  VHF       UHF    L    S    C    X     Ku     K      Ka     V      W       mm
GHz     0.03-0.3  0.3-1  1-2  2-4  4-8  8-12  12-18  18-27  27-40  40-75  75-110  110-300

HH, VV, and HV denote Transmit-Receive Horizontal/Vertical linear polarization modes.
1 IEEE Std 521-2002
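The band table above can be captured as a simple lookup. The following is an illustrative helper (not part of the thesis), with band edges transcribed from the table; the half-open intervals are an assumption for resolving boundary frequencies.

```python
# Hypothetical helper: map a radar frequency in GHz to its IEEE Std 521-2002
# letter designation, matching the band table above. Intervals are treated
# as half-open [lo, hi) so each boundary frequency resolves to one band.
IEEE_BANDS_GHZ = [
    ("VHF", 0.03, 0.3), ("UHF", 0.3, 1.0), ("L", 1.0, 2.0), ("S", 2.0, 4.0),
    ("C", 4.0, 8.0), ("X", 8.0, 12.0), ("Ku", 12.0, 18.0), ("K", 18.0, 27.0),
    ("Ka", 27.0, 40.0), ("V", 40.0, 75.0), ("W", 75.0, 110.0), ("mm", 110.0, 300.0),
]

def band_letter(freq_ghz: float) -> str:
    """Return the IEEE letter designation for a frequency in GHz."""
    for letter, lo, hi in IEEE_BANDS_GHZ:
        if lo <= freq_ghz < hi:
            return letter
    raise ValueError(f"{freq_ghz} GHz is outside the tabulated 0.03-300 GHz range")
```

For example, a typical airborne SAR operating around 9.6 GHz falls in the X band.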
Mathematical Notation
r Slant Range
V Velocity Vector
R Range Vector
fDC Doppler Centroid Frequency
ω Squint
ωw Azimuthal Beamwidth
∆t Illumination Time
θw Angular Width in Range/Swath Direction
θ Depression Angle
θnear Near swath grazing incidence
θfar Far swath grazing incidence
θdiff Difference in depression angle for parallel stereo channels
C Camera or Intrinsic Matrix
CSAR SAR Intrinsic Matrix
P‖ Normalized Orthographic Projection Matrix
PAff Affine Projection Matrix
PSAR SAR Projection Matrix
G Pose or Extrinsic Matrix
G⊥ Virtual Orthographic Camera Pose
R Rotation Matrix
t Translation Vector
u0 Horizontal Image Centre
v0 Vertical Image Centre
c0 Speed of Light
λ Wavelength
δr Slant Range Resolution
δaz Azimuth Resolution
R Bidirectional Reflectance Distribution Function
ϕinc Incidence Angle
ϕref Reflection Angle
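As a companion to the notation list, the following minimal numpy sketch (an illustration, not the thesis implementation) shows how the pose matrix G = [R | t], the normalized orthographic projection P‖ (here `P_par`), and the intrinsic matrix C compose into an affine projection of the kind denoted PAff.

```python
import numpy as np

# Sketch of composing a projection from the matrices in the notation list:
# pose/extrinsic matrix G built from rotation R and translation t, a
# normalized orthographic projection that drops the depth axis, and an
# intrinsic matrix C carrying the image centre (u0, v0).
def pose_matrix(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Extrinsic/pose matrix G: 4x4 world-to-camera rigid transform."""
    G = np.eye(4)
    G[:3, :3] = R
    G[:3, 3] = t
    return G

def affine_projection(C: np.ndarray, R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Affine projection C @ P_par @ G mapping homogeneous world points
    (4-vectors) to homogeneous image points (3-vectors)."""
    # Normalized orthographic projection: keep x and y, discard depth z.
    P_par = np.array([[1.0, 0.0, 0.0, 0.0],
                      [0.0, 1.0, 0.0, 0.0],
                      [0.0, 0.0, 0.0, 1.0]])
    return C @ P_par @ pose_matrix(R, t)
```

With R the identity and t zero, a world point (2, 3, 5) maps to image coordinates (2 + u0, 3 + v0), independent of its depth, as expected of an orthographic camera.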
Contents
Abstract . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . i
Popularvetenskaplig Sammanfattning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . ii
List of Acronyms . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iii
Mathematical Notation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . iv
1 Introduction 1
1.1 Problem definition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.2 Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 1
1.3 Thesis Contributions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
1.4 Thesis Outline . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2
2 Background 4
2.1 Earlier Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 4
2.2 Need for GNSS Independent Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.3 Vision-aided Robotics . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 6
2.4 Platforms and Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7
2.5 SAR-aided Navigation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
2.6 Physical and Image Simulation of SAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 8
3 Theory 10
3.1 Observation Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 10
3.1.1 Resolution Cell . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 11
3.1.2 Geometric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12
3.1.3 Radiometric Effects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 14
3.2 Rendering . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
3.2.1 Scattering Models . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
3.3 Stereoscopic Radargrammetry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3.1 Parallax . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 17
3.3.2 Parallel Heading Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.3.3 Arbitrary Heading Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18
3.4 Affine Structure in SAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.1 Epipolar Geometry . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
3.4.2 Affine Projective Algebra . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20
3.4.3 SAR Sensor Model . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
4 Model Preparation 23
4.1 Lantmäteriet Dataset . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23
4.2 Surface Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2.1 Polygon Vectordata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
4.2.2 Line Vectordata . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 26
4.3 Varying Reflectivity Model for Objects . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27
4.4 Results of Simulating SAR Images . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 28
5 Image Utilization 31
5.1 Stereoscopy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 31
5.2 Affine Epipolar Analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 32
6 Conclusion 36
6.1 Discussion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
6.2 Answers to Research Questions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37
6.3 Future Work . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.3.1 Implementation of CV in SAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.4 Multistatic SAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
6.5 Polarimetric Decomposition . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
References 41
A SAR Frame Sequence 47
1 INTRODUCTION
1 Introduction
The work presented in this thesis is a step towards the goal of using SAR as a navigation sensor. This
introductory section presents the thesis in terms of the research questions identified and addressed through
experimentation and a literature survey: it moves from a problem definition, from which the research
questions have been identified, to the main contributions of this thesis, and concludes with an outline of
the two main topics that are covered.
1.1 Problem definition
Some of the constraints on aircraft navigation are: degradation of GNSS signals, passive sensors constrained
to certain weather conditions, and drift in inertial sensors. Inertial sensors can be used to estimate heading
when GNSS is not functioning, and visual sensors can be used to correct for the drift. The additional
constraints facing aircraft are weather and time of day: optical sensors require daylight, and whereas IR
sensors can be used at night, they are limited by weather conditions, as are optical sensors.
SAR is an important candidate for addressing these constraints. It is an active sensor, thus not constrained
by daylight, and it operates in a part of the electromagnetic spectrum largely unaffected by weather.
Identified gaps in the research include robust automated geocoding of a SAR image and a lack of observation
geometry models that can be used in positioning. The main limitation is that image processing algorithms
have not been developed specifically for SAR.
Local motion estimation, i.e. estimating divergence from a nominal trajectory, has enabled the use of SAR
on smaller aerial platforms. What is lacking is reliable global estimation, meaning knowing where the
nominal trajectory is in relation to image content. This is the gap that this thesis aims to address.
1.2 Research Questions
These questions have been formulated from the constraints described above and addressing these questions
will aid in development and research into SAR aided navigation. As the simulation work enables pursuing
other topics in SAR image analysis the main research questions defined first relates to simulation:
• 1) Is the method of using 3D terrain maps for SAR image reference good enough for use in positioning?
• 2) Can texture, based on optical information, be used to generate reference images with more infor-
mation than only elevation maps?
• 3) How to increase the amount of radar specific information in simulated reference images?
As work proceeded, other questions were identified relating to the georeferencing of a SAR image or to
positioning an aircraft using radar image content:
• 4) What information in SAR images is used in registration and quantitative analysis?
• 5) Is it worth developing radar-specific image analysis methods and algorithms?
• 6) Is it possible to orient an image by direct matching from different views?
These questions will be revisited in the concluding section of this thesis, where background, theory, and
the work presented are summarized as answers.
1.3 Thesis Contributions
Having surveyed the literature on SAR simulator use, SAR-aided positioning, and the use of SAR intensity
maps, the contributions to these and related areas of research are:
• Reflectivity models have been added to terrain maps to create more realistic simulated SAR images.
The purpose is to increase the amount of radar-specific salient features to aid in georeferencing real
SAR images. The terrain map has been classified into different types of reflecting surfaces using existing
vector data over the same area.
• The SAR observation geometry has been re-interpreted such that existing algorithms from multiple-
view geometry can be applied. This allows directly matching a real image and a simulated reference,
and can also be used on a stream of images from the same platform, or from multiple platforms
illuminating the same area. This projection model for SAR was developed from existing qualitative
descriptions in stereoscopic radargrammetry; multiple-view geometry for SAR is generally limited to
parallel trajectories. The approach differs from previous SAR-aided positioning approaches, which
typically solve the Range-Doppler equations using geolocated images. Positioning was the main goal
for this projection model, but it can also be used in scene reconstruction.
1.4 Thesis Outline
The overall goal of this research and development effort is a working SAR-aided navigation system. The
full set of processes that need to be developed for this includes navigation algorithms and image processing
procedures that are outside the scope of this thesis. The objectives covered in this thesis are presented in
table 1, together with how the research questions and theoretical background serve these objectives.
Table 1: Thesis outline in the following order: themes or partial goals towards SAR-aided Navigation,
objectives covered in this thesis, research questions relating to the objectives, and finally theory and methods,
with motivation, used towards fulfillment of the objectives.
Simulation
• Objective: Model Preparation for Simulated SAR for use in Georeferencing by Image Registration
• Research questions: (1) Is terrain data enough? (2) Application of texture? (3) Backscatter modelling?
• Theory and methods:
  – Observation Geometry to Motivate Method of Simulation
  – Geometric Algorithms for Triangulation of Height Data
  – Rendering Equation to Model Surface Backscatter

Positioning
• Objective: Aircraft Pose Estimation in Low Visibility using Affine Structure in Radargrammetry
• Research questions: (4) Information in SAR Images? (5) Radar Specific Methods? (6) Positioning by Direct Matching?
• Theory and methods:
  – Radargrammetric Dual Problem: Reconstruction from Known Positions; Pose Estimation using Terrain Data
  – Affine Epipolar Geometry for Pose Estimation
  – Define Resolution for Image Calibration
Because the simulation work is ultimately intended for use in navigation, much of the background survey
covers either both topics or navigation specifically, with the exception of SAR simulators, which may be
considered more generally in this context. The purpose of the background survey is to get a sense of technology readiness
level and to identify gaps in comparison with other navigation sensors.
The concluding section on future work sets some goals and milestones for the effort of developing a
navigation system using SAR as a visual sensor. This is intended to streamline further research into key
enabling requirements.
SAR missions are typically divided into many areas of application. The interested reader is pointed to
the review paper [1] for an introduction to SAR in terms of application areas. This thesis generally limits
the discussion to aircraft as platforms.
2 Background
This section presents some concepts and available technologies in fields relevant to model preparation for
simulated SAR and to the development or implementation of positioning algorithms. Research gaps are
identified, and the problem of navigation in low visibility without the use of GNSS is clarified.
2.1 Earlier Work
Because of the reliability and performance issues of positioning in smaller and cheaper UAVs [2], there is a
need to focus and position SAR images beyond the input from an INS. It has been shown that entropy as a
focus measure can be used to estimate deviations from a known nominal trajectory. Another method, based
on the phase of raw radar signals typically discarded in intensity image formation [3], can also be used to
correct deviations in trajectory.
Earlier efforts have addressed matching optical maps and SAR images directly by feature extraction [4].
The purpose of that work is to estimate the nominal position of a SAR platform by matching optical and
SAR images in a sensor fusion framework. In practice this means that the position of a SAR image can be
used in estimating aircraft parameters. These parameters are more global, as a nominal trajectory may not
be known. It is also shown in [5] that the image matching method works for both focused and unfocused
images, but it may be of interest to also introduce a focusing process in the sensor fusion. A combined cost
function of image matching of a real scene with image entropy as focus measure is shown in [6]. Because
of the computational complexity, it may be interesting to study the effect on the image focusing process of
focusing only subimages, and to see what the effect of different ranges is on the final product.
This thesis is a continuation of the work presented in [7], where a simulated reference image based on 3D
terrain data, shown with a real SAR image over the same area in figure 1, is used for the matching process.
A simulated SAR image using GIS data will contain more of the features present in a real SAR image by
taking radiometric effects into account, not only the geometric effects captured by elevation maps of a
terrain. Optical sensors and SAR have very different image acquisition geometries, which means direct
application of existing CV processes does not work, and photos cannot be used directly as textures in
simulations.
a) b)
Figure 1: a) Real SAR image and b) Simulated SAR image. Results of feature extraction using Canny
detector shown as red circles in real image and white crosses in simulated image [7]
The Canny detector [8] was used for the feature detection shown in figure 1 and the registration thereof in
figure 2. Its main principle is to use two threshold levels, which makes it insensitive to noise due to
hysteresis. To measure image matching, the Chamfer algorithm [9] was implemented to match the edges
of the real SAR image and the simulated reference image. This image registration method is used because
SAR and optical images have different types of features.
Figure 2: Image Registration of the real and simulated SAR images [7] requires that features in both images,
that are from the same scatterer, match.
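The Chamfer matching step can be sketched compactly with a distance transform. This is an illustrative stand-in, not the thesis code: given two binary edge maps (e.g. Canny output of the real image and of the simulated reference), the score is the mean distance from each edge pixel in one map to the nearest edge pixel in the other; minimizing it over candidate alignments registers the images.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

# Sketch of a Chamfer matching score between two binary edge maps.
# distance_transform_edt on the complement of the simulated edge map gives,
# at every pixel, the Euclidean distance to the nearest simulated edge.
def chamfer_score(edges_real: np.ndarray, edges_sim: np.ndarray) -> float:
    dist_to_sim = distance_transform_edt(~edges_sim.astype(bool))
    real = edges_real.astype(bool)
    if not real.any():
        raise ValueError("real edge map contains no edge pixels")
    # Average distance from real edge pixels to the nearest simulated edge.
    return float(dist_to_sim[real].mean())
```

Identical edge maps score 0, and the score grows smoothly with misalignment, which is what makes the measure usable as a registration cost even when the two edge sets do not overlap exactly.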
It is remarked in [7] that additional radar-specific information can be used to make the image matching
process more robust. The current simulations only use a height map with a single backscatter model. The
proposed work is thus to investigate the effect of adding radar reflectance to a 3D map of an environment
used as a reference image in positioning. The approach presented later in this thesis uses a combination
of diffuse and specular reflection to approximate most radar scattering behaviour over a real topography.
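The diffuse-plus-specular combination can be illustrated as follows. This is a minimal sketch under stated assumptions, not the thesis model: the parameter names `kd`, `ks`, and the roughness `m` are hypothetical, and a monostatic geometry is assumed so the incidence and reflection angles coincide.

```python
import numpy as np

# Illustrative per-facet backscatter: a Lambertian diffuse term plus a
# specular lobe that peaks at normal incidence for a monostatic radar.
# kd/ks weight the two terms; m is a surface-roughness parameter.
def backscatter(phi_inc: float, kd: float = 0.7, ks: float = 0.3, m: float = 0.1) -> float:
    """phi_inc: local incidence angle in radians (0 = normal incidence)."""
    diffuse = kd * np.cos(phi_inc)
    specular = ks * np.exp(-(np.tan(phi_inc) ** 2) / (m ** 2))
    return float(diffuse + specular)
```

Classifying terrain facets into scatterer types then amounts to assigning each class its own (kd, ks, m) triple, so that e.g. water (near-pure specular) and forest (mostly diffuse) produce visibly different simulated intensities.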
2.2 Need for GNSS Independent Navigation
GNSS signal degradation effects are typically grouped into
• Environmental
• Intentional
depending on the type of application discussed and which types of degradation are most relevant to the
system in question.
Demonstrations by University of Texas researchers are often referenced in the literature on GNSS signal
degradation, denial, and spoofing. One of the demonstrations is the takeover of a UAV, following an
incident in which a military drone was allegedly hacked.2 Another demonstration, aimed at raising the
issue in terms of civilian security, is the remotely asserted control of a yacht.3
Efforts into GNSS anti-spoofing are not limited to unencrypted civilian signals [10]. A discussion of INS in
relation to GPS is presented in [11], covering different INS technologies, mitigation techniques against
jamming, and some degradation effects.
Interest in GNSS independence had been raised earlier [12], with some demonstrations of spoofing that did
not get much attention until the aforementioned incident and the demonstrations by the Texas researchers.
Aerial and underwater environments are particularly susceptible to degraded GNSS signals [13].
2.3 Vision-aided Robotics
The context of this thesis is navigation and data fusion. The insight that one can use results from maps as
state or pose variables in estimating position enabled the development of SLAM [14]. Applications of
SLAM, such as cooperative mapping [15], are enabled by integrating developments from many different
specializations.
Compared to SfM, which is a mapping technique relating camera pose to 3D structure from a set of images,
VO only establishes the egomotion of an observation platform; the primary function of the visual system
is to establish pose, not to map. SfM is more general and encompasses both sequential and unordered sets
of images as input. Feature extraction applied to large environments or long-range image matching has
seen more concentrated research efforts in the past decade [16].
Another use of images in navigation is to apply image segmentation and classification before georeferencing.
[17] demonstrates using RGB data, not only greyscale, which is a relevant point for future work, to classify
extracted superpixels, or image segments, as asphalt, grass, or other environmental types. The method is
rotation invariant, as the image position likelihood is calculated using histograms of circular image regions.
There have also been efforts to apply SLAM to radar data [18]. Raw SAR data that is only range-compressed
can possibly be used in environments with strong point scatterers [19], not for the purposes of mapping but
for odometry, i.e. visual dead reckoning. This can also be seen as another approach to estimating
divergence from a nominal trajectory.
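The rotation-invariance argument of [17] can be made concrete with a small sketch (details here are assumptions, not the paper's implementation): histogram the class labels inside a circular image region. Rotating the image about the region centre permutes pixels within the circular mask but leaves the histogram unchanged.

```python
import numpy as np

# Sketch: normalized histogram of class labels inside a circular region.
# Because the mask is a disc centred on `centre`, any rotation about that
# centre maps the masked pixel set onto itself, so the histogram is a
# rotation-invariant descriptor of the region.
def circular_histogram(labels: np.ndarray, centre: tuple, radius: float,
                       n_classes: int) -> np.ndarray:
    yy, xx = np.indices(labels.shape)
    mask = (yy - centre[0]) ** 2 + (xx - centre[1]) ** 2 <= radius ** 2
    hist = np.bincount(labels[mask], minlength=n_classes)
    return hist / hist.sum()
```

The position likelihood is then obtained by comparing such descriptors against those precomputed over a georeferenced class map, without needing to know the platform's heading.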
The conclusion of [20] is that CV for UAVs lacks experimental evaluation. The research presented has not
fully integrated CV techniques into navigation systems: the validation work presented in the literature is
limited to experimental tests under many assumptions and system simplifications, or consists of simulated
behaviour. This is found to be true for SAR, a longer-range sensor, as a navigation aid. Furthermore, it
2 http://www.engr.utexas.edu/features/humphreysspoofing
3 https://news.utexas.edu/2013/07/29/ut-austin-researchers-successfully-spoof-an-80-million-yacht-at-sea
is highlighted in [20] that comparative studies of these techniques are made difficult by a lack of
benchmarking and of metrics transferable to different areas of application. The development of SLAM
systems is application driven, making metrics less transferable; ideal and unique solutions do not exist for
every operational condition, platform, environment, and set of hardware/software resources [13]. One
contributor to the lack of benchmarking is the need for a phenomenological description of methods and
errors [21].
Autonomous underwater vehicles are also in need of GNSS-independent navigation. Efforts into terrain-aided
navigation [22] include the use of synthetic aperture sonar and depth maps. Synthetic aperture sonar has also
seen many developments similar to SAR [23], including sonoclinometry, interferometric sonar, autofocusing by
motion compensation, and sonargrammetry, all of which have seen research efforts into utilization in
underwater mapping and navigation. This directly parallels the concept of using SAR as a navigation sensor.
Some differences in the state of the art in robotic vision using SAR and other visual sensors have been
exemplified. The gap between SLAM and its equivalent in SAR becomes clear when comparing the state of the art
in photogrammetry and radargrammetry. These topics are presented in greater depth as they form the basis of
the theory used in concept development for this thesis.
2.4 Platforms and Hardware
Modern processors, antennas, and algorithms enable SAR-aided navigation research. Before describing
navigation research based on SAR, some background on hardware and platforms is presented. UAVs as SAR
platforms are a recent research effort.
The study [24] shows the technical feasibility of SAR-based navigation for a selection of UAV class, SAR
system, and DTM requirement. Phase differences between simulated and real phase maps for InSAR-based
navigation did seem promising, and their study indicates that matching the position of the phase profiles is
preferable to looking at phase differences. They conclude that SAR intensity images can be used for the
purpose of aiding navigation. It is concluded that the optimal settings and parameters for such a system are
easily fulfilled by a commercial system such as PicoSAR.4,5
SAR architectures have been demonstrated for a range of aircraft. Examples of demonstrations of SAR
systems carried by UAVs are highlighted here to get a sense of demonstrated operational conditions and
an understanding of platform/UAV type. UAVs of different sizes are utilized for different purposes [20].
Their relative advantages and disadvantages need to be taken into account when designing a mission. Some
categorization of UAV types is presented in [25]. These classifications are commonly used and referred to in
many applications. Development of motion compensation and of miniaturized antenna and processor systems has
enabled the use of SAR on smaller platforms.
• For multirotor UAVs, two examples of demonstrators are [26] in the X-band and [27] in the Ku-band. For
rotor-based aircraft the implementation of SAR sensors is rare but possible, mainly due to developments in
motion compensation, as stated in the demonstrator papers. CARABAS6 is flown on single-rotor aircraft with
VHF and UHF bands for foliage-penetrating ability.
• Fixed-wing aircraft are a more common airborne platform for SAR, as they offer longer endurance and higher
payload. SAR typically does not require the agility that comes with rotary wings. Some demonstrators are
SARape [28], a W-band system, and SARENKA [29], a C-band SAR system. The WATSAR demonstrator [30] carries
both S-band and Ku-band.
4http://www.leonardocompany.com/en/-/picosar-1
5Developed by the same company as the Raven ES-05 http://www.leonardocompany.com/en/-/raven-1
6http://saab.com/air/sensor-systems/ground-imaging-sensors/carabas/
The recent publication of the aforementioned papers indicates growing interest in SAR on UAV platforms.
Development of SAR for aircraft, especially smaller ones, has gone from project presentations and feasibility
studies to demonstrations. Demonstrations provide power- and mass-budget specifications and notes on control
and image processing architectures. EM band is just one parameter that separates these systems.
2.5 SAR-aided Navigation
Navigation is the planning and filtering of a sequence of poses. Filtering several simultaneous estimates from
different types of sensors is fusion. Research into using SAR as a navigation sensor is presented to exemplify
what gaps exist and to clarify the purpose of the theory and methods presented in this thesis.
The idea of using SAR as a navigation sensor has been investigated before, both through motion compensation
[31] and through registration approaches [32]. Motion compensation for SAR images can be used as input to
inertial sensor fusion. This is local motion estimation, or odometry. Global estimation is in relation to an
environment. Another approach to acquiring position data from SAR images is to use the range-Doppler
equations directly together with known stationary targets sensed by an aircraft [33].
An effort towards optimal global estimation using a multisensor fusion framework is presented in [34]. The
researchers find that a globally optimal fusion approach to an INS/GPS/SAR integrated navigation system
performs better than using only INS/GPS. The presented results are simulations of their proposal of two-layer
decentralized filters before a global fusion filter. Experimental data using such an approach is presented
in [35]. SAR-specific data processing is not presented; it is unusual to cover it in control theory papers.7
Pose estimation using SAR images is either based on georeferencing and the range-Doppler equations or on
other undisclosed techniques.
A method of georeferencing both SAR and InSAR is presented in the context of using SAR as a navigation
sensor in [36]. The intensity image is referenced against a landmark database, assuming the scene has
landmarks picked up by ATR, whereas phase maps are compared against an InSAR simulation over a DTM. An
experimental demonstrator for this approach was developed for two platforms, showing positive results in
implementing SAR as a navigation sensor and exemplifying the new developments required [37]. GNSS's lack of
integrity, SAR's all-weather capability, and altimetry being unreliable over flat areas are some comparisons
made by the authors. Further discussion of InSAR-aided navigation, how imaging geometry relates to acquired
phase maps, and error analyses is presented in [38].
Bistatic observation geometries may also be useful in navigation. Paper [39] presents a system of bistatic
forward-looking observations with a spaceborne transmitter and an airborne receiver, where the main area of
application is navigation. Other uses are considered but are not the end goal of development by the authors.
The use of a GNSS-carrying satellite, such as Galileo, as transmitter is presented by ESA researchers.
Onboard recorded information about the terrain does mean one can reference images against a database, though
this is unsuitable when flying in unknown environments and in modified scenarios.
A simulation of the performance of SAR-aided navigation is presented in [40]. Imaging is simulated as a
constant-size crop of a reference SAR image along a linear trajectory without rotation. The cropped image is
matched against the full reference image to estimate position along the linear trajectory used.
2.6 Physical and Image Simulation of SAR
Some highlights of the use of simulated SAR are correcting positional errors by geocoding, radiometric
correction of SAR images for quantitative analysis, and evaluating signal processing algorithms and observation
7Control theory papers focus mainly on a physical model of what is controlled, for example a robotic arm or
aircraft, and on the filtration method for sequences of state estimates.
geometries [41]. These applications have varied levels of simulation requirements, relating to which process
needs to be simulated. This section covers some papers describing simulators and refers to papers detailing
how simulators are used in research.
SAR simulators are typically described using two classifications [42]:
• Image Intensity Simulator - Typically using ray tracing or rasterization approaches to estimate an
image.
• Raw Signal Simulator - Physics-based approach to simulate electromagnetic propagation and how it is
recorded by an aircraft.
The paper [43] presents orthorectification of SAR images using a simulated reference over the same area.
GRECOSAR, a SAR simulator based on GRaphical Electromagnetic COmputing software [44], solves the diffraction
and geometrical optics equations given a scene with complex impedance. The simulator developed for use in the
georeferencing presented in [36] is a physics-based, or raw-signal, simulator [45]. Single and multiple
scattering sometimes use different rendering algorithms, followed by summation of the different results, as
is explained for some SAR simulators compared in [46] and for the simulator for use in navigation in [45].
This thesis covers work with image simulators that yield intensity images, not complex-valued images.
Computation of electromagnetic physical propagation may be unsuitable for the purpose of generating reference
images if considering online onboard generation of references. In contrast to computer models, real signals
propagate at c0, and ray tracing or other image simulators can be fast enough for rendering. What the true
bottleneck in a real system will be, acquiring real frames, rendering, or matching, is something to determine
before building such a system.
Presented in [46] is a comparative study of three Image Simulators:
• RaySAR
• CohRaS [47]
• SARViz [48]
RaySAR [49] was developed during the course of a doctoral thesis [50] in cooperation with DLR, who have
recently also released an online educational program for learning about SAR [51]. Comparative studies of SAR
simulators are worth investigating to understand the potentials and limitations of different approaches.
Comparative studies of simulated SAR, for instance [46], give insight into how our generation of reference
images differs from other simulation efforts. Realism is not the end goal; our interest is in estimating
position, with its related restrictions and requirements.
RaySAR has been used in interpreting scatterer distributions in urban environments [52]. SAR simulators have
also been utilized in change detection [53]. As the imaging mode of SAR is very different, there have been
publications on what it is that is being imaged; for instance, [54] covers some effects of pyramids,
courtyards, and multifaceted poles simulated using CohRaS. Interpreting images of man-made objects is not
straightforward due to, typically, multiple specular reflections. Viewing certain scenes in SAR for the
first time can be surprising unless investigated beforehand [55].
Physical simulators are used for system performance evaluation or radar algorithm development. Physical, or
signal, simulators can use real trajectory data as input to study defocusing [6], to evaluate ATR or MTI
algorithms and the RCS of physical models [56], and to study degradation, and mitigation algorithms, due to
environment or jamming [57].
3 Theory
Historically, SAR image processing was done with all-optical systems [58]. SAR was an important application
of Fourier optics, and optical holography is sometimes stated as an analogue for SAR images [59]. Digital
processors enabled flexibility in SAR processing [60]. This is mentioned here because the non-trivial
processing of SAR data increases the complexity of understanding SAR. This section presents a limited
theoretical treatment of the concepts and methods required for the experimental work in this thesis and for
the use of CV algorithms with SAR. First the imaging geometry is clarified with its associated challenges.
Differences between SAR and photography are highlighted with a representation in figure 3. Images are two
dimensional, and in photography there is a depth ambiguity, meaning we do not know at what distance from the
camera an object in a photo is located, whereas in SAR this ambiguity is in the height of the observed
object, or rather where the object lies on a circle segment. This circular geometry of SAR requires
understanding of the range-Doppler equations.
Figure 3: A comparison of projection models, or pixel contributions, for optical imaging and SAR imaging
over the same set of features. Figure from [61].
The model of the optical system in figure 3 is perspective projection. The locus of a pixel in photography is
along a projection line from the centre of the optical system, whereas the locus of a pixel in SAR is a
circle segment with slant-range radius. Another way of putting it: everything along the dotted lines in
figure 3 contributes to the same pixel. Occlusion in optical imaging occurs when objects are in front of one
another, and shadows depend on illumination angle. In SAR, occlusion and shadow are the same thing.
The reasons why SAR is different to interpret from optical images are geometric effects and radiometric
effects [61]. Clarification of these types of effects follows the presentation of the observation geometry
for SAR. A discussion of these effects and distortions requires basic knowledge of this observation geometry.
3.1 Observation Geometry
Monostatic zero-Doppler processed scanning mode will be the fundamental capture mode for most of the thesis.
Multistatic, squinted, and spotlight modes will be discussed where they add necessary practical context. In
this thesis we focus on some of the system design parameters that affect image output. The range sphere and
Doppler cone equations [62] are used with georeferenced SAR images to position the aircraft, also using
onboard pointing parameters.
r = |R|    (1)

λ f_DC r / 2 = V · R = |V| r sin ω    (2)

The set of scatterers satisfying these equations, limited by the antenna beamwidth, is called the locus of a
SAR pixel.
In imaging a flat plane, the isorange lines are a set of concentric circles, the flat plane intersecting
spheres of different radii. The isoDoppler lines are a set of hyperbolas, coaxial along the nadir of the
trajectory, where the flat plane intersects the Doppler cone for a given angle. For zero squint the Doppler
cone degenerates into a flat plane orthogonal to the direction of platform motion.
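The range sphere and Doppler cone equations can be checked numerically. A minimal sketch, assuming a straight track; the platform state and the 3 cm wavelength are illustrative values, not parameters from the thesis:

```python
import numpy as np

def range_and_doppler(p_platform, v_platform, p_scatterer, wavelength):
    """Evaluate equations (1)-(2) for one platform state and one scatterer."""
    R = p_scatterer - p_platform                           # line-of-sight vector
    r = np.linalg.norm(R)                                  # eq. (1): slant range
    f_dc = 2.0 * np.dot(v_platform, R) / (wavelength * r)  # eq. (2), rearranged
    return r, f_dc

# Platform flying along +x at 100 m/s at 3000 m altitude; scatterer on the ground.
p = np.array([0.0, 0.0, 3000.0])
v = np.array([100.0, 0.0, 0.0])
s = np.array([0.0, 5000.0, 0.0])
r, f_dc = range_and_doppler(p, v, s, wavelength=0.03)
# Broadside geometry: V is orthogonal to R, so the Doppler centroid is zero.
```

For a squinted geometry, V · R becomes nonzero and f_dc grows with sin ω, matching the locus description above.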
3.1.1 Resolution Cell
A definition of resolution cell is a good starting point for understanding any imaging system. The range
resolution is typical of radar systems whereas the azimuth, or cross-range, resolution is unique to SAR. Some
derivations of SAR image parameters can be found in [1].
The slant-range resolution from pulse bandwidth is given by

δr = c0 / (2B)    (3)

Bandwidth is the difference between start and end frequency in the case of a frequency modulated chirp, or is
set by the pulse length for unmodulated pulses. The azimuthal, or cross-range, resolution for a full
synthetic aperture is

δa = L_Real / 2    (4)
A wider beamwidth gives a longer illumination time for a scatterer on the ground, which means a longer
synthetic aperture. Neither resolution parameter depends on range, which is an unintuitive theoretical
result. The longest possible synthetic aperture is given by the flight velocity and the illumination time of
the same scatterer, as shown in figure 4. A visualization of range-related parameters is shown in figure 5.
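Equations (3) and (4) are simple enough to evaluate directly. A minimal sketch; the 150 MHz bandwidth and 1 m real aperture are illustrative values, not system parameters from the thesis:

```python
def slant_range_resolution(bandwidth_hz, c0=299_792_458.0):
    """Eq. (3): slant-range resolution from pulse bandwidth."""
    return c0 / (2.0 * bandwidth_hz)

def azimuth_resolution(real_aperture_m):
    """Eq. (4): cross-range resolution for a full synthetic aperture."""
    return real_aperture_m / 2.0

# 150 MHz chirp bandwidth gives ~1 m range cells; a 1 m real antenna gives
# 0.5 m azimuth resolution, independent of range as noted above.
dr = slant_range_resolution(150e6)
da = azimuth_resolution(1.0)
```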
Figure 4: Azimuth angle sensor parameters for a SAR sensor translating to the right.
Swath width, or the footprint of the beam, is dependent on the antenna size in the θ direction, i.e. on θw.
It can also be limited by Doppler bandwidth, but there is no need for a larger aperture than necessary if the
SAR hardware has digital beamforming. The term depression is preferred here as a system parameter rather
than angle of incidence, because incidence is a scene-specific scattering or shading parameter. Depression is
directly related to the imaging geometry.
Figure 5: Angular sensor parameters in the range direction.
A visualization of squint angle is shown in figure 6. It shows how squinted scanning modes can look ahead of,
or behind, the unsquinted mode. Zero-Doppler processed spotlight images have mean ω = 0.
Figure 6: Squint angle geometry and sensor pointing visualization for a sensor translating to the right. The
angle between the zero-Doppler direction and the squinted LOS is ω.
Having a definition of how SAR works as a sensor, we shift focus to more qualitative descriptions of the
effects of radar geometry and signal propagation. This is necessary for formulating what information or
features can be seen in SAR images and what limitations and strengths this imaging technique has.
Characteristic effects of SAR are typically grouped into geometric and radiometric. Understanding these
effects aids in understanding what will contribute to the feature extraction and image matching process when
designing a full positioning system based on SAR, both in terms of developing radar-specific procedures and
in terms of why traditional CV techniques are applicable or fail.
3.1.2 Geometric Effects
These effects are dominant in SAR due to the EM spectrum used. How objects appear in SAR depends very much on
shading, illumination angle, and pose, i.e. position and orientation. These are also the reasons why SAR
images are not as straightforward to interpret: the image is of the distance to the scatterer, radar being a
ranging instrument, not of the angle of a ray projected onto an imaging plane as in photography. Consider
figure 3.
Some effects, or distortions, are presented graphically in figure 7. The figure presents how scatterers A, B,
C, ..., J, evenly spaced in ground range, appear in SAR. The approximation used here is that rays from the
radar source are parallel to the slant range, which is one of the image coordinates; the other is azimuth, or
cross-range.
Figure 7: Geometric distortion effects due to ambiguity in swath angle. Image from [63]. The bottom scale
shows slant range and ground range for comparison. I and J are missing in slant range, and this area of the
SAR image is shadowed.
Layover, or foldover, as seen in figure 7, means that illuminated scatterers at the same range appear in, or
contribute to, the same pixel.
High urban backscatter depends on the amount of planar surfaces at right angles [64]. These planar scatterers
form dihedral and trihedral reflectors that appear in SAR images as lines and points known as phase centers.
This is because the backscattered radiation will have traveled the same range independent of where on the
surfaces making up the corner reflector it hits.
Ghost persistent scatterers are scatterers located "under ground" that result from multiple reflections,
typically more than three; the ground acts as a mirror for dihedral and trihedral reflectors in the 4- and
5-reflection cases [65]. See also the results on courtyard simulations in [54]. The ghost scatterer position
is the same regardless of view, as shown in figure 8. Consider figure 9: the real bridge backscatter is
positioned closest to the sensor, the dihedral scattering's apparent position is on the ground, and the ghost
scattering is a reflection of the bridge on the water that will appear under water when applying stereo SAR
reconstruction.
Figure 8: Ghost scattering from an NLOS dihedral scatterer. The apparent position of this scatterer does not
change with observation parameters.
A persistent scatterer is a scatterer that persists, or appears, in different views. This is useful because
such scatterers can serve as corresponding points in registration. Man-made objects are typically rectangular
in shape and thus persist in many views, enabling easier image registration in, for example, InSAR.
Figure 9: a) SAR image of a bridge from [66] and b) illustration of radar return modes from bridges: red
indicating direct backscatter, blue where the bridge and water act as a dihedral reflector with the phase
center indicated in blue, and green the ghost scatterer below ground.
Foreshortening, dilation, layover, and shadowing are typically introduced as distortions of radar images. The
method of positioning using image contents presented in this thesis will make use of these effects, as they
are intrinsic to the observation geometry, except for shadowing, which may be used for other functions.
3.1.3 Radiometric Effects
This category of effects includes atmospheric distortion and micro-Doppler effects that have an impact on
image interpretation. Throughout the thesis there will be regular references to the effect that shading
has on image matching. Speckle is an important radiometric feature of SAR, and of other coherent imaging
techniques for that matter, when analyzing images.
A consequence of being a coherent imaging technique is the added complexity and image deterioration of
speckle noise. This type of noise is multiplicative in nature and thus harder to filter. It deteriorates
texture information, and filtering it typically deteriorates resolution. There exist metrology methods
utilizing speckle, but in the context of SAR it is mainly unwanted noise.
A common model for speckle noise is, as mentioned, multiplicative; some discussion of the modelling appears
in [67]. Another way of thinking about speckle is as a random walk in the complex plane, where the
backscatterers within a resolution cell are summed coherently. Estimators exist for additive noise; if these
are used, consider that the statistics are different for logarithmically transformed speckle [68].
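The random-walk picture can be reproduced numerically: coherently summing many unit phasors with random phases yields an intensity factor with unit mean that multiplies the underlying reflectivity. A sketch with illustrative values (σ0 = 5, 64 scatterers per cell):

```python
import numpy as np

rng = np.random.default_rng(0)

def speckled_intensity(sigma0, n_scatterers=64):
    """Single-look speckle as a random walk in the complex plane: a coherent
    sum of many unit-amplitude scatterers with uniform random phases."""
    phases = rng.uniform(0.0, 2.0 * np.pi, size=n_scatterers)
    field = np.exp(1j * phases).sum() / np.sqrt(n_scatterers)
    # Multiplicative model: I = sigma0 * n, where n has unit mean.
    return sigma0 * np.abs(field) ** 2

# Averaging many independent looks recovers the underlying reflectivity,
# which is why multilooking trades resolution for radiometric stability.
looks = np.array([speckled_intensity(5.0) for _ in range(20000)])
mean_i = looks.mean()  # close to sigma0 = 5
```

Taking the logarithm of such data makes the noise additive but changes its statistics, in line with the caution from [68] above.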
SAR is considered an all-weather sensor, but this is not technically correct. It depends on the frequency
band and on whether the system is spaceborne or airborne. Airborne SAR is limited at higher frequencies [69],
whereas spaceborne applications are also limited at lower frequencies due to the ionosphere.
One interesting feature appears in Video SAR demonstrations of traffic from Sandia National Laboratories.8 As
vehicles accelerate and decelerate, the strong scattering from the vehicles detaches from, and approaches,
the shadow on the road. This is because SAR measures Doppler frequency, or frequency shift.
Shading, backscatter intensity as a function of surface orientation, is an important effect that is covered
more extensively in the concluding discussion on the limitations of the proposed method of navigation and on
where to shift future efforts. In fact, shading models can be used to estimate terrain using a smoothness
assumption, the variable intensity over a surface, and some assumption on the backscatter model, which is
typically Lambertian, covered in the section on rendering. This method of shape estimation is called
clinometry, or shape-from-shading [70].
The radiometric effects that we will look at more closely are shading and impedance. Complex impedance is
implied by the simulation models having absorption. Shading is an emergent effect due to the incidence angle
of illumination onto a surface. This will be explored in greater detail in section 3.2.
3.2 Rendering
The image simulation part of this thesis deals with rendering radar images as they may appear from some pose
in an airspace. The rendering technique used by RaySAR is ray tracing. This is enough for simulating
geometric features in a scene. Some representation of radiometry is required for this thesis, making use of
GIS data or textures.
Ray tracers follow rays projected from a source as an approximate illumination model. The main addition to
POV-Ray developed for RaySAR is that the distance traveled by the rays forms the image rather than their
direction, which is the same conceptual difference as between SAR and photography, as seen in figure 3. The
purpose of this rendering chapter is to introduce some concepts relevant to this thesis. In particular the
shading equations are an important theoretical preamble.
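The distance-instead-of-direction idea can be illustrated with a toy projection that accumulates unit returns into range bins. This is a sketch of the concept only, not RaySAR's actual algorithm, and all names and values are invented for illustration:

```python
import numpy as np

def range_binned_image(ground_x, height, sensor_xz, n_bins=16):
    """Toy SAR-style image formation: instead of mapping each scatterer to a
    pixel by viewing direction (photography), accumulate its return into a
    bin indexed by the distance travelled to the sensor."""
    sx, sz = sensor_xz
    ranges = np.hypot(ground_x - sx, height - sz)
    max_range = ranges.max() + 1e-9
    image = np.zeros(n_bins)
    for r in ranges:
        image[int(r / max_range * (n_bins - 1))] += 1.0  # unit backscatter
    return image

# A flat strip with one elevated scatterer: the tall scatterer is closer in
# range and folds toward the sensor (layover) in the range-binned "image".
x = np.linspace(0.0, 1000.0, 50)
h = np.zeros_like(x)
h[25] = 200.0
img = range_binned_image(x, h, sensor_xz=(0.0, 3000.0))
```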
The render equation, or shading equation, is the graphics-computing counterpart of the BRDF [71]. The ray
tracer we will be using is built on a modified sum of ideal specular and Lambertian diffuse scattering. The
modification is that the specular reflection occurs in a cone rather than a delta function [50]. These
cone-shaped
8http://www.sandia.gov/radar/video/ Visited 16/03/17
specular highlights depend on a surface roughness factor.
Gouraud shading, Phong shading, and rasterization are examples of different rendering algorithms
approximating the render equation [71]. These have different procedures for treating graphics computing
problems like z-buffering and interpolation. We limit the discussion to reflection models. The main
difference between ray tracing and rasterization approaches, as expressed in [50], is that ray tracing does
well in representing multiple scattering in SAR, whereas rasterization is faster when dealing with purely
diffusive scattering, as is the case in most natural environments.
Ray tracing through refracting and volume-scattering media exists [71], and this functionality is included in
POV-Ray and, by extension, RaySAR [50]. This may be of interest in the future for evaluating atmospheric and
foliage effects, though physical simulators could be more important there, so as not to limit such simulation
work to emergent effects while disregarding their causes.
[71] also covers many radiative transfer processes: diffuse surface and volume scattering, translucency,
multiple scattering due to reflections, and refraction. More advanced radiative transfer models in graphics
rendering, such as physics-based rendering and diffraction shading, may be used in scientific evaluation of
simulated SAR. Application of textures and GIS data to a terrain is also a graphics problem.
This section contains generic rendering information, highlighted to give a broader picture of how much a
radar image simulator can benefit from developments in computer graphics beyond ray tracing elevation data.
Another point is that graphics computing is not an obstacle.
3.2.1 Scattering Models
We will primarily discuss the rendering procedure using a simpler shading equation. Other models are
described to see whether important features would be neglected. We focus on the models and methods necessary
for the analysis and implementation work in this thesis.
The BRDF is defined as

BRDF = Radiance / Irradiance    (5)
Backscatter is R(ϕinc, ϕref) with the incidence and reflection angles equal. For monostatic SAR, the case
that is the focus of this thesis, only the backscatter case of the BRDF is relevant. Bidirectional scattering
would be important when working with bistatic SAR.
The importance of proper reflection models grows with higher-quality DTMs. Microwave backscattering is
strongly dependent on the geometry of the observed terrain [64], so the application of advanced reflection
models is unnecessary as long as the geometric models used are poor [49].
• Simple specular, or mirror, reflection model, also called the ideal specular reflector [50] [64]. The
amplitude reflection coefficient for Fresnel reflection comes from the impedance of the dielectric media at a
plane boundary.
• Simple diffuse scattering model, also called the ideal diffuse reflector [71] [64] [50]. For any angle of
incidence the radiance is isotropic. Irradiance, the incident intensity onto a surface, follows the cosine
law for Lambertian scattering. This cosine law relates the angle of incidence to the emergent radiance for a
given surface reflectivity, which is used in shape-from-shading, and gives the incidence dependence of
Lambertian backscattering.
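The specular-plus-Lambertian sum can be sketched for the monostatic backscatter case. The weights and cone width below are illustrative assumptions, not the parameters used by RaySAR or values from [50]:

```python
import numpy as np

def backscatter(theta_inc, k_d=0.7, k_s=0.3, cone_width=0.1):
    """Monostatic backscatter as a weighted sum of an ideal diffuse
    (Lambertian cosine-law) term and a cone-shaped specular term.
    theta_inc is the incidence angle from the surface normal in radians;
    k_d, k_s, cone_width are illustrative, not thesis parameters."""
    diffuse = k_d * np.cos(theta_inc)
    # For a monostatic sensor the mirror return peaks when the surface
    # normal points at the sensor (theta_inc = 0); the cone relaxes the
    # delta-function mirror into a smooth highlight around that direction.
    specular = k_s * np.exp(-((theta_inc / cone_width) ** 2))
    return diffuse + specular

b0 = backscatter(0.0)         # normal incidence: both terms at maximum, 1.0
b45 = backscatter(np.pi / 4)  # specular term negligible, ~ k_d * cos(45 deg)
```

Varying k_d and k_s against each other reproduces the point made below that amplitude mixing of the two mechanisms represents most radiometric radar-specific features.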
Other electromagnetic scattering models include the small perturbation method, Bragg scattering, and the
Kirchhoff approximation [64]. The sum of diffuse and specular scattering, with varying amplitudes of these
scattering mechanisms, represents most radiometric radar-specific features [50]. These scattering mechanisms
have other names in graphics computing, such as diffraction shaders and microfacet models [71].
3.3 Stereoscopic Radargrammetry
Photogrammetry and CV are not the same but are mathematically equivalent. Radargrammetry is not the type of
algorithm for which a sensor model is developed here, but it is a simpler way of highlighting some of the
complexities of SAR in terms of positioning using image contents beyond just the geolocation of the image.
This theory is intended to make the introduction to epipolar geometry for SAR smoother.
A method of estimating the position of point scatterers using a same-side configuration is presented in [72].
Another effort in stereo SAR is [73], where the novel idea presented is the use of bundle adjustment instead
of GCPs for dense image correspondence. The idea of bundle adjustment is to minimize a distance function,
representing reprojection error, between image points and coordinates in 3D space using a sensor model for
projection [74]. Reprojection error applies only to visible features, which is why a visibility function is
required in the error function. Visibility is different for SAR than for optical sensors, as shadowing and
occlusion cover the same area and layover means features at different positions in 3D space may inhabit the
same pixel.
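The reprojection-error idea can be sketched with a hypothetical zero-Doppler sensor model mapping a 3D point to (slant range, along-track coordinate). The sketch refines only the scatterer position by Gauss-Newton; full bundle adjustment would also refine the poses, and this projection model is an invented stand-in, not the model of [74]:

```python
import numpy as np

def project(point, track_pos):
    """Hypothetical sensor model: slant range plus along-track x coordinate
    for a platform on a straight x-axis track."""
    rel = point - track_pos
    return np.array([np.hypot(rel[1], rel[2]), point[0]])

def residual(point, observations, track_positions):
    """Stacked reprojection errors, the quantity bundle adjustment minimises
    (here over the scatterer position only)."""
    return np.concatenate([project(point, t) - z
                           for t, z in zip(track_positions, observations)])

# Two observations of one scatterer from different track positions.
truth = np.array([120.0, 400.0, 0.0])
tracks = [np.array([0.0, 0.0, 3000.0]), np.array([0.0, -500.0, 3500.0])]
obs = [project(truth, t) for t in tracks]

# Gauss-Newton refinement of a perturbed guess, with a numerical Jacobian.
x = truth + np.array([30.0, -50.0, 20.0])
for _ in range(10):
    r = residual(x, obs, tracks)
    J = np.empty((r.size, 3))
    for j in range(3):
        step = np.zeros(3)
        step[j] = 1e-4
        J[:, j] = (residual(x + step, obs, tracks) - r) / 1e-4
    x = x - np.linalg.lstsq(J, r, rcond=None)[0]
# x converges back toward the true scatterer position
```

A visibility function, as discussed above, would enter by dropping residual rows for shadowed or occluded observations.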
Rectangular building extraction in SAR images, as in [75] and for use in online damage assessment [76], works
on the assumption that parallel lines are preserved by the SAR imaging geometry.
Crossing flight path configurations for stereoscopic radargrammetry have been described as potentially useful
for elevation extraction [77]. Practical work in stereoradargrammetry, for example [78], and simulation work,
such as that presented in [79] investigating optimal trajectory parameters as a function of surface
roughness, has generally been limited to parallel configurations from the same side.
Stereoscopic methods aim to create a sense of depth in overlapping images from different views, in
stereophotogrammetry as well as stereoradargrammetry. This also requires an image registration, but not with
the same requirements as for georeferencing or multisensor registration. In stereoscopy the demand on image
registration is that the remaining parallax be due only to depth; in the case of SAR, to ground range or
elevation. One way of doing this is to select feature points that lie in the same plane and use a homographic
transform. Subimage correlation, or other dense matching done after registration, measures the parallax. A
dense registration would defeat the purpose of stereoscopy.
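The plane-based registration step can be sketched with a direct linear transform (DLT) homography fit. The point coordinates below are invented for illustration:

```python
import numpy as np

def fit_homography(src, dst):
    """Direct linear transform: estimate the 3x3 homography H mapping
    src -> dst from >= 4 point correspondences lying on a common plane."""
    rows = []
    for (x, y), (u, v) in zip(src, dst):
        rows.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        rows.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    # The homography is the right singular vector of the smallest singular value.
    _, _, vt = np.linalg.svd(np.asarray(rows, dtype=float))
    return vt[-1].reshape(3, 3)

def apply_homography(H, pt):
    p = H @ np.array([pt[0], pt[1], 1.0])
    return p[:2] / p[2]

# Register one stereo channel to the other using ground-plane feature points;
# residual offsets of off-plane points after warping are then the X parallax.
src = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
dst = [(10.0, 5.0), (12.0, 5.1), (12.1, 7.2), (10.1, 7.1)]
H = fit_homography(src, dst)
warped = apply_homography(H, src[0])
```

After this warp, dense matching along range measures only the depth-induced parallax, consistent with the requirement stated above.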
3.3.1 Parallax
Here follows a clarification of why image registration is required in stereoscopy. Photography captures
azimuth and elevation angles, not range. Radar images capture range and azimuthal direction, not swath angle.
The effect of parallax in images is a result of information missing from a 3D scene when forming 2D images.
Parallax in photography is an effect of depth, whereas parallax in radar images is an effect of elevation,
or more precisely of position on the circular segment described earlier in section 3.1.
Images must be registered such that no Y parallax interferes with the stereoscopic evaluation. This type of parallax is induced by imaging from a different position, as can be seen in the differences between simulated figures from different positions. This parallax is more a measure of the baseline, the distance between platform positions in the stereo channels, than a measure of the terrain.
In the case of SAR, a different heading and a different altitude will produce a rotation in the image and a scaling in the slant range direction, respectively. The parallax that we want to measure in stereoscopy is called X parallax,
or Absolute Stereoscopic Parallax.
3.3.2 Parallel Heading Configuration
Parallel trajectories have equal heading. Parallax lies along parallel lines, though position varies nonlinearly with height due to the circular Doppler geometry. This is approximated as linear using the parallel ray approximation applied throughout the thesis. Figure 10 shows the circular and linearly approximated sensor models used for parallel heading configuration stereoscopy.
Figure 10: SAR observation geometries from different altitudes. a) Contributions to a pixel in a SAR image come from a circle segment. b) Coregistration of parallel track stereo under the parallel rays assumption gives altitude by image correlation.
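The altitude-by-correlation idea in figure 10b can be illustrated numerically. The sketch below assumes flat earth, same-side geometry, and the first-order relation Δr ≈ −h·cos θ between height and slant-range shift, with θ the incidence angle from vertical; all numbers are illustrative, not from the thesis.

```python
import math

def height_from_parallax(p, inc1, inc2):
    """First-order radargrammetric height from slant-range parallax.

    p: parallax, i.e. the difference in slant-range displacement
    between the two stereo channels (metres); inc1, inc2: incidence
    angles measured from vertical (radians).  Assumes flat earth and
    the parallel ray approximation."""
    return p / (math.cos(inc1) - math.cos(inc2))

# Synthetic same-side geometry: two sensors at different altitudes over
# the same ground track, one scatterer 50 m above the reference plane.
H1, H2, x, h = 5000.0, 8000.0, 10000.0, 50.0
r1, r2 = math.hypot(H1, x), math.hypot(H2, x)   # ranges to the ground point
d1 = math.hypot(H1 - h, x) - r1                 # range shift, channel 1
d2 = math.hypot(H2 - h, x) - r2                 # range shift, channel 2
h_est = height_from_parallax(d2 - d1, math.acos(H1 / r1), math.acos(H2 / r2))
```

The recovered height is within a fraction of a metre of the true 50 m; the residual is the linearization error that the circular geometry correction mentioned below would remove.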
References to trajectory configurations in radargrammetry are limited to parallel headings. [80] presents some range-Doppler models of arbitrary heading, but it is stated in later articles that this approach does not improve dense reconstruction.
3.3.3 Arbitrary Heading Configuration
It is stated in [81] that a detailed feasibility study and rigorous description of epipolar lines or curves, and their use with SAR images, does not yet exist. Even when considering the circular Doppler geometry, the scatterer contributions to a pixel from equal headings lie on parallel lines. Position on these lines behaves nonlinearly, which can be corrected for by considering the curvature of the circles at the range value of the pixels investigated. An arbitrary configuration, varying the heading of either aircraft, is not straightforward in rigorous radargrammetric processing.
The arbitrary heading configuration described in [77] presents a stereoscopic procedure where trajectories intersect, meaning the trajectories must be at the same height, such that the image registration process is a rotation by the angle of intersection. The authors of [82] present a radargrammetry-based method for geolocation of point scatterers with differences in both heading and depression angle.
Figure 11: SAR observation geometries in 3D. a) Circular segment of scatterers contributing to one pixel, seen from two different headings. b) Linearized observation model, from two headings, where scatterers contributing to a pixel lie on a projective line normal to the slant range plane.
The parallax in the SAR images depends only on the heading and depression angles. Under the parallel ray approximation, any image taken from a position along the LOS will be the same image. Heading means the direction of flight in the x-y, or ground, plane, and the depression angle is the angle at which the antenna points relative to the horizon. The sensor model that may be used with traditional CV algorithms is shown in figure 11.
The use of slant range normal projection is also presented in [83]. This type of sensor model has been described before, and simplifications have been used in stereoscopic radargrammetry, though rigorous use of affine projection CV algorithms in SAR is, to the author's knowledge, still unpublished.
3.4 Affine Structure in SAR
Here we build a sensor model, using the theory of epipolar geometry and affine projections together with what we know about SAR observation geometry, that can be used for pose estimation and scene reconstruction with arbitrary headings and without the need for image registration.
3.4.1 Epipolar Geometry
A thorough introduction to epipolar geometry is beyond the scope of this thesis. The intention is to motivate approximations in SAR rather than give a complete treatment of the subject. Only enough to apply certain algorithms and analyze results and errors will be covered, with references to sources that expand on the topics mentioned.
Epipolar geometry is typically introduced using perspective projection models, figure 12. A point in an image, reprojected out into space, forms a line of projection along which any scatterer contributes to the same point in the image. This line, projected onto an image in a different view, produces the epipolar line given a point in the first image. A figure for orthographic projection is included as figure 13; it illustrates the added depth ambiguity due to the parallel rays. [84]
Figure 12: Epipolar geometry for perspective views. Figure from [85]
Figure 13: Epipolar geometry for orthographic views. Figure from [85]
Mathematically, the relationship between corresponding points in epipolar geometry is described by the fundamental matrix. The problem of estimating pose is reduced to estimating the fundamental matrix, for example using the 8-point algorithm. However, the fundamental matrix elements are not directly related to aircraft states, which is why an estimated fundamental matrix is decomposed, or factorized, into pose parameters. [86] [74] [87] Points on a flat plane projected onto an imaging plane from different views are related by a homography transformation.
If correspondences are found between a pixel in one image and a position on the reprojected, or epipolar, line in another image, we have solved for the 3D coordinate, given that we know the poses from which both images were taken. The dual problem is that we know the structure of the scene and can solve for the poses.
Affine epipolar geometry derivations include many variable substitutions that distract from the ideas this section aims to clarify. The calculations leading to the affine fundamental matrix are presented in [85]. Once a fundamental matrix estimate has been made, the matrix must be factorized into pose estimates, for example using algorithms based on [88].
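A minimal sketch of the linear (8-point style) estimate of the affine fundamental matrix follows. The camera matrices and scene points are synthetic, invented for the demonstration; the only claim tested is that the estimated matrix satisfies the affine epipolar constraint on exact data.

```python
import numpy as np

def affine_fundamental(x1, x2):
    """Linear estimate of the affine fundamental matrix.

    x1, x2: (N, 2) matched points, N >= 4.  The affine epipolar
    constraint is a*u2 + b*v2 + c*u1 + d*v1 + e = 0, so the
    coefficients form the null vector of the stacked data matrix."""
    M = np.c_[x2, x1, np.ones(len(x1))]
    _, _, Vt = np.linalg.svd(M)
    a, b, c, d, e = Vt[-1]
    return np.array([[0, 0, a], [0, 0, b], [c, d, e]])

# Synthetic affine cameras and scene (illustrative, not from the thesis):
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, (8, 3))                                  # 3D points
P1 = np.array([[1.0, 0.0, 0.2, 0.1], [0.0, 1.0, 0.1, -0.3]])
P2 = np.array([[0.9, 0.1, 0.4, 0.0], [-0.1, 1.0, 0.2, 0.5]])
Xh = np.c_[X, np.ones(8)]
x1, x2 = Xh @ P1.T, Xh @ P2.T
F = affine_fundamental(x1, x2)
# Every correspondence satisfies x2^T F x1 = 0:
res = [np.r_[u2, 1] @ F @ np.r_[u1, 1] for u1, u2 in zip(x1, x2)]
```

On noisy data the same null-vector fit becomes a total least-squares estimate, which is why more than the minimal number of correspondences is normally used.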
3.4.2 Affine Projective Algebra
The discussion of algebraic projective geometry will be limited to the affine case. Orthographic, weak perspective, and paraperspective cameras can be modeled by the same affine projection model. Weak perspective
and paraperspective cameras are orthographic approximations of the narrow FOV of perspective cameras, meaning that objects that are small or far away are projected into a perspective camera along near-parallel rays. [85]
Decomposition into intrinsic, projection, and extrinsic matrices for affine projection is

PAff = C P‖ G (6)

where, with the elements written out,

PAff = [δ 0 u0; 0 δ v0; 0 0 1] [1 0 0 0; 0 1 0 0; 0 0 0 1] [R t; 0 1] (7)
The intrinsic parameters in C are camera specific. For perspective cameras the focal length is also taken into account, but for parallel rays it is not considered. The extrinsic parameters in G are the pose parameters: rotation and translation. The image resolution δ does not necessarily have to be the same for the image coordinates u and v.
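Equation (7) can be composed directly. The sketch below, with illustrative numbers, shows the defining property of the affine camera: the depth coordinate after the rigid transform never reaches the image.

```python
import numpy as np

def affine_camera(delta, u0, v0, R, t):
    """Compose P_Aff = C @ P_parallel @ G as in equation (7).

    delta: image resolution, (u0, v0): image offsets, R: 3x3 rotation,
    t: translation.  Focal length plays no role for parallel rays."""
    C = np.array([[delta, 0, u0], [0, delta, v0], [0, 0, 1.0]])
    P_par = np.array([[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 0, 1.0]])  # drops z
    G = np.eye(4)
    G[:3, :3], G[:3, 3] = R, t
    return C @ P_par @ G

# With identity rotation, only x and y (shifted and scaled) reach the image:
P = affine_camera(2.0, 64.0, 64.0, np.eye(3), np.array([1.0, -2.0, 0.0]))
u = P @ np.array([3.0, 5.0, 99.0, 1.0])   # the depth value 99 is discarded
```

Here the image point is (2·(3+1)+64, 2·(5−2)+64) = (72, 70), regardless of the third world coordinate.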
With this basis for affine epipolar geometry, a linearized sensor model for SAR may reuse algorithms developed for the affine cases in photography.
3.4.3 SAR Sensor Model
Using the concept for a SAR sensor model seen in figure 11 and the preceding treatment of epipolar geometry, the conclusion one may draw is that the affine fundamental matrix can be applied as a constraint in positioning SAR sensors. A camera decomposition for SAR would be defined as
PSAR = CSARP‖G⊥. (8)
The matrices CSAR and P‖ are conceptually straightforward: the coordinate space in SAR images depends on the image size and the resolution in the azimuth and slant range directions, and the projection model is a consequence of the parallel ray approximation. v0 depends on the number of pulses, u0 depends on the Doppler bandwidth (footprint). Spot mode differs in v0, and squint is not considered. This is covered in section 3.1.
CSAR = [δr 0 u0; 0 δaz v0; 0 0 1] (9)
How radar image content appears analogous to cameras is illustrated in figure 14. Objects appear to fold over in the direction of the sensor. We are looking at objects illuminated from the side, and occlusion is the same as shadowing in SAR. Another way of interpreting this point of view is that folded-over objects appear transparent. This will not affect the CV algorithms, as they depend on scatterer position in space given a coherent projection model. The matrix G⊥ describes the pose of the virtual orthographic camera in the SAR sensor model.
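One possible concrete realization of this virtual orthographic camera, for zero heading and zero squint, is sketched below. The sign conventions and the axis assignment are assumptions made for illustration, not the thesis implementation.

```python
import math

def sar_pixel(point, depression, d_az=1.0, d_r=1.0, u0=0.0, v0=0.0):
    """Parallel-ray SAR projection for zero heading and zero squint.

    A scatterer (x, y, z) maps to azimuth = x (along track) and to
    slant range = y*cos(g) - z*sin(g) for depression angle g; d_az and
    d_r are the resolutions and (u0, v0) the offsets of C_SAR.  This
    is an illustrative realization of the virtual orthographic camera;
    axis and sign conventions are assumptions."""
    x, y, z = point
    g = depression
    az = d_az * x + v0
    sr = d_r * (y * math.cos(g) - z * math.sin(g)) + u0
    return sr, az

sr, az = sar_pixel((10.0, 100.0, 0.0), math.radians(30))
```

With the minus sign on z, a raised scatterer maps to a shorter slant range, which reproduces layover toward the sensor.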
Figure 14: Nominal Trajectory, Slanted Range Plane, and Virtual Orthographic Camera. LOS as dotted
lines for both Trajectory and Virtual Camera.
This is an affine projection model that approximates SAR and is consistent with previous efforts in CV, which we can use for reference or build on.
4 Model Preparation
A point cloud height map will be used to generate surfaces for rendering orthographic projections and simulated SAR images. Vector data will be used in a constrained surface generation process of the point cloud, for use with different reflection models in rendering. Polygon vector data represent different areas of vegetation or other environmental scene classifications. Roads also contribute a salient scattering mechanism and are represented by linear vector data. The SAR simulator was developed during a PhD thesis at DLR. [50]
As is demonstrated in [89], GIS can be utilized in simulating different objects with different radiometric properties. We could use the laser data to input building models into our DEM. Furthermore, we could use GIS data, vector data of classifications and the orthographic photography, to input 3D models of trees in areas without a dense forest canopy. Those researchers apply GIS data differently, using another simulator tool for rendering.
In a cooperative effort with other actors working in the field of surveying and related areas, Lantmateriet, among others, offers geodata free of charge for use in research, education, and cultural activities. The geodata acquired for this work is under a license applicable to student theses.9 Information about both the data acquisition process and the data representation can be found in the course compendium [90].
4.1 Lantmateriet Dataset
The geodata that is used in this thesis for preparing a simulation model are
• 2x2 m resolution elevation maps
• Polygon data of vegetation classification
• Road networks from open database
The elevation map is the terrain over which we want to simulate an image. The polygon data and road network are vector data to be used in classifying different parts of the terrain into radar scatterers. Figure 15 presents the entire area over which we have ordered terrain and polygon data.
9 https://www.lantmateriet.se/sv/Om-Lantmateriet/Samverkan-med-andra/Forskning-utbildning-och-kulturverksamhet/
Figure 15: Full area for which we have datasets of vegetation classes, road network, and, seen in this figure, a) ortofoto and b) heightmap.
Orthophoto, orthorectified aerial photography, can be used to check, or correct, the application of vector data to the terrain. Figure 16 presents the height map and ortofoto over the area most used in the later simulation experiments. Note the three bodies of water to the left and the road to the right in figure 16a.
Figure 16: a) Ortofoto with "Öppet Vatten" (open water) polygon vector data and b) height map over the same area.
The vector data categories are quite extensive, so the curious reader is referred to the Lantmateriet documentation for GIS users. Here we use approximations to the data set where water is strongly specular, forests are strongly diffuse, and roads are specular. Besides terrain classification polygons, linear vector data are used to represent networks of roads. The roads seen in figure 15a are presented in figure 17. The bounding box is the geographic boundary of figure 15.
Figure 17: Road network in the full set of GIS data: road lines of all types ("Väg Linje"), other roads and hiking trails ("Övrig Väg"), and the geodata bounding box.
Roads are linear vector data that require additional preprocessing to represent a surface before the same approach as for the polygon data can be used. Roads all have different sizes and surface characteristics, but here we only present an example of what can be accomplished using the model preparation procedures in this thesis.
4.2 Surface Generation
There are a multitude of ways to prepare a model for simulation, and some can be found in earlier work using RaySAR. Further tools for general-purpose rendering with POV-Ray can be found on the website.10 This thesis focuses on importing point clouds as surfaces using CDT. [91] RaySAR utilizes Delaunay triangulation, [92], in preparing point clouds for rendering. To prepare different sections of the point clouds for rendering as separate objects with varying reflection models, the existing RaySAR MATLAB scripts are modified to use constrained Delaunay triangulation. In short, it works by using polygons, the constraints, as triangle edges, and then generates a triangulation that is as similar to a classical Delaunay triangulation as possible.
We can also add the data points forming the polygon and line vector data to the point cloud. The significance of this becomes clear when considering multiresolution DSMs. For example, terrain maps can have much too low resolution to resolve road networks well enough to use these features in image matching.
4.2.1 Polygon Vectordata
Polygons add their x and y data to the height map point cloud. The z-values, or height data, are interpolated from neighbouring points in the height map.
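The interpolation step can be sketched as a simple inverse-distance weighting over the nearest height map points. The thesis scripts are MATLAB modifications of RaySAR; this stand-alone sketch is only illustrative.

```python
import math

def interpolate_z(x, y, cloud, k=4):
    """Inverse-distance-weighted height for a new (x, y) vertex.

    cloud: list of (x, y, z) height map points.  A simple stand-in for
    the neighbour interpolation described above, not the thesis code."""
    dists = sorted((math.hypot(px - x, py - y), pz) for px, py, pz in cloud)
    nearest = dists[:k]
    if nearest[0][0] == 0.0:            # exact grid hit
        return nearest[0][1]
    w = [1.0 / d for d, _ in nearest]
    return sum(wi * z for wi, (_, z) in zip(w, nearest)) / sum(w)

grid = [(i, j, 10.0) for i in range(4) for j in range(4)]  # flat 10 m terrain
z = interpolate_z(1.5, 2.5, grid)
```

On a flat patch the weighted average returns the common height; on sloped terrain it blends the k nearest grid heights.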
10 Documentation and other resources can be found at http://www.povray.org
Figure 18: a) Triangulated height map data in 2D with polygon vector data overlaid on top, and b) CDT using the same sets of data.
Figure 18 shows the regularly spaced grid on the left change to incorporate the polygon as constraints on the triangles. The tool also determines which triangles are inside and which are outside the constraints, so that a selection of the triangles to include in the desired surface object can be made.
4.2.2 Line Vectordata
Roads and the like are not really lines but are defined as such in the databases. To use this information in our simulation we need to derive a surface segment defined by the line vector data. The method of creating buffer zones in GIS is similar to vector offsetting, or polygon extension as it is named in graphical computing. For a polygon we can create buffers of different sizes inside or outside the polygon. In GIS, buffers around polygons, lines, and points are used as an aid in decisions from spatial computation. Lines have the same buffer on either side, and this buffer size is something we can vary with the classification of roads. Road classification will also be useful in deriving the radiometric parameters for shading.
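Buffering a single road segment reduces, in the simplest case, to offsetting its endpoints along the segment normal. The sketch below ignores end caps and joins between segments, which a real GIS buffer operation would handle.

```python
import math

def buffer_segment(p0, p1, width):
    """Rectangle polygon for a road segment, `width` metres to each side.

    A minimal stand-in for GIS line buffering: offset both endpoints
    along the unit normal of the segment (no caps, no joins)."""
    (x0, y0), (x1, y1) = p0, p1
    dx, dy = x1 - x0, y1 - y0
    L = math.hypot(dx, dy)
    nx, ny = -dy / L, dx / L            # unit normal of the segment
    return [(x0 + nx * width, y0 + ny * width),
            (x1 + nx * width, y1 + ny * width),
            (x1 - nx * width, y1 - ny * width),
            (x0 - nx * width, y0 - ny * width)]

quad = buffer_segment((0.0, 0.0), (10.0, 0.0), 3.0)
```

The resulting quadrilateral can then be fed to the constrained triangulation exactly like a classification polygon.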
An example of constrained triangulation using buffered linear vector data is presented in figure 19.
Figure 19: a) Triangulated height data in 2D with the road vector (dashed line) and buffer zone (solid line), and b) CDT using height map and road data, where red is the estimated road surface and blue is the environment.
Buffering GIS vector data can also be applied to polygons, creating buffer zones inside or outside the polygon for different GIS arithmetic operations in planning.
4.3 Varying Reflectivity Model for Objects
The coefficients for the reflectivity parameters used in earlier case studies with RaySAR were good enough for those studies, and we make the same argument here. [50] This is early in development and may later be supplemented by quantitative analysis specific to the simulator that will be used in a navigation system, in conjunction with more advanced shading equations and radiometric correction of both real and simulated images.
Here a simple case is presented in which specular and diffuse classes of scatterers are applied to two cylinders, figure 20, with mostly specular and mostly diffuse reflection models. Another purpose of presenting these two images together is to demonstrate how different the SAR and camera imaging geometries are.
Figure 20: a) Simulated orthographic projection and b) simulated SAR image (all reflections, dB; azimuth vs. slant range pixels), using two reflection models, diffuse and specular, for the cylinders.
The bright reflections on the specular cylinder in figure 20a are reflections from the ground; otherwise it would appear completely dark. The rays making up these reflections travel equal ranges, this being a dihedral scatterer, which together with the modified ray tracer highlights described in section 3.2 explains why it appears as a curve in the simulated SAR image. We see no layover of the specular cylinder in figure 20b because there is no backscatter from the cylinder, only the dihedral reflection.
4.4 Results of Simulating SAR Images
The imaging geometry is presented with the ortofoto mapped as a texture on the terrain. This pointing configuration is only one possibility out of an entire envelope limited by terrain roughness and the flight envelope. Figure 22 presents simulation results from the configuration in figure 21, using a single diffuse scattering model for the entire terrain and a purely specular reflection model for the bodies of water.
Figure 21: a) DEM with texture, and the position and LOS of the aircraft, representative of later simulations; rotations and changes in altitude are relative to this configuration (texture and height map as in figure 16). b) Close-up of the figure in a).
Figure 22: Simulated SAR images of the test site presented in figure 16a, a) without and b) with the specular reflection model for surfaces classified as water using vector data.
For the purposes of evaluating positioning algorithms we need to be able to simulate the image from a different view. Figure 23 presents a view that will be used to demonstrate registration and stereoscopic application in later chapters. The black border around the terrain is due to using a small sample of the entire terrain.
Figure 23: 60◦ heading compared to the simulated images above.
This concludes the simulation model preparation work. Simulated images used in this thesis apply the same procedures as presented for the geodata in Lappland, or use simpler geometries such as cylinders or boxes to simplify experimentation with image matching algorithms.
5 Image Utilization
Image registration removes differences between images. It is challenging to remove the differences that are undesired while keeping those required for different processes. [93] The mapping functions estimated in registration can utilize information about the sensors or the expected scene. Planar homography is one example of a problem where the registration map can be used to estimate the view.
Survey article [94] covers different methods of feature point detection. The authors discuss the robustness and speed of different local descriptors, i.e., means of describing local image regions. The method used to evaluate the local descriptors for image matching is presented in [95]. The survey of methods in [94], as well as other papers in the field of image matching in remote sensing, covers methods developed for optical images. Using the same methods and descriptors requires additional work if applied to SAR, mainly due to speckle noise. For example, a SIFT algorithm dedicated to SAR images is presented in [96], where the main modification of SIFT is making it robust against speckle noise.
Many different descriptors are described in the scientific literature, in many cases with open source availability. These have been developed for photography and not for SAR images. Direct implementation has been demonstrated not to work as expected for other imaging techniques, and additional SAR-specific alterations are required.
5.1 Stereoscopy
The purpose of registration in stereoscopy is to minimize changes between the two stereo channels due to camera pose and to only investigate the absolute stereo parallax, to estimate values of a dimension lost in the imaging process. The MATLAB functions cpselect() and fitgeotrans() are used. Figure 22 is used as the fixed image in the coregistration process with figure 23, with the result presented in figure 24b. This was done by manual selection of control points and finding the geometric transformation that maps between the fixed and moving control point pairs.
Figure 24: a) Fixed image and b) registered image at 60◦. The image in figure 23 is being registered.
Figure 25: Color-coded composite with green being the fixed image and purple the registered, or moving, image.
Automated registration, or other image analysis, requires a good selection of feature descriptor, geometric transformation, and matching function, as well as consideration of the correspondence problem. This demonstration further exemplifies the problem that needs to be solved in radiometric correction of SAR images. Changes in intensity in these images are more a measure of surface orientation than of any particular feature, except for the edges defined by the GIS vector data.
5.2 Affine Epipolar Analysis
This section presents fundamental matrix estimation in more practical terms. It is limited to a scene with simple building models. 2-view orthography using SAR and an orthographic camera, placed at the same position as the virtual orthographic projector for the SAR image, will be compared, and the procedure presented.
The authors of [87] have made some MATLAB code freely available.11 They also link to a collaborator's page, from which the code used during this thesis was found.12 2-view orthography is presented in figure 26 with an orthographic camera projection model. We have the same scene from two different views and a figure demonstrating the CP flow used in fundamental matrix estimation. The KvD elements are derived from the estimated affine fundamental matrix. [85]
11 http://www.robots.ox.ac.uk/~vgg/hzbook/code/
12 http://www.peterkovesi.com/matlabfns/
Figure 26: a) and b) Orthographic camera from two views, where c) is the CP flow and d) the derived KvD elements (centre line, cyclorotation, axis of rotation).
As further aid to conceptualization in the following section, orthographic and SAR images from the same position, together with an orthographic image at G⊥, are set side by side in figure 27. Remember that the image intensity depends on surface orientation: surfaces that are bright are angled towards the sensor.
Figure 27: Simulated images of the scene. a) Orthographic photo, b) simulated SAR image, and c) orthography at G⊥.
Image a) in figure 27 appears brighter due to the angle of incidence. Simulated SAR images are rendered
from the same views as figure 26. These frames are presented in figure 28.13
Figure 28: a) and b) SAR images from the same views as figure 26.
This thesis argues that the same procedure as presented in figure 26 applies to SAR images. The CP flow from one SAR frame to the other and the derived flow are shown in figure 29. This illustrates the proposal of using affine projective algebra CV algorithms for SAR.
13 A sequence of 6 frames between these poses is included in appendix A
Figure 29: CPs marking points in the fixed image and in the image with heading changed by 30 degrees.
Some sources cited in this work do apply affine geometry in reconstructions of various sorts. However, as stated in [97], a full investigation of bundle adjustment for orthographic cameras is still considered open.
6 Conclusion
Some highlights of what differentiates a method of positioning based on the sensor model presented in this
thesis from other methods are
• Georeferencing is not required
• Images in sequence have radiometric similarity
• Translation and rotation are separably determined
• Velocity is an intrinsic parameter
• Strong geometric distortions are useful features
Some of these benefits come directly from using CV algorithms. The velocity aspect mentioned above is unique to SAR.

Regarding the simulation work, it is shown that GIS data can be leveraged to increase radar-specific features in simulated SAR.
6.1 Discussion
In adaptive noise cancellation and other filtering methods the error has a direction, these being based on gradient descent. [98] Image matching error does not have a direction. Geolocation of an image carries no information on whether an update depends on translation or rotation; decomposition of the fundamental matrix, in contrast to geolocation and range-Doppler equations, does. Image matching error carries no such information and does not help in developing a phenomenological description of the error. This is why function minimization without gradients, [99], as a function of extrinsic and intrinsic parameters, is proposed for positioning using geolocated imagery.
Transform estimation does map from image to image but needs decomposition into meaningful information. Epipolar resampling, applicable to binocular stereo rigs as a calibration procedure, reduces reconstruction to a 1D search. Reconstruction and pose estimation are dual problems where the optimal match, using knowledge of terrain or object, can be related to pose.
The feature extraction process ought to ignore features that depend heavily on viewing angle. This includes geometric features that are deformed by projections and transformations, and radiometric features that depend on the effects of shading and illumination. SAR image registration, already difficult, is made even more difficult when discussing registration from multiple views. [100] Features in SAR are, as explored in this thesis, dependent on geometric and radiometric effects. The method presented as 2-view orthography, as applied to SAR images, shows that it is dependent on strong geometric distortion. Radiometric correction may only be practical for smooth surfaces where a DTM and the Lambertian scattering assumption can be used; for structure finer than is represented by the DTM this may not be useful. [101] It is also relevant to mention here that image matching error can depend on changes in the scene.
It is brought up in [102] that CV and photogrammetry are different approaches though mathematically identical. It could be argued that the same holds for radargrammetry [80] and epipolar descriptions of SAR. This means that a rigorous sensor model based on the range-Doppler equations may reduce to the affine epipolar constraint. This requires further theoretical analysis of the state estimation models.
The review article [103] presents many different techniques for reconstruction using radar images. The stereo model presented in this thesis only discusses features in 3D space; features in SAR also depend on radiometric effects. Even though Lambertian scattering is a model that has historically been good enough for
clinometry, we can use modern computing to utilize more advanced models for a radiometric correction step in geocoding SAR images for the registration and matching process in reconstruction or pose estimation. [104]
Any isotropic backscattering material contributes to image intensity with a dependence on the angle of incidence of the radar signal. This angle of incidence, or equivalently the orientation of the surface normal, defines a cone in which the surface can be oriented, and with smoothness and other assumptions one can estimate topography from it. One way of disambiguating the orientation of a surface is by polarimetric measurements, also mentioned in [103].
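A sketch of what such a radiometric correction step could look like, assuming a cosine-squared Lambertian backscatter model (the exponent is model-dependent, and the function name and interface are illustrative only):

```python
import numpy as np

def lambertian_correction(sigma0, local_incidence_deg):
    """Sketch: divide out the cos^2 dependence of Lambertian backscatter
    on the local incidence angle (one common convention; the exponent is
    model-dependent and an assumption here)."""
    sigma0 = np.asarray(sigma0, dtype=float)
    theta = np.deg2rad(np.asarray(local_incidence_deg, dtype=float))
    cos2 = np.cos(theta) ** 2
    corrected = np.full_like(sigma0, np.nan)  # shadow/layover facets stay NaN
    valid = cos2 > 1e-6                       # facet visible to the radar
    corrected[valid] = sigma0[valid] / cos2[valid]
    return corrected
```

Applying this per facet using a DTM-derived local incidence angle would flatten the shading so that matching operates on residual, material-dependent contrast.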
The review ends on a note discussing fusion of information from different image acquisition modes using the same sensor or platform. This is all the more relevant today with the development and deployment of AESA antennas.
The evaluation of the proposed epipolar model for SAR in [105] uses a distribution of isotropic scatterers in 3D space, so none of the radiometric distortions that uneven natural terrain necessarily induces are present.
As discussed, there exist measures to compensate for deviations from a nominal straight trajectory in SAR imaging. Another trajectory in use is the circular one, for a circular aperture, from which deviations are compensated in [106]. Continuous SAR video using contiguous trajectories segmented into circular and linear arcs is entirely possible in the near future. When a frame in such a video changes acquisition mode, the sensor model presented here, with modifications for spotlight and squint, can account for it. There is some discussion in [105] on how to address varied trajectories and observation modes; the same can be done for the orthographic model presented here, as mentioned in section 3.4.
6.2 Answers to Research Questions
The work presented in the thesis compiles the premises from which the following conclusions have been
drawn.
• 1) Is the method of using 3D terrain maps for SAR image reference good enough for use in positioning?
For low-relief areas, simulated images have no salient features unless there is distinctive radiometric change beyond shading. The method works well for urban environments; in mountainous areas, features drift due to shading.
• 2) Can texture, based on optical information, be used to generate reference images with more information than only elevation maps?
Texture from optical images cannot be used directly, as the contributing factors to optical and SAR images are very different. The vector data used in this thesis to define different types of reflectors in a terrain are, however, based on measurements and classifications in the optical spectrum, from which radar-specific behaviour can be defined. In short, representations of radar-specific behaviour can be inferred from other measurements.
• 3) How to increase the amount of radar specific information in simulated reference images?
Combinations of diffuse and specular reflection contributions can approximate most behaviour in real SAR images. The realism attainable with the proposed solution is good enough to obtain radar-specific features in geometrically non-salient environments for the purposes of georeferencing and the other applications discussed.
• 4) What information in SAR images is used in registration and quantitative analysis?
Again we must refer to geometric and radiometric features, but now in the context of imaging geometry. If we interpret the SAR image as viewed normal to the slant range plane, we see, in the case of layover, a semi-transparent scene with illumination from the position of the trajectory.
• 5) Is it worth developing radar-specific image analysis methods and algorithms?
Because optical and radar images are so different, it is highly relevant to develop radar-specific models and methods. Decades of SAR development history show how difficult it is to develop analysis methods for radar images, and the main difficulty of implementing SAR-aided navigation is reported to be the lack of SAR-specific tools. SAR image analysis algorithms are also applicable to other coherent range-based imaging.
• 6) Is it possible to orient an image by direct matching from different views?
Another way of putting this question is: can we utilize the matching error directly? A solution for arbitrary stereo SAR configurations is presented, which can be used to formulate an epipolar constraint similar to other approaches in SFM, VO, and SLAM.
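To make the epipolar constraint formulation concrete, the linear estimation of the affine fundamental matrix from point correspondences can be sketched as follows (a minimal NumPy version of the linear step of the algorithm in [87]; the function name and interface are illustrative):

```python
import numpy as np

def affine_fundamental_matrix(pts1, pts2):
    """Linear estimate of the affine fundamental matrix F_A from matched
    points. pts1, pts2: (N, 2) arrays of image coordinates, N >= 4, with
    the underlying scene points not all coplanar."""
    # Each correspondence satisfies a*x' + b*y' + c*x + d*y + e = 0,
    # so stack rows (x'_i, y'_i, x_i, y_i) and centre them.
    X = np.hstack([pts2, pts1]).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean)
    n = Vt[-1]                 # (a, b, c, d): null direction of centred data
    e = -mean @ n              # offset recovered from the centroid
    a, b, c, d = n
    return np.array([[0.0, 0.0, a],
                     [0.0, 0.0, b],
                     [c,   d,   e]])
```

With noisy measurements the same residual a x' + b y' + c x + d y + e is exactly the matching error that could be fed directly into a pose estimator.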
6.3 Future Work
This section covers near-future implementation and validation of reconstruction or positioning using affine projections in SAR imaging geometry. Multistatic SAR is an interesting expansion of this work into novel SAR research, and PolSAR provides RGB rather than greyscale data, which may increase reliability when using SAR for positioning.
Another area that is interesting to explore, or at least to keep in mind, is radiometric correction of shading as a method of dense matching. Understanding video algorithms, linear and circular trajectories, as well as changing image acquisition mode between scanning and spotlight, enables using a continuous stream of images with segmented linear and circular trajectories. Fundamental matrix and homography estimation in transform domains, without the use of correspondences, is also an interesting future development.
6.3.1 Implementation of CV in SAR
The presentation of practical implementation has been limited to 3D scenes. Planar homography would be very interesting to implement for SAR, as many scenes are functionally flat unless one is surveying mountainous areas, which carry additional complexity regarding shading, or urban areas. Functionally flat in the sense that fundamental matrix estimation requires that the scatterers not lie in a single plane, since this introduces ambiguity into the estimation; in SAR terms, we need layover to make use of fundamental-matrix-based estimation. Homography relates planar scene orientation to view, and this is well understood for perspective cameras. [86] For the orthographic case it is not as well developed, and very few publications consider affine homography; the paper [107] may be a starting point for future efforts.
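To make the affine case concrete: for two affine views of a planar scene, the homography reduces to a 2D affine transform with six parameters, which can be fitted by linear least squares from three or more correspondences. A minimal sketch (interface and names are illustrative):

```python
import numpy as np

def fit_affine_homography(pts1, pts2):
    """Least-squares fit of a 2D affine map x' = A x + t relating two
    affine views of a planar scene. pts1, pts2: (N, 2) arrays, N >= 3.
    Returns the 3x3 affine homography [[A, t], [0, 0, 1]]."""
    N = len(pts1)
    M = np.hstack([pts1, np.ones((N, 1))]).astype(float)  # design matrix
    # Solve M @ P = pts2 jointly for both output coordinates.
    params, *_ = np.linalg.lstsq(M, np.asarray(pts2, float), rcond=None)
    H = np.eye(3)
    H[:2, :] = params.T   # rows: [a11 a12 tx], [a21 a22 ty]
    return H
```

Unlike the fundamental matrix, this map is non-degenerate precisely for planar scenes, making the two estimators complementary.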
6.4 Multistatic SAR
Given the introductory background, it is also of interest to discuss different modes of cooperative and non-cooperative navigation systems based on SAR. [108] [109]
• Non-cooperative Mode
– Simulated reference image for real-time positioning.
– Using a sequence of images as input to a virtual stereo configuration.
∗ Video SAR in non-linear trajectories.
∗ A sequence or disordered set of images with larger temporal separation.
• Cooperative Mode
– Multipass over the same area.
∗ Similar to the non-cooperative multi-pass or sequence case, but the timing can be shorter and the stereo configuration can be optimized.
– Simultaneous Mode
∗ Requires that at most one SAR sensor has a transmitter element.
Multistatic SAR would be either a passive mode using an illuminator of opportunity or an active system with a dedicated transmitter. A mathematical sensor model similar to the one presented in this thesis may be used, but additional effort is required to describe the imaging geometry: the projection is skewed because the isorange lines are normal to the bisector of the line of illumination and the line of reception, as shown in figure 30.
Figure 30: The isorange lines will be skewed relative to monostatic imaging. They will be oriented perpendicular to the bisector of the LOS of receiver and transmitter.
A skew parameter is sometimes used in traditional CV as a small correction. [87] For multistatic SAR, the position of the transmitter relative to the receiver is intrinsic to the sensor model. The estimated pose will still be perpendicular to the slant range plane, though this imaging plane will have a different orientation: rather than lying in the plane of the trajectory and the antenna pointing, it will lie in the plane spanned by the pointing of receiver and transmitter.
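The skewed isorange direction can be computed directly: the isorange surface through a scene point is an ellipsoid with transmitter and receiver as foci, so its normal is the gradient of the bistatic range, i.e. the bisector of the two lines of sight. A minimal sketch (names are illustrative):

```python
import numpy as np

def isorange_normal(p_tx, p_rx, p_scene):
    """Unit normal of the local isorange surface at a scene point: the
    gradient of the bistatic range |p - p_tx| + |p - p_rx|, which is the
    bisector of the transmitter and receiver lines of sight."""
    u_tx = (p_scene - p_tx) / np.linalg.norm(p_scene - p_tx)
    u_rx = (p_scene - p_rx) / np.linalg.norm(p_scene - p_rx)
    n = u_tx + u_rx            # bisector direction (unnormalized)
    return n / np.linalg.norm(n)
```

In the monostatic limit (p_tx == p_rx) this reduces to the usual line of sight, recovering the familiar geometry.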
6.5 Polarimetric Decomposition
An extension of the model preparation work is the inclusion of polarimetry. The SAR simulator used enables RGB textures for terrain, which simplifies some aspects of model preparation for future development. Rather than limiting the discussion to greyscale textures, we may use RGB to represent other radar scattering phenomena.
[110] presents a three-component decomposition of PolSAR transmit and receive data in terms of primary scattering mechanisms:
• HH + VV: surface scattering (ocean)
• 2HV: volume scattering (foliage)
• HH − VV: double-bounce scattering (tree stem)
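The colour mapping implied by such channel combinations is the common Pauli RGB convention; a minimal sketch (the channel-to-colour assignment follows common practice, and the normalization choice is an assumption):

```python
import numpy as np

def pauli_rgb(S_hh, S_hv, S_vv):
    """Map quad-pol scattering-matrix elements to a Pauli RGB image:
    R = |HH - VV| (double bounce), G = 2|HV| (volume),
    B = |HH + VV| (surface), normalized for display."""
    r = np.abs(S_hh - S_vv)
    g = 2.0 * np.abs(S_hv)
    b = np.abs(S_hh + S_vv)
    rgb = np.stack([r, g, b], axis=-1)
    peak = rgb.max()
    return rgb / peak if peak > 0 else rgb
```

A pure surface scatterer (S_hh = S_vv, S_hv = 0) maps to blue and a pure dihedral (S_hh = -S_vv) to red, matching the mechanism list above.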
The aforementioned decomposition is based on predominant scattering mechanisms, whereas real scenes will exhibit mixtures that can be classified using GIS data. We can also leverage GIS data in applying a texture map according to PolSAR classifications. [111] This may enable the use of superpixel classification algorithms, such as the approach presented in [17], for SAR-based navigation. Because the simulated PolSAR data comes from applying a texture, the simulation ought to be limited to purely diffuse scattering.
Other papers on PolSAR aim to simulate how polarized signals scatter rather than applying an RGB texture directly to an elevation model. One reason the texture approach may not always be applicable is that PolSAR data depend on the orientation of the surface relative to the LOS. [112] Simulation of PolSAR using GIS data as described here requires only a diffuse scattering model: it simulates the emergent decomposition into scattering mechanisms directly, rather than simulating ray behaviour in polarization models followed by decomposition into scattering mechanisms, as described in for example [48], [45], or [44].
References
[1] Alberto Moreira, Pau Prats-Iraola, Marwan Younis, Gerhard Krieger, Irena Hajnsek, and Konstantinos P
Papathanassiou. A tutorial on synthetic aperture radar. IEEE Geoscience and remote sensing magazine,
1(1):6–43, 2013.
[2] Zoran Sjanic and Fredrik Gustafsson. Simultaneous navigation and sar auto-focusing. In Information Fusion
(FUSION), 2010 13th Conference on, pages 1–7. IEEE, 2010.
[3] Zoran Sjanic and Fredrik Gustafsson. Navigation and sar auto-focusing based on the phase gradient approach.
In Information Fusion (FUSION), 2011 Proceedings of the 14th International Conference on, pages 1–8. IEEE,
2011.
[4] Zoran Sjanic and Fredrik Gustafsson. Fusion of information from sar and optical map images for aided nav-
igation. In Information Fusion (FUSION), 2012 15th International Conference on, pages 1705–1711. IEEE,
2012.
[5] Zoran Sjanic and Fredrik Gustafsson. Navigation and sar focusing with map aiding. IEEE Transactions on
Aerospace and Electronic Systems, 51(3):1652–1663, 2015.
[6] Zoran Sjanic and Fredrik Gustafsson. Simultaneous navigation and synthetic aperture radar focusing. IEEE
Transactions on Aerospace and Electronic Systems, 51(2):1253–1266, 2015.
[7] Tomas Toss, Patrik Dammert, Zoran Sjanic, and Fredrik Gustafsson. Navigation with sar and 3d-map aiding.
In Information Fusion (Fusion), 2015 18th International Conference on, pages 1505–1510. IEEE, 2015.
[8] John Canny. A computational approach to edge detection. IEEE Transactions on pattern analysis and machine
intelligence, (6):679–698, 1986.
[9] Gunilla Borgefors. Hierarchical chamfer matching: A parametric edge matching algorithm. IEEE Transactions
on pattern analysis and machine intelligence, 10(6):849–865, 1988.
[10] Todd E Humphreys. Detection strategy for cryptographic gnss anti-spoofing. IEEE Transactions on Aerospace
and Electronic Systems, 49(2):1073–1090, 2013.
[11] George T Schmidt. Navigation sensors and systems in gnss degraded and denied environments. Chinese Journal
of Aeronautics, 28(1):1–10, 2015.
[12] Tim Bailey and Hugh Durrant-Whyte. Simultaneous localization and mapping (slam): Part ii. IEEE Robotics
& Automation Magazine, 13(3):108–117, 2006.
[13] Davide Scaramuzza and Friedrich Fraundorfer. Visual odometry part i: The first 30 years and fundamentals.
IEEE robotics & automation magazine, 18(4):80–92, 2011.
[14] Hugh Durrant-Whyte and Tim Bailey. Simultaneous localization and mapping: part i. IEEE Robotics Au-
tomation Magazine, 13(2):99 – 110, 2006.
[15] Sina Sharif Mansouri, Christoforos Kanellakis, David Wuthier, Emil Fresk, and George Nikolakopoulos. Co-
operative aerial coverage path planning for visual inspection of complex infrastructures. arXiv:1611.05196
[cs.RO], 2016.
[16] Friedrich Fraundorfer and Davide Scaramuzza. Visual odometry part ii: Matching, robustness, optimization,
and applications. IEEE Robotics & Automation Magazine, 19(2):78–90, 2012.
[17] Fredrik Lindsten, Jonas Callmer, Henrik Ohlsson, David Tornqvist, Thomas B Schon, and Fredrik Gustafsson.
Geo-referencing for uav navigation using environmental classification. In Robotics and Automation (ICRA),
2010 IEEE International Conference on, pages 1420–1425. IEEE, 2010.
[18] Jonas Callmer, David Tornqvist, Fredrik Gustafsson, Henrik Svensson, and Pelle Carlbom. Radar slam using
visual features. EURASIP Journal on Advances in Signal Processing, 71, 2011.
[19] Eric B Quist and Randal W Beard. Radar odometry on fixed-wing small unmanned aircraft. IEEE Transactions
on Aerospace and Electronic Systems, 52(1):396–410, 2016.
[20] Christoforos Kanellakis and George Nikolakopoulos. Survey on computer vision for uavs: Current developments
and trends. Journal of Intelligent & Robotic Systems, pages 1–28, 2017.
[21] John Oliensis. A critique of structure-from-motion algorithms. Computer Vision and Image Understanding,
80(2):172–214, 2000.
[22] Jose Melo and Anıbal Matos. Survey on advances on terrain based navigation for autonomous underwater
vehicles. Ocean Engineering, 139:250–264, 2017.
[23] Michael P Hayes and Peter T Gough. Synthetic aperture sonar: a review of current status. IEEE Journal of
Oceanic Engineering, 34(3):207–224, 2009.
[24] Davide O Nitti, Fabio Bovenga, Maria T Chiaradia, Mario Greco, and Gianpaolo Pinelli. Feasibility of using
synthetic aperture radar to aid uav navigation. Sensors, 15(8):18334–18359, 2015.
[25] Roland E Weibel and R John Hansman. Safety considerations for operation of unmanned aerial vehicles in the
national airspace system. Technical report, 2006.
[26] Jiali Yan, Ji Guo, Qianrong Lu, Kaizhi Wang, and Xingzhao Liu. X-band mini sar radar on eight-rotor mini-
uav. In Geoscience and Remote Sensing Symposium (IGARSS), 2016 IEEE International, pages 6702–6705.
IEEE, 2016.
[27] Liu Weixian, Feng Hongchuan, Aye Su Yee, Tan Shen Hsiao, Ng Boon Poh, Dominic New, Hu Yi, Wang
Zhixiang, and Peh Ruijie. Premier results of the multi-rotor based fmcw synthetic aperture radar system. In
Radar Conference (RadarConf), 2016 IEEE, pages 1–4. IEEE, 2016.
[28] Michael Caris, Stephan Stanko, Rainer Sommer, Alfred Wahlen, Arnulf Leuther, Axel Tessmann, Mateusz
Malanowski, Piotr Samczynski, Krzysztof Kulpa, Michael Cohen, et al. Sarape-synthetic aperture radar for all
weather penetrating uav application. In Radar Symposium (IRS), 2013 14th International, volume 1, pages
41–46. IEEE, 2013.
[29] D Gromek, P Samczynski, K Kulpa, GCS Cruz, TMM Oliveira, LFS Felix, PAV Goncalves, CMBP Silva, ALC
Santos, and JAP Morgado. C-band sar radar trials using uav platform: Experimental results of sar system
integration on a uav carrier. In Radar Symposium (IRS), 2016 17th International, pages 1–5. IEEE, 2016.
[30] Piotr Kaniewski, Czeslaw Lesnik, Piotr Serafin, and Michal Labowski. Chosen results of flight tests of watsar
system. In Radar Symposium (IRS), 2016 17th International, pages 1–5. IEEE, 2016.
[31] Henry D Baird Jr. Autofocus motion compensation for synthetic aperture radar and its compatibility with
strapdown inertial navigation sensors on highly maneuverable aircraft. Technical report, DTIC Document,
1984.
[32] Brian J Young. An integrated synthetic aperture radar/global positioning system/inertial navigation system
for target geolocation improvement. Technical report, DTIC Document, 1999.
[33] Jeffrey R Layne and Erik P Blasch. Integrated synthetic aperture radar and navigation systems for targeting
applications. Technical report, WRIGHT LAB WRIGHT-PATTERSON AFB OH, 1997.
[34] Shesheng Gao, Yongmin Zhong, Xueyuan Zhang, and Bijan Shirinzadeh. Multi-sensor optimal data fusion for
ins/gps/sar integrated navigation system. Aerospace Science and Technology, 13(4):232–237, 2009.
[35] Shesheng Gao, Li Xue, Yongmin Zhong, and Chengfan Gu. Random weighting method for estimation of error
characteristics in sins/gps/sar integrated navigation system. Aerospace Science and Technology, 46:22–29, 2015.
[36] M Greco, K Kulpa, G Pinelli, and P Samczynski. Sar and insar georeferencing algorithms for inertial naviga-
tion systems. In Photonics Applications in Astronomy, Communications, Industry, and High-Energy Physics
Experiments 2011, pages 80081O–80081O. International Society for Optics and Photonics, 2011.
[37] M Greco, S Querry, G Pinelli, K Kulpa, P Samczynski, D Gromek, A Gromek, M Malanowski, B Querry, and
A Bonsignore. Sar-based augmented integrity navigation architecture. In Radar Symposium (IRS), 2012 13th
International, pages 225–229. IEEE, 2012.
[38] Davide O Nitti, Fabio Bovenga, Alberto Morea, Fabio M Rana, Luciano Guerriero, Mario Greco, and Gianpaolo
Pinelli. On the use of sar interferometry to aid navigation of uav. SPIE Remote Sensing, pages 853203–853203,
2012.
[39] Alfredo Renga, Maria Daniela Graziano, Marco D'Errico, Antonio Moccia, Flavio Menichino, Sergio Vetrella,
Domenico Accardo, Federico Corraro, Giovanni Cuciniello, Francesco Nebula, et al. Galileo-based space-
airborne bistatic sar for uas navigation. Aerospace Science and Technology, 27(1):193–200, 2013.
[40] Ming Xiao, Wanli Li, Tianjiang Hu, Liang Pan, Lincheng Shen, and Yanlong Bu. Sar aided navigation based on
fast feature. In Software Engineering and Service Science (ICSESS), 2013 4th IEEE International Conference
on, pages 861–864. IEEE, 2013.
[41] Hongxing Liu, Zhiyuan Zhao, and Kenneth C Jezek. Correction of positional errors and geometric distortions
in topographic maps and dems using a rigorous sar simulation technique. Photogrammetric Engineering &
Remote Sensing, 70(9):1031–1042, 2004.
[42] Giorgio Franceschetti, Maurizio Migliaccio, Daniele Riccio, and Gilda Schirinzi. Saras: A synthetic aperture
radar (sar) raw signal simulator. IEEE Transactions on Geoscience and Remote Sensing, 30(1):110–123, 1992.
[43] G Domik, J Raggam, and F Leberl. Rectification of radar images using stereo-derived height models and
simulation. International Archives of Photogrammetry, Remote Sensing and Spatial Information Sciences,
25:109–116, 1984.
[44] Gerard Margarit, Jordi J Mallorqui, and Carlos Lopez-Martinez. Grecosar, a sar simulator for complex targets:
Application to urban environments. In Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE
International, pages 4160–4163. IEEE, 2007.
[45] Krzysztof S Kulpa, Piotr Samczynski, Mateusz Malanowski, Artur Gromek, Damian Gromek, Wojciech Gwarek,
B Salski, and Grzegorz Tanski. An advanced sar simulator of three-dimensional structures combining geomet-
rical optics and full-wave electromagnetic methods. IEEE Transactions on Geoscience and Remote Sensing,
52(1):776–784, 2014.
[46] Timo Balz, Horst Hammer, and Stefan Auer. Potentials and limitations of sar image simulators–a comparative
study of three simulation approaches. ISPRS Journal of Photogrammetry and Remote Sensing, 101:102–109,
2015.
[47] Horst Hammer and Karsten Schulz. Dedicated sar simulation tools for atr and scene analysis. In SPIE Remote
Sensing, pages 81790N–81790N. International Society for Optics and Photonics, 2011.
[48] Timo Balz and Uwe Stilla. Hybrid gpu-based single-and double-bounce sar simulation. IEEE Transactions on
Geoscience and Remote Sensing, 47(10):3519–3529, 2009.
[49] Stefan Auer, Richard Bamler, and Peter Reinartz. Raysar-3d sar simulator: Now open source. In Geoscience
and Remote Sensing Symposium (IGARSS), 2016 IEEE International, pages 6730–6733. IEEE, 2016.
[50] Stefan Auer. 3D synthetic aperture radar simulation for interpreting complex urban reflection scenarios. PhD
thesis, Technische Universitat Munchen, 2011.
[51] Robert Eckardt, Nicole Richter, Stefan Auer, Michael Eineder, Achim Roth, Irena Hajnsek, Christian Thiel,
and Christiane Schmullius. Sar-edu-a german education initiative for applied synthetic aperture radar remote
sensing. In Geoscience and Remote Sensing Symposium (IGARSS), 2012 IEEE International, pages 5315–5317.
IEEE, 2012.
[52] Stefan Auer and Stefan Gernhardt. Linear signatures in urban sar images—partly misinterpreted? IEEE
Geoscience and Remote Sensing Letters, 11(10):1762–1766, 2014.
[53] Junyi Tao and Stefan Auer. Simulation-based building change detection from multiangle sar images and digital
surface models. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 9(8):3777–
3791, 2016.
[54] Horst Hammer, Silvia Kuny, and Karsten Schulz. Amazing sar imaging effects-explained by sar simulation. In
EUSAR 2014; 10th European Conference on Synthetic Aperture Radar; Proceedings of, pages 1–4. VDE, 2014.
[55] Richard Bamler and Michael Eineder. The pyramids of gizeh seen by terrasar-x—a prime example for unex-
pected scattering mechanisms in sar. IEEE Geoscience and Remote Sensing Letters, 5(3):468–470, 2008.
[56] Thomas K Sjogren, Viet T Vu, Mats I Pettersson, Anders Gustavsson, and Lars MH Ulander. Moving target
relative speed estimation and refocusing in synthetic aperture radar images. IEEE Transactions on Aerospace
and electronic systems, 48(3):2426–2436, 2012.
[57] Letao Xu, Dejun Feng, and Xuesong Wang. Improved synthetic aperture radar micro-doppler jamming method
based on phase-switched screen. IET Radar, Sonar & Navigation, 10(3):525–534, 2016.
[58] L.J. Cutrona, E.N. Leith, L.J. Porcello, and W.E. Vivian. On the application of coherent optical processing
techniques to synthetic aperture radar. Proceedings of the IEEE, 54(8), 1966.
[59] Joseph W. Goodman. Introduction to Fourier Optics. McGraw-Hill, 2nd edition, 1996.
[60] Dale A Ausherman. Digital versus optical techniques in synthetic aperture radar (sar) data processing. Optical
Engineering, 19(2):192157–192157, 1980.
[61] Yunhua Zhang, Jingshan Jiang, et al. Why optical images are easier to understand than radar images?—from
the electromagnetic scattering and signal point of view. In Proc. PIERS, pages 1411–1415, 2014.
[62] John C Curlander. Location of spaceborne sar imagery. IEEE Transactions on Geoscience and Remote Sensing,
(3):359–364, 1982.
[63] Ivan Petillot, Emmanuel Trouve, Philippe Bolon, Andreea Julea, Yajing Yan, Michel Gay, and Jean-Michel
Vanpe. Radar-coding and geocoding lookup tables for the fusion of gis and sar data in mountain areas. IEEE
Geoscience and Remote Sensing Letters, 7(2):309–313, 2010.
[64] William Gareth Rees. Physical principles of remote sensing. Cambridge University Press, 2013.
[65] Stefan Auer, Stefan Gernhardt, and Richard Bamler. Ghost persistent scatterers related to multiple signal
reflections. IEEE Geoscience and Remote Sensing Letters, 8(5):919–923, 2011.
[66] Gottfried Schwarz and Mihai Datcu. Calibration aspects for time series of high resolution terrasar-x images.
In 5. TerraSAR-X / 4. TanDEM-X Science Team Meeting, 2013.
[67] Fabrizio Argenti, Alessandro Lapini, Tiziano Bianchi, and Luciano Alparone. A tutorial on speckle reduction
in synthetic aperture radar images. IEEE Geoscience and remote sensing magazine, 1(3):6–35, 2013.
[68] Hua Xie, Leland E Pierce, and Fawwaz T Ulaby. Statistical properties of logarithmically transformed speckle.
IEEE Transactions on Geoscience and Remote Sensing, 40(3):721–727, 2002.
[69] Andreas Danklmayer, Bjorn J Doring, Marco Schwerdt, and Madhu Chandra. Assessment of atmospheric
propagation effects in sar images. IEEE Transactions on Geoscience and Remote Sensing, 47(10):3507–3518,
2009.
[70] Sophie Paquerault, Henri Maitre, and J-M Nicolas. Radarclinometry for ers-1 data mapping. In Geoscience
and Remote Sensing Symposium, 1996. IGARSS '96, 'Remote Sensing for a Sustainable Future', International,
volume 1, pages 503–505. IEEE, 1996.
[71] Tomas Akenine-Moller, Eric Haines, and Naty Hoffman. Real-time rendering. CRC Press, 2008.
[72] Kanika Goel and Nico Adam. Three-dimensional positioning of point scatterers based on radargrammetry.
IEEE Transactions on Geoscience and Remote Sensing, 50(6):2355–2363, 2012.
[73] Daiki Maruki, Shuji Sakai, Koichi Ito, Takafumi Aoki, Jyunpei Uemoto, and Seiho Uratsuka. Stereo radargram-
metry using airborne sar images without gcp. In Image Processing (ICIP), 2015 IEEE International Conference
on, pages 3585–3589. IEEE, 2015.
[74] Klas Nordberg. Introduction to representation and estimation in geometry. Linkoping University, 2013.
[75] Elisabeth Simonetto, Helene Oriot, and Rene Garello. Rectangular building extraction from stereoscopic air-
borne radar images. IEEE Transactions on Geoscience and remote Sensing, 43(10):2386–2395, 2005.
[76] Dominik Brunner, Guido Lemoine, and Lorenzo Bruzzone. Earthquake damage assessment of buildings using
vhr optical and sar imagery. IEEE Transactions on Geoscience and Remote Sensing, 48(5):2403–2420, 2010.
[77] M Kobrick, F Leberl, and J Raggam. Radar stereo mapping with crossing flight lines. Canadian Journal of
Remote Sensing, 12(2):132–148, 1986.
[78] Franck Fayard, Stephane Meric, and Eric Pottier. Matching stereoscopic sar images for radargrammetric
applications. In Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International, pages
4364–4367. IEEE, 2007.
[79] Michael A Pisaruck, Verne H Kaupp, Harold C Macdonald, and William P Waite. Model for optimal parallax
in stereo radar imagery. IEEE Transactions on Geoscience and Remote Sensing, (6):564–569, 1984.
[80] Franz W Leberl. Radargrammetric image processing. 1990.
[81] Karlheinz Gutjahr, Roland Perko, Hannes Raggam, and Mathias Schardt. The epipolarity constraint in stereo-
radargrammetric dem generation. IEEE transactions on geoscience and remote sensing, 52(8):5014–5022, 2014.
[82] Sergi Duque, Alessandro Parizzi, Francesco De Zan, and Michael Eineder. Precise and automatic 3d abso-
lute geolocation of targets using only two long-aperture sar acquisitions. In Geoscience and Remote Sensing
Symposium (IGARSS), 2016 IEEE International, pages 7415–7418. IEEE, 2016.
[83] Henry J Theiss and Edward M Mikhail. An attempt at regularization of a sar pair to aid in stereo viewing.
ASPRS Conference 2005 Proceedings, 2005.
[84] Jan J Koenderink and Andrea J Van Doorn. Affine structure from motion. JOSA A, 8(2):377–385, 1991.
[85] Larry S Shapiro, Andrew Zisserman, and Michael Brady. 3d motion recovery via affine epipolar geometry.
International Journal of Computer Vision, 16(2):147–182, 1995.
[86] Yi Ma, Stefano Soatto, Jana Kosecka, and S Shankar Sastry. An invitation to 3-d vision: from images to
geometric models, volume 26. Springer Science & Business Media, 2012.
[87] Richard Hartley and Andrew Zisserman. Multiple view geometry in computer vision. Cambridge university
press, 2003.
[88] Carlo Tomasi and Takeo Kanade. Shape and motion from image streams under orthography: a factorization
method. International Journal of Computer Vision, 9(2):137–154, 1992.
[89] Horst Hammer, Silvia Kuny, and Karsten Schulz. On the use of gis data for realistic sar simulation of large urban
scenes. In Geoscience and Remote Sensing Symposium (IGARSS), 2015 IEEE International, pages 4538–4541.
IEEE, 2015.
[90] Lars Harrie, Bengt Andersson, Clas-Goran Persson, Milan Horemuz, Anders Boberg, Perola Olsson, Helen
Rost, and Yuriy Reshetyuk. Geodetisk och fotogrammetrisk matnings-och berakningsteknik, 2013.
[91] L Paul Chew. Constrained delaunay triangulations. Algorithmica, 4(1-4):97–108, 1989.
[92] Stefan Auer. Raysar - 3d sar simulator. Documentation v1.1, 2016.
[93] Barbara Zitova and Jan Flusser. Image registration methods: a survey. Image and vision computing, 21(11):977–
1000, 2003.
[94] Sajid Saleem, Abdul Bais, and Robert Sablatnig. Towards feature points based image matching between satellite
imagery and aerial photographs of agriculture land. Computers and Electronics in Agriculture, 126:12–20, 2016.
[95] Krystian Mikolajczyk and Cordelia Schmid. A performance evaluation of local descriptors. IEEE transactions
on pattern analysis and machine intelligence, 27(10):1615–1630, 2005.
[96] Flora Dellinger, Julie Delon, Yann Gousseau, Julien Michel, and Florence Tupin. Sar-sift: a sift-like algorithm
for sar images. IEEE Transactions on Geoscience and Remote Sensing, 53(1):453–466, 2015.
[97] Keith F Blonquist and Robert T Pack. A bundle adjustment approach with inner constraints for the scaled
orthographic projection. ISPRS journal of photogrammetry and remote sensing, 66(6):919–926, 2011.
[98] Monson H Hayes. Statistical digital signal processing and modeling. John Wiley & Sons, 2009.
[99] John A Nelder and Roger Mead. A simplex method for function minimization. The computer journal, 7(4):308–
313, 1965.
[100] Dapeng Li. A novel method for multi-angle sar image matching. Chinese Journal of Aeronautics, 28(1):240–249,
2015.
[101] Johan ES Fransson, Mattias Magnusson, Klas Folkesson, Bjorn Hallberg, Gustaf Sandberg, Gary Smith-
Jonforsen, Anders Gustavsson, and Lars MH Ulander. Mapping of wind-thrown forests using vhf/uhf sar
images. In Geoscience and Remote Sensing Symposium, 2007. IGARSS 2007. IEEE International, pages 2350–
2353. IEEE, 2007.
[102] Jae-In Kim and Taejung Kim. Comparison of computer vision and photogrammetric approaches for epipolar
resampling of image sequence. Sensors, 16(3):412, 2016.
[103] Thierry Toutin and Laurence Gray. State-of-the-art of elevation extraction from satellite sar data. ISPRS
Journal of Photogrammetry and Remote Sensing, 55(1):13–33, 2000.
[104] J Thomas, W Kober, and F Leberl. Multiple image sar shape-from-shading. Photogrammetric Engineering and
Remote Sensing, 57(1):51–59, 1991.
[105] Dong Li and Yunhua Zhang. A rigorous sar epipolar geometry modeling and application to 3d target recon-
struction. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 6(5):2316–2323,
2013.
[106] Leping Chen, Daoxiang An, and Xiaotao Huang. A backprojection-based imaging for circular synthetic aperture
radar. IEEE Journal of Selected Topics in Applied Earth Observations and Remote Sensing, 2017.
[107] Toby Collins and Adrien Bartoli. Planar structure-from-motion with affine camera models: Closed-form solu-
tions, ambiguities and degeneracy analysis. IEEE transactions on pattern analysis and machine intelligence,
39(6):1237, 2017.
[108] Lars MH Ulander, Per-Olov Frolind, Anders Gustavsson, Rolf Ragnarsson, and Gunnar Stenstrom. Vhf/uhf
bistatic and passive sar ground imaging. In Radar Conference (RadarCon), 2015 IEEE, pages 0669–0673. IEEE,
2015.
[109] Marco D'Errico, editor. Distributed Space Missions for Earth System Monitoring. Springer, 2013.
[110] Anthony Freeman and Stephen L Durden. A three-component scattering model for polarimetric sar data. IEEE
Transactions on Geoscience and Remote Sensing, 36(3):963–973, 1998.
[111] Jong-Sen Lee, Mitchell R Grunes, Thomas L Ainsworth, Li-Jen Du, Dale L Schuler, and Shane R Cloude. Unsu-
pervised classification using polarimetric decomposition and the complex wishart classifier. IEEE Transactions
on Geoscience and Remote Sensing, 37(5):2249–2258, 1999.
[112] Jong-Sen Lee and Thomas L Ainsworth. The effect of orientation angle compensation on coherency matrix and
polarimetric target decompositions. IEEE Transactions on Geoscience and Remote Sensing, 49(1):53–64, 2011.
A SAR Frame Sequence
A sequence of simulated SAR frames with equal depression angle and altitude but heading varying from 0° to 30°, intended to aid in conceptualizing the observation geometry and the determination of the affine fundamental matrix in two views.
Figure 31: Simulated sequence of SAR images at different headings from 0 to 30 degrees.