Fast time domain beamforming for synthetic aperture sonar

Nina Ødegaard
University of Oslo
1st December 2004

Thesis presented for the degree of Master of Science


Contents

Abstract
Acknowledgments
List of acronyms
List of symbols

1 Introduction
  1.1 Background
  1.2 Problem to be addressed
  1.3 Contributions made by this thesis
  1.4 Key results
  1.5 Thesis outline

2 Synthetic aperture sonar (SAS) fundamentals
  2.1 SAS imaging principle
  2.2 SAS image reconstruction
  2.3 Sampling constraint
  2.4 Multielement receiver systems
  2.5 Phase centre approximation (PCA)
  2.6 Motion
  2.7 Image quality
  2.8 SAS signal processing overview

3 Beamforming
  3.1 Time domain beamforming (TDB)
  3.2 Backprojection (BP)
  3.3 Dynamic focusing (DF)
  3.4 Fast time domain methods
  3.5 Applications across disciplines
    3.5.1 Radar
    3.5.2 Seismology
    3.5.3 Computed tomography (CT)
    3.5.4 Medical ultrasound

4 Fast Factorized Back-Projection (FFBP)
  4.1 Principle
  4.2 Details
  4.3 Error analysis

5 FFBP performance
  5.1 Simulation setup
  5.2 Results
    5.2.1 Processing load details
    5.2.2 Quality
  5.3 Remarks

6 Experimental results from the HUGIN AUV
  6.1 The HUGIN family of AUVs
  6.2 The SAS program for HUGIN
  6.3 Experimental setup
  6.4 Imaging results

7 Conclusion
  7.1 Future work

Abstract

To this day the most widely used imaging methods in synthetic aperture sonar (SAS) are frequency domain inversion methods. This is because these methods are faster than the corresponding time domain methods, as the fast Fourier transform can be utilized. However, methods using the fast Fourier transform make a number of approximations, for example that the sensor array elements lie on a straight line. In many applications, for instance in SAS imaging, this is seldom the case. Time domain inversion methods can inherently handle an arbitrary array geometry, but they are also considerably slower than their frequency domain counterparts.

This thesis discusses a relatively new and fast time domain inversion method based on factorizing the image scene and the sensor array. Ulander, Hellsten and Stenström were the first to publish their development of this method in [Ulander et al., 2003]. The algorithm introduces an error that can be tuned to suit the requirements of the current application: speed can be traded for image quality.

FFBP was implemented and tested on both real and simulated data. The algorithm was shown to be fast and to give images of good quality for widebeam SAS systems. For narrowbeam SAS systems the quality of the images was also good, but the speed was lower than with standard backprojection.

The thesis is part of a research program at FFI (Norwegian Defence Research Establishment) and Kongsberg Simrad AS that aims to make an interferometric synthetic aperture sonar for the HUGIN AUV.


Acknowledgments

I would like to thank Roy Edgar Hansen and Hayden John Callow at FFI (Norwegian Defence Research Establishment) and Andreas Austeng at the Department of Informatics at the University of Oslo for all their help with the implementation of the algorithm and the writing of the thesis, as well as with providing data and literature.


List of acronyms

Acronym  Description
AUV      Autonomous underwater vehicle
BP       Backprojection
CAT      Computerized axial tomography
CBP      Convolution backprojection
CT       Computed tomography
CTD      Conductivity, temperature and depth
CVL      Correlation velocity log
DF       Dynamic focusing
DPCA     Displaced phase center approximation
FBP      Filtered backprojection
FFBP     Fast factorized backprojection
FFI      Norwegian Defence Research Establishment
FFT      Fast Fourier transform
FHBP     Fast hierarchical backprojection
FRA      Fourier reconstruction algorithm
HUGIN    High-precision untethered geosurvey and inspection system
IFFT     Inverse fast Fourier transform
INS      Inertial navigation system

Table 1: List of acronyms, part 1


Acronym  Description
ISLR     Integrated side lobe ratio
KM       Kirchhoff migration
LBL      Long baseline
MI       Multilevel inversion
MR       Magnetic resonance
PCA      Phase center approximation
PF       Polar format
PM       Phase modulation
PSLR     Peak to side lobe ratio
RAS      Real aperture sonar
RD       Range-Doppler
RNN      Royal Norwegian Navy
Rx       Receiving element
SAR      Synthetic aperture radar
SAS      Synthetic aperture sonar
SDFFT    Sparse data fast Fourier transform
TDB      Time domain beamforming
Tx       Transmitting element
UUV      Untethered underwater vehicle

Table 2: List of acronyms, part 2


List of symbols

Type       Symbol      Description
Axes       u           Direction of travel for the platform
           t           Time
           x           Along-track, azimuth, cross-range or slow time domain
           y           Range or fast time domain
           z           Vertical axis
           ku          Wavenumber in u-direction
           kx          Wavenumber in x-direction
           ky          Wavenumber in y-direction
           kx(ω, ku)   Wavenumber in x-direction as a function of ω and ku
           ky(ω, ku)   Wavenumber in y-direction as a function of ω and ku
Operators  F⁻¹         Inverse Fourier transform
           ∗           Convolution
           *           Complex conjugate
           O(·)        Order of magnitude

Table 3: List of axes and operators


Type         Symbol      Description
Data spaces  p(t)        Pulse as a function of time
             P(ω)        Pulse in the frequency domain
             s(t)        Received one-dimensional signal
             ss(t, u)    Received two-dimensional signal
             Ss(ω, u)    Received signal in the fast-time frequency domain
             SS(ω, ku)   Two-dimensional Fourier transform of the received signal
             ssM(t, u)   Matched filtered received signal
             gg(u, y)    Ideal target reflectivity function as a function of the sensor element's position in azimuth and range
             GG(kx, ky)  Two-dimensional Fourier transform of the target reflectivity function
             rect(t/τ)   Rectangular pulse of duration τ
             δ(x)        Delta impulse function
             as(ω, θ)    Sonar radiation pattern as a function of angular frequency and angle
             σ           Reflection coefficient for an omnidirectional target

Table 4: List of data spaces


Symbol  Units   Description
∆t      s       Delay for received signal
∆ts     s       Steering delay
∆tf     s       Focusing delay
∆z      m       Distance between vertically displaced sensor elements
R       m       Range
c       m/s     Propagation speed
δx      m       Azimuth resolution
δy      m       Range resolution
λ       m       Wavelength
L       m       Length of aperture
D       m       Length between pings
M               Number of pings
N               Number of receivers
d       m       Azimuth sample spacing
h       m       Height
B       Hz      Pulse bandwidth
τ       s       Pulse length
Ts      s       Time of arrival for the first sample in the pulse
Tf      s       Time of arrival for the last sample in the pulse
ω       rad/s   Angular frequency
α               Chirp rate
f       Hz      Frequency
∆fD     Hz      Doppler bandwidth
k       rad/m   Wavenumber
v       m/s     Platform velocity

Table 5: List of parameters, part 1


Symbol   Units   Description
Ωθ       rad     Beam width
θ        rad     Angle from sensor element to target
Ωx       rad/m   Target support in kx
Ωy       rad/m   Target support in ky
uTx      m       Position of transmitter in u-direction
uRx      m       Position of receiver in u-direction
uTx−Rx   m       Position of the collocated transmitter-receiver pair in u-direction
up       m       Position of the platform in u-direction
∆tr      m       Distance between transmitter and receiver
∆cr      m       Distance between phase center and receiver
PRF      Hz      Pulse repetition frequency
δt       s       Sampling criterion for the signal
QR(j)            Factorization factor for the receivers in stage j
QI(j)            Factorization factor for the image in stage j
Ns               Number of stages
Nr               Number of range samples
∆u       m       Distance between centers of subapertures
X                Number of pixels in x-direction
Y                Number of pixels in y-direction
∆r               Range error
LA       m       Length of subaperture
LI       m       Length of subimage
γi               Unknown factor for stage i

Table 6: List of parameters, part 2


1 Introduction

1.1 Background

Sonar means SOund Navigation And Ranging. A sonar transmits acoustic waves. The waves travel until they hit an object and are reflected back to the sonar. By measuring the time it takes for a signal to return to the sonar, one can determine how far away the object is located. Sonar systems have been used for more than a century to detect underwater objects.

Synthetic aperture sonar (SAS) is a technique for making the resolution of the system independent of range. The sonar transmits and receives signals while moving along a line. The signals received from the various sonar positions are combined to form an image. The technique was not developed until the 1950s (by Wiley [Wiley, 1985]). It was quickly adopted in the radar world, but the application to sonar was slower. The first results of SAS experiments in a test tank were published in 1973 in [Sato et al., 1973]. The traditional real aperture sonar (RAS) is still the most widely used.

The purpose of using SAS over RAS is the much better resolution in the final image: a long array is synthesized by moving the physical array along a line. Images of the seafloor are useful for many applications, e.g. to find objects such as mines and wrecks, and for making maps of the seafloor. There are a variety of methods for forming images out of SAS data. Most SAS systems these days use frequency domain methods; these methods are fast, but have constraints as to how much the sonar platform motion can diverge from a straight line. At the other extreme are the time domain methods, which are slow, but can handle any platform motion. No books about SAS have been published to this day, but as the principles are the same as for synthetic aperture radar (SAR), good books to read are [Franceschetti and Lanari, 1999] and [Soumekh, 1999]. Some PhD theses have also been written that cover the field of SAS well, e.g. [Banks, 2002], [Callow, 2003] and [Hawkins, 1996].

1.2 Problem to be addressed

This master's thesis was written for the degree of Master of Physics at the University of Oslo. It addresses the field of synthetic aperture sonar image formation, and one imaging method in particular: fast factorized backprojection (FFBP).

The goal of the thesis was to implement FFBP and test it on both real and simulated data. The real data were collected by the Edgetech SAS mounted on a HUGIN AUV (HUGIN autonomous underwater vehicle). FFBP is a time domain method. It is faster than standard backprojection (BP), and it does not face the limitations of the frequency domain methods when it comes to platform motion. The tests


were supposed to tell how the algorithm would perform with regard to speed and image quality for different datasets and different parameter choices. The algorithm was also to be compared to standard backprojection, to show that it is fast and that the quality of the images can be made adequate for most applications.

1.3 Contributions made by this thesis

Both FFBP and standard BP have been implemented in this work, as well as a simulator of SAS data. Tests of algorithm speed and image quality were carried out. Real data were provided by FFI (Norwegian Defence Research Establishment).

1.4 Key results

The results are promising; they show that FFBP is in fact much faster than BP for certain datasets, while being able to handle an arbitrary array geometry, something fast methods working in the frequency domain cannot. The quality of the images can also be tuned to suit any requirement. However, higher image quality comes at the expense of increased computation time, so a trade-off must be made. There are some constraints to the algorithm. An approximation error controls the quality of the images, and this error depends both on parameters of the sonar system and on the scene to be imaged. For narrowbeam SAS systems, such as the Edgetech SAS, the algorithm is not suitable. For widebeam SAS systems, however, the results show that FFBP can produce a good image in substantially shorter time than BP can, as well as supporting an arbitrary array geometry.

1.5 Thesis outline

Section 2 covers the basics of SAS and is intended to give the reader an overview of the field.

Section 3 gives an overview of different beamformers, concentrating especially on time domain beamformers. It also compares the advantages and disadvantages of time domain and frequency domain methods. Two time domain methods are described: BP and dynamic focusing (DF). The section also covers fast time domain methods and lists applications of beamforming in other disciplines.

Section 4 describes the main method of this thesis: fast factorized backprojection (FFBP).


Section 5 presents the results of testing FFBP on simulated data. Performance with respect to both speed and image quality is described.

Section 6 presents the HUGIN AUV SAS system. It also shows the results of applying this implementation of FFBP to data from the Edgetech SAS.

Section 7 gives the conclusion of the thesis as well as some suggestions for future work related to FFBP.


Figure 1: Illustration of the geometry of a synthetic aperture sonar system. The platform carrying the sonar moves along the u-axis. It transmits pulses at the locations marked as pulse locations. The area marked as real aperture footprint is the area the physical sonar can 'see' at any given time. The figure is taken from [Hansen, 2001].

2 Synthetic aperture sonar (SAS) fundamentals

This section covers the basic principles of SAS imaging and is intended to give readers who are not familiar with this field an understanding of the terminology used and the assumptions made. It will also show why a better resolution can be achieved with SAS than with RAS.

2.1 SAS imaging principle

To get a visual picture of how a SAS system works, it is instructive to study the imaging geometry shown in figure 1.

The platform where the sensor elements are mounted follows a path in the x-direction (also called the along-track, azimuth, cross-range direction, or slow time domain). We refer to the direction the sonar is moving in as u, to separate the sonar location from locations in the scene. At the positions marked as pulse locations, the transmitting sensor element sends out a pulse p(t) of length τ. t is called the fast time domain because the acoustic waves travel at a much higher speed than the platform does. See e.g. [Johnson and Dudgeon, 1993, chapter 2] for a more thorough study of the physics of propagating waves. The pulse is reflected by objects on the sea floor and the receiving sensor element records it as s(t − ∆t). ∆t is related to the range from the transmitter to the object and back to the receiver. By assuming that the receiving and the transmitting


Figure 2: A sonar system consisting of two vertically displaced receiver arrays. This configuration is useful for finding the height profile of the image scene. The figure is taken from [Hansen and Sæbø, 2003].

element are at the same position, we have that

∆t = 2R/c                                                (1)

where R is the range to the object in meters and c is the speed of the propagating wave (for acoustic waves in water, c ≈ 1500 m/s). Both the transmitter and the receiver have a radiation pattern that dictates how much of the imaging scene the sensor elements can 'see' at any given time. This area is marked as the real aperture footprint in figure 1, and is what the sonar 'sees' at each ping (the transmission and reception of a pulse is called a ping). By moving the sonar in the u-direction, data from several footprints can be combined to give a better resolution image than could be made using only one ping. This is the essence of SAS imaging.
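The delay-to-range relation in equation (1) can be sketched in a few lines; the helper names below are hypothetical, and c = 1500 m/s is the value assumed in the text:

```python
# Minimal sketch of equation (1): the two-way delay is delta_t = 2R/c.
C = 1500.0  # assumed propagation speed of sound in water [m/s]

def delay_to_range(delta_t):
    """Range R [m] recovered from the two-way travel time delta_t [s]."""
    return C * delta_t / 2.0

def range_to_delay(r):
    """Inverse relation: two-way delay [s] for a target at range r [m]."""
    return 2.0 * r / C

print(delay_to_range(0.1))  # a 100 ms round trip corresponds to 75 m
```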

Because we are trying to image a 3-dimensional space using only 2 dimensions (x and y), there will be ambiguities as to the height of the imaged scene and as to which side of the sonar the signal is coming from. The first problem can be solved using a system of two (or more) vertically separated receivers (shown in figure 2 for two receivers). This gives us a chance to measure the phase difference between the same pulse received at the two receivers (see figure 3), and hence solve for the unknown height. The other ambiguity is solved by placing the sonar on one side of the platform as opposed to underneath it. This is referred to as a side-looking SAS system. There will also be shadowing and layover of objects, which can be removed by interferometry or multiple runs over the same area. However, the shadow can sometimes be useful in classifying objects on the seafloor. More information on these effects can be found in [Franceschetti and Lanari, 1999, chapter 1].
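As a minimal illustration of the phase-difference idea (not the actual interferometric processing used for HUGIN), the path-length difference between the two receivers maps to a phase difference through the wavenumber k = 2π/λ; all names and numeric values below are illustrative:

```python
import math

# Sketch: two receivers see path lengths r1 and r2 to the same reflector.
# The phase difference dphi = k*(r1 - r2) encodes the path difference,
# which (with the known baseline dz) constrains the reflector height.
def phase_difference(r1, r2, lam):
    """Interferometric phase dphi = k*(r1 - r2), with k = 2*pi/lam."""
    k = 2.0 * math.pi / lam
    return k * (r1 - r2)

def path_difference_from_phase(dphi, lam):
    """Inverse relation, ignoring 2*pi ambiguities that real systems
    must resolve by phase unwrapping."""
    return dphi * lam / (2.0 * math.pi)

lam = 0.015  # illustrative wavelength [m]
d = path_difference_from_phase(phase_difference(10.003, 10.0, lam), lam)
print(round(d, 6))  # recovers the 3 mm path difference
```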


Figure 3: Two receivers are vertically separated by ∆z. By measuring the phase difference between the same pulse received at the two receivers, the height of the reflector can be resolved. The figure is taken from [Hansen, 2001]. Some of the symbols might not be consistent with the text.

Figure 4: Range and azimuth resolution in a synthetic aperture sonar. The azimuth resolution of the synthetic aperture is given by the maximum synthetic aperture length, and is range independent. The figure is taken from [Hansen, 2001]. Some of the symbols might not be consistent with the text.

As stated above, the motivation for using SAS as opposed to RAS is the huge improvement in resolution. Resolution is defined as the minimum spacing two objects can have while still being seen as two different objects. We define both azimuth and range resolution. See figure 4 for a visualization.

From [Franceschetti and Lanari, 1999, page 25] we have that two targets can be resolved in azimuth by a RAS system only if they are not within the sonar beam Ωθ at the same time, i.e. only if they are separated by at least one beamwidth. This means that

δx ≈ Rλ/L                                                (2)

where R is the range to the objects, λ is the wavelength corresponding to the center frequency and L is the length of the aperture. This is only an approximation of the true resolution, however, and it corresponds to the width of the mainlobe


of the beampattern of the aperture. This can also be derived from the Rayleigh criterion as stated in [Johnson and Dudgeon, 1993, page 65].

In a SAS system the effective length of the aperture is the synthesized length, i.e. the length the aperture moves along the u-axis. This removes the dependence of the azimuth resolution on range, because we can have as long an effective aperture as we want. The azimuth resolution for a SAS system is

δx = L/2.                                                (3)

A smaller antenna gives a better resolution, due to the fact that the beam is wider. See e.g. [Franceschetti and Lanari, 1999, page 28] for details. After matched filtering in range, the range resolution is the same for both systems and is, according to [Franceschetti and Lanari, 1999, page 17], given by

δy = c/(2B)                                              (4)

where B is the pulse bandwidth. B ≈ 1/τ, where τ is the length of the pulse. From this we see that the only way to improve the range resolution is to use shorter pulses. Short pulses carry little transmitted power, which in turn reduces the practical range. This can be avoided by the use of long modulated pulses followed by pulse compression. There are various ways to code pulses. By far the most common pulse coding scheme is binary phase coding. In this scheme the phase of the signal is changed within the duration of the pulse, according to a binary code. One criterion for choosing a code is that it should produce low range sidelobes on decoding. The decoding, or compression, involves correlating the received signal with a replica of the transmitted code.
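A small numeric sketch of equations (2)-(4); the parameter values are illustrative, not those of any system discussed in this thesis:

```python
# Resolution formulas from the text; all numbers below are made up
# for illustration only.
def azimuth_resolution_ras(R, lam, L):
    """RAS azimuth resolution, delta_x ≈ R*lambda/L (equation 2)."""
    return R * lam / L

def azimuth_resolution_sas(L):
    """SAS azimuth resolution, delta_x = L/2 (equation 3): range independent."""
    return L / 2.0

def range_resolution(c, B):
    """Range resolution after matched filtering, delta_y = c/(2B) (equation 4)."""
    return c / (2.0 * B)

print(azimuth_resolution_ras(R=100.0, lam=0.015, L=0.1))  # 15 m at 100 m range
print(azimuth_resolution_sas(L=0.1))                      # 0.05 m, at any range
print(range_resolution(c=1500.0, B=20e3))                 # 0.0375 m
```

The comparison makes the point of the section concrete: the real aperture's azimuth resolution degrades linearly with range, while the synthetic aperture's does not.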

Another example of pulse coding is a chirp pulse. The pulse is sent as

p(t) = rect(t/τ) e^{j(ωt + αt²/2)}                       (5)

where rect(t/τ) is a rectangular pulse of duration τ, ω = 2πf is the angular frequency with f the carrier frequency, and α is the chirp rate describing the rate with which the frequency of the chirp varies. It is related to the pulse bandwidth by ατ ≈ 2πB. The phase of a chirp pulse varies quadratically versus time while the frequency changes linearly versus time, and we say that the chirp signal is a phase modulated (PM) signal. The derivative of the phase gives the instantaneous frequency of the signal. If α > 0, the instantaneous frequency increases with time, and the signal is said to be upsweep. In the opposite situation, it is said to be downsweep. In the reconstruction, the complex conjugate of the received signal is mixed with the phase of the transmitted chirp to get the compressed or deramped signal that we need. Chirps are described further in [Soumekh, 1999, page 23]. The only disadvantage of using pulse coding is added transmitter and receiver


Figure 5: A SAR operating in strip-map mode. The squint angle of the system is kept constant throughout the data collection period to illuminate a fixed strip in the (slant) range domain. The figure is taken from [Franceschetti and Lanari, 1999].

complexity, but the advantages generally outweigh the disadvantages, and so pulse coding is widely used.
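A hedged numerical sketch of a baseband chirp and its compression by correlation with a replica, as described above; the sample rate, bandwidth and pulse length are illustrative, not those of any system in this thesis:

```python
import numpy as np

# Illustrative parameters (made up for this sketch).
fs = 100e3                    # sample rate [Hz]
tau = 10e-3                   # pulse length [s]
B = 20e3                      # swept bandwidth [Hz]
alpha = 2 * np.pi * B / tau   # chirp rate, from alpha*tau ≈ 2*pi*B

t = np.arange(0, tau, 1 / fs)
# Baseband chirp: rect(t/tau) * exp(j*alpha*t^2/2); the carrier term
# omega*t of equation (5) is omitted at baseband.
chirp = np.exp(1j * alpha * t**2 / 2)

# An "echo": the chirp embedded in silence, starting 200 samples in.
received = np.concatenate([np.zeros(200), chirp, np.zeros(200)])

# Compression = correlation with a replica of the transmitted pulse
# (np.correlate conjugates its second argument, i.e. matched filtering).
compressed = np.correlate(received, chirp, mode="same")

peak = int(np.argmax(np.abs(compressed)))
print(peak)  # index of the compression peak, near the centre of the echo
```

The long, low-amplitude pulse collapses to a narrow peak after correlation, which is exactly the point of pulse compression: long pulses for energy, short effective pulses for resolution.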

SAR has three different operating modes, whereas SAS uses only one of them: strip-map mode. However, it seems like a good idea to eventually start using all of them for SAS too, so a description of all the SAR modes is given below. The illustrations are also for SAR. The descriptions are based on information found in [Franceschetti and Lanari, 1999].

• Strip-map SAR/SAS
  In this configuration the radar/sonar maintains the same broadside radiation pattern throughout the data collection period on a fixed strip in the (slant) range domain. This means that the illuminated azimuth area is varied from one ping to the next. A broadside radiation pattern means that the main lobe of the radiation pattern is perpendicular to the synthetic aperture. One can also have a squinted radiation pattern (the main lobe of the radiation pattern is not at broadside, but at some other angle) in strip-map mode, as long as the squint angle is kept constant throughout the data collection period. Strip-map SAR/SAS is mainly encountered in reconnaissance or surveillance problems. See figure 5. Broadside strip-map mode is assumed throughout this thesis.

• Spot-light SAS


Figure 6: The beam of a spot-light SAR system is steered to point at the same area on the ground at all times during the data collection period. The figure is taken from [Franceschetti and Lanari, 1999].

  This mode uses mechanical or electronic steering of the physical radar so that it always illuminates the same finite area of interest during the data collection period. It is often used to take a closer look at areas on the ground that look interesting for the current application after strip-map imaging. One could imagine that this would also be useful for sonar, e.g. when searching for mines. See figure 6.

• Scan-mode SAS
  In this mode the radar is continuously on, but the antenna beam is periodically moved to illuminate neighbouring subareas. See figure 7. The reason to use this mode is to overcome the limitation one faces in strip-map mode as to the range extension of the imaged area.

2.2 SAS image reconstruction

Most of the derivation in this subsection is based on information from [Soumekh, 1999]. Assume a reflector in the scene with an ideal reflectivity function given by

ggn(u, y) = σn δ(u − xn) δ(y − yn)                       (6)

that we want to measure. (xn, yn) is the location of the target and (u, y) is the position of the sensor element. σn is the reflection coefficient. This equation does only


Figure 7: In scan-mode, the beam of the SAR is periodically moved to illuminate neighbouring subareas. The figure is taken from [Franceschetti and Lanari, 1999].

describe non-dispersive point reflectors, however. A pulse is transmitted, and when it is received it has been delayed and convolved with the target reflectivity function. The received signal is also subject to losses and absorption. Read more about these effects in e.g. [Urick, 1983]. The received signal can be written as

ssn(t, u) = ∫∫ (1/Rn²) ggn(u, y) p(t − (2/c)Rn) dy dx    (7)

where Rn = √((u − xn)² + (y − yn)²). By applying the Born approximation,¹ the total signal received from the point reflectors is

ss(t, u) = Σn ssn(t, u) = Σn (1/Rn²) σn p(t − (2/c)Rn).  (8)
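The point-reflector model of equation (8) can be sketched as a tiny simulator (in the spirit of, but not identical to, the SAS data simulator mentioned in section 1.3); all parameters and names below are illustrative:

```python
import numpy as np

# Illustrative constants.
c = 1500.0                       # propagation speed [m/s]
fs = 40e3                        # fast-time sample rate [Hz]
t = np.arange(0, 0.2, 1 / fs)    # fast time axis [s]

def pulse(tt, tau=1e-3):
    """Simple rectangular pulse of duration tau, a stand-in for p(t)."""
    return ((tt >= 0) & (tt < tau)).astype(float)

def received_signal(u, targets):
    """ss(t, u) = sum_n (1/R_n^2) * sigma_n * p(t - 2*R_n/c), equation (8).
    The sonar is taken to sit at (u, 0); targets are (x_n, y_n, sigma_n)."""
    ss = np.zeros_like(t)
    for (xn, yn, sigma) in targets:
        Rn = np.hypot(u - xn, yn)          # range to the reflector
        ss += sigma / Rn**2 * pulse(t - 2 * Rn / c)
    return ss

targets = [(0.0, 30.0, 1.0), (2.0, 60.0, 0.5)]  # (x_n, y_n, sigma_n)
ss = received_signal(u=0.0, targets=targets)
print(np.argmax(ss) / fs)  # first echo arrives near 2*30/1500 = 0.04 s
```

Each reflector contributes a delayed, 1/R²-weighted copy of the pulse; superposition is exactly the Born approximation invoked in the text.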

If we also take the sonar amplitude pattern into account, we can write this expression in the frequency fast-time domain as

Ss(ω, u) = P(ω) Σn (1/Rn²) σn as(ω, θ) e^{−j2kRn}        (9)

where k is the wave number with |k| = 2π/λ, and as(ω, θ) is the amplitude pattern of the sonar. The latter is made up of the radiation patterns of both the receiving

¹ The signal received from one reflector can be modelled as independent of the signals from other reflectors, hence superposition can be applied.


and the transmitting element. It can, for a rectangular aperture, be derived likethis (according to [Van Trees, 2002, page 72]):

a_s(ω, θ) = ∫_{−L/2}^{L/2} (1/L) e^(−j k_x x) dx = (e^(j(L/2)k_x) − e^(−j(L/2)k_x)) / (2j k_x (L/2)) = sin((L/2)k_x) / ((L/2)k_x) = sinc((L/2)k_x).    (10)

Since k_x = (2π/λ) sin(θ), we have (L/2)k_x = π(L/λ)sin(θ), and thus (with the normalized sinc convention, sinc(v) = sin(πv)/(πv)) we get

a_s(ω, θ) = sinc((L/λ) sin(θ))    (11)

where θ is the angle the incoming/outgoing signal makes with the aperture. With respect to a target in the scene, this angle can be defined as

θ_n = arctan((u − x_n)/(y − y_n)).    (12)

We assume the aperture to be rectangular throughout this thesis. The support band of a_s(ω, θ) is a decreasing function of L, which implies that we can get a narrower beamwidth with a longer sensor array.
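To make the relationship between aperture length and beamwidth concrete, the pattern of equation (11) can be evaluated numerically. The following Python sketch is illustrative only (the sonar parameters are invented for the example, not taken from any system described in this thesis); note that numpy's `sinc` uses the normalized convention sin(πv)/(πv), which matches equation (11).

```python
import numpy as np

def aperture_pattern(L, wavelength, theta):
    """Amplitude pattern a_s(theta) = sinc((L/lambda) sin(theta)) of a
    uniformly weighted rectangular aperture of length L (equation (11);
    np.sinc is the normalized sinc, sin(pi v)/(pi v))."""
    return np.sinc(L / wavelength * np.sin(theta))

# Hypothetical example: a 100 kHz sonar in water (c = 1500 m/s) has a
# wavelength of 15 mm.  The first null of the pattern lies where
# (L/lambda) sin(theta) = 1, i.e. at theta = arcsin(lambda/L), so a
# longer aperture gives a narrower main lobe.
wavelength = 1500.0 / 100e3
for L in (0.05, 0.20):
    null = np.degrees(np.arcsin(wavelength / L))
    print(f"L = {L} m: first null at {null:.1f} degrees")
```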

The Fourier transform of the received signal with respect to the slow-time u is

SS(ω, k_u) = P(ω) Σ_n a_s(ω, θ) e^(−j k_x(ω,k_u) x_n − j k_y(ω,k_u) y_n) = P(ω) GG[k_x(ω, k_u), k_y(ω, k_u)]    (13)

where

k_y(ω, k_u) = √(4k² − k_u²)    (14)

and

k_x(ω, k_u) = k_u.    (15)

Because of the nonlinear nature of k_y, the mapping from the (k_x, k_y)-domain to the (x, y)-domain to get the final image is far from trivial. The process is called Stolt mapping and is shown in figure 8. The Stolt mapping is essentially a polar to Cartesian conversion.
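The Stolt mapping can be sketched as a column-by-column interpolation from the curved grid k_y = √(4k² − k_u²) onto a uniform k_y grid. The following Python sketch is a bare illustration of that resampling step only; the function and variable names are chosen for the example, and the windowing, basebanding and amplitude handling that a real wavenumber processor needs are omitted.

```python
import numpy as np

def stolt_map(SS, omegas, k_us, ky_grid, c=1500.0):
    """Resample SS(omega, k_u) onto a uniform (k_y, k_u) grid.

    For each slow-time wavenumber k_u the spectral samples lie on the
    curved grid k_y = sqrt(4 k^2 - k_u^2) with k = omega / c (equation
    (14)); one 1-D linear interpolation per k_u column straightens the
    grid out."""
    k = omegas / c
    out = np.zeros((len(ky_grid), len(k_us)), dtype=complex)
    for j, ku in enumerate(k_us):
        valid = 4.0 * k**2 >= ku**2          # evanescent region excluded
        ky = np.sqrt(4.0 * k[valid]**2 - ku**2)
        col = SS[valid, j]
        # np.interp wants real data: interpolate re/im parts separately.
        out[:, j] = (np.interp(ky_grid, ky, col.real, left=0.0, right=0.0)
                     + 1j * np.interp(ky_grid, ky, col.imag, left=0.0, right=0.0))
    return out
```

For k_u = 0 the mapping reduces to k_y = 2k, so that column passes through unchanged when the output grid equals 2k; larger |k_u| shifts the support downwards in k_y, which is exactly the curvature figure 8 illustrates.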

The baseband versions of these expressions are given by multiplying them with e^(j2kR) and e^(j k_x(ω,k_u) x_n + j k_y(ω,k_u) y_n), for the time domain and frequency domain case, respectively. By working with basebanded data, we use lower frequencies, and thus interpolation is simplified. Also, the sample rate can be reduced. To find GG[k_x(ω, k_u), k_y(ω, k_u)] given only SS(ω, k_u), one would intuitively state that

GG[k_x(ω, k_u), k_y(ω, k_u)] = SS(ω, k_u) / P(ω).    (16)

Figure 8: This figure illustrates the process of Stolt mapping. Some of the symbols might not be consistent with the text. From the non-linear grid the spectral data lie on, we wish to transform them to lie within the dashed line. B_ky and B_kx limit the support bands of k_y and k_x, respectively. The figure is taken from [Pat, 2000].

This is, however, not possible, because the bandwidth of the pulse is finite. Instead we use

GG[k_x(ω, k_u), k_y(ω, k_u)] = P*(ω) SS(ω, k_u)    (17)

gg(x, y) = F⁻¹[GG(k_x(ω, k_u), k_y(ω, k_u))].    (18)

We call this process matched filtering. It can also be implemented in the time domain through a convolution of ss(t, u) with p*(−t). Matched filtering is a correlation of the received signal with the transmitted pulse. F⁻¹[|P(ω)|²] is called the point spread function of the imaging system. The inverse two-dimensional Fourier transform of the support region of the signal dictates the shape of the point spread function. To develop an analytical model for the point spread function, one could approximate the target support by a rectangle in the (k_x, k_y) domain with widths

Ω_y = 2(k_max − k_min)    (19)

in the k_y domain and

Ω_x = 8π/L    (20)

in the k_x domain. The model is composed of the following separable two-dimensional sinc functions in the (x, y) domain:

sinc(Ω_x x / 2π) sinc(Ω_y y / 2π).    (21)

This two-dimensional sinc pattern represents the point spread function of the stripmap SAS system for an ideal reflector. In reality, when we are not dealing with ideal reflectors, the shape of the point spread function will also be influenced by the azimuth window function, and the full point spread function is represented by the Dirichlet function. Read more about it in [Franceschetti and Lanari, 1999, page 28]. This function is periodic, and to avoid grating lobes in the point spread function, the azimuth sampling has to be chosen according to a certain criterion. This is discussed further in subsection 2.3. It is only in the narrowband case that approximating the target spectral support by a rectangular region is a good approximation. In wideband systems (where the carrier frequency is comparable with the bandwidth) the point spread function cannot be approximated by a two-dimensional sinc pattern. In this case the point spread function takes the shape of a funnel,² which must be calculated numerically.

The system model presented in this section ignores the effects of refraction, multipath and the effects of the medium on the propagation speed. The interested reader can read more about these effects in e.g. [Urick, 1983]. The model also assumes that the platform is stationary during transmission and reception of the signals. This is the common "stop and hop" assumption that usually works well for SAR. For SAS, however, this is not the case, and the reconstruction must compensate for the sonar's movement between transmission and reception. See e.g. [Hawkins, 1996] for a more thorough discussion of this assumption.

²[Soumekh, 1999] discusses this further.

2.3 Sampling constraint

We need a certain sampling criterion in both the fast time and slow time domain. In the fast time domain the time between transmitted pulses must be long enough to prevent the echo returns from the previous pulse from interfering with the current one. The first echoed signal sample arrives at the receiver at the fast-time

T_s = 2R_min / c    (22)

and the last echoed signal sample arrives at

T_f = 2R_max / c + τ    (23)

where R_min and R_max are the closest and farthest radial range distances of the range swath, respectively. This means that the pulse repetition frequency must satisfy

PRF ≤ 1/T_f.    (24)

Since the bandwidth of the signal is ±f, where f is the highest frequency in the basebanded data, the fast-time sample spacing should satisfy the following Nyquist criterion

δ_t ≤ 1/(2f) = 1/B.    (25)

Here B is the bandwidth of the signal. According to [Franceschetti and Lanari, 1999, page 28] the point spread function in azimuth has grating lobes with successive maxima at

(2πd/L) u′ = qπ,  q = 0, ±1, ±2, ...    (26)

where u′ is u normalized with respect to the azimuth footprint. To completely avoid grating lobes in azimuth we require that no grating lobes are within the visible area of the point spread function (±90°), i.e. within the azimuth signal extent |u′| ≤ 1/2. This leads to

(2πd/L)(1/2) ≤ π/2  ⇒  d ≤ L/2    (27)


where d is the azimuth sample spacing. This sampling constraint in azimuth can also be derived from a Doppler concept. Without going into further detail, the Doppler bandwidth will be

Δf_D = 2v/L    (28)

and we must have

Δf_D ≤ v/d  ⇒  d ≤ L/2    (29)

where v is the platform velocity. See [Franceschetti and Lanari, 1999, page 30] for details. These equations give the lower and upper limits for the PRF as

2v/L ≤ PRF ≤ 1/T_f.    (30)

Several processing steps must be performed in the reconstruction:

• Pulse compression: The first step in the reconstruction is the pulse compression (or matched filtering). The raw data and the matched filter must be appropriately zero padded before the convolution to avoid aliasing caused by wrap-around effects. It is customary to perform the matched filtering in the frequency domain and make use of the fast Fourier transform (FFT). Read more about matched filtering in [Franceschetti and Lanari, 1999].

• Baseband conversion: Baseband conversion of the fast-time data is applied by multiplying the signal with e^(j2kR).

• Hilbert transform: A Hilbert transform³ is used to make an analytic signal.

• Filtering: To get a better resolution signal it is also customary to upsample and low-pass filter the signal.
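The listed steps can be sketched for a single fast-time record as follows. This is a simplified illustration, not the processing chain of any particular system, and the function names are chosen for the example; the upsampling/low-pass filtering step is omitted.

```python
import numpy as np

def pulse_compress(ss, p):
    """Matched filter (pulse compression) of one fast-time record ss
    against the transmitted pulse p, done in the frequency domain.  Zero
    padding to the full correlation length avoids circular wrap-around."""
    n = len(ss) + len(p) - 1
    out = np.fft.ifft(np.fft.fft(ss, n) * np.conj(np.fft.fft(p, n)))
    return out[:len(ss)]

def analytic_signal(x):
    """Hilbert-transform step: zero the negative-frequency half of the
    spectrum of a real-valued record to obtain the analytic signal."""
    n = len(x)
    h = np.zeros(n)
    h[0] = 1.0
    h[1:(n + 1) // 2] = 2.0
    if n % 2 == 0:
        h[n // 2] = 1.0
    return np.fft.ifft(np.fft.fft(x) * h)

def baseband(x, f0, fs):
    """Mix the carrier f0 (Hz) down to zero frequency for a record
    sampled at fs (Hz)."""
    t = np.arange(len(x)) / fs
    return x * np.exp(-2j * np.pi * f0 * t)
```

Pulse compression collapses an extended echo of the transmitted pulse into a narrow peak at the echo delay, which is what makes the fast-time bins of the beamformers in section 3 meaningful.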

2.4 Multielement receiver systems

One of the significant differences between sonar and radar systems is that most synthetic aperture sonars travel faster, relative to the propagation speed of the waves used, than the spatial sampling criterion allows, and so the aperture is insufficiently sampled. When this happens we get aliasing in the k_u-domain and grating lobes in the final image. From the last section we have that

3See [Mitra, 2001, chapter 11.7] for more information on the Hilbert transform.


Figure 9: An illustration of how phase center antenna elements are calculated from the receiver and transmitter positions. The figure is taken from [Hansen, 2001].

the maximum distance the sensor element can move between transmitted pulses is

d = L/2.    (31)

For anything but a very short range sonar, this implies a very low platform speed. The sonar has severe trouble maintaining a straight line of travel at low speeds, and the quality of the image is lowered. Since an increase in PRF would imply a reduction of the range extent of the image, this is often not an option. According to [Gough and Hawkins, 1997] there are some circumstances where the grating lobes in the images can be minimized by a preprocessing step; however, it is almost always impossible to retrieve the missing data without errors, approximations or a priori information. The solution is to use a multiple receiver array: an array of receivers in the u-direction. A multielement receiving array is shown in figure 9, with a transmitter located between the first and the second receiver. The transmitter can also be located elsewhere. The effect of a multiple receiving array is an increase in the sampling frequency in azimuth without having to reduce the speed of the platform. Read more about multiple receiver arrays in [Pat, 2000].

2.5 Phase centre approximation (PCA)

The position midway between the transmitter (Tx) and each receiver (Rx) is called the phase center. An array of phase centers is called a phase center antenna (PCA). By replacing each Tx-Rx pair by the equivalent PCA element, the bistatic system is replaced with a monostatic one. The idea is illustrated in figure 9. PCA makes the reconstruction more computationally efficient: instead of computing the range from the transmitter to the target and from the target to the receiver, one only computes the range from the phase center to the target. Many algorithms are not developed for single Tx/multiple Rx systems, so it is useful to be able to convert a displaced Tx-Rx pair configuration to that of a collocated Tx-Rx pair configuration in order to use these algorithms. After we have synthesized an array of collocated Tx-Rx pairs, we must rearrange the data space so that it corresponds to what we would have received from the collocated Tx-Rx pairs, and not the displaced Tx-Rx pairs used in reality. For each transmitted ping, N receivers sample the aperture at a position in azimuth equivalent to the position of their phase centers. For each ping, we must find the positions of the phase centers in order to find the correct order of the sampling positions. It may be that some data lines have to be moved in front of or behind other data lines, as the order of the sampling positions may have changed. However, it is important that the newly constructed data space still fulfills the azimuth sampling constraint of d ≤ L/2.

As the value of d increases, we get less overlap between the collocated Tx-Rx pairs for successive pings. The maximum distance the platform can travel between successive pings equals the extent of the full phase center antenna. From [Pat, 2000] we have that, given the azimuth positions of the transmitter u_Tx and the receivers u_Rx_n relative to the platform, as well as the position of the platform u_p along the aperture, it is possible to calculate the position of each collocated transmitter-receiver pair u_Tx-Rx along the aperture as

u_Tx-Rx(m, n) = u_p(m) + (u_Rx(n) − u_Tx)/2    (32)

where m is the ping number and n is the receiver number. These positions are used in the data space conversion process.
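Equation (32) can be sketched directly. The layout numbers below are hypothetical, chosen so that the phase centers sample the aperture uniformly from ping to ping.

```python
import numpy as np

def pca_positions(u_p, u_rx, u_tx):
    """Phase center positions along the aperture, equation (32):
    u_TxRx(m, n) = u_p(m) + (u_Rx(n) - u_Tx)/2 for every ping m and
    receiver n.  u_p: platform position per ping (length M); u_rx:
    receiver offsets on the platform (length N); u_tx: transmitter
    offset on the platform (scalar)."""
    return u_p[:, None] + (u_rx[None, :] - u_tx) / 2.0

# Hypothetical layout: four receivers 0.1 m apart, transmitter at the
# rearmost receiver, platform advancing 0.2 m per ping -- half the
# receiver-array extent, so the phase centers tile the aperture with a
# uniform 0.05 m spacing.
u_rx = np.array([0.0, 0.1, 0.2, 0.3])
centres = pca_positions(np.array([0.0, 0.2, 0.4]), u_rx, u_tx=0.0)
print(centres)
```

Note that the phase-center spacing is half the receiver spacing, which is why the multielement array relaxes the platform-speed constraint of equation (31).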

For the phase center approximation to be valid, the range from the transmitter via the target and back to the receiver must diverge little from the range between the phase center and the target. This gives us the following constraint (taken from [Bellettini and Pinto, 2002])

(Δ_tr² / 4λ)(1 − cos²(Ω_θ/2)) ≪ Δ_cr    (33)

where Δ_tr is the distance between the transmitter and the receiver, Ω_θ is the azimuth beamwidth and Δ_cr is the distance from the phase center to the receiver. The constraint can be a problem with wide beams, but the condition is usually satisfied if the transmitter is positioned such that it is in the middle of the receiving array during reception, as this gives the least strict constraint. Thus the transmitter is located to the right of the middle of the receiving array. Since the sonar is moving between transmission and reception, the criterion becomes range dependent.

2.6 Motion

Synthetic aperture processing for sonar applications has not gained widespread use because of the complexity of the processing. Both erroneous platform motion and random disturbances within the medium cause phase errors in the data. Unless phase compensation (also called motion compensation) is applied, a meaningful image cannot be produced. Figure 10 shows the different ways in which the platform can move. An inertial navigation system (INS) can be used to obtain continuous position fixes of the platform, and one can thus perform phase compensation based on the absolute position of the platform. From [Sheriff, 1992] we have that, because of their inherent drift and bias errors, typical inertial navigation systems do not have the accuracy needed for high frequency phase compensation. And even if they had the needed accuracy, they still would not be able to solve the entire phase compensation problem. They would only correct for erroneous platform motion, and not the problems associated with medium instability. An unstable medium complicates the prediction of propagation speed, and the rest of the processing chain might inherit these errors. [Sheriff, 1992] states that experiments have established that the ocean remains stable for periods of up to 8 minutes; however, the periods are discontinuous, and the instabilities increase with frequency. When we have platform movement normal to the intended line of motion, the travel time for the pulse from the transmitter to the receiver will be altered by unknown amounts, and the data cannot be integrated correctly. Also, when the actual receiver positions along the aperture in the u-direction do not correspond to their assumed positions, due to different true and predicted forward velocities of the platform, we cannot make a focused image. Movement normal to the intended line of motion is the most severe problem, and can be mitigated by the use of the displaced phase center antenna (DPCA) algorithm.

Figure 10: The different ways in which the platform can move (roll, pitch, yaw, surge, sway, heave). The figure is taken from [Hansen, 2001].

The DPCA algorithm relies on ping-to-ping correlation of phase centers. The technique was first suggested by [Raven, 1978]. The signal received at a phase center is equivalent to that which would be received there by a single Tx/Rx element. This concept is applied to a synthetic aperture to estimate and correct for phase errors from one receive position to the next. If the spacing between two receiving elements is d, the phase centers are separated by d/2. The pulse repetition interval can be adjusted such that it coincides with a forward movement of the platform equal to d/2. Consider an array of receivers labelled from 1 to N, with a transmitter located somewhere on the array. If the platform moves at a constant velocity along a straight line, the phase center at element no. 2's present position perfectly overlaps element no. 1's phase center location from the previous ping. Provided that the acoustic medium is constant between pings, the phase measured at element no. 2's present output and element no. 1's output from the previous pulse should be identical. If the platform does not move with constant linear motion, and/or the medium has changed between pings, a phase difference will appear at the output of the two elements. By measuring the phase difference, a correction factor can be applied to element no. 1's present output. Data corrected in this manner can be passed to a beamformer to produce an image. It is possible to reduce the speed of the platform, and thus get a bigger overlap between the phase centers, and in turn a better accuracy (illustrated in figure 11, where 4 PCAs overlap at each ping). This will, however, come at the expense of a reduced size of the imaged scene or longer survey times. It is also harder for the platform to maintain a straight line of motion at low speeds. [Bellettini and Pinto, 2002] have estimated the optimal overlap factor. This description of the DPCA algorithm is based on information taken from [Sheriff, 1992].

Figure 11: The displaced phase center antenna algorithm uses overlapping phase centers to see if the actual position of the receivers is the same as their predicted position. If not, a phase compensation factor can be applied to the data. The figure is taken from [Hansen, 2001].

After the DPCA algorithm has been applied to the data, the image may still be out of focus if compensation for azimuth motion errors has not been applied. However, it will be possible to apply autofocus techniques to remove the remaining errors in the image. There are two broad classes of autofocusing algorithms:

• Those tracking the phase of point reflectors (phase gradient algorithms) or measuring the Doppler frequency bandwidth (power spectrum analysis).

• Those measuring the geometrical displacement between separate looks or the defocusing blur (map-drift, reflectivity displacement method or complex correlation in the frequency domain).

More on autofocus can be found in [Nahum, 1998] and [Callow, 2003, chapter 7].

2.7 Image quality

It is important to establish the SAS image quality in order to benchmark different beamformers, different navigation strategies etc. There are several measures used to assess the quality of an image. Those described here are based on the imaging of one reflector. We look at the point spread functions in both azimuth and range, and extract various parameters. Some are described here:

• Peak to side lobe ratio (PSLR)

PSLR is the ratio between the peak of the main lobe and that of the most prominent side lobe. There are four different measurements of PSLR: on both sides of the main lobe, in both azimuth and range. It is customary to report the largest value for each direction. It is not necessarily the lobe closest to the main lobe that is the most prominent one, and so it can be convenient to report the two computed PSLRs corresponding to the lobe closest to the main lobe and to the absolute one. The sinc function that is the approximate point spread function of an ideal point reflector has a noticeable disadvantage: the first sidelobes are quite high (PSLR ≈ -13 dB). Such high sidelobes can produce artifacts in the image if high intensity reflectors are present. It is often desirable to reduce these sidelobes at the expense of geometric resolution. This can be achieved by introducing weighting functions. By applying a Hamming filter, the value drops to about -43 dB, but the width of the main lobe is increased.

• Integrated side lobe ratio (ISLR)

ISLR is defined as the ratio between the energy of the main lobe and the energy integrated over all the side lobes. Because the extent of the scene is limited, we typically integrate over several (10 to 20) lobes on each side of the main one. Its value is about -10 dB for the sinc function, and drops to approximately -20 dB with the use of a Hamming filter. There are also other definitions of ISLR, but we will stick with this one in this thesis (see e.g. [Martinez and Marchand, 1993] for an overview of other definitions).

Information is taken from [Franceschetti and Lanari, 1999, page 112].
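As an illustration of the PSLR measure, the following sketch extracts it from a sampled 1-D point-spread-function cut. The main-lobe edges are simply taken at the first local minima on either side of the peak, which is a simplification of what real analysis tools do (they typically interpolate around the lobe maxima); the function name is chosen for the example.

```python
import numpy as np

def pslr_db(psf):
    """Peak-to-side-lobe ratio of a 1-D point-spread-function cut, in
    dB: the highest side-lobe level relative to the main-lobe peak."""
    mag = np.abs(psf)
    peak = int(np.argmax(mag))
    # Walk outwards from the peak to the first local minima: everything
    # between them is counted as the main lobe.
    left = peak
    while left > 0 and mag[left - 1] < mag[left]:
        left -= 1
    right = peak
    while right < len(mag) - 1 and mag[right + 1] < mag[right]:
        right += 1
    side = np.concatenate((mag[:left], mag[right + 1:]))
    return 20.0 * np.log10(side.max() / mag[peak])

# The sinc point spread function should give a value close to the
# -13 dB quoted above.
x = np.linspace(-10, 10, 4001)
print(round(pslr_db(np.sinc(x)), 1))
```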


Figure 12: To form an image from SAS data is a complicated process. This diagram shows the main parts of the processing chain, from sonar raw data through motion estimation (e.g. slant-range DPCA, DPCA+INS), motion compensation, beamforming (time-domain beamforming, fast backprojection, wavenumber and chirp scaling methods), autofocus (e.g. contrast optimization, map drift, PGA variants) and interferometry, to the SAS image (echo strength) and the bathymetric map. The figure is taken from [Hansen and Sæbø, 2003].

2.8 SAS signal processing overview

This subsection gives an overview of the different parts of the processing chain of image formation from SAS data. The chain is illustrated in figure 12. The subsection is intended to give the reader a better understanding of where the methods described in this thesis come into the processing chain.

• Motion estimation is always done, as a focused image requires knowledge of the positions of all the PCA elements. In order to use frequency domain beamformers, the data have to be motion compensated to lie on a straight line if they do not already do so. For time domain beamformers, this step is not necessary, which is a great advantage.

• The beamforming step is where the image is formed, and it is also whereFFBP can be used.

• Autofocus can be applied if the image is not well focused after beamform-ing.


• Interferometry is a processing technique for utilizing the extra information obtained from vertically separated receivers. The technique produces a bathymetric map of the seafloor, i.e. a height profile.


3 Beamforming

The process of constructing an image from synthetic aperture data consists of a two-dimensional matched filtering. First the received echo from each ping is compressed. Then the azimuth compressed synthetic aperture is formed by matched filtering the variation of the signal across the synthetic aperture. A beamformer performs this operation. Time domain beamforming is, as the name implies, beamforming done in the time domain. The other main group of beamformers perform all of their work in the frequency domain. Both groups have clear advantages and disadvantages: the time domain methods can handle an arbitrary system geometry, but are slow, while the frequency domain methods are relatively fast since they can utilize the FFT, but require the sampling positions to lie uniformly on a straight line. Also, the frequency domain methods need a lot of memory to store and evaluate the two-dimensional Fourier transforms, and the data must also be zero-padded to avoid wrap-around effects from the Fourier transforms. There are different algorithms in both main groups, all developed to suit different types of systems, required quality, required speed and other criteria. One important issue is whether the receiving array is in the near or far field of the scene. The reflections from an omnidirectional target propagate in spherical 'shells'. The further the aperture is located from the target, the larger the diameter of the 'shells' will be when they arrive at the aperture, and the more plane the 'shells' will look. To find out if the wave can be modelled as plane (which greatly simplifies the beamforming), the distance between the true position of the wave and the assumed position of the wave (assuming a plane wave), measured at the receiver, is calculated. We usually demand that this distance is kept below λ/8 ([Bruce, 1992]). This gives

λ/8 ≥ √(R² + u²) − R ≈ u²/2R  ⇒  R ≥ 4u²/λ    (34)

where u is the position of the sensor element. If the range satisfies this criterion, we are in the far field of the scene, and the wave can be modelled as plane; otherwise one must take into account the spherical nature of the wave when beamforming. For SAS, we are always in the near field. For more general information on beamformers, see [Johnson and Dudgeon, 1993] or [Van Trees, 2002].
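Equation (34) is easy to evaluate numerically; the numbers below are purely illustrative.

```python
def far_field_range(u, wavelength):
    """Range beyond which a plane-wave model keeps the path-length
    error under lambda/8 (equation (34)): R >= 4 u^2 / lambda."""
    return 4.0 * u**2 / wavelength

# A sensor element 0.5 m from the aperture centre at a 15 mm wavelength:
print(f"far field beyond {far_field_range(0.5, 0.015):.1f} m")
```

Because the required range grows with the square of the aperture extent, a synthetic aperture tens of metres long would need an absurdly large range before the plane-wave model held, which is why SAS is always in the near field.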

3.1 Time domain beamforming (TDB)

As the main algorithm of this thesis (FFBP) operates in the time domain, we will look more closely at TDB. We can divide time domain methods into unfocused and focused algorithms. The unfocused ones are the simplest. For one ping, one finds the direction of arrival of the incoming wave, and from this the time differences from when the wave meets one receiving element to when it meets the next are calculated. The data are delayed the appropriate amount (called the steering delay) and then summed. Consequently, this type of unfocused beamformer is called a delay-and-sum beamformer. It has the form

y(t) = Σ_n s_n(t − Δ_sn)    (35)

where y(t) is the output of the beamformer and Δ_sn is the steering delay for the n'th Rx element. It is very efficient because it is range-independent, but it is only valid if the scene to be imaged is in the far field of the aperture. This is never the case with a SAS system. When one is in the near field, in addition to the steering delay, a focus delay must also be applied to account for the spherical nature of the wave. The focus delay is range-dependent, so a focused beamformer will be slower than its unfocused counterpart. It can be written as

y(t) = Σ_n s_n(t − Δ_sn − Δ_fn)    (36)

where Δ_fn is the focus delay for the n'th Rx element. See [Johnson and Dudgeon, 1993] for more details. Within the group of focused time domain beamforming algorithms, there are two important methods we shall look at next.
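Equation (36) can be sketched for a single focal point, with linear interpolation used to read each fast-time record at its delayed time. This is an illustration only; the names and the sampling details are invented for the example.

```python
import numpy as np

def focused_das(records, delays, fs):
    """Focused delay-and-sum (equation (36)) evaluated for one focal
    point: records is an (N, T) array of fast-time series sampled at
    fs, and delays holds the total steering-plus-focus delay in seconds
    for each of the N elements.  Each record is read at its delayed
    time by linear interpolation, then summed coherently."""
    t = np.arange(records.shape[1]) / fs
    out = 0.0 + 0.0j
    for rec, d in zip(records, delays):
        out += np.interp(d, t, rec.real) + 1j * np.interp(d, t, rec.imag)
    return out
```

An image is built by repeating this sum for every pixel, with the focus part of each delay recomputed from the pixel-to-element range; that per-pixel range computation is what the next subsection's backprojection algorithm makes explicit.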

3.2 Backprojection (BP)

BP is an exact inversion technique. It works in both the near and far field, which means that the ranges to the different contributions are important for finding the focusing delays. Calculating these is essentially what the algorithm is all about. We start with the fast-time matched-filtered signal

ss_M(t, u) = ss(t, u) ∗ p*(−t),    (37)

which has also been mixed to baseband. The algorithm takes its name from the fact that for a given synthetic aperture location u, the fast-time data of ss_M(t, u) are traced back in the fast-time domain (backprojected) to isolate the echo return of the reflector at (x_n, y_n). This can be written as

gg(x_n, y_n) = ∫_u ss_M[2R_n/c, u] du = ∫_u ss_M[t − Δt_n, u] du    (38)

The equations are taken from [Soumekh, 1999].

In other words, we wish to coherently add the data at the fast-time bins that correspond to the location of that pixel for all synthetic aperture locations u.


To implement this method in practice, the available discrete fast-time samples of ss_M(t, u) must be interpolated to recover ss_M[t − Δt_n, u], and the integral takes the form of a sum. Often linear interpolation is used. However, it is possible to apply as advanced an interpolator as wanted, at the expense of increased computation time. Although this algorithm can handle an arbitrary array geometry and makes no approximations, except for the interpolation, it has one major drawback. Consider an image with X × Y pixels and an array with N sensors. For each aperture and pixel position we need to compute the range between the sensor element and the pixel, interpolate in the received signal and finally add the value found to the image matrix. In total, the number of operations is proportional to

X × Y × N = N³    (39)

if we assume that X = Y = N. This fact limits the use of the algorithm to very small aperture and image sizes. For small images, direct backprojection is quite efficient and often preferred due to its simplicity and robustness. The algorithm has another major advantage in that it conserves memory: only one time series has to be stored in memory besides the image matrix. As soon as a data line has been backprojected to all image pixels, it is no longer needed. This makes the algorithm suitable for real time processing.

3.3 Dynamic focusing (DF)

In the sonar community, dynamic focusing usually means focused polar beamforming. The data collected by an imaging system are in polar coordinates. In fact, one line of raw data can be considered as one polar image with a resolution of 360° (or the beamwidth of the sensor element). To obtain all the information necessary to make an image, one could sample less frequently in azimuth at larger range than at shorter range. As we saw in the last subsection, this is not taken into account in BP. In BP, the pixels define the sampling grid. In polar processing, or DF, one makes the image entirely in polar coordinates. This is a common method in ultrasound RAS imaging (see [Haun, 2004]). Using this method with SAS, however, introduces problems. At a given PCA position the data lie on a polar grid. But as the platform moves, the data lie on another polar grid, and these grids are difficult to combine. A polar-to-Cartesian transformation must be performed in order to coherently combine the data from ping to ping, and the computational cost of such a transformation is very high. The process is further complicated if one has an array of receivers. DF is never used for a bistatic system.


3.4 Fast time domain methods

With these facts in mind it is possible to construct in-between algorithms that take the best from both worlds (BP and DF). Methods that lie at this crossroad are called fast time domain methods. One such method is FFBP, which is the main topic of this thesis and is described in section 4. Other fast time domain methods also exist for various applications, and we will look at some of them in the next subsection. Most fast time domain methods have in common that they (at least in the beginning) use polar coordinates and in some way divide the backprojection problem into smaller subproblems that can be solved with less computation. The solutions to these subproblems are then combined in an appropriate way to retrieve the final image. It is essentially a smart rearranging of the order of the calculations.

3.5 Applications across disciplines

To get an overview of the field of beamforming we will look at the system requirements and the imaging methods used for some of the most common applications of beamforming apart from sonar. The main differences are due to huge variations in the physical parameters involved. Much of the mathematics involved in the different fields is quite similar, but a lot of it has been obscured by different terminology and different names for essentially the same algorithm.

3.5.1 Radar

Radar is the area in which antenna arrays were first used. Radars are usually mounted on an airplane or a satellite, but can also be ground- or ship-based. They are almost always in the far field of the scene, and so they only have to deal with plane waves. This simplifies the imaging process. Radars mounted on satellites have the easiest imaging process, because a satellite can follow a nearly linear path with constant velocity, which is harder for airplanes and especially for ships. However, airborne radars have to deal with reflections from the ground, something the ship- and ground-based radars can ignore. As with sonar, the three different modes of SAR have algorithms especially tailored for each one.

Frequency domain methods utilizing the FFT are common, as the receiver positions can be modelled to lie on a straight line. One must separate the cases in which we have a narrowband system from the cases in which we have a wideband system. If we are working with a narrowband system, we can perform a 2D FFT, a multiplication with the transfer function and a 2D IFFT (inverse FFT) to construct the image. However, in reality we can seldom assume a narrowband system. In these cases a range cell migration compensation has to be applied to account for the non-linear nature of the range samples. This process is also called Stolt interpolation.
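As a sketch, the narrowband reconstruction chain above (2D FFT, transfer function multiplication, 2D IFFT) can be written as follows. The function name is illustrative, and the construction of the system transfer function H is system-specific and not shown:

```python
import numpy as np

def narrowband_image(raw, H):
    """Narrowband frequency domain reconstruction: 2D FFT of the raw data,
    multiplication with the precomputed transfer function H, and a 2D
    inverse FFT to obtain the image."""
    return np.fft.ifft2(np.fft.fft2(raw) * H)
```

For a wideband system, the Stolt interpolation step would be inserted between the multiplication and the inverse transform.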

The range-Doppler (RD) algorithm is the oldest and still the most popular frequency domain method in the area of SAR processing ([Franceschetti and Lanari, 1999]). Other popular methods are the wavenumber algorithm (also called the Ω−K algorithm) and the chirp scaling algorithm. They all perform some kind of range cell migration compensation, but are relatively fast due to the use of FFTs.

Time domain methods are also used in this area, although on a much smaller scale. If the imaged scene is small or if the trajectory of the platform deviates much from a straight line, there is no point in using FFT-based methods. If the imaged scene is small, the FFT-based methods are not much faster than the time domain methods. And if there is much motion error, the cost of applying motion compensation and autofocus algorithms in order to use FFTs is so large that the computational savings are lost. As radars are usually in the far field, delay-and-sum and BP are the relevant algorithms in the time domain.

Some algorithms work solely in polar coordinates, e.g. the polar format (PF) algorithm and the convolution backprojection (CBP) algorithm. They were developed for radars working in spotlight mode, and are adopted from the world of computed tomography ([Franceschetti and Lanari, 1999, page 274]). They are based on the projection slice theorem and avoid the traditional nemesis of motion through resolution cells. The Fourier transform cannot be used for a signal sampled on a polar grid, but the algorithms are still relatively fast, as the Hankel transform can be used instead. [Milman, 2000] gives a good explanation of the Hankel transform and its use.

[Ulander et al., 2003] demonstrated FFBP on an airborne ultra-wideband, wide-beamwidth VHF SAR system named CARABAS. However, other fast time domain methods have been proposed in recent years. A quadtree backprojection algorithm was proposed by [Oh et al., 1999] in 1999. [McCorkle and Rofheart, 1996] introduced a similar algorithm as early as 1996 with the same computational cost as FFBP, factorizing both the image scene and the aperture with a factorization factor of 2. Another fast time domain algorithm uses only two stages (called local backprojection). These algorithms are in fact special cases of FFBP. [Olofsson, 2000] discusses a method highly related to FFBP: fast polar backprojection. As these fast time domain methods work in the time domain, they can handle an arbitrary platform trajectory without the same computational burden as standard time domain methods. Computational cost is traded for image quality, but the error can be controlled by setting certain parameters. It looks as though these methods will become more and more common for SAR in the near future.


3.5.2 Seismology

The objective of imaging seismology is to construct an image of the earth beneath the surface by the use of two types of waves: pressure and shear waves. One is interested in the structure and physical properties of the media. In the same manner as the sea, the earth is an inhomogeneous medium for the acoustic wave to travel in, and this must be taken into account in the beamforming process. Various approximations are applied to the composition of the earth. Often the earth is modelled as a stack of layers, where the physical properties are equal within each layer. However, the estimation of the velocity distribution and the various attenuation factors at different positions in the medium is a big part of the image formation process in seismic imaging. The whole process of finding the velocity distribution and the attenuation factors and forming the image is called migration. In the same manner as water, the earth attenuates and spreads the signal by different amounts depending upon both the physical properties of the medium and the frequency of the transmitted signal. An important quality measure of seismic beamformers is how they handle dips. According to [Rocca et al., 1989] a dip is another term for a Doppler frequency shift. Seismic images can be displayed in either time or depth.

Kirchhoff migration (KM) is a popular collection of time domain algorithms that can be loosely compared to BP in SAR and SAS, and works in both the near and far field with an arbitrary array geometry. It sums received signals along a diffraction hyperbola whose curvature is governed by the medium velocity. Amplitude and phase corrections are added before summation to account for spherical spreading. KM can be cumbersome in handling lateral velocity variations, i.e. it is only accurate in media with only vertical velocity variations. KM is computationally expensive, can generate strong far-field migration artifacts, and has problems when it comes to migrating complex subsurface structures.

Finite-difference algorithms are another class of imaging methods, some of which work in the time domain and others in the frequency domain. They can handle any type of velocity variation, but they have different degrees of dip approximations. Furthermore, differencing schemes, if carelessly designed, can severely degrade the intended dip approximation. The algorithms are quite slow, but can overcome some of the limitations of the KM algorithms, so they are preferred when complex subsurface structures are to be imaged.

Fourier-based methods using Stolt interpolation are also in use. They are fast, but they are based on an assumption of constant propagation velocity and thus introduce considerable approximations to the model. One such method is the phase-shift method. Some of the frequency domain methods are modified to also handle variable velocities by introducing the concept of stretching in time. Here, time sections are converted to approximately constant-velocity sections, and then migrated by the constant-velocity Stolt method. This conversion is essentially a stretching in the vertical (time) direction. Once the section is migrated in the stretched domain, it is converted back to the original time domain.

Because none of the groups of algorithms can handle all possible events, many algorithms that combine the different groups have been developed. The information on the different algorithms is taken from [Yilmaz, 2001].

3.5.3 Computed tomography (CT)

Most of the information in this subsection is taken from [Basu and Bresler, 2001]. Computed tomography (also known as CAT scanning, computerized axial tomography) is the cross-sectional imaging of objects from transmitted or reflected data. Pulses are sent through the object of interest from different angles and data are collected at a receiving array placed on the other side of the object. The sending of a pulse in a given direction is called a projection. The different projections are combined to give a cross-sectional image of the object. Tomography is widely used in the area of medical diagnostics.

The type of algorithm used in the reconstruction process depends almost exclusively on how many directions we have projection data from. If data are available from all possible directions, and the measurement noise is negligible, the filtered backprojection (FBP) method is chosen. This is a time domain technique much like BP in SAR and SAS. The difference is the filtering operation before the backprojection. The backprojection is the bottleneck of the method, as the filtering can be done using FFTs.
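A minimal FBP sketch in this spirit, for an idealized parallel-beam geometry, is shown below. The ramp filtering is done with FFTs, and the pixel loop is the backprojection bottleneck; the function names are illustrative:

```python
import numpy as np

def ramp_filter(projections):
    """Filter each projection (rows = view angles) with a ramp (|f|) filter
    applied in the frequency domain via the FFT."""
    freqs = np.fft.fftfreq(projections.shape[1])
    spectra = np.fft.fft(projections, axis=1) * np.abs(freqs)
    return np.real(np.fft.ifft(spectra, axis=1))

def filtered_backprojection(projections, angles, size):
    """Smear each filtered projection back across a size x size image grid
    (the backprojection step, the computational bottleneck of FBP)."""
    filtered = ramp_filter(projections)
    xs = np.arange(size) - size / 2
    X, Y = np.meshgrid(xs, xs)
    image = np.zeros((size, size))
    for proj, theta in zip(filtered, angles):
        # detector coordinate of each pixel for this view angle
        s = X * np.cos(theta) + Y * np.sin(theta) + len(proj) / 2
        idx = np.clip(np.round(s).astype(int), 0, len(proj) - 1)
        image += proj[idx]   # nearest-neighbour backprojection
    return image * np.pi / len(angles)
```

With a sinogram of a point scatterer at the detector centre, the reconstruction peaks at the image centre, illustrating that the filtering is cheap (FFT-based) while the double pixel loop dominates the cost.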

Fourier reconstruction algorithms (FRAs) are also widely used in CT. They are based on the Fourier slice-projection theorem and involve FFTs of the projections, a transform from polar to cartesian coordinates, and a 2D IFFT to recover the image. Unfortunately, the interpolation step generally requires a large number of computations to avoid the introduction of artifacts in the image. Although this method is fast in theory, experiments show that the gain of using this method over FBP is less than predicted for reasonable image sizes.

[Basu and Bresler, 2000] proposed a fast recursive method much like FFBP in 2000, called fast hierarchical backprojection (FHBP). It also factorizes the image into smaller and smaller images, while the number of view angles is reduced. In this algorithm (which takes place in polar coordinates) radial and angular oversampling can be controlled to trade off complexity for accuracy of reconstruction. The complexity is O(N²log N), and if the parameters are chosen wisely, the distortions in the image are negligible while the speedup is considerable. [Basu and Bresler, 2001] describes how to choose the interpolator length, the oversampling factor and the number of stages, amongst other parameters, to get as small an error as possible in the reconstruction. [Boag et al., 2000] deals with the inverse operation. As in the areas of SAS and SAR, these types of methods are predicted a bright future. Other fast methods have also been proposed for CT, such as the multilevel inversion (MI), the linogram method, and the links method ([Basu and Bresler, 2001]).

3.5.4 Medical ultrasound

Ultrasound imaging is one of the most popular imaging methods in medicine. The equipment is simpler and more portable than for, for instance, X-ray, CT or magnetic resonance (MR) imaging (it consists of a handheld transducer coupled to a processor and a monitor), and it has few side effects. The transducer (consisting of an array of elements) both transmits and receives the signals, and the objective is to combine the different received signals into an image. The transducer can have a variety of geometries, from linear to curved, in two dimensions. Tissue is a highly inhomogeneous medium, and different body parts reflect different amounts. Bones and air give strong reflections, and it can be difficult to obtain meaningful data from objects behind these media. Contrast fluids are often used to enhance certain parts of the body. This information was taken from [Holm].

As with sonar, we are in the near field of the scene, and the spherical nature of the wave must be accounted for. Time domain methods are almost exclusively used in this area, and the algorithm of choice is dynamic receive focusing, where the focusing delays can be changed continuously to obtain new focal points. See e.g. [Haun, 2004] for details.
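As an illustration, the per-element receive delays for one focal point on a linear array can be computed as in the sketch below; the function name and the tissue sound-speed default are assumptions. Sweeping the focal depth sample by sample gives dynamic receive focusing:

```python
import numpy as np

def receive_focus_delays(element_x, focus_x, focus_z, c=1540.0):
    """Receive delays (in seconds) that align echoes from the focal point
    (focus_x, focus_z) across a linear array lying along z = 0.
    c is an assumed nominal speed of sound in tissue (m/s)."""
    path = np.sqrt((element_x - focus_x) ** 2 + focus_z ** 2)
    # delay relative to the element with the longest path, so all delays >= 0
    return (path.max() - path) / c
```

For an on-axis focal point the delays are symmetric about the array centre: the outermost elements (longest paths) get zero delay and the centre element the largest.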


4 Fast Factorized Back-Projection (FFBP)

This section will describe the main method of this thesis, FFBP. As we have seen, it is seldom correct to assume that the platform carrying the sonar moves in a straight line. This limits the use of frequency domain reconstruction methods, as a lot of pre- and postprocessing must be applied to the data to correct for platform motion. There is clearly a need for time domain methods in this field; however, the standard time domain methods are slow and not very often used. FFBP is a fast time domain method, and can thus handle an arbitrary array geometry. It can also give computation times as low as those obtained using frequency domain methods. Thus it is a method that takes the best from both time and frequency domain methods: it is fast, and at the same time it can account for an arbitrary array geometry. The algorithm is fast because it introduces approximations. However, these approximations can be varied. If set to zero, no approximations are made, and the algorithm is equivalent to BP. Larger approximations lower the computation time, but also introduce more errors in the final image. The approximations consist of representing points in a small region of the scene by only one time series. A factorization of the image scene, as well as a factorization of the synthetic aperture, is applied to control the size of the areas that can be represented this way. The method was originally developed for SAR, and has not been used to any extent in the SAS community yet, but it shows promising results for this field too. [Banks, 2002, chapter 6] has applied the method to SAS with good results. [Frolind et al., 2004] also describe the method for SAS, as well as compare it to fast polar backprojection (FPBP), a method strongly related to FFBP. [Aydiner et al., 2003] have developed a method to perform a sparse data fast Fourier transform (SDFFT), which is somewhat similar to FFBP.

4.1 Principle

Figure 13: Two receivers are combined into a subaperture by beamforming them to the center line. It is also visible here that the two receivers have almost the same circular pattern within the beam. The figure is taken from [Ulander et al., 2003].

In the standard backprojection algorithm one backprojects all the raw data to every pixel in the final image, and the complexity was given in (44). In FFBP the final image is reconstructed through a number of stages. Polar images with increasingly higher resolution are formed in each stage. They are formed using only a subsection of the synthetic aperture, and the size of the subapertures used to make a polar image is increased in each stage. At first, each original receiver is one subaperture, but as we go through the stages, subsets of the aperture positions are combined to form subapertures. The aperture positions in a subaperture are always adjacent ones, i.e. a subaperture cannot be formed of e.g. elements number 3 and 6 from the original receivers, but one could form a subaperture of e.g. elements 1 to 6. One can look at a raw data line (time series) as a polar image with an angular resolution of 360° (or the transmitter/receiver beamwidth Ωθ), as these data contain correct information about the range to the targets, but not about the targets' angular location. The origin of a polar image is in the middle of the subaperture. This means that if two receivers are beamformed in a certain direction, the two resulting data lines can be viewed as a polar image with origin in the middle of the two receiver positions. This new polar image will have the same range resolution as the two original polar images, but it will have a higher angular resolution along the direction the receivers were beamformed to. The more receivers that are beamformed in a given direction, the better the angular resolution of the obtained polar image. Hence, as the algorithm goes through the stages, more and more receivers are used to make a polar image, and the polar images thus have better and better angular resolution. The algorithm exploits the fact that within a given angular sector, adjacent aperture positions have essentially the same circular pattern in the collected data. Figure 13 shows a polar image made up of a subaperture of two original receivers. They have been matched along the beam center line (the line that starts in the middle of the two receivers and goes through the middle of the angular sector), which corresponds to beamforming in this direction. Thus the values obtained on the center line are correct, while the errors grow the further out toward the edge the points lie, as illustrated in figure 14. Two elements are combined to form a subaperture and focused along the subimage center line. This line is the dashed line in the figure. All points along this line will have the correct value, whereas points elsewhere will have approximate values. The errors are largest for points that lie furthest away from the center line. In this polar image the point PT will be represented as if it lay at PP, along the centre line and at an angle b from PT. [Ulander et al., 2003] states that the maximum range error between PP and


Figure 14: By using the assumed position of a point instead of the actual position, an error is introduced. The error in range between PP and PT is given in (40). The figure is taken from [Banks, 2002]. Some of the symbols might not be consistent with the text.

PT is

∆r = r′(a + b) − r′(a) = √(r² + t² − 2rt cos(a + b)) − √(r² + t² − 2rt cos(a))    (40)

where 2t is the size of the subaperture. This error is an important parameter in FFBP, and we will look more closely at it in subsection 4.3.
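Equation (40) is easy to evaluate numerically. The sketch below (illustrative function name) shows that the error vanishes on the centre line (b = 0) and shrinks with the subaperture half-length t:

```python
import numpy as np

def ffbp_range_error(r, t, a, b):
    """Range error of (40) made when a point at polar angle a + b is
    represented by the value on the subaperture centre line at angle a;
    2t is the length of the subaperture."""
    r_prime = lambda angle: np.sqrt(r**2 + t**2 - 2.0 * r * t * np.cos(angle))
    return r_prime(a + b) - r_prime(a)
```

This behaviour is why the early stages, with short subapertures, tolerate wide angular sectors, while later stages must beamform to narrower sectors.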

The range line matched to the center line (i.e. the two original data lines, delayed and combined into one) can be used to represent the whole angular sector with little error if the beam is narrow enough, corresponding to using nearest neighbour interpolation in angle. A beam gets narrower when more receivers are used in the beamforming, so one can also say that the range line matched to the center line can be used to represent the whole angular sector if enough data lines were used in the beamforming. Thus, as the algorithm goes through the stages and the subaperture sizes are increased (i.e. they contain more and more original receivers), the beams become increasingly narrow, and the errors at the edge of the angular sector become increasingly small. There are also other ways to think of this process that may ease the understanding. The increasingly narrow beams can be thought of as a far-field constraint: if the angular sector is narrow enough, the arcs depicted in figure 13 will appear to be plane, and the errors introduced by approximating the whole arc by the value at the center line will be small. One can also say that as we go through the stages and the sizes of the subapertures increase, the Doppler bandwidth of the data increases; thus the beams can be made narrower when beamforming along the center line, and the errors at the edges of the sector are smaller.


Figure 15: An example of the process of forming polar images. Both the image scene and the aperture are divided before beams are formed to the centres of the new subimages. The image is taken from [Banks, 2002]. Some of the symbols might not be consistent with the text.

A summation of beams using range interpolation is performed in each stage. The resolution in range is the same in every polar image, independent of stage, but the resolution in angle gets better and better in every stage as the beams are made narrower. Narrower beams correspond to increasingly narrower main lobes in the beam patterns. The approximations used in FFBP, and hence the noise in the image, are related to the angular resolution with which the images in the intermediate stages are formed. To keep the number of data samples constant, an increase in the size of a subaperture must imply beamforming to a narrower angular region and vice versa, because the Doppler bandwidth of the data increases for increasing subaperture lengths. This fundamental principle, combined with the approximation error, is what enables computationally efficient backprojection algorithms to be constructed.

4.2 Details

When new polar images are formed in each stage from the polar images of the previous stage, the image scene is factorized into smaller and smaller patches. This is done so that the angular sector to beamform to decreases in size. The number of polar images increases with each stage, at a rate depending on the choice of factorization factor. An example of the forming of an intermediate polar image can be seen in figure 15. In this example, one subimage from the previous stage is divided into 9 new subimages, at the same time as the subaperture size from the previous stage is doubled. Each of these 9 polar subimages will have a higher angular resolution than the one from the previous stage, but they are also smaller in size. For each subaperture, we beamform along the center line of the polar image. Because the number of polar images increases in each stage, we must change the dimensions of the raw data matrix in each stage. Instead of having a raw data line for each ping and each receiver throughout the algorithm, we now have one raw data line for each ping, each subaperture in the current stage and each subimage in the current stage. In this example, one polar subimage was divided into 9 polar subimages, which implies an image factorization factor of 3 (the factorization factor is applied to each side of the scene, thus an image factorization factor of 3 will divide each polar image of the previous stage into 3² = 9 new polar images). The factorization factor for the aperture in this example was 2. The factorization factors can be chosen at will, and need not be the same in every stage, but the choice will influence the quality of the final image and the speed of the algorithm. We will look more closely at this in subsection 5.2. The image is assumed to be square to make the implementation simpler. Applying an image factorization in each stage corresponds to placing an increasingly finer grid over the original imaging scene. To beamform along the center line of a given subimage, the data are processed according to

s_i(t, u_i) = ∑_{n=1}^{Q_R(i)} s_{i−1}(t − ∆t, u_{i−1} + n∆u)    (41)

where s_{i−1} are the data from the previous stage, s_i are the data in the current stage, u_{i−1} are the subaperture center positions from the previous stage for the subapertures of interest, u_i are the subaperture center positions in the current stage, Q_R(i) is the aperture factorization factor for the current stage and ∆u is the distance between the subaperture center positions from the previous stage. If we call the travel times from the subaperture center positions of the previous stage to the center points of the subimages in the current stage t_{i−1}, and the travel times from the subaperture center positions in the current stage to the center points of the subimages in the current stage t_i, then

∆t = t_{i−1} − t_i.    (42)

From these equations we see that an essential part of the implementation is being able to determine the parent-child relationship of the subimages. These relationships can be illustrated by figure 16 for an image factorization factor of 2 (i.e. each subimage produces 4 new subimages). The receiver positions are processed according to

u_i = (1 / Q_R(i)) ∑_{n=0}^{Q_R(i)} (u_{i−1} + n∆u)    (43)

in each stage. The choice of interpolation method will affect the quality of the image and the speed of the algorithm. Linear interpolation in range and nearest neighbour interpolation in angle are used in this implementation, but other interpolation methods would possibly give better images at a higher computational cost.

Figure 16: The quadtree structure through which the subimages are related to each other. The figure is taken from [McCorkle and Rofheart, 1996].

It is not necessary to make an image in cartesian coordinates at any stage but the final one; all of the information is kept in the polar images, which need not be displayed. The number of stages to apply depends on several parameters in the algorithm, and it is discussed further in subsection 5.2. One could in theory run the algorithm all the way through, i.e. until each pixel is one subimage. However, there is a cross-over point in the number of stages beyond which there is nothing more to gain in speed. To go on from there will make the algorithm slower (see (44) below). In practice, FFBP is interrupted at a given stage, and standard backprojection is run on each of the subimages with a reduced number of receivers. The reduced number of data lines are treated as if they were the original raw data, and this will work fine if the range error is small enough. Error analysis is the topic of the next subsection.

The steps in the algorithm can be summarized as follows:

• Determine the factorization factors.

• Divide the image scene into Q_I(i)² subimages. Q_I(i) is the factorization factor for the image in stage i.

Figure 17: An example of 3 stages of FFBP. In each stage both the subaperture and the subimages are divided before beams are formed from the new subaperture positions to the new subimages. The figure is taken from [Banks, 2002].

• Determine how many range samples are needed to cover the given subimage.

• Beamform to the centers of the resulting subimages.

• Interpolate and combine Q_R(i) beams together. Q_R(i) is the factorization factor for the receivers in stage i.

• Repeat these steps N_s − 1 times, where N_s is the number of stages in the algorithm.

• Backproject to all image pixels with a reduced number of receivers in thelast step.
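The bookkeeping implied by these steps can be sketched as follows, for constant factorization factors. The helper name is hypothetical, and the actual beamforming, interpolation and range-sample accounting are omitted:

```python
def ffbp_schedule(n_data_lines, n_stages, q_img=2, q_ap=2):
    """Count (subapertures, subimages) for each intermediate FFBP stage.
    Each stage splits every subimage into q_img**2 children, and groups of
    q_ap adjacent subapertures are then merged for use in the next stage."""
    schedule = []
    n_ap, n_img = n_data_lines, 1
    for _ in range(n_stages):
        n_img *= q_img ** 2   # image factorization: finer angular grid
        schedule.append((n_ap, n_img))
        n_ap //= q_ap         # aperture factorization: merged data lines
    return schedule
```

With 640 data lines and two intermediate stages this gives [(640, 4), (320, 16)], after which the final standard backprojection runs from 160 subapertures, matching the counts in the worked example later in this section.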

A summary of the method for an image factorization factor of 4 and an aperture factorization factor of 2 can be seen in figure 17. The method in this illustration starts without dividing the image scene in the first stage, however.

The last stage is where the computation time is saved. The savings are largest when N, X and Y are large. The order of computation in each of the intermediate stages is the number of subapertures × the number of subimages × the number of range samples needed for the subimages. The order of computation in the last stage is the number of pixels × the number of subapertures in the last stage. Let us look at an example to illustrate this. We set the number of stages to 3, the number of pixels to 1024 × 1024, the number of pings to 10, the number of receivers to 64 and the number of range samples to 3802. The factorization factor for the image is set to 2 and is the same in all stages, and the factorization factor for the receivers is set to 2, also the same for all stages. In each stage the number of range samples is reduced. The number of range samples needed to cover the area of a subimage is not the same for all the subimages; it is range-dependent. To simplify the implementation, the largest number of range samples needed is chosen for all the polar images in a given stage. It is possible to start the first stage using the whole imaging scene as the subimage, and beamforming to the center line, but it is also possible to start with a divided scene, which is the approach used in this thesis. By starting with only one subimage, the range samples needed in the first stage would be the original number of range samples, but by starting with a divided scene, the number of range samples needed in the first stage is reduced. In this example we start the first stage beamforming from the original number of receivers to 4 subimages. The beams are also combined in this stage, to be ready for use in the next stage. The number of range samples needed was found to be 3757, and the computational load for the first stage is

640 × 4 × 3757 = 9,617,920

In the next stage 2 original receivers have been combined to form each new receiver, so the number of data lines is now 320, and the number of subimages is 4 × 4 = 16. The number of range samples needed has been reduced to 1990. Thus the computational load for the second stage is

320 × 16 × 1990 = 10,188,800

In the last stage, standard backprojection is performed with 160 receivers, backprojecting to all pixels. The computational load in the last stage is thus

160 × 1024 × 1024 = 167,772,160

and the total computational load for this example is

9,617,920 + 10,188,800 + 167,772,160 = 187,578,880

For comparison, the computational load for BP with the same parameters would be

640 × 1024 × 1024 = 671,088,640
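The arithmetic in this example can be checked directly (all numbers taken from the text above):

```python
# Worked example: 640 data lines (10 pings x 64 receivers), a 1024 x 1024
# pixel image, image and aperture factorization factors of 2.
stage1 = 640 * 4 * 3757       # all data lines, 4 subimages, 3757 range samples
stage2 = 320 * 16 * 1990      # merged subapertures, 16 subimages
stage3 = 160 * 1024 * 1024    # final standard backprojection to all pixels
ffbp_load = stage1 + stage2 + stage3
bp_load = 640 * 1024 * 1024   # direct backprojection with the same data

print(ffbp_load, bp_load)
```

This reproduces the totals above, roughly a 3.6× saving for these parameters.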

From this we can see that FFBP represents a considerable improvement in speed. These calculations are, however, simplifications of the real computational loads for the separate stages. Additional computation is needed to keep track of the parent-child relationships, to determine the number of range samples needed in each stage, to calculate the new subimages and subapertures, etc. Since the computational burden these factors represent is highly dependent on the implementation, this thesis will not try to derive a closed expression for it. Instead all the extra computation is combined into one variable for each stage, γ_i. At some point, this factor is so big that it is best not to run any more stages, but instead backproject to all the pixels from the remaining receivers. The growth of γ_i through the stages depends on the choice of factorization factors, and it can lead to a very slow algorithm for some choices of factorization factors.

The total computational load for a general data set can be written as

N \left( \sum_{i=0}^{N_s-2} \left[ \frac{\left( \prod_{j=0}^{i} Q_I(j) \right)^{2} N_r}{\prod_{j=0}^{i} Q_R(j)} + \gamma_i \right] + \frac{XY}{\prod_{j=0}^{N_s-1} Q_R(j)} \right)    (44)

where N is the number of original receivers, X is the number of pixels in the x-direction, Y is the number of pixels in the y-direction, Ns is the number of stages, QR(j) is the factorization factor for the aperture at stage j, QI(j) is the factorization factor for the image at stage j and Nr is the number of range samples needed to cover a given subimage. The γi factors are very much dependent on the implementation, and must be determined empirically. We see here that at some point the expression for the intermediate stages will take up so much of the computation time that the savings no longer increase, or are completely lost (we saw in the last example that the computational load for the intermediate stages increases with each stage). When the savings are completely lost, FFBP takes the same amount of time as BP, or longer. However, it is nearly impossible to give a general expression for where these points lie, both because of the implementation-dependent γi and because of the many parameter choices involved. [Banks, 2002, chapter 6] has made a program to find the optimum number of stages given the image and aperture geometry. [Ulander et al., 2003] states that a suitable number of stages for SAR is around 3, but [Banks, 2002, chapter 6] states that it can be up to around 6 for SAS. In the next section we will look at some simulations to see how the computation time comes out in practice, and we will see that it is possible to set up some guidelines for how the parameters should be chosen to get good performance from FFBP. The derivation of the computational load in this section has assumed a collocated transmitter/receiver configuration. When a bistatic transmitter/receiver configuration is used with FFBP, the displacement needs to be accounted for only in the first stage; hence the difference in computation time between the two configurations is smaller than when standard backprojection is used ([Banks, 2002, chapter 6]).

[Ulander et al., 2003] has derived that the theoretical order of computation is O(QN² log_Q N) if N = X = Y and the same factorization factors Q_R = Q_I (called Q) are used throughout the stages. The range and sample spacings are also assumed to be equal. This order is proportional to that obtained by FFT-based algorithms.
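The gain implied by this order can be illustrated with a small sketch. This compares plain backprojection, O(N³) when N = X = Y, against the stated FFBP order, with all constants ignored (so the numbers only indicate scaling, not actual run times):

```python
import math

def bp_order(n):
    # Plain backprojection when N = X = Y: N receiver positions
    # times N*N image pixels.
    return n ** 3

def ffbp_order(n, q):
    # Order stated by [Ulander et al., 2003]: O(Q N^2 log_Q N),
    # with constants of proportionality ignored.
    return q * n * n * math.log(n, q)

# Speed-up factor for a 1024 x 1024 scene with various factorization factors.
for q in (2, 3, 4):
    print(q, bp_order(1024) / ffbp_order(1024, q))
```

The ratio grows with N, which is why FFBP pays off most for large scenes.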


[Banks, 2002, chapter 6] has found that the savings in computational load using FFBP are independent of the frequency of the transmitted pulse. This is because a higher centre frequency means that smaller polar images must be formed, which again increases the computation time. However, a higher frequency also requires a denser azimuth sampling.

As FFBP is phase preserving, the method is suitable for interferometric processing. A more thorough derivation of the algorithm can be found in [Ulander et al., 2003]. See also [Hunter et al., 2003], [Banks, 2002, chapter 6], [Frolind et al., 2004] and [Olofsson, 2000].

4.3 Error analysis

The performance of FFBP is controlled by the approximation error (or range error) from equation (40). [Ulander et al., 2003] gives the maximum error in a given stage, when using nearest neighbour interpolation in angle, as

|\Delta R|_{\max} =
\begin{cases}
\dfrac{L_A L_I}{4 R_{\min}}, & L_A \le R_{\min} \\
\dfrac{L_I}{2}, & L_A > R_{\min}
\end{cases}    (45)

where LA is the length of the subaperture, LI is the length of the subimage in the x-direction and Rmin is the minimum range to the image. From this equation we can see that to keep the range error constant throughout the stages, we can balance an increase in LA by a decrease in LI, which is exactly what happens when the aperture and image factorization factors are equal. When the subaperture becomes longer than Rmin there is no longer a need to decrease the subimage size. The resolution then increases very slowly, because the Doppler cone angle changes very slowly. If the factorization factors are not equal, the errors from the individual stages will be successively larger or smaller, depending on which factorization factor is bigger.
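Equation (45) can be sketched in a few lines. The 40 m minimum range and 1 MHz frequency match the simulation setup used later in this section; the subaperture and subimage lengths are arbitrary illustration values chosen by me, not taken from the thesis:

```python
def max_range_error(L_A, L_I, R_min):
    # Equation (45): maximum range error in one stage when nearest
    # neighbour interpolation in angle is used.
    if L_A <= R_min:
        return L_A * L_I / (4.0 * R_min)
    return L_I / 2.0

R_min = 40.0    # minimum range to the scene [m]
lam = 0.0015    # wavelength [m] for a 1 MHz pulse at c = 1500 m/s

# Doubling the subaperture while halving the subimage (equal
# factorization factors) keeps the per-stage error constant:
e1 = max_range_error(0.5, 0.050, R_min)
e2 = max_range_error(1.0, 0.025, R_min)
print(abs(e1 - e2) < 1e-12)     # True

# The summed error over the stages must stay below lambda/4 for the
# data to be coherently combined:
print(e1 + e2 <= lam / 4.0)     # True for these values
```

Note that the coherence check uses the total error summed over stages, as described in the following paragraph of the text, not the per-stage error alone.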

This range error applies to only one stage. To find the total error of the algorithm, the range errors from all the stages must be summed. By varying the total error, image quality can be traded for speed and vice versa. A small total error gives a better image quality, but needs more computation. A large total error gives more noise, but needs less computation. It is useful to express the total error as a fraction of λ. Both [Ulander et al., 2003] and [Banks, 2002, chapter 6] state that if the total error is greater than λ/4, the image is destroyed. This is because the total path difference between the centre and the edge of the polar image then exceeds λ/2, and the sonar data are no longer coherently combined. This constraint can also be derived from the fact that the phase error has to be less than or equal to π. The phase error is obtained by multiplying the range error by 2k, and 2k|∆R| ≤ π ⇒ |∆R| ≤ λ/4. In some cases, when the minimum range to the scene is not much larger than the length of a subaperture, the maximum phase error has to be less than π. [Ulander et al., 2003] argues that the maximum phase error must be lowered to π/8 in these cases, and the total approximation error the algorithm can have while still producing an image of acceptable quality is then lowered as well.

A list of ways to decrease the approximation error follows:

• Start the image scene as far away from the aperture as possible.

• Use high factorization factors for the image.

• Use low factorization factors for the receivers.

• Use short subapertures.

• Use small image scenes.

Examples of how the total approximation error affects the speed and image quality can be seen in the next section. [Frolind et al., 2004] have investigated how interpolation methods in angle other than nearest neighbour affect the azimuth beampattern. They found that the sidelobe level was reduced considerably by using better interpolation methods. This does, however, increase the computation time.

[Banks, 2002, chapter 6] has found that, with good planning of the allocation and freeing of memory, the memory requirements are around that of the final image plus object code.

According to [Ulander et al., 2003], using FFBP leads to a shift-variant point spread function; it depends on the exact spatial locations of the subaperture beams used to form the given pixel value.


5 FFBP performance

In this section an analysis of the performance of FFBP is made. The speed of the algorithm and the quality of the images are measured for different approximation errors and numbers of stages used in the algorithm.

5.1 Simulation setup

The images in this section have been made using simulated data of a point reflector, with 10 pings of a physical array of 64 receivers in the reconstruction. The total size of the synthetic aperture is 5.28 m. A centre frequency of 1 MHz was used.

5.2 Results

5.2.1 Processing load details

As the speed of the algorithm depends upon several parameters, it is difficult to determine which set of parameters gives the fastest reconstruction. There are, however, some general trends.

• The number of receivers to use in the backprojection in the last stage decreases as the number of stages increases. Therefore, it sounds like a good idea to use as many stages as possible. This is only true to a certain point. One must keep in mind that the intermediate stages also require their share of computation, and that each successive intermediate stage requires more computation than the previous one. So, at one given stage, the computation needed for the intermediate stages outweighs the savings in the last stage, and there is no longer anything to gain by using more stages. When run too far, FFBP can even use more time than BP. A plot of the computation times versus the number of stages for some simulations can be seen in figure 18. 1 stage corresponds to BP. Here, the factorization factors were all equal to 2. We can clearly see the cross-over point at stage 5. At stage 6, the computation time increases. This would indicate that an optimal number of stages for this dataset is 5. However, the analysis is not that simple.

• The approximation error is additive through the stages. This means that the approximation error increases with each stage, and the quality of the image decreases stage by stage. If a certain quality requirement is given, it may not be possible to run the algorithm all the way to the cross-over point. The development of the approximation error through the 7 stages of figure 18 can be seen in figure 19. We see that the approximation error increases linearly with the number of stages for these parameters. As the factorization factors need not be the same in every stage, the approximation error will not always evolve linearly.

Figure 18: A comparison of the computation time for various numbers of stages. The factorization factors for both the image and the receivers were 2.

• It is possible to change the rate at which the approximation error increases. To keep the approximation error constant (or increasing less rapidly than with equal factorization factors), one can apply different factorization factors. Not only can there be a different factorization factor for the image and the receivers, but there can also be different factorization factors in each stage. To get the algorithm to run really fast, one could use few stages but high receiver factorization factors. This, in turn, would lead to a very high approximation error.

• It is the approximation error and the choice of factorization factors that control the speed of the algorithm. Generally, a small approximation error gives a long computation time (as the algorithm approaches BP) and vice versa. The same total approximation error can be obtained in different ways. This in turn affects the speed of the algorithm and the quality of the final image. As an example, assume that we can obtain the same approximation error in 2 stages as in 3 stages by varying the factorization factors. Let the size of the scene in azimuth be 5 m, and the minimum range to the scene be 40 m (the same as in the dataset above). We start with 2 stages, and apply an image factorization factor of 4 and a receiver factorization factor of 2. This gives an approximation error of λ/30.3, and the algorithm takes 415.8 s to complete. The same approximation error can be achieved with 3 stages, by applying equal factorization factors of 2 in all stages. However, the computation time is now raised to 568.5 s. This seems to conflict with the results in figure 18. The important thing to consider here is the approximation error. The reason that the computation time decreased with an increasing number of stages in figure 18 was that for each successive stage, the approximation error was doubled.

Figure 19: Approximation error as a function of the number of stages. The factorization factors for both the image and the receivers were 2.

To end up with a given approximation error using more than 2 stages, either the approximation error in the first stage has to be quite high, or the factorization factors for the image must be higher than the factorization factors for the receivers in order to keep the approximation error from increasing too rapidly. Both approaches will increase the computation time compared to operating with equal factorization factors for the image and the receivers. When the factorization factors for the image are higher than the factorization factors for the receivers, the number of subimages to beamform to in each intermediate stage increases faster, and more computation is required in the intermediate stages. Thus, the general trend is that a large approximation error gives a short computation time, but for a given approximation error, the algorithm will be fastest for the smallest number of stages used to obtain it.

• However, two runs of the algorithm with the same number of stages and the same approximation error can produce images with different quality, and the speed can also differ. It depends on how the factorization factors are chosen. If the vector of factorization factors (over all stages) for the image is equal to the vector of factorization factors for the receivers, the approximation error is the same, independent of the values in the vectors. However, the speed of the algorithm will be lower if high values are placed near the end of the vector, as that leads to many subimages. It also lowers the quality of the images, as more interpolation is applied to the same data. As a consequence, it is not meaningful to directly compare the approximation error versus time for different numbers of stages, since the same approximation error in a certain stage can produce different results. Figure 20 tries to do this anyhow, but one must keep in mind that this is just an example of computation times; other choices of factorization factors would produce slightly different results. The plot can, however, give us an indication of the general behaviour of the computation time for different approximation errors and numbers of stages. The approximation error is plotted on a logarithmic scale to enhance the features. For a higher number of stages, fewer approximation errors are shown. This is because both the image and the number of receivers have to be large if they are to be divided many times. For this dataset, 7 stages was only possible when all the factorization factors were equal to 2. For 6 stages it was not possible to obtain as high approximation errors as in the simulations where a lower number of stages was applied. Thus the graphs in the plot are not of equal length. For a larger number of receivers, lower approximation errors for a higher number of stages could have been obtained.

Figure 20: A comparison of the computation time for different numbers of stages (2 to 6) and different approximation errors.

• Also remember that the numbers of receivers and pixels have to be such that dividing them by the factorization factors produces whole numbers. It is also important to note that the approximation error depends upon the size of and minimum range to the scene, as well as the number of pings and receivers.

For all these reasons, it is nearly impossible to find the optimum set of parameters for a general data set. Hence, [Banks, 2002, chapter 6] has made a program to find the optimum number of stages given the image and aperture geometry. There are, however, some guidelines one can follow in order for the algorithm to be fast:

• Use as high an approximation error as possible.

• If a certain approximation error is required, try to use as few stages as possible to obtain it.

• If it is desirable to use one (or a few) high factorization factor value(s) in some stage(s), apply these in the first stage(s).
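These guidelines suggest that a usable parameter set can also be found by brute force. The sketch below is hypothetical and is not the program of [Banks, 2002]: it uses a simplified load model in the spirit of equation (44), ignores the γi overhead, and crudely assumes that the number of range samples per subimage shrinks by the image factorization factor in each stage.

```python
from itertools import product

def ffbp_load(n_recv, n_pix, n_range, q_r, q_i):
    # Simplified FFBP load: per-stage polar beamforming plus a final
    # backprojection from the combined receivers to all pixels.
    load, recv, subimages, nr = 0, n_recv, 1, n_range
    for qr, qi in zip(q_r, q_i):
        if recv % qr:
            return None                  # receivers must divide evenly
        subimages *= qi * qi             # image is split in both directions
        nr = max(1, nr // qi)            # crude range-sample reduction
        load += recv * subimages * nr    # intermediate stage cost
        recv //= qr                      # receivers combined for next stage
    return load + recv * n_pix * n_pix   # final backprojection stage

def fastest_plan(n_recv, n_pix, n_range, max_stages=4):
    # Exhaustive search over small per-stage factorization factors,
    # starting from plain BP as the baseline.
    best = (n_recv * n_pix * n_pix, (), ())
    for stages in range(1, max_stages + 1):
        for q_r in product((2, 4), repeat=stages):
            for q_i in product((2, 4), repeat=stages):
                load = ffbp_load(n_recv, n_pix, n_range, q_r, q_i)
                if load is not None and load < best[0]:
                    best = (load, q_r, q_i)
    return best

load, q_r, q_i = fastest_plan(640, 1024, 3802)
print(load < 640 * 1024 * 1024)          # the best FFBP plan beats plain BP
```

The search space here is tiny, so an exhaustive sweep is cheap; a real planner would also need an error model like equation (45) to reject plans whose accumulated approximation error exceeds the budget.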

It was not the purpose of this thesis to compare FFBP with the wavenumber algorithm. [Hunter et al., 2003] have done so, and they state that FFBP can only outperform the wavenumber code with regard to speed for high approximation errors, in which case the image is usually useless.

5.2.2 Quality

Let us look a bit closer at the quality of the final image, and how it is connected to the speed and the approximation error of the algorithm. As stated above, a high approximation error gives a short computation time. However, it also produces images of low quality. The image quality versus approximation error is tested in figures 21 to 36. 2 stages were applied for all the images. All the images have 1024 × 1024 pixels.

Thus, a compromise has to be made between the speed of the algorithm and the quality of the final image. As the number of stages increases, the algorithm's tolerance for the approximation error decreases. This means that an image which looks good with a certain approximation error and number of stages does not necessarily look good with the same approximation error obtained through more stages. The reason for this is that as the algorithm runs through more stages, more and more interpolation is done on the raw data. This, in turn, means that the approximation errors of the data (not to be confused with the approximation error of the algorithm) increase. The limit for when the data can no longer be coherently combined (when the approximation error is larger than λ/4) can thus only be used in practice for one separate stage. We can see in figures 21 to 36, where only 2 stages were applied, that the resulting image qualities fit well with the theory.

Figure 21: Image made with BP. (Azimuth [m] versus range [m], intensity in dB.)

Figure 22: Point spread function for the previous image. (Azimuth [m] versus intensity [dB].)

Figure 23: Image made with FFBP and an approximation error of λ/121.

Figure 24: Point spread function for the previous image.

Figure 25: Image made with FFBP and an approximation error of λ/60.

Figure 26: Point spread function for the previous image.

Figure 27: Image made with FFBP and an approximation error of λ/30.

Figure 28: Point spread function for the previous image.

Figure 29: Image made with FFBP and an approximation error of λ/15.

Figure 30: Point spread function for the previous image.

Figure 31: Image made with FFBP and an approximation error of λ/7.

Figure 32: Point spread function for the previous image.

Figure 33: Image made with FFBP and an approximation error of λ/3.

Figure 34: Point spread function for the previous image. (Practical azimuth resolution = 0.10088 m; theoretical azimuth resolution = 0.0075 m; PGLR left side = 6.6113 dB; PGLR right side = 6.1588 dB; ISLR = 24.7079 dB.)

Figure 35: Image made with FFBP and an approximation error of λ/1.

Figure 36: Point spread function for the previous image.

Figure 37: The limit of approximation error (in fractions of λ) at which the images are of such bad quality that they are useless, as a function of the number of stages.

In figure 37 it is illustrated at which approximation error the quality of the images becomes unacceptable for the respective stages when more than 2 stages are applied. This is just an example for some given simulations; other choices of parameters would produce different results, but the general tendency is correct. This feature of FFBP (that the tolerance for the approximation error decreases through the stages) is not desirable, but it can probably be reduced by using better interpolation methods in azimuth. However, good interpolation methods are computationally expensive, so the savings of FFBP may be lost. A compromise between speed and interpolation method must be made. It may also be caused simply by a poor implementation.

As with computational speed, here are some guidelines for how to produce images of good quality:

• Use as small an approximation error as possible.

• If a certain approximation error is required, try to use as few stages as possible to reach it.

• If it is desirable to use one (or a few) high factorization factor value(s) for the image in some stage(s), apply these in the first stage(s).

The two last points in the list are valid for obtaining both high speed and good image quality with FFBP. However, the first point is in conflict with the first point in the corresponding list for high speed given in subsection 5.2.1; hence there will always be a trade-off between speed and quality.


5.3 Remarks

This code is by no means optimized, and probably not without bugs. The memory use has not been optimized according to [Banks, 2002, chapter 6] either. Hence there may be a gap between these results and the results obtained by others. E.g. [Banks, 2002, chapter 6] has found that it can in some cases be feasible to use 6 stages in FFBP. With this code, 6 stages was never the optimum number of stages. These gaps may be caused merely by different choices of parameters for the SAS system, but it is hard to tell without further testing of the code. So the results presented here should not be interpreted as final answers, but rather as pointers to how the algorithm behaves under certain conditions. A lot more testing would have to be done before any watertight results could be presented here.


6 Experimental results from the HUGIN AUV

In this section the SAS system the code is intended for will be presented. First, an explanation of what an AUV is, and why it is a good idea to use one instead of a standard deep-towed sonar system, will be given. Then some specifications of the HUGIN family of AUVs are stated, before the SAS system intended for HUGIN (which is under development) is presented. At the end, we will look at results from applying FFBP to data from the Edgetech SAS. HUGIN stands for High-precision Untethered Geosurvey and INspection system.

6.1 The HUGIN family of AUVs

The majority of sonar systems are towed systems where the sonar is carried by a platform (often called the towfish) towed behind a boat. There are several obstacles to this approach. A towfish can be positioned acoustically from the boat at depths of less than about 800 meters, but this method breaks down at greater depths. In these cases other positioning methods, such as using a long baseline (LBL) positioning system or a second boat (often called a chase boat), can be applied. A LBL system works by measuring the propagation time of the signals from the towfish to a number of separate sensor elements. The measured times can thus be used to find the distances from the towfish to the sensor elements and hence the position of the towfish with respect to them. If a chase boat is used, the position of the towfish is sent from the chase boat to the main boat using radio waves. Due to the massive amounts of tow cable required (10 000 meters is not uncommon), the costs of deep-towed systems are extremely high. Such cable lengths demand huge handling systems and result in a substantial drag when towed. As a result, survey speeds are limited to 2.0 - 2.5 knots, and turns often take 4 - 6 hours to accomplish. A lot of time and money is spent unnecessarily on these operations. In addition, a deep-towed sonar hardly ever manages to stay on the prescribed survey line; currents often push the towfish off line by hundreds of meters. This information was taken from [Northcutt et al., 2000].

To overcome these problems, Kongsberg Simrad AS and FFI (the Norwegian defence research establishment) have developed a family of autonomous underwater vehicles (AUVs) to carry the sonar. They have been in commercial use in the offshore industry since 1997. They are also used by the military, for mine hunting among other things. Within this program, a prototype single-sided high-resolution interferometric SAS named SENSOTEK for the HUGIN AUV is under development. In addition, a two-sided SAS system has been procured on commercial terms from Edgetech. The Edgetech SAS is installed on the HUGIN 1000 AUV, which was recently mobilised on the Royal Norwegian Navy (RNN) mine hunter KNM Karmøy, while the SENSOTEK SAS is to be installed on HUGIN 1 (FFI's own research vehicle) in the near future. The SAS system from Edgetech is supplied complete with postprocessing software and hardware, where the SAS processing software is ProSAS from Dynamics Technology. A HUGIN AUV can be seen at launch in figure 38.

Figure 38: HUGIN 1000 at launch. The image is taken from [Hagen et al., 2003].

An AUV is a self-propelled, unmanned underwater vehicle that is controlled by an onboard computer. The sonar data collection process is highly simplified by using an AUV instead of a deep-towed vessel. Only one boat is required, and it can communicate directly with the AUV. Cost and logistics are reduced substantially when the tow vessel, tow cable, winch, etc. are eliminated. In addition, turns are done in practically no time, and the average speed the AUV can maintain while on track is higher than that of a towfish. AUVs have the advantage over standard deep-towed systems that they can move steadily through the water, and also near the sea floor. Although there may be some disturbances from currents, the AUV will stay within a few meters of the programmed line. The survey data will also be improved by the fact that the AUV can maintain a constant height over the sea floor. This is very difficult with a deep-towed system; when the towfish is too high over the bottom, the data quality will be poor, and if it is too low, the imaged area will be very small, and the probability of a collision with the bottom will also increase. AUVs can have a high-quality inertial navigation system (INS) installed, and the potential for making better images is thus present. One of the historic problems with AUVs has been limited power resources. FFI has developed semi fuel-cell battery technology that can deliver up to 60 hours of endurance, and lithium polymer battery technology that gives up to 24 hours.

Table 7: Specifications for the HUGIN 1000 vehicle

Length: 3.85 - 5.0 m
Max diameter: 0.75 m
Volume: 1.1 - 1.6 m³
Depth rating: 1000 m
Speed range: 2 - 6 knots
Nominal speed: 3 - 4 knots
Motion stability: < 0.5° at 4 knots
Max pitch angle: ±50°
Turn radius: 10 m at 4 knots
Energy capacity: 5 - 15 kWh
Max power: 2 kW
Endurance: 24 hours at 3 knots

One can hear about both AUVs and UUVs. An AUV is an autonomous underwater vehicle, while a UUV is an untethered underwater vehicle. To be truly autonomous, one could, for example, launch an AUV from the dock, let it go out and perform the required survey without external supervision, and the AUV would return to the dock a week later with all the data. So in reality, many AUVs are really UUVs. However, because AUV is the more recognized commercial term, most UUVs are referred to as AUVs, and this is also the case in this thesis.

The information below is taken from [Hagen et al., 2003]. There are two vehicle classes within the HUGIN family of AUVs. HUGIN 3000 can go as deep as 3000 meters, has an endurance of 50 - 60 hours, and uses a semi fuel-cell power source. These vehicles have enjoyed great success in the civilian survey industry and have accumulated more than 40 000 billed line kilometers. In 2002, FFI and Kongsberg started a project aimed at developing a smaller vehicle, named HUGIN 1000. This AUV can go down to 1000 meters and uses lithium polymer battery technology. Early in 2004, the first HUGIN 1000 prototype AUV for military applications was completed and delivered to the Royal Norwegian Navy. HUGIN 1000 has a reduction in weight and volume of up to 50 % compared to HUGIN 3000, and it thus provides more comfortable handling onboard a boat. Table 7 lists some of the key ratings and specifications for the HUGIN 1000 vehicle.

6.2 The SAS program for HUGIN

The information in this subsection is taken from [Hansen et al., 2004].

The main goal for the SAS program is to develop a two-sided interferometric

SENSOTEK SAS
• One-sided interferometric SAS
• Theoretical resolution of 1 × 2 cm
• Goes on HUGIN 1 in 2004

Edgetech SAS
• Two-sided non-interferometric SAS
• Theoretical resolution of 10 × 10 cm
• Installed on HUGIN 1000 late 2003

Possible delivery to RNN
• Two-sided interferometric SAS
• Theoretical resolution better than 5 × 5 cm
• HUGIN 1000-MR mid 2005

Table 8: Overview of the different SAS systems in the HUGIN AUV program

SAS for the HUGIN 1000-MR, which is to be delivered to the Royal Norwegian Navy mid 2005. The hardware and electronics are developed by Kongsberg, while the signal processing is developed by FFI. The system consists of two vertically displaced full-length receivers, each with 96 elements of size λ, and a two-dimensional phased array transmitter with full flexibility in the vertical plane. The system has a bandwidth of up to 50 kHz and can operate at 270 meters range at 1.5 m/s with an overlap of 1.33 (24 elements), equivalent to 1.5 km²/h for a one-sided system.

Motion compensation is done by integrating the DPCA technique with the aided INS. DPCA-estimated sway and surge are used as aiding sensors (similar to a correlation velocity log (CVL)) for the navigation system. There are two different operational modes: conventional strip-map mode and multi-look mode. In multi-look mode, the independent images can be used either to reduce speckle in the image, to produce images at different aspect angles for multi-aspect shadow classification, or to improve the phase unwrapping for full-resolution interferometric processing. The aim of this thesis was to make FFBP run with data from the Edgetech SAS.

6.3 Experimental setup

The data set used here was provided by FFI. The data was recorded by HUGIN 1 off the coast of Horten on 1 December 2003. When reconstructed correctly, one should see a wreck, which is probably a fishing vessel. The physical array consists of 6 elements and is 1.2012 m in extent. 200 pings were used in the reconstruction. The center frequency was 125 kHz with a bandwidth of 15 kHz.

Figure 39 (four panels: y-variation [m], roll, pitch, and yaw): Plots of how the sonar moved through the data collection period. This was accounted for in the data received from FFI.

The dataset used in this thesis has been navigated using the realtime navigation solution from the inertial navigation system. No DPCA has been applied. A simple form of autofocusing (related to contrast optimization) has been applied to compensate for an unwanted squint in the system. The data has been motion compensated and regridded onto a rectangular grid. This causes some residual grating lobes, which can be reduced by doing bistatic imaging directly. The vehicle motion is shown in figure 39.

6.4 Imaging results

The image reconstructed with BP can be seen in figure 40. This image is a good example of how the shadow can be useful in classifying objects. One can clearly see the mast as well as the rest of the outline of the wreck. The big test for FFBP will thus be to reconstruct the image with the same clear shadow.
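The principle behind the BP reconstruction can be sketched in a few lines of code. The sketch below is illustrative only, not the Matlab code used in this thesis: it assumes basebanded, pulse-compressed monostatic data and uses nearest-neighbour interpolation in range; all function and parameter names are chosen for illustration.

```python
import numpy as np

def backproject(echoes, t0, fs, positions, fc, c, x_grid, y_grid):
    """Delay-and-sum backprojection of basebanded pulse-compressed echoes.

    echoes    : (n_pings, n_samples) complex array, one range line per ping
    t0        : start time of each range line [s]
    fs        : range sampling rate [Hz]
    positions : (n_pings, 2) sensor (x, y) positions [m]
    fc        : center frequency [Hz] (re-applied as phase on basebanded data)
    c         : sound speed [m/s]
    """
    image = np.zeros((len(y_grid), len(x_grid)), dtype=complex)
    xs, ys = np.meshgrid(x_grid, y_grid)
    for p in range(echoes.shape[0]):
        dx = xs - positions[p, 0]
        dy = ys - positions[p, 1]
        tau = 2.0 * np.sqrt(dx**2 + dy**2) / c       # two-way travel time per pixel
        idx = np.round((tau - t0) * fs).astype(int)  # nearest range sample
        valid = (idx >= 0) & (idx < echoes.shape[1])
        contrib = np.zeros_like(image)
        contrib[valid] = echoes[p, idx[valid]]
        # restore the carrier phase removed by basebanding before the coherent sum
        image += contrib * np.exp(2j * np.pi * fc * tau)
    return image
```

The O(N³) cost of this double loop over pings and pixels is exactly what FFBP is designed to reduce.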

When trying to reconstruct real data with FFBP, one can face limitations if the data are not appropriate. By appropriate, it is meant that reconstruction with FFBP is possible with an acceptable error. In this case, the data provided were not appropriate; there was no way of both obtaining an acceptable error and seeing the whole wreck in the image. The wreck is about 25 meters in the x-direction,

Figure 40 (axes: azimuth [m] vs. range [m]): Image of the whole scene made by BP.

and located approximately 60 meters from the sonar. The Edgetech SAS is a narrowbeam system, which means that the size of each element is quite big. The size of the elements affects the total approximation error of FFBP. By inserting the parameters of the Edgetech SAS into the equation for the maximum range error, equation 40, we see that the maximum image size we can use FFBP on is about 7 m, much less than the extent of the wreck. The maximum image extent can be raised by applying high numbers for the factorization factors of the image; however, this demands huge amounts of memory, and as we saw with the simulated data, the algorithm will be slow, maybe even slower than BP. Two operations could reduce the approximation error in this case without applying high factorization factors for the image: increasing the minimum range to the scene, or decreasing the image size in the x-direction. The minimum range to the scene can be no larger than 60 meters if the wreck is to be seen, and we wish to see the whole wreck. This is still not enough to obtain an acceptable approximation error. The fact that the data set was not suitable for FFBP brought up some interesting questions: is it possible to divide up the image scene, run FFBP on the different parts, and then combine them into one image? And how will this method perform with respect to quality and computational load compared to standard FFBP?

The image was divided into a mosaic, and FFBP was run on each patch. One apparent drawback with this mosaicing is that much of the gain in computation time from using FFBP is lost.
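The mosaicing strategy can be sketched as follows. This is a hypothetical illustration, not the thesis code: the patch reconstruction is passed in as a generic `beamform` callable (which would be FFBP in this chapter), and all names are assumptions.

```python
import numpy as np

def mosaic_image(beamform, x_min, x_max, y_min, y_max, nx, ny,
                 n_patches_x, n_patches_y):
    """Split the scene into a mosaic of patches, reconstruct each patch
    independently, and stitch the results into one image.

    beamform(x_grid, y_grid) -> complex patch of shape (len(y_grid), len(x_grid))
    """
    assert nx % n_patches_x == 0 and ny % n_patches_y == 0
    x_grid = np.linspace(x_min, x_max, nx)
    y_grid = np.linspace(y_min, y_max, ny)
    px, py = nx // n_patches_x, ny // n_patches_y
    image = np.zeros((ny, nx), dtype=complex)
    for j in range(n_patches_y):
        for i in range(n_patches_x):
            # each patch sees only its own sub-grid, so its extent (and thus
            # the FFBP approximation error) is smaller than the full scene's
            xs = x_grid[i * px:(i + 1) * px]
            ys = y_grid[j * py:(j + 1) * py]
            image[j * py:(j + 1) * py, i * px:(i + 1) * px] = beamform(xs, ys)
    return image
```

Because each patch restarts the FFBP recursion from the raw data, the per-patch setup cost is paid many times over, which is why much of the speed advantage disappears.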

However, the mosaicing is valid for investigating the quality of the images made with real data and FFBP. Thus, this chapter will not discuss the speed of the algorithm for the real data set, but instead concentrate on the quality of the images. The test results regarding speed would not differ much from those obtained by simulating the same SAS system as the one the real data were collected with.

To image the whole area in figure 40, the number of patches in the mosaic would have to be very high to get an acceptable error. The tests were instead made with a smaller patch of the image, including the wreck. For comparison, a smaller patch of the image made with BP is shown in figure 41. As the size of the image was reduced, the number of pixels was also reduced, from 1024 to 512. The same was done for the following images made with FFBP.

The different patches in the mosaic will have different approximation errors when made with FFBP because of their varying minimum range to the aperture. Thus, only the worst error is stated here. An image made with 64 patches is shown in figure 42. Ns = 2, QR = 2 and QI = 2 were used in all the tests of FFBP.

The maximum error in the image is λ/13, but it is lower at long ranges. As can be seen, FFBP is capable of reconstructing the shadow. Errors due to the mosaicing come in addition to the FFBP errors, hence these tests cannot be treated as reliable; they are only intended to show that the algorithm works. An image made with 16 patches is shown in figure 43. The maximum approximation error in this image is λ/6. There is not much difference.

Figure 41 (axes: azimuth [m] vs. range [m]): Image made with BP of a part of the area.

Figure 42 (axes: azimuth [m] vs. range [m]): Image made by FFBP and mosaicing. The maximum approximation error is λ/13.

For comparison, an image without mosaicing is shown in figure 44. It is useless; the maximum approximation error is λ/1. More images could have been made with other approximation errors, but since these tests were biased anyway, only three images made with FFBP are shown.

Also for comparison, FFI provided an image of the same scene made with thewavenumber algorithm. It is shown in figure 45.

After these tests, it is clear that FFBP is not a good algorithm for narrowbeam SAS systems, as they will have to use mosaicing to obtain an acceptable approximation error. Another aspect of using FFBP with narrowbeam SAS systems is the azimuth filtering. Azimuth filtering is important also when using BP to reconstruct narrowbeam SAS data. The problem with azimuth filtering in FFBP is how (and at what stage) to apply it. It has not yet been documented anywhere in the literature. For these tests, a method that seemed to work was applied. It will not be discussed here, however, as it requires much more study. This is definitely an aspect of FFBP that should be addressed in the future.

Figure 43 (axes: azimuth [m] vs. range [m]): Image made by FFBP and mosaicing. The maximum approximation error is λ/6.

Figure 44 (axes: azimuth [m] vs. range [m]): Image made by FFBP. The maximum approximation error is λ/1.

Figure 45: Image made by the wavenumber code (colour scale presumably in dB). The image was provided by FFI.

7 Conclusion

The results from this thesis show that FFBP is a good replacement for standard backprojection in some situations. FFBP is most suited for processing data from widebeam SAS systems; for those systems we see a considerable gain in processing speed while still maintaining the required image quality. The algorithm is capable of handling an arbitrary array geometry, which is a big advantage over frequency domain methods. It is also a highly flexible algorithm, as image quality can be traded for speed and vice versa. On these points, the results in this thesis coincide with results in the literature.

The aim of this thesis was to find out whether FFBP would be a good choice of reconstruction algorithm for data from the Edgetech SAS, a narrowbeam SAS system. How FFBP performs for narrowbeam SAS systems is not documented anywhere in the literature. The conclusion of this thesis is that FFBP is not well suited for data from such systems. There is no way of obtaining images of an acceptable quality in less time than with BP. For the Edgetech data, the scene to be imaged was also too wide and too close to the aperture for FFBP to work. It did work if the image was divided into many patches and FFBP was run on each patch, but then the algorithm was no longer fast. Azimuth filtering also becomes a major issue in FFBP with narrowbeam systems, and it is not documented anywhere in the literature.

FFBP is thus most suited for imaging small scenes at long ranges with widebeam systems. On these kinds of data it performs very well, and the method looks promising for further use in the SAS community.

7.1 Future work

As FFBP seems to be a suitable reconstruction algorithm for some kinds of SAS data, various aspects of the method should be further investigated:

• Angular interpolation: All material on FFBP states that better image quality could be obtained by using a better interpolation method between the beams. [Frolind et al., 2004] gives an example of how the sidelobes in the azimuth beampattern are lowered by using interpolation methods other than nearest neighbour. It remains to find out how a better interpolator will affect the speed of the algorithm, and which interpolator gives the best trade-off between speed and quality.

• System requirements: A more thorough analysis of which types of SAS systems FFBP is suitable for is needed.

• Range-dependent factorizations: Because the data are kept in polar coordinates in all stages but the last, it should be possible to have a range-dependent factorization of the image. One could have fewer and larger subimages at long ranges, and more and smaller subimages closer to the aperture. This could reduce the computational load, as fewer beamforming operations would have to be carried out, while producing images of the same quality as without range-dependent factorization.

• Azimuth filtering: There is no literature on how to filter in azimuth using FFBP. When working with data from a narrowbeam system, this filtering must be done, but it is not straightforward. An analysis of how to apply the azimuth filtering is therefore needed.

• Optimization: As this code is not optimized, the test results may be biased. A closer look at the time-consuming parts of the algorithm is needed in order to speed it up. The code probably contains some bugs as well.
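The angular interpolation question raised in the first bullet can be illustrated with a small sketch. This is a generic illustration, not the method used in the thesis: it compares nearest-neighbour and linear interpolation between a set of beams, and all names are assumptions.

```python
import numpy as np

def beam_value(beams, beam_angles, theta, method="linear"):
    """Interpolate a set of complex beams at the query angle theta.

    beams       : (n_beams, n_range) complex samples, one row per beam angle
    beam_angles : monotonically increasing beam angles [rad]
    """
    if method == "nearest":
        # nearest neighbour: pick the single closest beam (cheap, high sidelobes)
        k = int(np.argmin(np.abs(beam_angles - theta)))
        return beams[k]
    # linear: weight the two bracketing beams by angular distance
    k = int(np.clip(np.searchsorted(beam_angles, theta) - 1,
                    0, len(beam_angles) - 2))
    w = (theta - beam_angles[k]) / (beam_angles[k + 1] - beam_angles[k])
    return (1.0 - w) * beams[k] + w * beams[k + 1]
```

In an FFBP stage, such an interpolator would be evaluated once per output beam per subaperture, so the extra multiply-adds of a longer interpolation kernel translate directly into run time; this is the speed/quality trade-off referred to above.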

The Matlab code made for this thesis can be obtained by sending an e-mail to [email protected].

References

A. A. Aydiner, W. C. Chew, J. Song, and T. C. Cui. A sparse data fast Fourier transform (SDFFT). IEEE Transactions on Antennas and Propagation, 51(11):3161–3170, November 2003.

S. M. Banks. Studies in high resolution synthetic aperture sonar. PhD thesis, University of London, September 2002.

S. Basu and Y. Bresler. O(N² log₂ N) filtered backprojection reconstruction algorithm for tomography. IEEE Transactions on Image Processing, 9(10):1760–1773, October 2000.

S. Basu and Y. Bresler. Error analysis and performance optimization of fast hierarchical backprojection algorithms. IEEE Transactions on Image Processing, 10(7):1103–1117, July 2001.

A. Bellettini and M. A. Pinto. Theoretical accuracy of synthetic aperture sonar micronavigation using a displaced phase center antenna. IEEE Journal of Oceanic Engineering, 27(4):780–789, 2002.

A. Boag, Y. Bresler, and E. Michielssen. A multilevel domain decomposition algorithm for fast O(N² log N) reprojection of tomographic images. IEEE Transactions on Image Processing, 9(9):1573–1582, September 2000.

M. P. Bruce. A processing requirement and resolution capability comparison ofside-scan and synthetic-aperture sonars. IEEE Journal of Oceanic Engineering, 17(1):106–117, January 1992.

H. J. Callow. Signal processing for synthetic aperture sonar image enhancement. PhDthesis, University of Canterbury, April 2003.

G. Franceschetti and R. Lanari. Synthetic Aperture Radar Processing. CRC Press,1999.

P-O. Frolind, L. Ulander, and G. Shippey. SAS processing using a polar version of fast factored back projection (FFBP). In Proceedings of the International Conference on Sonar Signal Processing, Loughborough University, UK, September 2004.

P. T. Gough and D. W. Hawkins. Imaging algorithms for a strip-map synthetic aperture sonar: Minimizing the effects of aperture errors and aperture undersampling. IEEE Journal of Oceanic Engineering, 22(1):27–39, 1997.

P. E. Hagen, N. Størkersen, K. Vestgard, and P. Kartvedt. The HUGIN 1000 autonomous underwater vehicle for military applications. Proceedings of Oceans 2003 MTS/IEEE, pages 1141–1145, 2003.

R. E. Hansen. SENSOTEK interferometric synthetic aperture sonar for HUGIN AUV. FFI/RAPPORT 2001/03193 (in Norwegian), Norwegian Defence Research Establishment, 2001.

R. E. Hansen and T. O. Sæbø. Synthetic aperture sonar signal processing: Results from InSAS-2000. FFI/RAPPORT 2003/02740, Norwegian Defence Research Establishment, 2003.

R. E. Hansen, T. O. Sæbø, and P. E. Hagen. Development of synthetic aperture sonar for the HUGIN AUV. Proceedings of the Seventh European Conference on Underwater Acoustics, 2004.

M. A. Haun. New approaches to aberration correction in medical ultrasound imaging. PhD thesis, University of Illinois, 2004.

D. W. Hawkins. Synthetic aperture imaging algorithms: with application to wide bandwidth sonar. PhD thesis, University of Canterbury, October 1996.

S. Holm. Medisinsk ultralydavbildning (Medical ultrasound imaging; in Norwegian).

A. J. Hunter, M. P. Hayes, and P. T. Gough. A comparison of fast factorised back-projection and wavenumber algorithms for SAS image reconstruction. World Congress on Ultrasonics, September 2003.

D. H. Johnson and D. E. Dudgeon. Array Signal Processing: Concepts and Techniques. Prentice Hall, 1993.

A. Martinez and J. L. Marchand. SAR image quality assessment. Revista de Teledetección, November 1993.

J. McCorkle and M. Rofheart. An order N² log(N) backprojector algorithm for focusing wide-angle wide-bandwidth arbitrary-motion synthetic aperture radar. SPIE AeroSense Conference, 2747:25–36, April 1996.

A. S. Milman. The hyperbolic geometry of SAR imaging. Technical report, Northrop Grumman Airborne Ground Surveillance and Battle Management Systems, April 2000.

S. K. Mitra. Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, 2001.

C. Nahum. Autofocusing using multi-scale local correlation. Proceedings of SPIE,3497:21–30, 1998.

J. G. Northcutt, A. A. Kleiner, and T. S. Chance. A high-resolution survey AUV. Offshore Technology Conference, May 2000.

S-M. Oh, J. H. McClellan, and M. C. Cobb. Multi-resolution mixed-radix quadtree SAR image focusing algorithms. Proceedings of the Advanced Sensors Consortium Federated Laboratory Symposium, pages 139–143, February 1999.

A. Olofsson. Signalbehandling i flygburen ultrabredbandig lagfrekvens-SAR i realtid (Real-time signal processing in airborne ultra-wideband low-frequency SAR; in Swedish). Master's thesis, University of Canterbury, March 2000.

J. Pat. Synthetic aperture sonar image reconstruction using a multiple-receiver towfish. Master's thesis, University of Canterbury, March 2000.

R. S. Raven. Electronic stabilization for displaced phase centers. U.S. Patent4244036, 1978.

F. Rocca, C. Cafforio, and C. Prati. Synthetic aperture radar: a new applicationfor wave equation techniques. Geophysical Prospecting, 37:809–830, 1989.

T. Sato, M. Ueda, and S. Fukuda. Synthetic aperture sonar. Journal of the AcousticalSociety of America, 54(3):799–802, 1973.

R. W. Sheriff. Synthetic aperture beamforming with automatic phase compensation for high frequency sonars. IEEE Symposium on Autonomous Underwater Vehicle Technology, pages 236–245, June 1992.

M. Soumekh. Synthetic Aperture Radar Signal Processing. Wiley-Interscience, 1999.

L. M. H. Ulander, H. Hellsten, and G. Stenstrom. Synthetic-aperture radar processing using fast factorized back-projection. IEEE Transactions on Aerospace and Electronic Systems, 39(3):760–776, 2003.

R. J. Urick. Principles of Underwater Sound. McGraw-Hill, 1983.

H. L. Van Trees. Optimum Array Processing. Wiley Interscience, 2002.

C. A. Wiley. Synthetic aperture radars. IEEE Transactions on Aerospace and Elec-tronic Systems, 21:440–443, 1985.

O. Yilmaz. Seismic Data Analysis: Processing, Inversion, and Interpretation of Seismic Data, Investigations in Geophysics no. 10, Vol. 1. Society of Exploration Geophysicists, 2001.