Audio Generation from Radar signals, for target classification

Degree Project in Mathematics, Second Cycle, 30 Credits
Stockholm, Sweden 2017

JOHAN CLEMEDSON

KTH Royal Institute of Technology
School of Engineering Sciences

Degree Projects in Optimization and Systems Theory (30 ECTS credits)
Degree Programme in Applied and Computational Mathematics (120 credits)
KTH Royal Institute of Technology, 2017
Supervisors at SAAB: Stefan Eriksson, Niklas Broman
Supervisor at KTH: Johan Karlsson
Examiner at KTH: Johan Karlsson


TRITA-MAT-E 2017:70 ISRN-KTH/MAT/E--17/70--SE

KTH Royal Institute of Technology, School of Engineering Sciences (KTH SCI), SE-100 44 Stockholm, Sweden. URL: www.kth.se/sci


SF280X Master Thesis

Abstract

Classification in radar applications is often of great interest, since one does not only want to know where a target is, but also what type of target it is. This thesis focuses on transforming the radar return from a target into an audio signal, so that the classification can be done by human perception, in this case human hearing. The aim of these classification methods is to be able to distinguish between two types of targets of roughly the same size, namely birds and smaller Unmanned Aerial Vehicles (UAVs). With the radar it is possible to measure the target's velocity by using the Doppler effect. To be able to determine in which direction the target is moving, a so-called I/Q representation of the radar return is used, which is a complex representation of the signal. Using signal processing techniques, we extract radar signals generated by the target. By spectral transforms it is possible to generate real-valued signals from the extracted target signals. These signals must be extended to be usable as audio signals, which is done with an extrapolation technique based on Autoregressive (AR) processes. The extrapolated signals are used as the audio output, and in most cases it is possible to perform the audio classification.

This project is done in collaboration with Sebastian Edman [7], where different perspectives of radar classification have been investigated. As mentioned, this thesis focuses on transforming the radar return into an audio signal, while Edman in his thesis [7] uses a machine learning approach to classify the targets from the generated audio signals.


Sammanfattning

Audio generation from radar signals, for target classification

Classification is often of great interest in radar applications, since one does not only want to know where a target is but also what type of target it is. This thesis focuses on transforming the radar echo from a target into an audio signal, so that the classification can be done with human senses, in this case hearing. The aim of these classification methods is to be able to classify two types of targets of roughly the same size, namely birds and smaller unmanned aerial vehicles (UAVs). With the radar it is possible to measure the target's velocity using the Doppler effect. To determine in which direction the target is moving, an I/Q representation is used, which is a complex representation of the radar signal. With signal processing it is possible to extract the radar signals that the target generates. By using spectral transforms it is possible to generate real-valued signals from the extracted target signals. It is necessary to extend these signals in order to use them as audio signals; this is done with an extrapolation technique based on Autoregressive (AR) processes. The audio signals used are these extrapolated signals, and in most cases it is possible to perform the classification from the audio.

This project is carried out in collaboration with Sebastian Edman [7], where different directions of radar classification have been investigated. As mentioned above, this thesis focuses on transforming the radar echoes from the targets into audio signals, while Edman in his thesis [7] uses a machine learning method to classify the generated audio signals.


Acknowledgement

I would like to express my gratitude to Johan Karlsson, associate professor at the Department of Mathematics at KTH, whose ideas steered this project in the right direction. Thanks to my supervisors at SAAB, Stefan Eriksson and Niklas Broman, for providing this project and giving feedback whenever needed. I would also like to thank my family and friends for their support during this thesis. A special thanks to Sebastian Edman for his collaboration in this project.

Stockholm, August 2017

Johan Clemedson


Contents

1 Introduction and Problem Description
    1.1 Outline of Thesis

2 Background on RADAR technology
    2.1 Pulse radar sets
    2.2 Antenna
    2.3 Transmitter
    2.4 Transmission of signal
    2.5 Radar equation
    2.6 Range and Bearing
    2.7 Radar resolution
    2.8 Doppler effect
        2.8.1 In-phase/Quadrature demodulation
        2.8.2 µ-Doppler effect

3 Signal Processing Background
    3.1 Filters
        3.1.1 Finite Impulse Response (FIR) Filters
        3.1.2 Infinite Impulse Response (IIR) Filters
        3.1.3 Matched Filter
    3.2 Spectral Transforms
        3.2.1 Discrete Fourier Transform
    3.3 Resampling
        3.3.1 Up-Sampling by an Integer Factor
        3.3.2 Interpolation Filter
        3.3.3 Polyphase Interpolation Filter
        3.3.4 Down-Sampling (or Decimation) by an Integer Factor
        3.3.5 Resampling by a Non-Integer Factor
    3.4 Power Spectral Density
        3.4.1 Discrete Power Spectral Density
        3.4.2 Signals with Rational Spectrum

4 Autoregressive Modeling
    4.1 Autoregressive process
        4.1.1 The Yule-Walker Equations
        4.1.2 Power Spectral Density of an AR process
    4.2 Autoregressive Model
        4.2.1 Linear Prediction
    4.3 Extrapolation of Finite Signals
        4.3.1 Impulse Response and Transfer Function of the Extrapolation
    4.4 Extrapolation of Audio Signals
        4.4.1 Model Order
        4.4.2 Extrapolation of Noisy Signals
    4.5 Parameter Estimation
        4.5.1 Properties of the Covariance Matrix
        4.5.2 Levinson-Durbin Algorithm
        4.5.3 The Burg Method

5 Radar Signal Processing
    5.1 MTI-filter
    5.2 Pulse Compression
    5.3 Signal Extraction
    5.4 Spectral Modifications

6 Audio Generation and Implementation
    6.1 IIR Filter Implementation of the Extrapolation
    6.2 Method for Generating Audio Signals

7 Results

8 Discussion and Future Work
    8.1 Data set
    8.2 Tx-modes
    8.3 Extrapolation
    8.4 Future work

A Stationarity

B Kaiser Window


Nomenclature

(·)H Hermitian (conjugate) transpose

∗ Convolution

(ˆ·) Estimate

λ Wavelength

F{·} Fourier Transform

Φ(ω) Power spectral density

φi Autoregressive parameter

σt Radar Cross Section

τ Pulse Width

c Speed of light

f0 Carrier frequency

fD Doppler frequency

G Antenna Gain

h[·] Impulse response

kp Reflection coefficient

Ls Loss factor

P Signal Power

SA Angular resolution

Sr Range resolution


Acronyms

AR Autoregressive.

CW Continuous Wave.

DFT Discrete Fourier Transform.

DTW Dynamic Time Warping.

FIR Finite Impulse Response.

HMM Hidden Markov Models.

IDFT Inverse Discrete Fourier Transform.

IIR Infinite Impulse Response.

LOS Line Of Sight.

LP Linear Prediction.

LSS Low, Slow flying and Small.

MDS Minimum Discernible Signal.

MTI Moving Target Indicator.

PRF Pulse Repetition Frequency.

PRT Pulse Repetition Time.

PSD Power Spectral Density.

RCS Radar Cross Section.

RF Radio Frequency.

SAR Synthetic Aperture Radar.

SNR Signal to Noise Ratio.

TX Transmission.


1 Introduction and Problem Description

The radar has for more than half a century been the go-to technology for surveillance and target recognition. This is due to the many advantages of radar systems; for example, a radar system can detect and track moving objects in all weathers, day and night. A radar transmits and receives electromagnetic waves that interact with targets and the surrounding environment. The velocity of moving targets can be calculated by measuring the Doppler frequency shift of the received echo signal. Detecting the target is often not enough; one also wants to distinguish between targets. In today's radar systems this can be done in various ways, for example by discriminating targets by their velocity or by motion patterns that are unique to a target. Such methods are almost as old as the radar itself, and in later years more sophisticated methods have been developed using machine learning algorithms, for example Synthetic Aperture Radar (SAR) image classification. But classification is not necessarily performed automatically. Some radar systems possess an audio output, for example the MSTAR Battlefield Surveillance Radar, a man-portable lightweight Doppler radar whose operator can listen to the audio output. The audio signal is a representation of the echo from the illuminated target, which contains the Doppler frequencies. The operator makes the classification by recognizing specific patterns in the audio signal, similar to the techniques used in SONAR applications. The operator's ability to perform auditory classification is based on principles from speech phonemics. A phoneme is a specific sound pattern that the human brain is able to recognize. For the operator to be able to distinguish between different targets (and also specific actions by targets), extensive training is needed.
The human factor may also increase the error rate, especially on the battlefield, where external factors can affect the human senses. Today there exist speech recognition methods such as Dynamic Time Warping (DTW) and Hidden Markov Models (HMM), but those methods have been optimized for speech signals, and the audio Doppler signal is not a conventional speech signal. Another challenge is to classify targets during the scan of a surveillance radar, which results in rather short time frames of the signal (milliseconds) with relatively long discontinuities between the frames. Existing methods may therefore be usable in a mode where the radar stares at a target, but this does not imply that they would work in scanning mode.

The aim of this report is to examine the possibilities of transforming radar signals into audio signals, with the purpose of investigating whether it is possible to distinguish different targets by listening to the audio signal. The use of classification algorithms for classifying the audio signals is also examined. The goal is to be able to distinguish between Low, Slow flying and Small (LSS) targets such as smaller UAVs and birds, which are fairly comparable in size, velocity and flight pattern. The project is divided into the following three parts:

i) Use signal processing techniques to extract signals from UAVs and birds from the raw radar data.


ii) Transform the signals into audio signals, which a radar operator can listen to.

iii) Develop an automatic classification method that can classify the audio signals. This automatic classifier will serve as an aid to the radar operator.

This project is done in collaboration with Sebastian Edman, where part i) is done together. The second part is presented in this thesis and the last part can be found in Edman's thesis [7].

1.1 Outline of Thesis

Section 2 treats basic radar technology, including transmission of signals, range measurements and the Doppler effect. In Section 3, the signal processing techniques used to transform the complex signal into a real signal are explained. Section 4 treats the theory of autoregressive modeling, which is used to extend the signals. Section 5 describes the extraction of the complex radar signals and how they are transformed into real signals. Section 6 describes the extrapolation of the real signals, which is used to extend them. The results of the audio generation methods are presented in Section 7. The results and future work are discussed in Section 8.

Sections 2, 3.1, 3.2 and 5 are done in collaboration with Sebastian Edman [7].


2 Background on RADAR technology

A radar (RAdio Detecting And Ranging) transmits electromagnetic energy in the radio-frequency (RF) interval. When the transmitted RF energy hits a target, the energy is reflected. The radar receives a small quantity of the reflected energy, called the echo, which is used to determine the distance and direction of the object. This section aims to introduce some basic radar theory needed to understand the terminology used in this thesis. First, signal transmission is introduced with theory gathered from [22], then the components included in a radar system from [23] and [25], and finally general range measurements in a radar system from [24].

Radar systems can be divided into two types: primary and secondary radar systems. Primary radar systems transmit a signal and receive the reflected echo, while secondary radar systems receive a coded reply signal from a transmitter on the illuminated targets. Hence secondary radar systems are not used to detect unknown targets but instead to track and identify friendly targets. Primary radar systems are divided into Continuous Wave (CW) and pulsed radar systems. A CW radar set transmits a high-frequency signal continuously, and the received echo signal is also processed continuously. CW radars can normally only measure the target's speed and not the distance to targets, since there are no pulses to time. This problem can be solved by constantly shifting the frequency of the transmitted signal. These frequencies can be extracted from the echo, and by knowing when in the past that particular frequency was sent out, one can do a range calculation. This type of CW radar set is called a frequency modulated CW radar; hence CW radar sets can be divided into modulated and unmodulated CW radar systems. The typical use of unmodulated CW radar sets, which can only measure speed, is speed gauges for the police.

A pulse radar transmits a high-frequency impulse signal of high power; after the impulse follows a longer break, in which the echoes are received before the next impulse is sent out. Properties such as direction, range and speed can be determined by using a pulse radar. The following theory will only consider pulse radar.

2.1 Pulse radar sets

A powerful transmitter generates the radar signal, which is transmitted from the antenna through a duplexer. The function of the duplexer is to switch the antenna between the transmitter and receiver, which means that only one antenna is needed. Switching between transmitting and receiving signals is also necessary, since the high-power pulses produced by the transmitter would destroy the receiver if energy were allowed to enter the receiver during transmission. The antenna illuminates the target with the radio-frequency pulse, which is reflected at the target. The backscattered echo signal is picked up by the receiver in the antenna. The received radar pulse is amplified and demodulated in the receiver, which produces video signals that can be displayed. The operating principles of a radar are illustrated in Figure 1 below.

Figure 1: Illustration of a radar system (transmitter, duplexer, receiver and display connected to the antenna; transmitted pulse and echo signal between the antenna and the target).

2.2 Antenna

One of the most important parts of the radar system is the antenna. The antenna performs the following crucial functions [23]:

• The antenna transfers the energy to electromagnetic waves in space with the required distribution and efficiency. The antenna also transforms the electromagnetic waves in the echo signal into electric signals.

• The required signal pattern in space is ensured by the antenna. The signal pattern in angle has to be sufficiently narrow to provide the required angular resolution.

• A scanning antenna has to provide the required frequency of target position updates, i.e., the revolution rate.

• The antenna has to measure the pointing direction with a high degree of accuracy.

One important antenna characteristic is the antenna gain (directivity, direction gain), which describes how well the antenna can focus the outgoing energy in a certain direction. The antenna gain is defined as the ratio between the amount of energy propagated in the radar direction and the energy propagating in other directions. The antenna will also have the same gain for receiving signals if a transmitting antenna is used as a receiving antenna.

Antennas usually emit stronger radiation in one direction than in others. Such antennas are called anisotropic. A radiation pattern is formed by the energy radiated from the radar, and its shape depends on the type of antenna. When the transmitted energy is measured at various angles at a constant distance from the antenna, an illustration of the radiation pattern can be plotted, usually in polar coordinates. In such plots three key features can be observed:

• The region within 3 dB of the maximum radiation is called the main lobe or main beam.

• Smaller beams in directions other than the main lobe are called sidelobes, which are usually radiation in undesired directions. These sidelobes can never be completely eliminated.

• The portion of the radiation pattern that is directed in the opposite direction of the main lobe is called the backlobe.

One important characteristic of the radiation pattern is the beam width. The beam width is defined as the angular range of the radiation pattern in which at least half of the maximum power is still emitted. The bordering points of the lobe are therefore the points at which the field strength has fallen 3 dB compared to the maximum field strength. The notation for the beam width (or half-power angle) is Θ. The beam width can be determined in both the horizontal plane, ΘAZ, and the vertical plane, ΘEL.

The ratio of power gain between the main lobe and the backlobe is called the front-to-back ratio. One desires a high front-to-back ratio, since it means that a minimum amount of energy is radiated in the undesired direction.

Air-surveillance radar systems usually use a cosecant-squared pattern, to achieve a more uniform signal strength from a target that moves at a constant elevation. The height information of a detection can be calculated by knowing the elevation of the returned echo. This is done with the agile multiple-beam concept, where the height information is divided into multiple parts (beams), see Figure 2. The cosecant-squared pattern can be achieved by stacking beams according to the figure.

Figure 2: Illustration of cosecant pattern [21]


2.3 Transmitter

The task of the transmitter is to produce high-power Radio Frequency (RF) pulses of energy that are radiated into space by the antenna. There are different kinds of transmitters with different properties. Depending on the type of transmitter used, the radar set can be classified as coherent or non-coherent [25]. The radar system is said to be coherent if every transmitted pulse starts with the same phase, and non-coherent if the phase of each successive pulse is random. The reason to use a coherent system is to keep track of the phase change of the reflected pulses generated by a moving target, and hence the Doppler frequencies. Radars that emit coherent pulses to measure the Doppler shift are known as pulse Doppler radars. The difference between coherent and non-coherent pulses can be seen in Figure 3.

(a) Coherent pulses (b) Non-coherent pulses

Figure 3: Illustration of coherent and non-coherent radar transmitters.

2.4 Transmission of signal

Each pulse is radiated from the radar during the transmit time (or pulse width τ). After each transmitted pulse, the radar waits for the return echo during the listening time. There is a short rest time between the listening time and the next pulse. The time between two pulses is called the Pulse Repetition Time (PRT). The number of pulses transmitted per second is called the Pulse Repetition Frequency (PRF); the relationship between the pulse repetition time and the pulse repetition frequency is PRT = 1/PRF. An illustration of the transmission of a pulse with the previously mentioned variables can be seen in Figure 4.


Figure 4: Illustration of the transmission of a pulse (transmitted pulse and echo pulse; pulse width τ, listening time, rest time and the pulse repetition time PRT).

2.5 Radar equation

The relation between the transmitted power Ptx and the power in the echo signal Prx is given by the radar equation [24],

Prx = (Ptx · G² · λ² · σt) / ((4π)³ · R⁴ · Ls).   (2.1)

Here G is the antenna gain, a measure of the antenna's ability to focus outgoing energy into the direction of the beam. The antenna gain is given by the maximum radiation intensity divided by the average radiation intensity. How well an antenna can pick up power from an incoming electromagnetic wave is described by the antenna aperture, G·λ²/(4π), where λ is the wavelength of the electromagnetic wave. The parameter σt is the Radar Cross Section (RCS), which summarizes in one term the size of a target and its ability to reflect radar energy. The factor 1/(4π·R²)² describes the free-space path loss, where R is the range to the target. The free-space path loss is the loss in signal strength of an electromagnetic wave propagating along the line-of-sight path through free space. All the internal losses of the radar are summarized in the loss factor Ls.

The Minimum Discernible Signal (MDS) is the smallest signal that the radar can detect. If the power is smaller than PMDS, the signal will not be usable since it is lost in the background noise. By rewriting the radar equation (2.1) and setting the power of the echo signal equal to PMDS, one obtains

Rmax = ⁴√( (Ptx · G² · λ² · σt) / ((4π)³ · PMDS · Ls) ).   (2.2)

This gives the relation between the maximum range Rmax and the transmitted power for a radar system. Due to the fourth root, the transmitted power must be increased 16 times to double the maximum range, if the other parameters are constant.
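As a sketch of how equations (2.1) and (2.2) fit together, the following Python snippet evaluates both and checks the fourth-root behaviour. The parameter values are purely illustrative and are not taken from the thesis:

```python
import math

def received_power(p_tx, gain, wavelength, rcs, r, loss):
    # Radar equation (2.1): echo power received from a target at range r.
    return p_tx * gain**2 * wavelength**2 * rcs / ((4 * math.pi)**3 * r**4 * loss)

def max_range(p_tx, gain, wavelength, rcs, p_mds, loss):
    # Maximum range (2.2): solve (2.1) for R with P_rx = P_MDS.
    return (p_tx * gain**2 * wavelength**2 * rcs
            / ((4 * math.pi)**3 * p_mds * loss)) ** 0.25

# Illustrative numbers: 10 kW peak power, gain 1000, wavelength 3 cm,
# RCS 0.01 m^2 (bird-sized), MDS 1e-14 W, no extra losses.
r1 = max_range(10e3, 1e3, 0.03, 0.01, 1e-14, 1.0)
r2 = max_range(16 * 10e3, 1e3, 0.03, 0.01, 1e-14, 1.0)
# Due to the fourth root, 16 times the power exactly doubles the range.
```

Increasing the transmitted power by a factor of 16 makes r2/r1 equal 2, matching the statement above.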

2.6 Range and Bearing

The slant range R is defined as the line-of-sight distance between the target and the radar antenna. It is possible to calculate the slant range from the time delay tdelay between the transmitted and the reflected pulse, with the following equation

R = c · tdelay / 2,   (2.3)

where c is the speed of light. It is required to know the target's elevation to calculate the horizontal distance between the target and the radar (the ground range).
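Equation (2.3) is straightforward to turn into code; a minimal sketch (the delay value below is just an example):

```python
C = 299_792_458.0  # speed of light in m/s

def slant_range(t_delay):
    # Slant range (2.3): the delay covers the path to the target and back.
    return C * t_delay / 2

# A round-trip delay of 66.7 microseconds corresponds to roughly 10 km.
r = slant_range(66.7e-6)
```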

A pulse radar usually transmits a sequence of pulses and measures the time between the last transmitted pulse and the echo pulse. It is therefore possible that the received echo is from a long-range target, so that the signal arrives at the radar after or during the transmission of the next pulse. This means that the radar measures the wrong time interval and therefore the wrong range: the radar assumes that the pulse is the reflection of the second transmitted pulse and declares a reduced range for the target. This occurs when strong targets are located outside the range that corresponds to the pulse repetition time, and is called range ambiguity. Hence a maximum unambiguous range is defined by the pulse repetition time (PRT). The relationship between the pulse repetition time and the unambiguous range Ruamb is given by

Ruamb = (PRT − τ) · c / 2.   (2.4)
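A small sketch of equation (2.4), with hypothetical PRF and pulse-width values:

```python
C = 299_792_458.0  # speed of light in m/s

def unambiguous_range(prf, tau):
    # Unambiguous range (2.4), using PRT = 1/PRF and pulse width tau.
    prt = 1.0 / prf
    return (prt - tau) * C / 2

# PRF = 1 kHz with a 1 microsecond pulse gives roughly 150 km.
r_uamb = unambiguous_range(1e3, 1e-6)
```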

There are two types of echo signals that arrive after the reception time:

• Echo signals that arrive during the transmission time. These signals will not be registered, since the receiver is turned off during transmission.

• Echo signals that arrive during the following reception time. These signals will give range measuring failures (ambiguous returns), illustrated in Figure 5.

Still, it is possible to determine the true range of targets by using different Transmission (TX) modes. The different transmission modes have different pulse repetition frequencies (PRF), explained in Section 2.4. Hence targets at ambiguous ranges will appear at different ranges for each TX mode, allowing the radar system to resolve the ambiguity and extract the true range.
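The multi-PRF idea can be illustrated with a brute-force sketch: each TX mode folds the true range modulo its own unambiguous range, and only the true range is consistent with every mode. The PRFs, target range and search step below are hypothetical, and the simplified unambiguous range c/(2·PRF) ignores the pulse width:

```python
C = 299_792_458.0  # speed of light in m/s

def apparent_range(true_range, prf):
    # The radar reports the true range folded modulo the unambiguous range.
    r_uamb = C / (2 * prf)
    return true_range % r_uamb

def resolve_range(observed, prfs, max_range, step=10.0):
    # Brute-force search for a range consistent with every TX mode.
    for i in range(int(max_range / step)):
        r = i * step
        if all(abs(apparent_range(r, prf) - obs) < step
               for prf, obs in zip(prfs, observed)):
            return r
    return None

# A target at 200 km observed with two hypothetical TX modes.
prfs = [1000.0, 1300.0]
observed = [apparent_range(200_000.0, prf) for prf in prfs]
estimate = resolve_range(observed, prfs, max_range=400_000.0)
```

Each PRF alone would report the target well inside its unambiguous range; only a range consistent with both modes survives the search.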


Figure 5: Illustration of range measuring failure.

There is also a minimum detectable range (or blind distance) to consider, since the echo will not be registered if the echo from the beginning of the pulse falls inside the transmitting pulse, as the receiver is turned off during the transmission time. The blind distance can be calculated with the following equation,

Rmin = (τ + trest) · c / 2,   (2.5)

where trest is the rest time. Both the horizontal and elevation angles between the antenna and the target can be determined by measuring the direction in which the antenna is pointing when the echo is received. The accuracy of the radar's angular measurements is determined by the antenna's directional gain. The angle measured in the horizontal plane is referred to as bearing, which can be measured as true or relative bearing. True bearing is the angle between true north and a line pointing directly at the target, measured in a clockwise direction. Relative bearing is the angle between the centerline of the own ship or aircraft and the target, measured in a clockwise direction.
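Equation (2.5) as a sketch, with hypothetical pulse-width and rest-time values:

```python
C = 299_792_458.0  # speed of light in m/s

def blind_distance(tau, t_rest):
    # Minimum detectable range (2.5): echoes returning while the
    # receiver is still switched off are lost.
    return (tau + t_rest) * C / 2

# A 1 microsecond pulse with a 0.5 microsecond rest time: about 225 m.
r_min = blind_distance(1e-6, 0.5e-6)
```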

2.7 Radar resolution

A radar is not able to distinguish between targets that are very close in bearing or range. The ability to distinguish between close targets is given by the target resolution of the radar, which is divided into angular resolution and range resolution. The minimum angular separation of two equal targets at the same range is called the angular resolution, which is determined by the half-power beam width Θ of the radar, the angle between the half-power (−3 dB) points of the main lobe. This means that two targets can be resolved in angle if they are separated by more than one beam width; hence the smaller the beam width, the higher the directivity of the antenna and the better the angular resolution. The distance between two targets corresponding to the angular resolution is a function of the slant range and is given by

SA = 2R · sin(Θ/2),   (2.6)

where SA is the angular resolution given as the distance between two targets.
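A sketch of equation (2.6); the beam width and range below are illustrative:

```python
import math

def angular_resolution(slant_range, beam_width_rad):
    # Angular resolution (2.6) as the distance S_A between two targets
    # at the same slant range.
    return 2 * slant_range * math.sin(beam_width_rad / 2)

# A 1.5 degree beam at 10 km separates targets about 262 m apart.
s_a = angular_resolution(10_000.0, math.radians(1.5))
```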


The ability to distinguish between two or more targets at different ranges but on the same bearing is called range resolution. The range resolution is a factor since the echoes from targets close in range get mixed up, as illustrated in Figure 6.

(a) Targets mixed up in the same echo (pulse width τ = 1 µs, i.e. 300 m; targets 100 m apart)

(b) Targets separated by two echoes (pulse width τ = 1 µs, i.e. 300 m; targets 200 m apart)

Figure 6: Illustration of range resolution.

Hence the primary factor in range resolution is the pulse width of the transmitted pulse, but the range resolution also depends on the types and sizes of the targets, and on the efficiency of the receiver and indicator. Targets separated by half the pulse width can be separately distinguished by a well-designed radar with all other factors at maximum efficiency. Hence, the theoretical range resolution of a radar system is given by

Sr = c · τ / 2,   (2.7)

where Sr is the range resolution given as the distance between two targets. A method to improve the range resolution is to use a pulse compression system; pulse compression allows high range resolution with long pulses, but with a higher average power.
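A sketch of equation (2.7), using the 1 µs pulse from Figure 6:

```python
C = 299_792_458.0  # speed of light in m/s

def range_resolution(tau):
    # Theoretical range resolution (2.7): targets closer in range than
    # c * tau / 2 merge into a single echo.
    return C * tau / 2

# The 1 microsecond pulse of Figure 6 gives roughly 150 m resolution,
# so targets 100 m apart are mixed up while targets 200 m apart resolve.
s_r = range_resolution(1e-6)
```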

2.8 Doppler effect

The Doppler effect was discovered by Christian Doppler in 1842; it has been used in electromagnetics since the 1930s and the first Doppler radar was produced in the 1950s. Doppler radar is mainly used for targets in motion: in military use the targets of interest can be hostile air, sea and land targets such as airplanes, ships and tanks, but also smaller targets like rockets, artillery and mortars. The principle behind the Doppler shift is that a target with a motion relative to the radar induces a frequency shift in the reflected signal, i.e. the Doppler shift, which depends on the wavelength of the transmitted signal and the radial velocity (the velocity in the Line Of Sight (LOS) of the radar) of the illuminated target. Suppose that the transmitted signal is


st(t) = u(t) cos(2πf0t) (2.8)

where u(t) is the signal envelope, f0 is the carrier frequency, and t is the time. The received backscattered echo signal from a target can then be expressed as

sr(t) = σst(t− tr) = σu(t− tr) cos(2πf0(t− tr)) (2.9)

where σ is the target reflection coefficient and tr is the time delay of the echo with respect to the transmitted signal. If the target were stationary relative to the radar, the time delay would be constant and so would the phase of the echo signal. If the target is in motion relative to the radar with a velocity vr in the line of sight of the radar, the echo signal received at time t was transmitted at time t − tr; the time at which the target was illuminated is ti = t − tr/2, and hence the distance between the origin of the radar and the illuminated target is

R(ti) = R0 − vrti (2.10)

where R0 is the distance between the origin of the radar and the illuminated target at time t = 0. The propagation time of the signal for traveling the distance from the radar to the target and back is the time delay of the echo tr, and thus

tr = 2R(ti)/c,    (2.11)

where c is the speed of light in the case of electromagnetic waves. Combining (2.10) with (2.11) and substituting into the echo signal (2.9) yields

sr(t) = σ u((c + vr)t/(c − vr) − 2R0/(c − vr)) cos[2πf0((c + vr)t/(c − vr) − 2R0/(c − vr))].    (2.12)

The echo signal (2.12) possesses two important properties [26]

i) From the phase term of (2.12) one can see that the frequency is shifted from f0 to f0(c + vr)/(c − vr).

ii) There is a scaling change of the echo signal envelope in terms of time.

The envelope change of the echo signal described in ii) can in most radar applications be ignored, since the processing of the phase term does not depend heavily on the scaling change of the echo signal envelope [26].

Usually the radial velocity vr of the illuminated target is significantly smaller than the propagation speed c of an electromagnetic wave; thus one can approximate the frequency shift

f0(c + vr)/(c − vr) − f0 = f0 · 2vr/(c − vr) ≈ f0 · 2vr/c = 2vr/λ0.    (2.13)

where λ0 is the carrier wavelength. The right hand side of (2.13) is defined as the Doppler frequency


fD = 2vr/λ0.    (2.14)

If (2.14) combined with (2.13) is inserted in (2.12), and the signal envelope is assumed to be u(s) = 1, we arrive at

sr(t) = σ cos[2π(f0 + fD)t − f0 · 2R0/(c − vr)]
      = σ cos[2π(f0 + fD)t − θ]
      = σ cos[2πf0t + φ(t) − θ].    (2.15)

In order to distinguish between positive and negative Doppler frequencies, an I/Q representation of the signal is used, introduced in the next section.
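As a small numeric illustration of Eq. (2.14), and of why the sign of fD matters, consider an X-band-like carrier with λ0 = 3 cm (an illustrative value, not a parameter of the radar in this thesis):

```python
import math

def doppler_frequency(v_r, wavelength):
    """Doppler frequency f_D = 2 v_r / lambda_0, Eq. (2.14)."""
    return 2.0 * v_r / wavelength

# Illustrative example: lambda_0 = 3 cm, radial velocity 15 m/s.
f_approach = doppler_frequency(15.0, 0.03)   # target approaching: +1000 Hz
f_recede = doppler_frequency(-15.0, 0.03)    # target receding:    -1000 Hz
print(f_approach, f_recede)

# A real-valued measurement cannot tell the two apart, since
# cos(2*pi*f*t) == cos(-2*pi*f*t); this motivates the I/Q representation.
t = 0.25e-3
assert math.isclose(math.cos(2*math.pi*f_approach*t),
                    math.cos(2*math.pi*f_recede*t))
```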

2.8.1 In-phase/Quadrature demodulation

In Doppler radar applications it is of great importance to be able to distinguish between positive and negative frequencies, since the sign of the Doppler frequency represents the direction in which the target is moving. A signal representation using just a series of samples of the momentary amplitude of the signal (see Figure 7a) cannot distinguish between positive and negative frequencies (for example, cos(x) = cos(−x)). Hence the In-phase/Quadrature (I/Q) representation of the signal is introduced, where the signal is compared to a reference signal, which gives an in-phase (I) component and a quadrature (Q) component of the signal [14]. The I/Q signal can be illustrated as a spiral in three dimensions, see Figure 7b.

Figure 7: Signal representation. (a) Projection of the I/Q representation: the real signal cos(x). (b) The I/Q representation: a spiral in three dimensions with an I part and a Q part.

One can see that the projection of the spiral onto the vertical plane is the "real" signal (Figure 7a), which is the I component. The projection of the spiral onto the horizontal plane gives the Q component of the signal. The direction of rotation of the spiral in Figure 7b determines the sign of the frequency.

To extract the I and Q components of a signal, and hence the Doppler frequency shift, a quadrature detector is used. The quadrature detector produces a signal with an in-phase (I) component and a quadrature (Q) component from the input signal. The extraction of the Doppler frequency shift with the quadrature detector is illustrated in Figure 8.

Figure 8: Block diagram of the quadrature detector. The received signal sr(t) = σ cos(2πf0t + φ(t) − θ) is fed to two mixers; Mixer I uses the transmitted signal st(t) = cos(2πf0t) as reference, and Mixer II a 90◦ phase shifted copy. After low-pass filtering, the outputs are I(t) = (σ/2) cos(φ(t) − θ) and Q(t) = −(σ/2) sin(φ(t) − θ).

The quadrature detector consists of two mixers, called synchronous detectors, in which the received signal is mixed with a reference signal: the transmitted signal in the first synchronous detector and a 90◦ phase shifted copy of the transmitted signal in the second. After each synchronous detector a low-pass filter is applied to filter out the carrier frequency f0 of the transmitted signal. The received signal is

sr(t) = σ cos(2π(f0 + fD)t− θ). (2.16)

In the first synchronous detector the received signal is mixed with the transmitted signal

st(t) = cos(2πf0t), (2.17)

which gives the output

sr(t)st(t) = (σ/2) cos(4πf0t + φ(t) − θ) + (σ/2) cos(φ(t) − θ).    (2.18)

Applying the low pass filter to the signal gives the I-channel output,


I(t) = (σ/2) cos(φ(t) − θ).    (2.19)

In the other synchronous detector the received signal is instead mixed with the 90◦ phase shifted transmitted signal

st90◦(t) = sin(2πf0t),    (2.20)

which gives the output

sr(t)st90◦(t) = (σ/2) sin(4πf0t + φ(t) − θ) − (σ/2) sin(φ(t) − θ).    (2.21)

Applying the low pass filter to the signal gives the Q-channel output,

Q(t) = −(σ/2) sin(φ(t) − θ).    (2.22)

By combining the I and Q parts the following signal is obtained:

sD(t) = I(t) + iQ(t) = (σ/2) e^(−i(φ(t)−θ)) = (σ/2) e^(−i(2πfDt−θ)).    (2.23)

From the complex Doppler signal in Equation (2.23) it is possible to extract positive and negative frequencies, i.e. positive and negative velocities.
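The whole chain of Figure 8 and Equations (2.16)-(2.23) can be sketched in discrete time. The parameters below (carrier, Doppler shift, sampling rate) are illustrative, and the low-pass filters are crude moving averages rather than designed filters:

```python
import numpy as np

fs, f0, fD, sigma, theta = 8000.0, 1000.0, 50.0, 1.0, 0.3
t = np.arange(4096) / fs
sr = sigma * np.cos(2*np.pi*(f0 + fD)*t - theta)   # received signal, Eq. (2.16)

# The two synchronous detectors: mix with the reference and its 90-degree copy.
mix_i = sr * np.cos(2*np.pi*f0*t)
mix_q = sr * np.sin(2*np.pi*f0*t)

def lowpass(x, n=40):
    """Crude moving-average low pass, long enough to kill the 2*f0 terms."""
    return np.convolve(x, np.ones(n)/n, mode='same')

I = lowpass(mix_i)    # approx  (sigma/2) cos(phi(t) - theta), Eq. (2.19)
Q = lowpass(mix_q)    # approx -(sigma/2) sin(phi(t) - theta), Eq. (2.22)
sD = I + 1j*Q         # complex Doppler signal, Eq. (2.23)

# The complex signal has a single spectral line; with the sign convention of
# Eq. (2.23) it lands at -fD, so approaching and receding targets give
# distinct spectra, which a real-valued signal could not.
freqs = np.fft.fftfreq(len(sD), 1/fs)
f_est = freqs[np.argmax(np.abs(np.fft.fft(sD)))]
print(f_est)
```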

2.8.2 µ−Doppler effect

By extracting information about the illuminated target via the Doppler shift, in terms of radial velocity and range, useful information is gained, but in real radar applications targets with a single motion pattern are quite rare. For example, man-made aerial targets like helicopters and UAVs exhibit more complex motions than just the bulk motion, such as engine vibrations and rotating propellers, and biological targets such as personnel or birds generate complementary motions like swinging arms and flapping wings. These so-called micro-motions can be useful when trying to distinguish between the classes of targets mentioned above, and even between different types of the same kind of target, due to unique characteristics. Pursuant to Doppler theory, beyond the bulk motion of a target, micro-motions from parts of the target or the target itself can cause frequency modulation of the echo signal of a radar system, which is in fact a Doppler sideband besides the main Doppler frequency induced by the bulk motion of the target [26]. These frequency modulations generated by micro-motions are in the literature called the micro-Doppler effect, and a target is said to have a specific micro-Doppler signature.

The micro-Doppler effect has its origin in coherent laser detection and ranging (LADAR) systems [4], which transmit electromagnetic waves at optical frequencies; from the backscattered wave from an object one can measure properties such as range and velocity, similar to a radar system, by preserving phase information. Since the phase of a backscattered signal in a coherent system is sensitive to the variation in range, a half-wavelength change in range can cause a 2π change in phase. In LADAR systems the wavelengths are typically short, e.g. 2 µm, so a change in radial distance of 1 µm can generate a phase shift of 2π, leading to extremely high sensitivity, where for example tiny vibrations can be observed rather easily. The micro-Doppler frequency is a time varying property and can be extracted from the output of a quadrature detector used in standard Doppler processing [4].

The authors of [4], [26] validated the micro-Doppler effect in radar systems by using an X-band radar to detect a trigonometric scatter target with a vibration amplitude of 1 mm and a vibration frequency of 10 Hz, and successfully extracted the micro-Doppler frequency shift through time-frequency analysis techniques; later they also put forth micro-Doppler analysis results for a pedestrian with an X-band radar. Since then many papers related to the research field have been published, not only on micro-Doppler itself but also in combination with various classification methods. The authors of [18] use speech recognition techniques, Dynamic Time Warping (DTW) and a k-NN classifier, to classify the baseband audio output signal from a radar with the help of micro-Doppler signatures. Speech processing algorithms exploit the time variance in speech patterns to classify signals and identify words, and the intent was to exploit the time variance in the micro-Doppler signature in a similar manner. Here the classification set consisted of three classes: wheeled vehicles, tracked vehicles and personnel. The correct classification rates were 80%, 70% and 100% for the incoherent DTW classifier and 86%, 68% and 94% for the coherent DTW classifier for the respective classes. The DTW classifiers outperformed the k-NN classifier by far. Worth mentioning is that the data used consisted of 80,000 samples of complex data, that the velocities of the three classes were comparable, and that the targets were moving radially towards the radar. The duration of the data was also far longer than a typical radar dwell in scanning mode, and the data was divided into frames of reasonable length to increase the realism. The random nature of the initial phase of the micro-motions and of the LOS angle is also discussed in terms of the challenges it entails.
However, the aim of this thesis is not to make direct use of micro-Doppler analysis as above; instead this section serves as motivation for the possibility to distinguish targets by the "unique" time varying nature of the aural output generated by the micro-motions. In [12] the authors analyse the Doppler sound and use cepstrum features and a Hidden Markov Model (HMM), together with a track based classifier, to distinguish between personnel, land based and air based vehicles. A standalone analysis of the Doppler sound classifier showed good results (around 90 to 95% correct classification for the respective classes).


3 Signal Processing Background

Linear and time invariant (LTI) systems are a tool used in signal processing; for example, filters are almost always LTI systems. A system model is said to be linear if the model can be described as a linear mapping w : U → Y, where U is an input space and Y an output space [8]. Causal and time invariant systems can be represented as

y[n] = Σ_{k=0}^{∞} h[k] x[n − k] = (Σ_{k=0}^{∞} h[k] z^{−k}) x[n] = h(z)x[n],    (3.1)

where x[n] ∈ U, n = 0, 1, 2, . . ., is a sequence of inputs and y[n] ∈ Y, n = 0, 1, 2, . . ., is a sequence of outputs. The z in Equation (3.1) denotes the forward shift operator. The values in the sequence h[n] are called the impulse response of the system and h(z) the transfer function of the system.

3.1 Filters

Filters have the purpose of changing a signal's frequency content; a filter can be either analog or digital. In this thesis only digital filters are considered. A digital filter is a linear time invariant discrete system with the purpose of letting frequencies in a specific range pass and stopping frequencies outside this range [9]. The filter can be described by

y[n] = Σ_{i=0}^{N} b_i x[n − i] − Σ_{j=1}^{M} a_j y[n − j],    (3.2)

where x is the input and y the output of the system. The parameters a_j and b_i are filter specific parameters, which characterize the filter. By taking the z-transform of (3.2) one obtains the filter transfer function

H(z) = Y(z)/X(z) = (Σ_{i=0}^{N} b_i z^{−i}) / (1 + Σ_{j=1}^{M} a_j z^{−j}).    (3.3)

The filter is designed by choosing the coefficients a and b so that the desired filtercharacteristics are fulfilled.

3.1.1 Finite Impulse Response (FIR) Filters

One type of digital filter is the Finite Impulse Response (FIR) filter, which is a non-recursive filter; hence the output only depends on current and previous values of the input signal [9]. One characteristic property of the FIR filter is that the impulse response is equal to the filter coefficients and is zero outside a bounded interval. A causal FIR filter of order N is described by the following convolution sum,

y[n] = Σ_{i=0}^{N} b_i x[n − i],    (3.4)


where y is the output signal, x is the input signal and b_i is the value of the impulse response at the i:th instant. Since the FIR filter is causal and finite, it also holds that b_i = 0 if i < 0 or i > N. A system illustration of an FIR filter is presented in Figure 9.

Figure 9: System overview of a FIR filter: the input x[n] is fed through a chain of unit delays z^{−1}; each tap is weighted by a coefficient b_i and the weighted taps are summed to form the output y[n].

Each unit delay is a z^{−1} operator in z-transform notation. The transfer function of a FIR filter is calculated with the z-transform and given by

H(z) = Y(z)/X(z) = Σ_{i=0}^{N} b_i z^{−i}.    (3.5)

Equation 3.5 can be rewritten as

H(z) = (Σ_{i=0}^{N} b_i z^{−i}) / 1 = (Σ_{i=0}^{N} b_i z^{N−i}) / z^N.    (3.6)

Hence a FIR filter has equally many poles and zeros, but all the poles are located at the origin. A causal discrete system is stable if all poles lie in the open unit disk. Hence all causal FIR filters are stable, since all the poles of a causal FIR filter are located at the origin.
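A minimal numeric illustration of Eq. (3.4), using an illustrative moving-average coefficient vector rather than a designed filter:

```python
import numpy as np

# FIR filtering, Eq. (3.4): y[n] = sum_i b_i x[n - i].
b = np.array([0.25, 0.25, 0.25, 0.25])     # order N = 3, moving average
x = np.array([0.0, 0.0, 4.0, 4.0, 4.0, 4.0, 0.0, 0.0])
y = np.convolve(x, b)                      # direct evaluation of the sum
print(y)

# The impulse response equals the coefficients and is zero outside 0..N,
# so all poles sit at the origin and the filter is unconditionally stable.
impulse = np.zeros(8); impulse[0] = 1.0
h = np.convolve(impulse, b)[:8]
assert np.allclose(h[:4], b) and np.allclose(h[4:], 0.0)
```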

3.1.2 Infinite Impulse Response (IIR) filter

Another type of digital filter is the Infinite Impulse Response (IIR) filter, which is a recursive filter; hence the output can depend on previous values of both the input signal and the output signal [9]. A causal IIR filter of feedforward order N and feedback order M is described by the following equation,

y[n] = Σ_{i=0}^{N} b_i x[n − i] − Σ_{j=1}^{M} a_j y[n − j],    (3.7)

where y is the output signal, x is the input signal, b_i are the feedforward coefficients and a_j are the feedback coefficients. The z-transform of Equation (3.7) gives the transfer function

H(z) = Y(z)/X(z) = (Σ_{i=0}^{N} b_i z^{−i}) / (1 + Σ_{j=1}^{M} a_j z^{−j}).    (3.8)


As mentioned before, for stability of a discrete system all poles must be located inside the unit circle of the z-plane. Hence for an IIR filter to be stable it must hold that all solutions to

0 = 1 + Σ_{j=1}^{M} a_j z^{−j}    (3.9)

are less than one in absolute value. This means that IIR filters can be unstable, which is not the case for FIR filters. Still, IIR filters are in many applications preferred over FIR filters, since IIR filters can often be implemented more efficiently.
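The stability condition can be checked numerically by finding the roots of Eq. (3.9); multiplying by z^M turns it into an ordinary polynomial. The coefficients below are chosen for the demonstration, not taken from any filter design:

```python
import numpy as np

def iir_is_stable(a):
    """Check stability of an IIR filter with feedback coefficients a_1..a_M,
    i.e. that all roots of 1 + sum_j a_j z^{-j} = 0 (Eq. (3.9)) lie strictly
    inside the unit circle. Multiplying by z^M gives the polynomial
    z^M + a_1 z^{M-1} + ... + a_M, whose roots are the poles."""
    poles = np.roots(np.concatenate(([1.0], a)))
    return bool(np.all(np.abs(poles) < 1.0))

# Illustrative examples:
print(iir_is_stable(np.array([-0.5])))        # pole at z = 0.5      -> stable
print(iir_is_stable(np.array([-2.0, 0.9])))   # pole at z ~ 1.32     -> unstable
```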

3.1.3 Matched Filter

In signal processing, and in particular in radar systems, a technique called pulse compression, which is an example of a matched filter, is often used. A matched filter is obtained by correlating the transmitted signal with a received echo to detect the signal in the presence of noise, which is equivalent to convolving the echo with a conjugated and time reversed copy of the signal. Matched filtering is used to maximize the Signal to Noise Ratio (SNR) with linear time invariant filters, and the characteristics of the filter can be designed by either the frequency response or the impulse response [5].

The matched filter can be expressed as the following convolution sum,

y[n] = Σ_{k=−∞}^{∞} h[n − k] x[k],    (3.10)

where h is the filter impulse response, x is the input and y the output. The idea of the matched filter is to suppress the noise and amplify the signal at some time sample n0, as can be seen in Figure 10.

Figure 10: Illustration of a matched filter: the observed signal x[n] = s[n] + w[n] is passed through the matched filter h[n], which suppresses the noise and boosts the signal at n = n0.

The desired matched filter is a complex valued N-point FIR filter, g, which maximizes the signal to noise ratio. The output of the filter is the conjugate inner product of the filter and the N-point observed signal x. The observed signal x consists of a deterministic signal s and stochastic noise w, which means that the signal can be expressed as

x[n] = s[n] + w[n], for n ∈ {0, 1, . . . , N − 1}.    (3.11)

For convenience the time index is dropped. The following definition is used in the derivation of the matched filter.


Definition 1. A matrix A is said to be Hermitian symmetric if A = A^H, where A^H is the conjugate transpose of the matrix A.

If the noise mean value is assumed to be zero, the covariance matrix is given by

R_w = E{w w^H}.    (3.12)

Note that the covariance matrix is Hermitian symmetric. The output of the filter y is given by the convolution of the filter g and the observed signal x,

y = Σ_{n=0}^{N−1} ḡ[n] x[n] = g^H x = g^H s + g^H w = y_s + y_w,    (3.13)

where ḡ[n] denotes the complex conjugate of g[n]. The output can be split into y_s and y_w, generated by the signal and the noise respectively. The Signal to Noise Ratio (SNR) is given by the ratio of the power of the desired signal to the power of the noise, which can be expressed as

SNR = |y_s|^2 / E{|y_w|^2} = |g^H s|^2 / E{|g^H w|^2}.    (3.14)

The denominator can be expanded in the following way,

E{|g^H w|^2} = E{(g^H w)(g^H w)^H} = g^H E{w w^H} g = g^H R_w g,    (3.15)

which gives the SNR expression

SNR = |g^H s|^2 / (g^H R_w g).    (3.16)

Since the objective of the matched filter is to maximize the SNR, the problem is to solve the following optimization problem,

max_g SNR,    (3.17)

or, equivalently,

max_g |g^H s|^2 / (g^H R_w g).    (3.18)

By using the property of Hermitian symmetry, Equation (3.16) can be rewritten as

SNR = |g^H (R_w^{1/2})^H R_w^{−1/2} s|^2 / (g^H (R_w^{1/2})^H R_w^{1/2} g) = |(R_w^{1/2} g)^H (R_w^{−1/2} s)|^2 / ((R_w^{1/2} g)^H (R_w^{1/2} g)).    (3.19)

The Cauchy-Schwarz inequality is used to find an upper bound for the objective function. For a complex inner product the Cauchy-Schwarz inequality is given by

|u^H v|^2 ≤ ‖u‖^2 · ‖v‖^2 = (u^H u) · (v^H v),    (3.20)


where equality holds if and only if the vectors u and v are linearly dependent. Hence an upper bound for the objective function is given by

SNR = |(R_w^{1/2} g)^H (R_w^{−1/2} s)|^2 / ((R_w^{1/2} g)^H (R_w^{1/2} g)) ≤ [(R_w^{1/2} g)^H (R_w^{1/2} g)] [(R_w^{−1/2} s)^H (R_w^{−1/2} s)] / ((R_w^{1/2} g)^H (R_w^{1/2} g)),    (3.21)

which simplifies to

SNR ≤ s^H R_w^{−1} s.    (3.22)

The upper bound is achieved if and only if

R_w^{1/2} g = α R_w^{−1/2} s,    (3.23)

for an arbitrary scalar α. The optimal filter coefficients for the filter in Equation (3.13) can thus be expressed as

g = α R_w^{−1} s.    (3.24)

If the noise is assumed to be zero mean white noise (E{y_w} = 0), the expected value of the noise power can be expressed as the variance of the noise σ_w^2, since

σ_w^2 = E{|y_w|^2} − (E{y_w})^2 = E{|y_w|^2}.    (3.25)

The expected value of the power of the noise can also be expressed as

E{|y_w|^2} = E{|g^H w|^2} = (α R_w^{−1} s)^H R_w (α R_w^{−1} s) = α^2 s^H R_w^{−1} s = σ_w^2,    (3.26)

giving

α = σ_w / √(s^H R_w^{−1} s).    (3.27)

This gives the normalized filter coefficients

g = (σ_w / √(s^H R_w^{−1} s)) R_w^{−1} s.    (3.28)

The matched filter's impulse response h is given by the complex conjugated time reversal of g.

3.2 Spectral Transforms

In order to be able to analyze the frequency content of time signals, the Discrete Fourier Transform (DFT) is introduced. This provides a method to analyze discrete signals.

3.2.1 Discrete Fourier Transform

We start from the definition of the complex Fourier series for a function x(t), t ∈ [0, T]:

x(t) = Σ_{k=−∞}^{∞} c_k e^{i2πkt/T},

c_k = (1/T) ∫_0^T x(t) e^{−i2πkt/T} dt.    (3.29)

Now, instead of the continuous function x(t), consider discrete samples of x taken at time intervals ∆t. Introduce x[n] for the n:th sample of x(t), i.e. x(n∆t), with N samples where N∆t = T. If we calculate the Fourier coefficients c_k in Equation (3.29) as a sum over the samples x[n] instead of the integral,

c_k = (1/T) Σ_{n=0}^{N−1} x[n] e^{−i2πkn∆t/(N∆t)} ∆t = (1/N) Σ_{n=0}^{N−1} x[n] e^{−i2πkn/N}.    (3.30)

The right hand side of Equation (3.30) is the definition of the DFT, explicitly given by

X[k] = (1/N) Σ_{n=0}^{N−1} x[n] e^{−i2πkn/N}.    (3.31)

Corresponding to the DFT, which takes us from the time domain to the frequency domain, the Inverse Discrete Fourier Transform (IDFT), which takes us the other way, is defined by

x[n] = Σ_{k=0}^{N−1} X[k] e^{i2πkn/N}.    (3.32)

Note the scaling by 1/N in Equation (3.31) and by 1 in Equation (3.32). It is quite common in the literature to use the scaling the other way around, but as proposed here the scaling is consistent with the Fourier series [9].
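The scaling convention is easy to verify: with the 1/N-scaled DFT of Eq. (3.31), the unscaled IDFT of Eq. (3.32) is its exact inverse, and a unit-amplitude cosine shows up as two spectral lines of height 1/2, just like its Fourier-series coefficients. (Note that np.fft.fft uses the opposite convention, so it must be divided by N here.)

```python
import numpy as np

def dft(x):
    """DFT with the 1/N scaling of Eq. (3.31)."""
    N = len(x)
    n = np.arange(N)
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x / N

def idft(X):
    """IDFT with unit scaling, Eq. (3.32)."""
    N = len(X)
    n = np.arange(N)
    W = np.exp(2j * np.pi * np.outer(n, n) / N)
    return W @ X

x = np.cos(2 * np.pi * 3 * np.arange(16) / 16)
X = dft(x)
assert np.allclose(idft(X), x)               # (3.32) inverts (3.31) exactly
assert np.allclose(X, np.fft.fft(x) / 16)    # NumPy puts the 1/N the other way
print(np.round(np.abs(X), 6))   # two lines of height 0.5 at bins 3 and 13
```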

A useful property of the DFT is the frequency shift property,

X[k − l] = (1/N) Σ_{n=0}^{N−1} x[n] e^{−i2πnk/N} e^{i2πnl/N} = (1/N) Σ_{n=0}^{N−1} x[n] e^{−i2πn(k−l)/N},    (3.33)

that is, in words: multiplying the discrete signal x[n] by the complex exponential e^{i2πnl/N} generates a frequency shift in the spectrum by l.
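A short demonstration of the shift property (3.33): a single spectral line at bin 4, multiplied by the complex exponential with l = 5, moves to bin 9:

```python
import numpy as np

N, l = 32, 5
n = np.arange(N)
x = np.exp(1j * 2 * np.pi * 4 * n / N)           # one spectral line at bin 4

# Multiply by the complex exponential of Eq. (3.33) with shift l = 5.
x_shift = x * np.exp(1j * 2 * np.pi * l * n / N)

k0 = int(np.argmax(np.abs(np.fft.fft(x))))
k1 = int(np.argmax(np.abs(np.fft.fft(x_shift))))
print(k0, k1)   # the spectrum moved up by l bins
```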

3.3 Resampling

The signals are sampled with the PRF for the specific illumination. Since the radar transmits different PRFs for different revolutions, the signals are sampled with different sampling frequencies. For further analysis the signals have to be sampled with the same sampling frequency and be of the same length; hence the signals have to be resampled by some factor. It is not guaranteed that the resampling factor is an integer, so the resampling method has to be able to resample the signals by a non-integer factor. This is done by first up-sampling the signal by an integer K and then down-sampling by an integer L, resulting in a resampling factor of K/L.

The methods presented in this section are based on the method described in [16]. Up-sampling by an integer factor is described in the first part of this section. The second and third parts describe the type of filter used in the up-sampling method. Down-sampling is presented in the fourth part. The last part of this section summarizes the method for resampling signals by a non-integer factor.

3.3.1 Up-Sampling by integer factor

The up-sampling algorithm used is a time domain method that may be implemented in real-time applications. A function f(t) sampled at intervals ∆T generates a sequence of samples {f[n]} = {f(n∆T)}. By inserting K − 1 zeros between each sample in {f[n]} a new sequence {f̃[k]} is generated. The extended sequence is sampled at intervals ∆T/K; hence the new sequence {f̃[k]} is K times longer than {f[n]}. The method of inserting zeros between samples is illustrated with an example in Figure 11. The example signal in Figure 11(a) is of length N = 16 and is up-sampled with a factor K = 3; hence in Figure 11(b) two zeros are inserted between every sample of the original signal.

To investigate the effect of inserting K − 1 zeros, the DFTs of the sequences are analyzed. The DFT of the original data sequence is given by

F[m] = (1/N) Σ_{n=0}^{N−1} f[n] e^{−i2πmn/N}, m = 0, . . . , N − 1.    (3.34)

For the sequence {f̃[k]} the DFT is given by

F̃[m] = (1/(KN)) Σ_{k=0}^{KN−1} f̃[k] e^{−i2πmk/(KN)}, m = 0, . . . , KN − 1.    (3.35)

But since {f̃[k]} has only N non-zero elements, with f̃[Kk] = f[k], the DFT can be rewritten as

F̃[m] = (1/(KN)) Σ_{k=0}^{N−1} f[k] e^{−i2πmk/N}, m = 0, . . . , KN − 1.    (3.36)


Figure 11: (a) A signal with length N = 16, and (b) the same signal with two zero samples inserted between each of the original samples.

One can note that {F[m]} is periodic with period N and {F̃[m]} is periodic with period KN. This gives that F̃[m] = F[m]/K, for m = 0, . . . , N − 1, and {F̃[m]} will contain K repetitions of {F[m]}. An illustration of the DFT of the example signal from Figure 11(a) is shown in Figure 12(a). The DFT of the interpolated signal is scaled by the factor K and plotted in Figure 12(b) (note, in this example K = 3). It can be seen that the DFT of the original signal is repeated three (K) times in the DFT of the interpolated signal.

Figure 12: DFT of the signal shown in Figure 11(a), which is periodic with period 16, and DFT of the interpolated signal shown in Figure 11(b), which is periodic with period 48.

Inserting K − 1 zeros has resulted in a signal with sampling interval ∆T/K and a Nyquist frequency of K/(2∆T). The frequency resolution is unchanged and the original DFT is replicated K times within the frequency span of K/∆T. The interpolated signal can be reconstructed by eliminating the replications of the original spectral components. This is done by applying a low-pass filter to the sequence {F̃[m]} to preserve only the baseband part of the spectrum. A representation of the two sided spectrum of f̃[m] is presented in Figure 13, which illustrates the replications of F[m] and the low-pass filtering process.
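The replication argument of Equations (3.34)-(3.36) can be verified numerically (with an arbitrary random sequence, and K = 3, N = 16 as in Figure 11):

```python
import numpy as np

K, N = 3, 16
rng = np.random.default_rng(1)
f = rng.standard_normal(N)

# Insert K - 1 zeros between the samples: the sequence f~ of length K*N.
f_up = np.zeros(K * N)
f_up[::K] = f

# DFTs with the 1/N scaling of Eq. (3.31).
F = np.fft.fft(f) / N
F_up = np.fft.fft(f_up) / (K * N)

# F~[m] = F[m]/K for m = 0..N-1, and the pattern repeats K times, Eq. (3.36).
assert np.allclose(F_up[:N], F / K)
assert np.allclose(F_up[N:2*N], F_up[:N])
assert np.allclose(F_up[2*N:], F_up[:N])
print("spectrum replicated", K, "times")
```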

Figure 13: Illustration of the low-pass filtering used to eliminate the spectral replications introduced by the interpolation of the signal. The two sided spectrum F̃[m] spans ±K/(2∆T) and contains replications of F[m]; the passband of the low-pass filter ends at the original Nyquist frequency, preserving only the baseband part.

The algorithm used to up-sample a signal by an integer factor K can be summarized in the block diagram displayed in Figure 14.

Figure 14: Block diagram of the algorithm for up-sampling a sequence by an integer factor K: insert K − 1 zeros between the samples of f[n] (sampled at ∆T), then apply a low-pass filter to obtain the interpolated waveform sampled at ∆T/K.

The interpolation filter used as the low-pass filter in the up-sampling algorithm is described in the next section.

3.3.2 Interpolation Filter

The ideal filter for the interpolated sequence is a low-pass filter with cut-off frequency at the Nyquist frequency of the original signal, 1/(2∆T). The impulse response of such a filter is given by

h[n] = sin(πn/K) / (πn/K).    (3.37)

But this kind of filter is acausal, hence a FIR approximation is made. This is done by truncating h[n] to a finite length. Setting the finite length to N = 2KM + 1 and delaying the response by KM samples, the FIR approximation is obtained as

h[n] = sin(π(n − KM)/K) / (π(n − KM)/K), n = 0, . . . , N − 1.    (3.38)

The resampled signal is also windowed by a Kaiser window in the filtering process. The Kaiser window is explained in more detail in Appendix B.

The FIR interpolation filter for an up-sampling factor K is illustrated in Figure 15 below.

Figure 15: Illustration of the FIR interpolation filter: the up-sampled sequence (with K − 1 zeros inserted between the samples f[n], f[n − 1], . . .) is passed through an FIR filter with coefficients b_0, b_1, . . . , b_{M−1}; at each output sample f̃[Kn + i] most of the tap inputs are zero.

One can note that many of the tap inputs are zero at every filter step. This fact can be used to implement the filtering in an efficient way. The more efficient implementation of the interpolation filter is called a polyphase interpolation filter and is described in the next section.

3.3.3 Polyphase Interpolation filter

To improve the computational speed of the algorithm a polyphase filter structure is used. In the filter computation above one can see that many of the up-sampled data values are zero at each filter step. It is clear that the zero values do not contribute to the filter output; therefore only a subset of the filter coefficients is required at each of the computational steps. This leads to the polyphase filter structure, where the filter is split into sub-filters which are applied to the signal. The polyphase filter structure can be summarized in the following steps:

(1) Based on the up-sampling factor K, design an FIR interpolation filter of order M − 1.

(2) From the filter coefficients b_m, m = 0, . . . , M − 1, form K sub-filters in the following way:


Sub-filter 0:     b_0     b_K      b_2K     b_3K     · · ·
Sub-filter 1:     b_1     b_{K+1}  b_{2K+1} b_{3K+1} · · ·
Sub-filter 2:     b_2     b_{K+2}  b_{2K+2} b_{3K+2} · · ·
...
Sub-filter K − 1: b_{K−1} b_{2K−1} b_{3K−1} b_{4K−1} · · ·

Each sub-filter is a FIR filter with the coefficients given above; coefficients b_i that are not included are equal to zero.

(3) At each major time step the interpolated data stream may be created by passing the data sequence {f[n]} through each of the sub-filters, so that the interpolated value f̃[Kn + i] is the output of sub-filter i.

The polyphase interpolation filter is represented in Figure 16.

Figure 16: Illustration of the polyphase interpolation filter: the input f[n] is fed to K sub-filters in parallel; sub-filter i produces the output samples f̃[Kn + i], for i = 0, . . . , K − 1, which are interleaved at intervals ∆T/K.

When using the polyphase structure it is not necessary to perform the up-sampling step of inserting zeros into the data sequence.
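The equivalence between the polyphase structure and zero insertion followed by the full filter can be checked directly; the input and the filter coefficients below are arbitrary random vectors, not a designed interpolation filter:

```python
import numpy as np

K = 3
rng = np.random.default_rng(2)
f = rng.standard_normal(20)
b = rng.standard_normal(12)          # illustrative FIR coefficients, M = 12

# Reference: insert K - 1 zeros, then run the full FIR filter.
f_up = np.zeros(K * len(f))
f_up[::K] = f
reference = np.convolve(f_up, b)[:K * len(f)]

# Polyphase: sub-filter i holds the coefficients b_i, b_{i+K}, b_{i+2K}, ...
out = np.zeros(K * len(f))
for i in range(K):
    sub = b[i::K]                     # sub-filter i, as in step (2)
    yi = np.convolve(f, sub)[:len(f)]
    out[i::K] = yi                    # output sample f~[K n + i], step (3)

assert np.allclose(out, reference)
print("polyphase output matches direct interpolation")
```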

3.3.4 Down-sampling (or Decimation) by an Integer Factor

When down-sampling a signal one wants to increase the sampling interval ∆T by a factor L (or decrease the sampling frequency by L). This is done by retaining every L:th sample of the original sequence {f[n]}, letting f̃[n] = f[Ln], where {f̃[n]} is the down-sampled sequence. However, caution must be taken to prevent aliasing effects in the down-sampled sequence. One cannot directly down-sample a sequence unless it is known that the spectrum of the data sequence is zero for all frequencies at and above the new Nyquist frequency defined by the new sampling frequency. To prevent aliasing effects in the resampled data, the original sequence is passed through a digital low-pass filter before being down-sampled.


3.3.5 Resampling with a non-integer factor

Assume that one wants to resample a sequence by a non-integer factor P, and assume that the factor P can be expressed as a rational number

P = K/L,    (3.39)

where K and L are integers. Then the resampling can be achieved by up-sampling the sequence by a factor K followed by down-sampling the sequence by a factor L. Since the low-pass filtering parts of the up- and down-sampling are cascaded (see Figure 17a), the two filters may be replaced by a single filter, see Figure 17b.

Figure 17: Illustration of the filtering process for the resampling algorithm. (a) The up-sampler and interpolation filter (interpolation) are followed by the anti-aliasing filter and down-sampler (decimation). (b) The two cascaded filters replaced by a single low-pass filter between the up-sampler and the down-sampler.

For resampling factors P that are irrational, one first has to approximate P with a rational number.

3.4 Power Spectral Density

In this chapter the Power Spectral Density (PSD) is introduced. For random signals the PSD is used to describe the distribution of power over frequency, [9]. Since the radar signals contain noise, the PSD is used for the radar signals; hence the PSD will be used to compare the original radar signal with the generated audio signal. The first part of this chapter describes the PSD for discrete signals in general. A special form of PSD, called rational PSD, is described in the second part.

3.4.1 Discrete Power Spectral Density

The PSD can be defined as the discrete-time Fourier transform of the autocovariance sequence (the autocovariance is described in Appendix A),

Φ(ω) = ∑_{k=−∞}^{∞} r(k) e^{−iωk}.    (3.40)

The sequence {r(k)} can be recovered from the PSD Φ(ω) with the inverse Fourier transform

r(k) = 1/(2π) ∫_{−π}^{π} Φ(ω) e^{iωk} dω.    (3.41)
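Equation (3.40) can be verified numerically: for the biased sample autocovariance, the DTFT of the autocovariance sequence coincides exactly with the periodogram |Y(ω)|²/N. A small sketch (the data and grid below are illustrative assumptions):

```python
import numpy as np

# Check of Eq. (3.40): the DTFT of the biased sample autocovariance equals
# the periodogram |Y(w)|^2 / N on the FFT frequency grid.
rng = np.random.default_rng(1)
y = rng.standard_normal(256)
N = len(y)

r = np.correlate(y, y, mode="full") / N       # r(k) at index N - 1 + k
k = np.arange(-(N - 1), N)                    # lags -(N-1) .. N-1
w = 2 * np.pi * np.arange(N) / N              # frequencies w_j = 2*pi*j/N

phi = (r * np.exp(-1j * np.outer(w, k))).sum(axis=1)   # Phi(w_j), Eq. (3.40)
periodogram = np.abs(np.fft.fft(y)) ** 2 / N
```

The imaginary part of the computed Φ(ω) vanishes up to rounding, illustrating that the PSD is real valued, as stated below.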


From the definition of the PSD it follows that Φ(ω) is a periodic function with period 2π. Hence, the PSD is completely described by its variation in the interval

ω ∈ [−π, π],    (3.42)

or, for a sampled signal with sampling frequency Fs,

f ∈ [−Fs/2, Fs/2].    (3.43)

The PSD is also a real-valued, nonnegative function,

Φ(ω) ≥ 0 ∀ω. (3.44)

Next, a useful result concerning the transfer of a PSD through an asymptotically stable linear system is presented. Let e(t) be a stationary input to an asymptotically stable system, and let y(t) be the corresponding output, where

H(z) = ∑_{k=−∞}^{∞} h_k z^{−k}    (3.45)

represents the system. The output and input are related through the convolution

y(t) = H(z)e(t) = ∑_{k=−∞}^{∞} h_k e(t − k),    (3.46)

where the filter has the transfer function

H(ω) = ∑_{k=−∞}^{∞} h_k e^{−iωk}.    (3.47)

The covariance is obtained as

r_y(k) = ∑_{p=−∞}^{∞} ∑_{m=−∞}^{∞} h_p h_m^H E{ e(t − p) e^H(t − m − k) } = ∑_{p=−∞}^{∞} ∑_{m=−∞}^{∞} h_p h_m^H r_e(m + k − p).    (3.48)

By inserting Equation (3.48) in (3.40) one obtains

Φ_y(ω) = ∑_{k=−∞}^{∞} ∑_{p=−∞}^{∞} ∑_{m=−∞}^{∞} h_p h_m^H r_e(m + k − p) e^{−iω(k+m−p)} e^{iωm} e^{−iωp}
       = [ ∑_{p=−∞}^{∞} h_p e^{−iωp} ] [ ∑_{m=−∞}^{∞} h_m^H e^{iωm} ] [ ∑_{τ=−∞}^{∞} r_e(τ) e^{−iωτ} ]
       = |H(ω)|² Φ_e(ω).    (3.49)

This result is used in the next part to couple the rational PSD to the definition of the PSD.


3.4.2 Signals with Rational Spectrum

A rational PSD is a rational function of e^{−iω}, i.e., a ratio of two polynomials in e^{−iω},

Φ(ω) = ( ∑_{k=−m}^{m} α_k e^{−iωk} ) / ( ∑_{l=−n}^{n} β_l e^{−iωl} ),    (3.50)

where α_{−k} = α_k^H and β_{−l} = β_l^H. According to the Weierstrass theorem [20], any PSD can be approximated arbitrarily closely by a rational PSD of the form given in Equation (3.50), provided that the degrees m and n in Equation (3.50) are chosen appropriately large. This motivates the interest of modeling PSDs with rational PSDs.

Since Φ(ω) ≥ 0, the rational PSD in Equation (3.50) can be factorized in the following way,

Φ(ω) = |B(ω)/A(ω)|² σ²,    (3.51)

where σ² is a positive scalar and A(ω) and B(ω) are the polynomials

A(ω) = 1 + α_1 e^{−iω} + · · · + α_n e^{−inω},    (3.52)

B(ω) = 1 + β_1 e^{−iω} + · · · + β_m e^{−imω}.    (3.53)

These results can further be expressed in the z-domain as

Φ(z) = σ² · B(z) B^H(1/z^H) / ( A(z) A^H(1/z^H) ),    (3.54)

where

A(z) = ∑_{k=0}^{n} α_k z^{−k}, where α_0 = 1,    (3.55)

A^H(1/z^H) = [ ∑_{k=0}^{n} α_k (1/z^H)^{−k} ]^H = ∑_{k=0}^{n} α_k^H z^k, where α_0^H = 1,    (3.56)

and similarly for B(z). Note that the poles and zeros of Φ(z) occur in pairs that are symmetric about the unit circle: if z_k = re^{iθ} is a pole (zero) of Φ(z), then 1/z_k^H = (1/r)e^{iθ} is also a pole (zero). The result that Equation (3.50) can be rewritten as Equations (3.51) and (3.54) is called the spectral factorization theorem, [2]. Comparing Equations (3.49) and (3.51) gives the following result, stated in [19].

”The arbitrary rational PSD in Equation (3.51) can be associated with a signal obtained by filtering white noise of power σ² through the rational filter with transfer function H(ω) = B(ω)/A(ω).”

The filtering can be represented in the time domain as

y(t) = ( B(z)/A(z) ) e(t).    (3.57)


Here e(t) is white noise with variance equal to σ². Hence the parameterized model of Φ(ω) has turned into a model of the signal itself. This result is later used to analyze the spectrum of an autoregressive process, described in the next chapter.
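The quoted result can be illustrated by generating such a signal directly. The example below is an assumed special case, B(z) = 1 and A(z) = 1 + φ₁z⁻¹ with φ₁ = −0.9 (an AR(1) spectrum), where the output power of the filtered white noise is known in closed form:

```python
import numpy as np

# Realizing a signal with the rational PSD (3.51) by filtering white noise
# e(t) of power sigma^2 through H(z) = B(z)/A(z), cf. Eq. (3.57).
# Assumed example: B(z) = 1, A(z) = 1 + phi1*z^-1 with phi1 = -0.9.
rng = np.random.default_rng(0)
sigma2 = 1.0
phi1 = -0.9

e = np.sqrt(sigma2) * rng.standard_normal(100_000)
y = np.empty_like(e)
y[0] = e[0]
for t in range(1, len(e)):
    y[t] = -phi1 * y[t - 1] + e[t]   # y(t) = B(z)/A(z) e(t)
```

For this AR(1) case the theoretical output power is σ²/(1 − φ₁²), which the sample variance of y approaches for long realizations.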

4 Autoregressive Modeling

In this section autoregressive (AR) modeling is introduced, which is a way to model signals or time series from their previous samples. In this thesis these models are used to extend the signals generated by the frequency shift described in Section 5.4.

In the first part of this chapter autoregressive processes, the basis of AR modeling, are introduced. The second part shows a relation, called the Yule-Walker equations, between an AR process and the autocovariance function (see Appendix A). The relation between rational PSDs and AR processes is presented in the third part. Modeling the AR process with an AR model is described in the fourth part. A method for extrapolation of signals, and more specifically audio signals, is introduced in the fifth part of this chapter. The last part of the chapter describes methods to estimate the so-called AR parameters, which are the parameters that describe a given AR model.

4.1 Autoregressive Process

In time series analysis, one important tool for forecasting future values is the AR process, which is a stochastic process. The AR process is a linear combination of previous samples plus a stationary stochastic process with zero mean, [6]. (For stationarity, see Appendix A.) An AR process of order p can be represented as

y_t = − ∑_{i=1}^{p} φ_i y_{t−i} + e_t,    (4.1)

where φ_i are the autoregressive parameters and e_t is a stationary random process with zero mean. The z-transform of the AR process is given by

Y(z) + ∑_{i=1}^{p} φ_i Y(z) z^{−i} = E(z)  ⇒  Θ(z) Y(z) = E(z),  Θ(z) = 1 + ∑_{i=1}^{p} φ_i z^{−i},    (4.2)

where Y(z) and E(z) are the z-transforms of y_t and e_t. The polynomial Θ(z) is called the characteristic polynomial. In [17] it is proven that the AR process y_t is stationary if and only if the roots of the characteristic polynomial are located inside the unit circle in the complex plane. In the next section the AR process y_t is assumed to be at least weakly stationary with zero mean, which is used to derive some important properties of the AR process.
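The root condition is easy to check numerically. A small sketch (the test coefficients are illustrative assumptions): multiplying Θ(z) by z^p gives the ordinary polynomial z^p + φ₁z^{p−1} + · · · + φ_p, whose roots must lie inside the unit circle.

```python
import numpy as np

# Stationarity check for an AR(p) model with coefficients phi = [phi_1, ..., phi_p]
# in the sign convention of Eq. (4.1)/(4.2).
def is_stationary(phi):
    # Roots of z^p + phi_1 z^(p-1) + ... + phi_p, i.e. of z^p * Theta(z).
    roots = np.roots(np.concatenate(([1.0], phi)))
    return bool(np.all(np.abs(roots) < 1.0))
```

For example, y_t = 0.5 y_{t−1} + e_t (φ₁ = −0.5) is stationary, while y_t = 1.1 y_{t−1} + e_t is explosive.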


4.1.1 The Yule-Walker Equations

In this section a relation between the AR parameters and the autocovariance function is shown, based on [19]. It will be shown that the AR parameters and the autocovariance function are coupled through a linear system of equations, called the Yule-Walker equations. One method for computing the AR parameters is to solve the Yule-Walker equations.

The AR process (Equation (4.1)) can also be written in the following way,

y_t + ∑_{i=1}^{p} φ_i y_{t−i} = e_t.    (4.3)

Multiplying both sides of Equation (4.3) by y_{t−k}^H and taking the expected value gives

E{ y_t y_{t−k}^H } + ∑_{i=1}^{p} φ_i E{ y_{t−i} y_{t−k}^H } = E{ e_t y_{t−k}^H }.    (4.4)

Since the time series y_t is assumed to be stationary with zero mean, the following relations can be shown:

E{ y_{t−i} y_{t−k}^H } = r(k − i),    (4.5)

E{ e_t y_{t−k}^H } = { σ_e²,  k = 0
                     { 0,     k > 0.    (4.6)

Here σ_e² is the variance of the stochastic process e_t and r is the autocovariance function. Equations (4.4)–(4.6) give the following equation:

r(k) + ∑_{i=1}^{n} φ_i r(k − i) = { σ_e²,  k = 0
                                  { 0,     k > 0.    (4.7)

Equation (4.7) can be expressed as the following system of linear equations, called the Yule-Walker equations:

⎡ r(0)   r(−1)   · · ·   r(−n)   ⎤ ⎡ 1   ⎤   ⎡ σ_e² ⎤
⎢ r(1)   r(0)    · · ·   r(−n+1) ⎥ ⎢ φ_1 ⎥   ⎢ 0    ⎥
⎢  ⋮               ⋱     r(−1)   ⎥ ⎢  ⋮  ⎥ = ⎢  ⋮   ⎥    (4.8)
⎣ r(n)   · · ·   r(1)    r(0)    ⎦ ⎣ φ_n ⎦   ⎣ 0    ⎦

The Yule-Walker equations show the relation between the autocovariance function r, for lags 0 to n, and the autoregressive parameters φ_i. If {r(k)}_{k=0}^{n} were known, the AR coefficients could be determined by using all but the first row in Equation (4.8) to set up the system

⎡ r(1) ⎤   ⎡ r(0)     · · ·   r(−n+1) ⎤ ⎡ φ_1 ⎤   ⎡ 0 ⎤
⎢  ⋮   ⎥ + ⎢  ⋮         ⋱       ⋮     ⎥ ⎢  ⋮  ⎥ = ⎢ ⋮ ⎥ ,    (4.9)
⎣ r(n) ⎦   ⎣ r(n−1)   · · ·   r(0)    ⎦ ⎣ φ_n ⎦   ⎣ 0 ⎦
   r_n                 R_n                  θ


where R_n is the covariance matrix defined in Appendix A. Rewritten in compact form, this reads

r_n + R_n θ = 0,    (4.10)

which has the solution θ = −R_n^{−1} r_n. Once θ is found, σ² can be computed from the first row of Equation (4.8). This method of calculating the AR parameters is called the Yule-Walker method. Equation (4.10) is usually not solved directly when the Yule-Walker method is used; instead a more efficient method, the Levinson-Durbin algorithm, is used, which will be explained later in the thesis.

The covariance matrix in Equation (4.8) can be shown to be positive definite for any n; hence the solution to Equation (4.8) is unique. If the covariances are replaced by sample covariances (see Appendix A), the matrix can be shown to be positive definite for any sample sequence that is not identically zero [19].
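A direct sketch of the Yule-Walker method for real-valued data (the AR(2) demo process below is an illustrative assumption): biased sample autocovariances are computed, θ = −R_n⁻¹r_n is solved, and σ² follows from the first row of Equation (4.8).

```python
import numpy as np

# Yule-Walker method: solve Eq. (4.10) with biased sample autocovariances.
def yule_walker(y, order):
    y = np.asarray(y, float) - np.mean(y)
    N = len(y)
    r = np.array([y[: N - k] @ y[k:] / N for k in range(order + 1)])
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    theta = np.linalg.solve(R, -r[1:])       # theta = -R_n^{-1} r_n
    sigma2 = r[0] + r[1:] @ theta            # first row of Eq. (4.8)
    return theta, sigma2

# Demo on a simulated AR(2) process y_t = 0.75 y_{t-1} - 0.5 y_{t-2} + e_t,
# i.e. phi = [-0.75, 0.5] in the sign convention of Eq. (4.1).
rng = np.random.default_rng(2)
e = rng.standard_normal(50_000)
y = np.zeros_like(e)
for t in range(2, len(e)):
    y[t] = 0.75 * y[t - 1] - 0.5 * y[t - 2] + e[t]

theta, sigma2 = yule_walker(y, 2)
```

The estimated θ approaches the true coefficients and σ² approaches the noise variance as the sample length grows.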

Equation (4.8) can be written as

R_{n+1} ⎡  1  ⎤ = ⎡ σ_n² ⎤    (4.11)
        ⎣ θ_n ⎦   ⎣  0   ⎦

to show the dependence of θ and σ on the order n. This form of the Yule-Walker equations will later be used in the Levinson-Durbin algorithm, which can be used to estimate the AR coefficients.

4.1.2 Power Spectral Density of an AR Process

In this part the relation between an AR process and a rational PSD is discussed, which gives some specific properties of a PSD generated from an AR process. These properties will later be used in the analysis and discussion of the results.

If m = 0 and the polynomial B(ω) = 1 in Equation (3.57), then y(t) is an AR process. For an AR process, the rational PSD in Equation (3.51) becomes

Φ_AR(ω) = σ_e² / |A(e^{iω})|².    (4.12)

An AR process can be represented as an IIR filter that is fed with the stochastic process e_t. Such an IIR filter has the z-transform

H(z) = 1 / ( 1 + ∑_{i=1}^{p} φ_i z^{−i} ) = 1/A(z)    (4.13)

and the frequency response

H(e^{iω}) = 1 / ( 1 + ∑_{i=1}^{p} φ_i e^{−iωi} ) = 1/A(e^{iω}).    (4.14)

From Equation (4.13) it is clear that the IIR filter representation is an all-pole filter. Hence the rational PSD of the AR process is an all-pole power spectrum.

Let Φ(ω) be the PSD of the original signal that is modeled by the AR process. If Φ(ω) has sharp peaks, then A(z) must have zeros (poles of Φ(ω)) close to the unit circle to reproduce these peaks. In the case when Φ(ω) has zeros or sharp dips, Φ_AR(ω) cannot approximate Φ(ω) well, since A(z) cannot have poles on the unit circle, which would be required for Φ(ω) to have zeros or sharp dips. Hence the AR process can approximate the power spectrum Φ near its peaks, but not near its valleys, [1].


4.2 Autoregressive Model

In this part, methods for modeling autoregressive processes are discussed. The first part describes a general form of AR models and the second part describes the concept of linear prediction, which is a way to predict future values. Linear prediction is the basis for the extrapolation technique used to extrapolate signals in this thesis.

An autoregressive process can be modeled by an autoregressive model. An autoregressive model of the same order as the process is given by

y_t = − ∑_{i=1}^{p} φ̂_i y_{t−i} + ê_t,    (4.15)

where φ̂_i are the estimates of the autoregressive parameters and ê_t is the estimate of the stochastic process e_t. The autoregressive model can be implemented as a so-called AR filter, which is an example of an IIR filter.

When applying the autoregressive model, one has to estimate the autoregressive parameters. There are three commonly used methods to estimate the autoregressive parameters: the least-squares approach, the Yule-Walker approach and Burg's method. According to [6], the most preferable method is Burg's method, which will be explained in detail later in this thesis.

The methods for estimating the AR parameters require the autocovariances, but the autocovariances are not known for a given sequence. Instead, given the data sequence {y(t)}_{t=1}^{N}, the sample covariances {r̂(k)}_{k=0}^{n} are computed with the equation given in Appendix A. The Yule-Walker equations can then be used to estimate the AR parameters φ̂ and the variance σ̂ by using the covariance estimates r̂.

4.2.1 Linear Prediction

In linear prediction (LP), the t-th signal sample y_t is approximated as a combination of the p previous samples. From Equation (4.15), each data point can be predicted from its predecessors by the following expression, [1],

ŷ_t = − ∑_{i=1}^{p} φ̂_i y_{t−i}.    (4.16)

Since the data samples y_t cannot be exactly predicted, the difference between the measured and estimated values is defined as a residue,

residue ≡ y_t − ŷ_t = ê_t.    (4.17)

Hence the residue is equal to the estimate of the stochastic process. A linear prediction can be implemented as an FIR filter. The prediction filter representation is obtained by taking the z-transform of the LP Equation (4.16),

P(z) = − ∑_{i=1}^{p} φ̂_i z^{−i}.    (4.18)


This filter representation can be used to predict a new sample. By using the filter iteratively, more samples can be predicted. This technique is the basis of the extrapolation method described in the next section.

4.3 Extrapolation of Finite Signals

A method for extrapolation of signals of finite length, based on linear prediction and proposed in [11], is explained in this section. In the first part the extrapolation method is explained, and in the second part some frequency-domain properties of the extrapolation are discussed. The second part serves as a theoretical background on which types of signals can be extrapolated.

Discrete finite signals can be extended by extrapolating the existing signal. For a discrete signal y = [y_1, y_2, . . . , y_N] of finite length N, one wants to calculate previously unknown samples: [y_{N+1}, y_{N+2}, . . .] for forward extrapolation and [. . . , y_{−2}, y_{−1}, y_0] for backward extrapolation. The method rests on the assumption that there exists a set of prediction filter coefficients h = [h_1, . . . , h_M] that can perfectly linearly predict any sample of the signal from the M previous samples. If such prediction filter coefficients exist, the signal prediction is perfect, the prediction error is zero (residue = 0 in Equation (4.17)), and the prediction can be written as

y_n = ∑_{i=1}^{M} h_i y_{n−i}.    (4.19)

If there are at least M known samples in the given signal y, i.e., N ≥ M, the first forward extrapolated sample y_{N+1} can be generated from Equation (4.19), resulting in an extended signal [y_1, y_2, . . . , y_N, y_{N+1}]. From the extended signal, the last M samples can be used to generate the second forward extrapolated sample y_{N+2}, using Equation (4.19) again. The extrapolation method can be applied successively to produce an unlimited number of new extrapolated samples of the given signal y.
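The iteration can be sketched as follows for a signal that is exactly predictable. Assuming the known M = 2 prediction coefficients of a pure cosine (derived in Section 4.4: a cosine consists of two phasors and satisfies y_n = 2cos(ω)y_{n−1} − y_{n−2}):

```python
import numpy as np

# Forward extrapolation by iterating Eq. (4.19) with M = 2 coefficients
# that perfectly predict a pure cosine.
w = 2 * np.pi * 0.05                  # normalized angular frequency (assumed)
h = [2 * np.cos(w), -1.0]             # exact prediction filter for cos(w*n)

y = list(np.cos(w * np.arange(100)))  # the known, finite signal (N = 100)
for _ in range(100):                  # produce 100 new samples, one at a time
    y.append(h[0] * y[-1] + h[1] * y[-2])
y = np.array(y)
```

Each new sample is appended to the signal and immediately becomes part of the history used for the next prediction, so the loop can run indefinitely.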

The extrapolation in Equation (4.19) can be rewritten as a convolution sum in the following way,

y_n = h_n ∗ y_n = ∑_{i=−∞}^{∞} h_i y_{n−i},    (4.20)

where h_i = 0 for i > M and i < 1. Hence the impulse response of the extrapolation is causal, and the condition h_0 = 0 is satisfied.

4.3.1 Impulse Response and Transfer Function of the Extrapolation

This part serves as a theoretical background on which types of signals can be extrapolated. The theory in this part is based on continuous signals; it is further developed in the next section, where discrete signals are also analyzed.

If the convolution in Equation (4.20) is rewritten in continuous form, where a continuous signal y(t) is convolved with an impulse response h(t) such that the signal is not changed, one obtains

y(t) = h(t) ∗ y(t).    (4.21)


The Fourier transform of this equation gives

Y(f) = H(f)Y(f).    (4.22)

For Equation (4.21) to hold, the following condition on the frequency response function must be satisfied:

H(f) = { 1 (and real),  Y(f) ≠ 0
       { arbitrary,     Y(f) = 0.    (4.23)

One frequency response function that satisfies the conditions above is the trivial function with constant real value one for all f. The impulse response of such a function is the Dirac delta function, i.e., h(t) = δ(t). But this function cannot be used in the extrapolation, since the impulse response in Equation (4.20) must be causal and zero at t = 0.

The frequency response function H(f) is complex valued, and it is a Hermitian function, H(f) = H∗(−f), since the impulse responses of the extrapolation are real valued. Furthermore, the real and imaginary parts of the frequency response function H(ω) form a Hilbert transform pair, [13]; hence the frequency response function H(ω) is an analytic signal. An analytic signal can only be forced to a certain value at discrete points; therefore the spectrum of the infinitely long signal can consist only of sharp lines (i.e., Dirac delta functions), [11]. Based on this reasoning, the requirements on the signals that can be extrapolated can be summarized in the following sentence, [11].

”If the functional form of the given signal section has a theoretical spectrum consisting only of infinitely sharp lines, the signal section can be extrapolated perfectly using a finite length of impulse response.”

One example of such a function is the cosine function,

F{ cos(2πf_0 t) } = (1/2) δ(f − f_0) + (1/2) δ(f + f_0),    (4.24)

where F{·} denotes the Fourier transform. The effects of extrapolating cosine signals are further analyzed in the next section.

4.4 Extrapolation of Audio Signals

The theory about which signals can be extrapolated is explained further in this section, with focus on audio signals. The first part discusses extrapolation of different types of audio signals. The effect of the model order is discussed in the second part. The last part covers how noise in the signals affects the extrapolation.

An audio signal can be mathematically approximated by a Fourier series, which is a sum of cosine functions with different frequencies f_i and phases ϕ_i, multiplied by amplitude envelope functions A_i(t), given by

x(t) = ∑_i A_i(t) cos(2πf_i t + ϕ_i),  f_i ≥ 0.    (4.25)


The cosine function can be decomposed into complex-valued exponential functions called ”phasors” according to Euler's formula,

cos ωt = ( e^{iωt} + e^{−iωt} ) / 2,    (4.26)

where ω is the angular frequency. The Fourier transform of a single phasor is a Dirac delta function located at the angular frequency ω; hence the spectrum of a single phasor consists of one sharp line. In the discrete case (let t = n∆T, i.e., sampled with a sampling frequency f_s = 1/∆T), a single phasor can be extrapolated with a single impulse response coefficient,

e^{iωn∆T} = h_1 e^{iω(n−1)∆T}, where h_1 = e^{iω∆T}.    (4.27)

Since a cosine function consists of two phasors, its spectrum consists of two Dirac delta functions, and two impulse response coefficients are required for the extrapolation of a cosine function. The two Dirac delta functions represent the positive and negative frequencies in the spectrum.

In [10] it is shown that to achieve a perfect extrapolation, the signal to be extrapolated must be predictable. It is also shown that sinusoids, and sums of sinusoids, are predictable, while for example a random process such as white noise is not, since no sample of a white noise signal depends on the previous samples. For signals with a time-varying amplitude envelope, a perfect extrapolation is only possible if the envelope on its own is a predictable function, for example an exponential function or a polynomial.

4.4.1 Model Order

The number of impulse response coefficients required to perfectly extrapolate a cosine wave can be examined by decomposing the cosine wave into exponential functions,

x(t) = A(t) cos(ωt) = (A(t)/2) e^{iωt} + (A(t)/2) e^{−iωt}.    (4.28)

If the amplitude envelope function A(t) can be perfectly extrapolated with m coefficients, then A(t) multiplied by an exponential function (phasor) can also be perfectly extrapolated with m coefficients, [11]. Each term in the sum in Equation (4.28) requires m coefficients to be perfectly extrapolated; hence 2m coefficients are required to perfectly extrapolate the cosine wave in Equation (4.28).
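A concrete instance of the 2m rule (the signal below is an illustrative assumption): an exponential envelope ρⁿ needs m = 1 coefficient, so the damped cosine ρⁿcos(ωn) needs 2m = 2, namely y_n = 2ρcos(ω)y_{n−1} − ρ²y_{n−2}.

```python
import numpy as np

# Perfect extrapolation of a damped cosine rho^n * cos(w*n) with exactly
# 2m = 2 coefficients (exponential envelope: m = 1).
rho, w = 0.99, 0.3
true = rho ** np.arange(300) * np.cos(w * np.arange(300))

y = list(true[:50])                  # 50 known samples
while len(y) < 300:                  # extrapolate the remaining 250
    y.append(2 * rho * np.cos(w) * y[-1] - rho ** 2 * y[-2])
```

The two coefficients follow from the characteristic polynomial z² − 2ρcos(ω)z + ρ² whose roots are the two phasor poles ρe^{±iω}.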

An audio signal in general contains a large number of frequencies, including time-varying frequencies, which requires a high-order model, since the impulse response must be longer than twice the number of frequencies. Hence a very high order model is usually required for good extrapolation results.

4.4.2 Extrapolation of Noisy Signals

As mentioned before, the extrapolation method does not extrapolate a noisy signal perfectly, since the noisy part of the signal is not predictable.


When a noisy signal is extrapolated with the theoretical impulse responses, the extrapolated signal does not contain the noisy part of the signal. However, since the noisy part of the signal changes the amplitude A_i at each frequency f_i, the amplitude of the extrapolated signal may differ from the original (noiseless) signal, [10]. This phenomenon is illustrated in Figure 18, with an example signal consisting of two sinusoids. The signal is extrapolated from sample 200 and onwards.

[Figure: three panels of amplitude versus sample index 0–600, showing the original signal plus noise, the original and extrapolated signals, and the subtracted signal.]

Figure 18: Top: Original signal (noiseless). Middle: The original signal with white noise, extrapolated from sample 200. Bottom: The difference between the original (noiseless) signal and the extrapolated signal.

The phenomenon can clearly be seen in the bottom graph of Figure 18: the first 200 samples of the difference signal are just noise, while the rest of the signal shows some amplitude differences. The extrapolated part does not contain any noise; instead the error comes from erroneous amplitudes in the extrapolated signal. The impulse response used in the extrapolation is derived from the original (noiseless) signal, and 4 impulse response coefficients are used in the example.

For noisy signals, a correct impulse response is not known, since the impulse response would have to be computed for the noisy signal, which is not possible. Instead, if the signal is noisy, the impulse response should be estimated from the noisy signal, [10]. As before, the estimated impulse response must be longer than twice the number of frequencies in the signal, but to get decent results for noisy signals, often considerably more coefficients than twice the number of frequencies are needed. In Figure 19 a noisy signal consisting of two sinusoids has been extrapolated using an impulse response of length 20, estimated from the noisy signal.


[Figure: three panels of amplitude versus sample index 0–600, showing the original signal plus noise, the original and extrapolated signals, and the subtracted signal.]

Figure 19: Top: Original signal (noiseless). Middle: The original signal with white noise, extrapolated from sample 200. Bottom: The difference between the original (noiseless) signal and the extrapolated signal.

The noise continues only in the first samples of the extrapolated sequence (after sample 200 in the middle graph). Further into the extrapolation, the error consists mainly of amplitude differences. According to [10], for signals with a large number of frequencies, Burg's method should be used to estimate the impulse response.

4.5 Parameter Estimation

This section contains theory about methods for estimating the AR parameters. Two methods are presented: the Levinson-Durbin algorithm and Burg's method. In the first part, some useful properties of the covariance matrix used in the methods are shown. The second part explains the Levinson-Durbin method, which is used in Burg's method. Burg's method, which is the method used to estimate the AR parameters in this thesis, is presented in the last part.

4.5.1 Properties of the Covariance Matrix

The covariance matrix in Equation (4.8) can be rewritten as (using the property of the covariance matrix given in Appendix A)

R_{n+1} = ⎡ r(0)   r(−1)   · · ·   r(−n)   ⎤   ⎡ r(0)   r^H(1)   · · ·   r^H(n)   ⎤
          ⎢ r(1)   r(0)    · · ·   r(−n+1) ⎥   ⎢ r(1)   r(0)     · · ·   r^H(n−1) ⎥
          ⎢  ⋮               ⋱     r(−1)   ⎥ = ⎢  ⋮                ⋱     r^H(1)   ⎥ ,    (4.29)
          ⎣ r(n)   · · ·   r(1)    r(0)    ⎦   ⎣ r(n)   · · ·    r(1)    r(0)     ⎦


hence the covariance matrix R_{n+1} is Hermitian according to Definition 1. Another type of matrix is the Toeplitz matrix, given by the following definition.

Definition 2. A matrix in which each descending diagonal from left to right is constant is called a Toeplitz matrix.

Hence the covariance matrix R_{n+1} is both Hermitian and Toeplitz. For a vector x = [x_1 · · · x_n]^T, let

x̃ = [x_n^H · · · x_1^H]^T.    (4.30)

An important property of any Hermitian Toeplitz matrix R is that

y = Rx ⇒ ỹ = Rx̃,    (4.31)

shown in [19]. Note that these properties also hold when the sample covariances are used instead of the autocovariance function.

4.5.2 Levinson-Durbin Algorithm

The Levinson-Durbin algorithm provides an efficient way to solve the Yule-Walker equations. Note that for unknown signals the sample covariances are used instead of the autocovariance function; hence from here on ρ_k is used, and it can be replaced by either r(k) or r̂(k). The following derivation of the Levinson-Durbin method is based on the derivation given in [19].

The Levinson-Durbin algorithm solves Equation (4.11) recursively in n. Using Equation (4.11) and the above properties of the covariance matrix R, the following equation can be stated:

R_{n+2} ⎡  1  ⎤   ⎡ σ_n² ⎤
        ⎢ θ_n ⎥ = ⎢  0   ⎥ ,    (4.32)
        ⎣  0  ⎦   ⎣ α_n  ⎦

where the last row of R_{n+2} is [ρ_{n+1}  p̃_n^H  ρ_0], and

p_n = [ρ_1 · · · ρ_n]^T,    (4.33)

α_n = ρ_{n+1} + p̃_n^H θ_n.    (4.34)

Equation (4.32) would correspond to Equation (4.11) with n increased by one, if α_n in Equation (4.34) were zero. This can be achieved by letting

k_{n+1} = −α_n / σ_n².    (4.35)

Then it follows from Equations (4.31) and (4.32) that

R_{n+2} ( ⎡  1  ⎤           ⎡  0   ⎤ )   ⎡ σ_n² ⎤           ⎡ α_n^H ⎤   ⎡ σ_n² + k_{n+1} α_n^H ⎤
        ( ⎢ θ_n ⎥ + k_{n+1} ⎢ θ̃_n ⎥ ) = ⎢  0   ⎥ + k_{n+1} ⎢  0    ⎥ = ⎢          0           ⎥ ,    (4.36)
        ( ⎣  0  ⎦           ⎣  1   ⎦ )   ⎣ α_n  ⎦           ⎣ σ_n²  ⎦   ⎣          0           ⎦


which has the same structure as

R_{n+2} ⎡    1    ⎤ = ⎡ σ_{n+1}² ⎤ .    (4.37)
        ⎣ θ_{n+1} ⎦   ⎣    0     ⎦

Using the fact that the solution to Equation (4.11) is unique for any n, and comparing Equations (4.36) and (4.37), the following conclusion is made:

θ_{n+1} = ⎡ θ_n ⎤ + k_{n+1} ⎡ θ̃_n ⎤ ,    (4.38)
          ⎣  0  ⎦           ⎣  1   ⎦

σ_{n+1}² = σ_n² (1 − |k_{n+1}|²).    (4.39)

The coefficients k_i are called reflection coefficients. The Levinson-Durbin algorithm is summarized below.

Algorithm 1: The Levinson-Durbin Algorithm

Initialization:
    θ_1 = −ρ_1/ρ_0 = k_1
    σ_1² = ρ_0 − |ρ_1|²/ρ_0

For n = 1, . . . , n_max, compute:
    k_{n+1} = −(ρ_{n+1} + p̃_n^H θ_n) / σ_n²
    θ_{n+1} = [θ_n; 0] + k_{n+1} [θ̃_n; 1]
    σ_{n+1}² = σ_n² (1 − |k_{n+1}|²)
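The algorithm translates directly into code. A sketch for the real-valued case (where conjugation and the reversal x̃ simplify to plain reversal), with an assumed AR(1) covariance sequence as the demonstration input:

```python
import numpy as np

# Algorithm 1 for real-valued covariances rho_0, ..., rho_n: each pass
# computes the reflection coefficient k_{n+1} and updates (theta_n, sigma_n^2).
def levinson_durbin(rho, order):
    theta = np.array([-rho[1] / rho[0]])          # theta_1 = k_1
    sigma2 = rho[0] - rho[1] ** 2 / rho[0]        # sigma_1^2
    for n in range(1, order):
        # k_{n+1} = -(rho_{n+1} + [rho_n ... rho_1] . theta_n) / sigma_n^2
        k = -(rho[n + 1] + rho[1 : n + 1][::-1] @ theta) / sigma2
        theta = np.concatenate((theta, [0.0])) + k * np.concatenate((theta[::-1], [1.0]))
        sigma2 = sigma2 * (1 - k ** 2)
    return theta, sigma2
```

Fed with the exact covariances ρ_k = σ_y²·a^k of an AR(1) process y_t = a·y_{t−1} + e_t, the recursion recovers θ = [−a, 0, . . .] and σ² equal to the innovation variance.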

4.5.3 The Burg Method

The Burg method, first presented in John Parker Burg's thesis [3], is a method for estimating the AR parameters that is based on forward and backward prediction errors, and on a direct estimation of the reflection coefficients mentioned in the previous section. The following derivation of the Burg method is based on the derivation presented in [19].

Given the data sequence {y[t]} for t = 1, 2, . . . , N, the forward and backward prediction errors for a model of order p are defined by the following equations:

e_{f,p}[t] = y[t] + ∑_{i=1}^{p} φ_{p,i} y[t − i],        t = p + 1, . . . , N,    (4.40)

e_{b,p}[t] = y[t − p] + ∑_{i=1}^{p} φ_{p,i}^H y[t − p + i],  t = p + 1, . . . , N.    (4.41)


Equation (4.38) gives the following relation between the AR coefficients and the reflection coefficient k_p:

φ_{p,i} = { φ_{p−1,i} + k_p φ_{p−1,p−i}^H,  i = 1, . . . , p − 1
          { k_p,                            i = p.                  (4.42)

Burg's method is a recursive method that estimates k_p given that the AR coefficients of order p − 1 have been computed. This is done by finding the k_p that minimizes the arithmetic mean of the forward and backward prediction error variance estimates, which can be formulated as the following optimization problem:

min_{k_p} (1/2)( ρ_f[p] + ρ_b[p] ),    (4.43)

where

ρ_f[p] = 1/(N − p) ∑_{t=p+1}^{N} |e_{f,p}[t]|²,    (4.44)

ρ_b[p] = 1/(N − p) ∑_{t=p+1}^{N} |e_{b,p}[t]|²,    (4.45)

with the assumption that {φ_{p−1,i}}_{i=1}^{p−1} are known from the recursion at the previous order.

Equations (4.40), (4.41) and (4.42) are used to formulate the following recursion-in-order expression for the forward prediction error:

e_{f,p}[t] = y[t] + ∑_{i=1}^{p−1} ( φ_{p−1,i} + k_p φ_{p−1,p−i}^H ) y[t − i] + k_p y[t − p]
           = ( y[t] + ∑_{i=1}^{p−1} φ_{p−1,i} y[t − i] ) + k_p ( y[t − p] + ∑_{i=1}^{p−1} φ_{p−1,i}^H y[t − p + i] )
           = e_{f,p−1}[t] + k_p e_{b,p−1}[t − 1].    (4.46)

Similarly, for the backward prediction error, Equations (4.40), (4.41) and (4.42) give the following recursive expression:

e_{b,p}[t] = y[t − p] + ∑_{i=1}^{p−1} ( φ_{p−1,i}^H + k_p^H φ_{p−1,p−i} ) y[t − p + i] + k_p^H y[t]
           = ( y[t − p] + ∑_{i=1}^{p−1} φ_{p−1,i}^H y[t − p + i] ) + k_p^H ( y[t] + ∑_{i=1}^{p−1} φ_{p−1,i} y[t − i] )
           = e_{b,p−1}[t − 1] + k_p^H e_{f,p−1}[t].    (4.47)


Note that the optimization problem in Equation (4.43) is a quadratic optimization problem, since

(1/2)( ρ_f[p] + ρ_b[p] )
  = 1/(2(N − p)) ∑_{t=p+1}^{N} ( |e_{f,p−1}[t] + k_p e_{b,p−1}[t − 1]|² + |e_{b,p−1}[t − 1] + k_p^H e_{f,p−1}[t]|² )
  = 1/(2(N − p)) ∑_{t=p+1}^{N} ( ( |e_{f,p−1}[t]|² + |e_{b,p−1}[t − 1]|² )( 1 + |k_p|² )
        + 2 e_{f,p−1}[t] e_{b,p−1}^H[t − 1] k_p^H + 2 e_{f,p−1}^H[t] e_{b,p−1}[t − 1] k_p ).    (4.48)

The quadratic optimization problem in Equation (4.43) has the solution

k_p = −2 ∑_{t=p+1}^{N} e_{f,p−1}[t] e_{b,p−1}^H[t − 1] / ∑_{t=p+1}^{N} ( |e_{f,p−1}[t]|² + |e_{b,p−1}[t − 1]|² ).    (4.49)

A recursion-in-order algorithm for estimating the AR parameters, called the Burg algorithm, is as follows:

Algorithm 2: The Burg Algorithm

Step 0: Set p = 0 and initialize e_{f,0}[t] = e_{b,0}[t] = y[t].
Step 1: For p = 1, . . . , n:
    (i) Compute k_p from Equation (4.49).
    (ii) Compute φ_{p,i} for i = 1, . . . , p from Equation (4.42).
    (iii) Compute e_{f,p}[t] and e_{b,p}[t] for t = p + 1, . . . , N from Equations (4.46) and (4.47).

Then θ = [φ_{n,1} . . . φ_{n,n}]^T is the vector of the AR coefficient estimates.

The Burg method estimates the n reflection coefficients by decoupling an n-dimensional minimization problem into the n one-dimensional minimizations in Equation (4.43). In contrast to the Yule-Walker method, which estimates the autoregressive parameters directly, Burg's method first estimates the reflection coefficients, which are defined as the last autoregressive-parameter estimate for each model order p. From the reflection coefficients, the parameter estimates are determined using the Levinson-Durbin algorithm. The reflection coefficients constitute unbiased estimates of the partial autocorrelation coefficients, [6]. Hence it is not required to compute the autocorrelation coefficients to initialize the method. The Burg AR model estimate is guaranteed to be stable, as shown in [19], which is a pleasant property of the method.
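Algorithm 2 can be sketched compactly for real-valued data (the AR(2) demo process below is an illustrative assumption); at each order the error arrays shrink by one sample, so e_{f,p−1}[t] and e_{b,p−1}[t − 1] are simply the tail and head slices:

```python
import numpy as np

# Burg's method for real-valued data: reflection coefficients from Eq. (4.49),
# AR coefficients via Eq. (4.42), error updates via Eq. (4.46)-(4.47).
def burg(y, order):
    y = np.asarray(y, float)
    ef, eb = y.copy(), y.copy()          # order-0 prediction errors
    theta = np.zeros(0)
    for p in range(1, order + 1):
        f, b = ef[1:], eb[:-1]           # e_{f,p-1}[t] and e_{b,p-1}[t-1]
        k = -2 * (f @ b) / (f @ f + b @ b)                 # Eq. (4.49)
        theta = np.concatenate((theta, [0.0])) + k * np.concatenate((theta[::-1], [1.0]))
        ef, eb = f + k * b, b + k * f                      # Eq. (4.46), (4.47)
    return theta

# Demo on a simulated AR(2) process with phi = [-0.75, 0.5] (Eq. (4.1) signs).
rng = np.random.default_rng(3)
e = rng.standard_normal(20_000)
y = np.zeros_like(e)
for t in range(2, len(e)):
    y[t] = 0.75 * y[t - 1] - 0.5 * y[t - 2] + e[t]

theta_hat = burg(y, 2)
```

Note that no autocovariances are computed anywhere: the reflection coefficients come straight from the data, as discussed above.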


5 Radar Signal Processing

The radar system sends out electromagnetic signals s_t(t) into free space and receives echoes s_r(t) from the surrounding environment. The received echoes are transformed with a quadrature detector into I/Q signals s_D(t). The time dependence of the echo signals corresponds to range, from the radar to objects. Hence the I/Q signals can be expressed as s_D[m], where m is the range discretized into range bins; the size of the range bins is given by the range resolution. By stacking subsequent echo pulses as the radar revolves, it is possible to create a spatial 2D representation of the signals, x[n, m], where n is the bearing discretized into bins whose sizes correspond to the angular resolution of the radar. Much of the returned signal comes from objects of no interest, such as buildings, trees and even the ground itself. Echoes from these kinds of objects are often called ground clutter, or simply clutter. Due to the presence of clutter, further signal processing of the raw radar video is needed before extracting the radar echoes (I/Q data) from targets, since clutter echoes can be many orders of magnitude larger than the target itself. This can be seen in Figure 20, where the instantaneous power of x[n, m] is plotted in decibels,

Pinst = 20 log10 |x[n,m]|. (5.1)
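In code, Equation (5.1) is a one-liner on the complex I/Q matrix; the array below is random stand-in data, not thesis data:

```python
import numpy as np

# Stand-in complex I/Q matrix x[n, m]: 360 bearing bins x 512 range bins.
rng = np.random.default_rng(0)
x = rng.standard_normal((360, 512)) + 1j * rng.standard_normal((360, 512))

# Instantaneous power in decibel, Equation (5.1).
P_inst = 20.0 * np.log10(np.abs(x))
```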

The plot is dominated by clutter and it is impossible to resolve echoes from targets. To be able to resolve targets and extract I/Q data from a target, two signal processing methods are used: MTI filtering and pulse compression. Echoes from targets can then be extracted as complex time series; by taking the Fourier transform of these signals, the Doppler spectrum is obtained.

Figure 20: Raw radar video


5.1 MTI-filter

The Moving Target Indicator (MTI) filter is used to suppress echoes from clutter. A property of clutter is that it is stationary or close to stationary, i.e., the Doppler frequencies induced by echoes from clutter are zero or close to zero. The MTI filter is a high-pass filter that filters out the low Doppler frequencies, i.e., the objects with low velocities. The MTI filters used are FIR filters of order N, explained in Section 3.1.1. For a set of impulse response coefficients bi and the signal x[n,m], the filtered signal xmti[n,m] can be expressed as the following convolution sum

xmti[n,m] = Σ_{i=0}^{N} bi x[n − i, m], for all m. (5.2)

A graphical representation of the instantaneous power of the MTI-filtered signal, earlier seen in its raw form in Figure 20, can be seen in Figure 21.

Figure 21: MTI filtered radar video

As can be seen in Figure 21, most of the clutter is removed and it is possible to distinguish echoes returned from targets.
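A minimal sketch of Equation (5.2): the classic two-pulse canceller b = [1, −1] is used as the FIR coefficients (an illustrative choice; the thesis does not specify its coefficients). With synthetic data, the DC clutter is nulled while a rotating-phase target survives:

```python
import numpy as np
from scipy.signal import lfilter

# Synthetic pulse train at one range bin: strong stationary clutter plus a
# weak target whose phase rotates from pulse to pulse (nonzero Doppler).
n = np.arange(64)
clutter = 5.0 * np.ones(64, dtype=complex)
target = 0.1 * np.exp(1j * 0.8 * n)
x = clutter + target

# Two-pulse canceller: FIR high-pass with a null at zero Doppler.
b = np.array([1.0, -1.0])
x_mti = lfilter(b, 1.0, x)   # implements the convolution of Equation (5.2)
```

After the filter, each output sample (past the first) is the pulse-to-pulse difference of the target echo alone; the constant clutter term cancels exactly.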

5.2 Pulse Compression

As can be seen in Figure 21, a target is clearly visible at range 3000 meters and bearing 214◦. However, the target is smeared out over a large range interval, i.e., the range resolution is poor.

The range resolution is improved by shortening the pulse width, Equation (2.7), and the maximum detection range is improved by increasing the energy of the pulse, Equation (2.2), where making the pulse longer is the easiest way to increase its energy. Hence, in pulsed radar systems, the characteristics of the transmitted pulse are a compromise between good range resolution and maximum detection range [5]. To handle this trade-off, a method called pulse compression is used. In pulse compression techniques, the received signal is processed using a so-called matched filter (see Section 3.1.3); pulse compression in radar systems is thus a practical implementation of a matched-filter system [15]. The output xpc[n,m] of the matched filter is given by the convolution between the MTI-filtered signal xmti[n,m] and the impulse response h of the matched filter

xpc[n,m] = Σ_{k=−∞}^{∞} h[m − k] xmti[n, k], for all n, (5.3)

where the impulse response of the matched filter is derived from the transmitted signal. The result of the pulse compression is presented in Figure 22, where the instantaneous power of the signal xpc is plotted. The range resolution is significantly improved compared to the MTI-filtered signal xmti; targets that were not clearly visible in Figure 21 also become visible after the pulse compression.
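A sketch of Equation (5.3) with a toy linear-FM (chirp) pulse (an illustrative waveform; the thesis does not disclose the transmitted waveform): the matched-filter impulse response is the time-reversed complex conjugate of the transmitted pulse, and the compressed output peaks sharply where the full pulse has been received:

```python
import numpy as np

# Toy transmitted pulse: a linear FM chirp of 64 samples.
m = np.arange(64)
s_t = np.exp(1j * 0.02 * m**2)

# Matched filter: time-reversed complex conjugate of the transmitted pulse.
h = np.conj(s_t[::-1])

# A received range profile: the pulse starts at range bin 100, in noise.
rng = np.random.default_rng(2)
x_mti = 0.1 * (rng.standard_normal(256) + 1j * rng.standard_normal(256))
x_mti[100:164] += s_t

# Equation (5.3): convolve the echo with the matched-filter response.
x_pc = np.convolve(h, x_mti, mode="full")
peak = int(np.argmax(np.abs(x_pc)))   # full overlap at bin 100 + 63 = 163
```

The peak magnitude equals the pulse energy (64 here), far above the noise floor, which is why a long pulse can still yield fine range resolution.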


Figure 22: MTI filtered and pulse compressed radar video

5.3 Signal Extraction

To be able to point out targets in the I/Q data, an approximate synchronization was made between the I/Q data and a data set containing the targets' spatial positions at certain times. A signal xIQ from the target can be extracted by taking an interval in bearing around the target's position. Hence, if the target is in range bin mp and bearing bin np, the extracted signal xIQ is given by

xIQ[n] = xpc[mp, n], for np − I ≤ n ≤ np + I, (5.4)

where 2I is the bearing interval, in this case 2I = 3◦. This method is illustrated in Figure 23 for a UAV. In the top of Figure 23, the position of the target is marked with a cross and the end points of the extracted interval are marked with circles. The signal extracted from this interval is presented in the bottom of Figure 23.

Figure 23: Top: Radar video and the position of the UAV. Bottom: Extracted I/Q data.

5.4 Spectral Modifications

Because different TX-modes (different PRF) are used to avoid range ambiguity, as explained in Section 2.6, the signals xIQ are up-sampled to a new sampling frequency Fs = 20000 Hz, yielding the new signals x20k. This ensures that all signals x20k are sampled with the same sampling frequency. The up-sampled version of the signal xIQ from Figure 23 is illustrated in Figure 24.


Figure 24: Up-sampled I and Q with their corresponding originals.
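The up-sampling step can be sketched with SciPy's Fourier-domain `resample`; the 4 kHz original rate and the 300 Hz Doppler tone below are illustrative values, not taken from the radar:

```python
import numpy as np
from scipy.signal import resample

# Hypothetical I/Q burst: 20 ms of a 300 Hz Doppler tone sampled at 4 kHz.
fs_old, fs_new = 4000, 20000
t = np.arange(80) / fs_old
x_iq = np.exp(2j * np.pi * 300.0 * t)

# Fourier-domain resampling to the common 20 kHz rate.
x_20k = resample(x_iq, len(x_iq) * fs_new // fs_old)
```

The spectral content is preserved: the dominant frequency of the resampled signal is still 300 Hz, only expressed on a denser time grid.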

The Doppler spectrum of the signal is obtained by taking the Fourier transform of the up-sampled signal x20k,

X20k = F{x20k}. (5.5)

where the Doppler frequencies, explained in Section 2.8.1, are related to the target's velocity. The Doppler frequency related to the target's bulk motion is filtered out, since the investigation is mainly focused on the µ-Doppler effect discussed in Section 2.8.2. A function in the radar system, not explicitly explained here, called the tracker function, contains information about the velocity of the object. The target's velocity can be represented as a Doppler frequency by Equation (2.14).

The filtering is done by shifting the spectrum using the frequency shift property of the DFT, described in Section 3.2.1. The Doppler frequency that corresponds to the current velocity of the object is shifted to DC (frequency = 0), by

Xshift[k] = X20k[k − l], (5.6)

where l is the frequency shift. The spectral content at the DC frequency in the shifted spectrum is filtered out with a FIR filter, described in Section 3.1.1, giving the spectrum of the shifted and filtered signal Xshift,FIR[k]. The spectrum is shifted back with the same frequency shift property of the DFT, creating the filtered signal,

XFIR[k] = Xshift,F IR[k + l], (5.7)


where l is the same frequency shift. The process is graphically illustrated in Figure 26 below. The signal that generates the Doppler spectrum is complex, and to be able to extract a real signal from the Doppler spectrum, the spectrum needs to be symmetric. In this thesis, the method used to achieve a real signal that preserves the meaningful correlations is to shift the whole spectrum using the frequency shift property of the DFT and then mirror the spectrum, yielding Xmirror, illustrated in Figure 25.

Figure 25: Top: Original spectrum. Middle: Shifted spectrum. Bottom: Mirrored spectrum.

By then taking the inverse Fourier transform it is possible to extract a real signal, denoted xreal,

xreal = F−1{Xmirror} (5.8)

It should also be noted that, depending on the sign of the velocity (moving away from or towards the radar), i.e., the sign of the Doppler frequency corresponding to the velocity, the spectrum is shifted to the right or to the left, so that the direction of movement does not induce distinguishing characteristics on the two object types.
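The shift, notch, shift-back and mirroring steps can be sketched as below. The shift l, the single-bin DC notch (standing in for the FIR filter of Section 3.1.1), and the exact mirroring rule are illustrative assumptions, since the thesis does not spell out the implementation. The mirrored spectrum is made Hermitian-symmetric, so its inverse DFT is real:

```python
import numpy as np

rng = np.random.default_rng(3)
N = 1024
x_20k = rng.standard_normal(N) + 1j * rng.standard_normal(N)
X = np.fft.fft(x_20k)

l = 37                       # bulk-Doppler shift in bins (illustrative)
X_shift = np.roll(X, -l)     # Equation (5.6): bulk Doppler moved to DC
X_shift[0] = 0.0             # crude DC notch in place of the FIR filter
X_fir = np.roll(X_shift, l)  # Equation (5.7): shift back

# Mirror the positive-frequency half to enforce Hermitian symmetry,
# X[N - k] = conj(X[k]), so the inverse transform is real (Equation (5.8)).
X_mirror = np.zeros(N, dtype=complex)
X_mirror[:N // 2] = X_fir[:N // 2]
X_mirror[N // 2 + 1:] = np.conj(X_fir[1:N // 2])[::-1]
X_mirror[0] = X_mirror[0].real     # DC bin must be real as well
x_real = np.fft.ifft(X_mirror).real  # imaginary part ~0 by construction
```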


Figure 26: Top: Original spectrum and the velocity corresponding to the target. Middle: Shifted spectrum and spectrum filtered with a FIR filter. Bottom: Spectra shifted back to the original position.


6 Audio Generation and Implementation

In Section 5.4, spectral transforms are applied to the extracted signals xIQ to create a real signal xreal. These real signals xreal are, however, very short, only a couple of milliseconds. Hence, if such a signal is played as an audio signal, one is not able to distinguish any characteristics of it. For that reason it is required to extend the signal; this is done with the extrapolation method described in Section 4. The extrapolated samples are obtained by a linear combination of the previous samples, according to Equation (4.15). The AR parameters φi are the constants in the linear combination and the model order is the number of previous samples used in the linear combination. Only forward extrapolation of the signals is considered in this thesis; hence only future samples of the signals are extrapolated. A way to implement the extrapolation as an IIR filter is described in the next part.

As mentioned in Section 4.4.2, the radar signal contains noise, hence a perfect extrapolation is not possible to obtain. To obtain a decent extrapolated signal, a large number of impulse response coefficients (a high model order) is also required. The AR parameters should also be estimated with Burg's method for the best result, according to [10].

6.1 IIR Filter Implementation of the Extrapolation

An IIR filter implementation of the extrapolation (explained in Section 4.3) is described in this section. The IIR filter is a recursive filter; hence the filter output can depend on both the previous values of the input signal and of the output signal [9].

As mentioned in Equation (3.7), a causal IIR filter is defined as

Σ_{i=0}^{p} ai yn−i = Σ_{j=0}^{q} bj xn−j, (6.1)

where ai and bj are the filter coefficients, p the feedback filter order and q the feedforward filter order. In Equation (6.1) above, xn are the input samples and yn the output samples of the filter. By feeding white noise (w) as the input to the filter, i.e., xn = wn, and letting

b0 = 1, bj = 0 for j > 0, (6.2)

Equation (6.1) can be rewritten as

yn = −(1/a0) Σ_{i=1}^{p} ai yn−i + wn. (6.3)

By letting a0 = 1, Equation (6.3) is the same as (4.16) if ai = φi. The estimates of the AR model are computed with Burg's method, given in Algorithm 2, for a given filter model order M. This gives the equation

yn = −Σ_{i=1}^{M} φi yn−i + wn, (6.4)


which can be used to extrapolate the signal by feeding the filter with white noise as the input. This special type of IIR filter is sometimes called an AR filter. Due to the property mentioned in Section 4.5.3, that the reflection coefficients are related to the autocovariances, estimates of the autocovariances are not required to initialize the algorithm. The extrapolation method is summarized below.

Algorithm 3: Extrapolation of a signal as an IIR filter. To extrapolate the signal with W samples:

Step 1: Compute the impulse response coefficients, by computing the AR parameters with Burg's method (Algorithm 2).

Step 2: Initialize the IIR filter with the M past known samples just before the section to be extrapolated.

Step 3: Feed a white noise vector of length W as input to the filter.

This gives W extrapolated samples as the output of the filter.
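Algorithm 3 can be sketched with SciPy's `lfilter`/`lfiltic`. To keep the example verifiable, the AR parameters below are the exact order-2 coefficients of a pure tone rather than Burg estimates, and the white-noise input is set to zero so the filter produces the deterministic continuation; a real run would use the order-40 Burg estimates and an actual noise vector:

```python
import numpy as np
from scipy.signal import lfilter, lfiltic

# Known short signal: a tone obeying x[n] = 2 cos(w0) x[n-1] - x[n-2].
w0 = 0.3
x_real = np.sin(w0 * np.arange(60))

# Step 1: AR parameters (exact here, Burg-estimated in practice), M = 2.
phi = np.array([2.0 * np.cos(w0), -1.0])
M, W = len(phi), 500

# Steps 2-3: all-pole IIR filter 1 / (1 - sum_i phi_i z^-i), primed with
# the last M known samples and driven by the (here zero) noise input.
a = np.concatenate(([1.0], -phi))
zi = lfiltic([1.0], a, x_real[:-M - 1:-1])   # newest samples first
noise = np.zeros(W)                          # Step 3 would use white noise
x_audio, _ = lfilter([1.0], a, noise, zi=zi)
```

With zero input the output is the deterministic continuation of the tone for n = 60, ..., 559; feeding white noise instead yields a random realization with the same AR spectrum.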

6.2 Method for Generating Audio Signals

This part contains a summary of the whole audio generation process, from the extracted signal to the audio signal. The audio generation process is applied to the whole data set, which contains 232 illuminations of birds and 181 illuminations of UAV:s. This results in 413 audio signals: 232 audio signals from birds and 181 from UAV:s.

The extracted signals xIQ (for signal extraction, see Section 5.3) are resampled to a sampling frequency of 20000 Hz, so that all signals are sampled with the same sampling frequency. The resampled signals are denoted x20k. The bulk motion frequencies in the resampled signals x20k are filtered out, so that the spectrum of x20k only contains µ-Doppler effects. The filtering process is described in detail in Section 5.4. The filtered signals are denoted xFIR and have the spectrum XFIR. The spectra of the filtered signals are mirrored around the DC frequency, creating the spectrum Xmirror. Short real-valued signals xreal are generated by the inverse discrete Fourier transform of the spectrum Xmirror.

The signals xreal are extended by extrapolation with an AR filter. This is done by estimating the AR parameters φi from the signals xreal with Burg's method (see Algorithm 2). For each signal xreal, an AR filter of order 40 is used; hence Burg's method estimates 40 AR parameters for each signal xreal. Each signal xreal is then extrapolated using the AR filter (see Algorithm 3). The extrapolated signal xaudio is then presented as an audio output, which can be used to classify targets.


7 Results

The signals xreal that were generated in Section 5.4, by the inverse Fourier transform of the shifted spectrum, served as the input to the extrapolation method. Two examples of such signals are presented in Figure 27, where a signal from a UAV is shown in Figure 27a and a signal from a bird in Figure 27b.

(a) Signal from a UAV. (b) Signal from a bird.

Figure 27: Signals xreal generated from the spectral modifications described in Section 5.4. Note the different scaling of the figures.

The signals xaudio are generated by extrapolating the signals xreal with the AR filter explained in Section 6.1. Illustrations of xaudio for a UAV and for a bird are presented in Figure 28a and Figure 28b, respectively.

(a) Signal from a UAV. (b) Signal from a bird.

Figure 28: Original (xreal) and extrapolated (xaudio) parts of the signals. Note the different scaling of the figures.

It is hard to evaluate whether the extrapolation is good or not by just looking at the signal in the time domain. It is easier to look in the frequency domain; hence the PSDs of the two extrapolated signals are presented in Figures 29a and 29b.

(a) Spectrum of UAV. (b) Spectrum of bird.

Figure 29: Top: Two-sided PSD of the original signal xreal. Bottom: Two-sided PSD of the extrapolated signal xaudio (original part + extrapolated part).

To compare different UAV signals and different bird signals, the PSDs of some other signals are displayed in Figure 30.

(a) Spectrum of UAV. (b) Spectrum of UAV.

(c) Spectrum of bird. (d) Spectrum of bird.

Figure 30: Top: Two-sided PSD of the original signal xreal. Bottom: Two-sided PSD of the extrapolated signal xaudio (original part + extrapolated part).


The PSDs of the original signals for the UAV and the bird are also shown in Figures 29a and 29b, as a comparison between the extrapolated and original signals. To investigate whether the AR filters are stable, zero-pole plots for some of the filters are presented in Figure 31.

(a) Poles of AR-filter (UAV) (b) Poles of AR-filter (UAV)

(c) Poles of AR-filter (bird) (d) Poles of AR-filter (bird)

Figure 31: Zero-pole plots for AR filters generated from UAV:s and birds. The crosses mark the poles.

The AR filters are stable if all the poles of the filter are located inside the unit circle in the complex plane.
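The stability check behind the zero-pole plots can be sketched numerically: the AR-filter denominator is A(z) = 1 − Σ φi z^{−i}, and the filter is stable iff every root of A lies strictly inside the unit circle (the coefficients below are illustrative, not estimated from the data):

```python
import numpy as np

# Illustrative AR(2) parameters: x[n] = 1.2 x[n-1] - 0.5 x[n-2] + w[n].
phi = np.array([1.2, -0.5])

# Poles = roots of the denominator polynomial z^2 - 1.2 z + 0.5.
poles = np.roots(np.concatenate(([1.0], -phi)))
is_stable = bool(np.all(np.abs(poles) < 1.0))
```

For these coefficients the poles form a complex pair of magnitude sqrt(0.5), so the filter is stable.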

To test the audio signals xaudio, 10 random signals were generated and an operator tried to determine the class of each signal, i.e., UAV or bird. An untrained operator was able to classify 50-70% of the audio signals correctly. After a couple of minutes of training, the operator was able to classify 85-95% of the audio signals correctly. These results are not statistically ensured; instead they give a rough idea of how suitable the audio signals are for classification with human hearing.


8 Discussion and Future Work

This thesis provides a method for extracting radar echoes from targets in raw radar data and for generating an audio output from these echoes. From these audio outputs, a radar operator is able to classify certain targets from the radar return. As an additional help for the operator, Edman has in his thesis [7] developed a feature-driven classification method based on the audio output.

8.1 Data set

The data sets provided for this thesis had issues. First, the raw data, or I/Q data (as for example presented in Figure 20), did not contain any information about where the target was. This issue was solved by an approximate synchronization between the raw data and another data set containing the targets' spatial positions at certain times. The approximate synchronization does not, however, guarantee that the target's exact position is pointed out in the raw data. Also, for some raw data plots there was no target position pointed out, which led to a reduced data set.

In several of the raw data plots, a distortion could be seen in the form of a pattern (seen in Figures 32a and 32b), which was generated by some interference during the data gathering. The distorted data samples were removed, which shrank the data set even further.


Figure 32: Illustration of distortion pattern in raw data.

After these reductions of the raw data sets, only the data set from one type of UAV was sufficiently large for further analysis. Hence only audio generated from one type of UAV is compared with audio generated from the birds. The classification made in Edman's thesis [7] is also based on only one UAV. The analysis of different UAV:s is left as future work, until a sufficient data set for other types of UAV:s is available.


8.2 Tx-modes

It was noted that the transmission modes (TX-modes) with low PRF gave a distinctly different spectrum of the echo signal compared to the other TX-modes. One explanation for why this occurs is that the higher frequencies are folded into the spectrum due to aliasing effects. Since the difference in the spectra leads to a different sound in the audio output, and no method for compensating for this effect was found, the two lowest TX-modes were disregarded and left for further work.

8.3 Extrapolation

As mentioned in Section 4.4.2, a perfect extrapolation is not possible, since the signals that are extrapolated contain noise. The extrapolated part of the generated signal will not contain noise; instead, it will have amplitude errors compared to a "true" signal. The signals that are extrapolated consist of a finite number of samples, hence there is an upper limit on how long an impulse response can be used, since the impulse response cannot be longer than the signal itself. But a long impulse response is required to model the whole frequency content of the signal, since modeling one frequency requires an impulse response of length two, as mentioned in Section 4.3. As can be seen in the bottom graphs of Figures 29a and 29b, the estimated impulse response length is not enough to generate all the frequencies in the spectra of the original signals (top graphs of Figures 29a and 29b); hence the spiky behavior of the spectra of the extrapolated signals in the bottom graphs of Figures 29a and 29b. One can also see that the sharp dips in the spectra in the top graphs of Figures 29a and 29b are approximated badly in the spectra in the bottom graphs, which is due to the extrapolation being an all-pole model, as described in Section 4.1.2.

Apart from the spikiness of the spectra of the extrapolated signals, they are quite similar to the spectra of the original signals, so the original and the extrapolated signals share some spectral properties. The difference between the original and extrapolated signals may not have a big impact on the result in this case, since the goal is to be able to hear differences between different targets and the spectra of both target types are changed in the same way. There is also no direct connection between the signal that is extrapolated and the radar echo, since a lot of signal processing has changed the radar signal, for example the frequency shift.

The output audio has some differences between the targets that one is able to hear. But there is an inconsistency between the signals: for some signals one can clearly hear the difference, but for others it is really hard. Hence some further work should be to make the audio signals more consistent. Note that the approximate synchronization could have given the wrong target position, which could have led to the inconsistent signals.


8.4 Future work

There is still a lot of work to be done before this method can be used by a radar operator to classify targets. Here is a summary of the future work that has to be done to make this possible.

First, this method must be tested on a bigger data set that contains different types of UAV:s. Then the TX-mode problem described above must be figured out, and also the inconsistency problem, if it is not solved by using a better data set. A continuous signal over more than one target illumination has to be produced, by some kind of merging of signals, so that the operator can continuously listen to the radar echo from the target.

To summarize, this project can be seen as a pilot for future work in the areaof radar classification. The methods and ideas developed in this thesis can serveas an inspiration for future work.


A Stationarity

When studying time series one often has to make some assumptions about the regularity of the series. A concept that describes the regularity of a time series is stationarity, defined in [17] in the following way.

Definition 3. A strictly stationary time series is one for which the probabilistic behavior of every collection of values

{xt1, xt2, . . . , xtk}

is identical to that of the time-shifted set

{xt1+h, xt2+h, . . . , xtk+h}.

That is,

P{xt1 ≤ c1, . . . , xtk ≤ ck} = P{xt1+h ≤ c1, . . . , xtk+h ≤ ck}

for all k = 1, 2, . . ., all time points t1, t2, . . . , tk, all numbers c1, c2, . . . , ck, and all time shifts h = 0, ±1, ±2, . . ..

Definition 4. A weakly stationary time series, xt, is a finite-variance process such that

(i) the mean value function, µxt, defined as

µxt = E(xt) = ∫_{−∞}^{∞} x ft(x) dx, (A.1)

is constant and does not depend on time t, and

(ii) the autocovariance function, rx(s, t), defined as

rx(s, t) = Cov(xs, xt) = E[(xs − µs)(xt − µt)], (A.2)

depends on s and t only through their difference |s − t|.

Due to the time independence of the mean function, E[yt] = µt, for a stationary time series, the notation µt = µ is used for stationary time series. Similarly, the autocovariance function r(s, t) depends on s and t only through their difference |s − t| if the time series is stationary. Hence the following holds for a stationary time series

r(s, t) = r(t + h, t) = Cov(yt+h, yt) = Cov(yh, y0) = r(h, 0) = r(h), (A.3)

where h = s − t. It can also be shown that for stationary time series it holds that r(h) = rH(−h) and r(0) ≥ |r(k)|, ∀k. For stationary processes the covariance matrix is defined as

Rm = | r(0)       rH(1)   · · ·  rH(m − 1) |
     | r(1)       r(0)    · · ·  rH(m − 2) |
     |  ...                ...   rH(1)     |
     | r(m − 1)   · · ·   r(1)   r(0)      |

   = E{ [yH(t − 1), . . . , yH(t − m)]T [y(t − 1), . . . , y(t − m)] }. (A.4)


The autocovariance function must be estimated for unknown signals. For a sampled signal {y(1), . . . , y(N)} the sample covariance can be computed by

r(k) = (1/N) Σ_{t=k+1}^{N} y(t) yH(t − k), 0 ≤ k ≤ N − 1, (A.5)

with only the assumption that the signal is stationary. As for the autocovariance function, the relation r(k) = rH(−k) holds for negative lags.
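Equation (A.5) in zero-based indexing is a single dot product per lag; `sample_autocov` is an illustrative helper name:

```python
import numpy as np

def sample_autocov(y, k):
    """Biased sample covariance r(k) of Equation (A.5), for lag k >= 0."""
    y = np.asarray(y)
    N = len(y)
    # Pair y(t) with the conjugate of y(t - k) and divide by N (not N - k).
    return np.dot(y[k:], np.conj(y[:N - k])) / N
```

Dividing by N rather than N − k keeps the resulting covariance matrix positive semidefinite, at the price of shrinking r(k) by the factor (N − k)/N.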

B Kaiser Window

A Kaiser window is given by the following equation

w[n] = I0( β √(1 − ((n − N/2)/(N/2))²) ) / I0(β), n = 0, . . . , N − 1, (B.1)

where β is the Kaiser window parameter and I0 is the zeroth-order modified Bessel function of the first kind, given by

I0(z) = Σ_{k=0}^{∞} (z²/4)^k / (k! Γ(k + 1)), (B.2)

where Γ is the Gamma function, Γ(n) = (n − 1)!.
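Equations (B.1) and (B.2) can be checked directly in code (note that Γ(k + 1) = k!, so the denominator in (B.2) is (k!)²). The helper names are illustrative; NumPy's built-in `np.kaiser` uses a slightly different symmetric convention with N − 1 in the denominator, so it agrees with this formula only approximately:

```python
import numpy as np
from math import factorial

def bessel_i0(z, terms=30):
    # Truncated series of Equation (B.2); Gamma(k + 1) = k!.
    return sum((z * z / 4.0) ** k / factorial(k) ** 2 for k in range(terms))

def kaiser_window(N, beta):
    # Equation (B.1) for n = 0, ..., N - 1.
    n = np.arange(N)
    arg = beta * np.sqrt(1.0 - ((n - N / 2) / (N / 2)) ** 2)
    return np.array([bessel_i0(v) for v in arg]) / bessel_i0(beta)

w = kaiser_window(64, beta=6.0)
```

The window peaks at 1 in the center bin (n = N/2, where the Bessel argument equals β) and tapers to 1/I0(β) at the edges.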


References

[1] Silvia Maria Alessio. Digital Signal Processing and Spectral Analysis for Scientists: Concepts and Applications. Springer, 2015.

[2] Karl J Åström. Introduction to Stochastic Control Theory. Courier Corporation, 2012.

[3] John Parker Burg. Maximum Entropy Spectral Analysis. PhD thesis, Stanford University, 1975.

[4] Victor C Chen. The Micro-Doppler Effect in Radar. Artech House, 2011.

[5] Charles Cook. Radar Signals: An Introduction to Theory and Application. Elsevier, 2012.

[6] MJL De Hoon, THJJ Van der Hagen, H Schoonewelle, and H Van Dam. Why Yule-Walker should not be used for autoregressive modelling. Annals of Nuclear Energy, 23(15):1219-1228, 1996.

[7] Sebastian Edman. Radar target classification using Support Vector Machines and Mel Frequency Cepstral Coefficients. Master's thesis, KTH, 2017.

[8] Per Enqvist. Spectral estimation by geometric, topological and optimization methods. PhD thesis, KTH, 2001.

[9] Hans Bodén, Kjell Ahlin, and Ulf Carlsson. Applied Signal Analysis. KTH Farkost och Flyg / MWL, 2014.

[10] Ismo Kauppinen, Jyrki Kauppinen, and Pekka Saarinen. A method for long extrapolation of audio signals. Journal of the Audio Engineering Society, 49(12):1167-1180, 2001.

[11] Ismo Kauppinen and Kari Roth. Audio signal extrapolation: theory and applications. In Proc. DAFx, pages 105-110, 2002.

[12] Guy Kouemou, Christoph Neumann, and Felix Opitz. Sound and dynamics of targets: fusion technologies in radar target classification. In Information Fusion, 2008 11th International Conference on, pages 1-7. IEEE, 2008.

[13] Yi-Wen Liu. Hilbert transform and applications. In Fourier Transform Applications. InTech, 2012.

[14] Bassem R Mahafza. Radar Systems Analysis and Design Using MATLAB. CRC Press, 2002.

[15] Thin Thin Mar, Su Su Yi Mon, et al. Pulse compression method for radar signal processing. International Journal of Science and Engineering Applications, 3:31-35, 2014.

[16] Lawrence R Rabiner. Multirate Digital Signal Processing. Prentice Hall PTR, 1996.

[17] Robert H Shumway and David S Stoffer. Time Series Analysis and Its Applications, volume 3. Springer, 2000.

[18] Graeme E Smith, Karl Woodbridge, and Chris J Baker. Template based micro-Doppler signature classification. In High Resolution Imaging and Target Classification, 2006. The Institution of Engineering and Technology Seminar on, pages 127-144. IET, 2006.

[19] Petre Stoica, Randolph L Moses, et al. Spectral Analysis of Signals, volume 452. Pearson Prentice Hall, Upper Saddle River, NJ, 2005.

[20] Marshall H Stone. The generalized Weierstrass approximation theorem. Mathematics Magazine, 21(5):237-254, 1948.

[21] Christian Wolff. Radar basics: Derivation of the Doppler-frequency formula.

[22] Christian Wolff. Radar basics: Classification of radar sets, radar frequency bands. Radartutorial.eu, 2014.

[23] Christian Wolff. Radar basics: Radar antennas. Radartutorial.eu, 2014.

[24] Christian Wolff. Radar basics: Radar basic principles. Radartutorial.eu, 2014.

[25] Christian Wolff. Radar basics: Radar transmitter. Radartutorial.eu, 2014.

[26] Qun Zhang, Ying Luo, and Yong-an Chen. Micro-Doppler Characteristics of Radar Targets. Elsevier, 2016.
