
Implementation of Adaptive and Synthetic-Aperture Processing Schemes in Integrated Active–Passive Sonar Systems

STERGIOS STERGIOPOULOS, SENIOR MEMBER, IEEE

Progress in the implementation of state-of-the-art signal-processing schemes in sonar systems is limited mainly by the moderate advancements made in sonar computing architectures and the lack of operational evaluation of the advanced processing schemes. Until recently, matrix-based processing techniques, such as adaptive and synthetic-aperture processing, could not be efficiently implemented in the current type of sonar systems, even though it is widely believed that they have advantages that can address the requirements associated with the difficult operational problems that next-generation sonars will have to solve. Interestingly, adaptive and synthetic-aperture techniques may be viewed by other disciplines as conventional schemes. For the sonar technology discipline, however, they are considered advanced schemes because of the very limited progress that has been made in their implementation in sonar systems.

This paper is intended to address issues of implementation of advanced processing schemes in sonar systems and also to serve as a brief overview of the principles and applications of advanced sonar signal processing. The main development reported in this paper deals with the definition of a generic beam-forming structure that allows the implementation of nonconventional signal-processing techniques in integrated active–passive sonar systems. These schemes are adaptive and synthetic-aperture beam formers that have been shown experimentally to provide improvements in array gain for signals embedded in partially correlated noise fields. Using target tracking and localization results as performance criteria, the impact and merits of these techniques are contrasted with those obtained using the conventional beam former.

Keywords—Acoustic scattering, adaptive signal processing, beam steering, beams, covariance matrices, delay estimation, digital signal processors, estimation, FIR digital filters, Fourier transforms, frequency domain analysis, frequency estimation, maximum likelihood estimation, multisensor systems, signal detection, sonar signal processing, sonar tracking, spectral analysis, synthetic-aperture sonar, underwater acoustic arrays.

Manuscript received May 15, 1996; revised September 1, 1997. This work was supported in part by the Natural Sciences and Engineering Research Council of Canada under NSERC Strategic Grant STR01810390 and NSERC Research Grant OGP0170406.

The author is with the Defence and Civil Institute of Environmental Medicine, North York, Ont. M3M 3B9 Canada (e-mail: [email protected]).

Publisher Item Identifier S 0018-9219(98)01298-5.

NOMENCLATURE

*  Complex conjugate transpose operator.
Power spectral density of a signal.
AG  Array gain.
Small positive number designed to maintain stability in the normalized least mean square adaptive algorithm.
Narrow-band beam-power pattern of a line array.
Broad-band beam-power pattern of a line array steered at a given direction.
Beams for conventional or adaptive beam formers, or plane-wave response of a line array steered at a given direction.
BW  Signal bandwidth.
Signal blocking matrix in the generalized side-lobe canceller adaptive algorithm.
Speed of sound in the underwater sea environment.
CFAR  Constant false alarm rate.
Steering vector having its nth phase term for a plane wave arriving at a given angle.
Detection index of the receiver operating characteristic curve.
Sensor spacing for a line array receiver.
DT  Detection threshold.
Expectation operator.
ETAM  Extended towed array measurements.
Noise vector whose nth element corresponds to the nth sensor output.


Frequency in hertz.
Sampling frequency.
GSC  Generalized side-lobe canceller.
Vector of weights for spatial shading in the beam-forming process.
Unit vector of ones.
Index of time samples of a hydrophone time series.
Iteration number of the adaptation process.
Wave-number parameter.
Size (length) of the line array.
Wavelength of an acoustic signal with frequency f, where λ = c/f.
LCMV  Linearly constrained minimum variance.
Number of time samples in each hydrophone time series.
Convergence controlling parameter, or "step size," for the normalized least mean square algorithm.
MVDR  Minimum variance distortionless response.
Number of hydrophones in the line array receiver.
Noise energy flux density at the receiving array.
Index for space samples of the hydrophone time series.
NLMS  Normalized least mean square.
OMI  Operator-machine interface.
Probability of detection for receiver operating characteristic curves.
Probability of false alarm for receiver operating characteristic curves.
Steered spatial covariance matrix in the time domain.
π  3.14159….
Spatial correlation matrix with elements formed from the received hydrophone time series.
Cross-correlation coefficients.
ROC  Receiver operating characteristic curve.
Spatial correlation matrix for the plane wave signal.
Spatial correlation matrix for the plane wave signal in the frequency domain, with its nth row and mth column defined from the steering-vector terms.
Signal vector whose nth element is the plane-wave signal component at the nth hydrophone.
Power spectral density of the noise.
S  Source energy flux density at a range of 1 m from the acoustic source.
STCM  Steered covariance matrix.
STMV  Steered minimum variance.
SVD  Singular value decomposition method.
Time delay between the first and the nth hydrophone of the line array for an incoming plane wave with a given direction of propagation.
Diagonal steering matrix whose elements are those of the steering vector.
TL  Propagation loss for the range separating the source and the sonar array receiver.
Angle of plane wave arrival with respect to the line array receiver.
Adaptive steering vector.
Frequency in radians per second.
Beam time series: output of the time-domain beam former, or formed by using fast Fourier transforms and fast convolution in frequency-domain beam formers (via an inverse FFT).
Mean acoustic intensity of the hydrophone time sequences at a frequency bin.
Fourier transform of the received hydrophone time series.
Presteered hydrophone time series in the frequency domain.
Row vector of the received N-hydrophone time series.
Presteered hydrophone time series in the time domain.
Result of the signal blocking matrix C being applied to the presteered hydrophone time series.

I. INTRODUCTION

Several review articles [1]–[4] on sonar system technology have provided a detailed description of the mainstream sonar signal-processing functions along with the associated implementation considerations. This paper attempts to extend the scope of these articles by introducing an implementation effort of nonmainstream processing schemes in real-time sonar systems. The organization of this paper is as follows.

The first section provides a historical overview of sonar systems and introduces the concept of the signal-processor unit and its general capabilities. This section also outlines the practical importance of the topics to be discussed in subsequent sections, defines the sonar problem, and provides an introduction to the organization of the paper.

In Section II, we discuss very briefly a few issues of space-time signal processing related to detection and procedures for estimating sources' parameters. Section III deals with optimum estimators for sonar signal processing. It introduces various nonconventional processing schemes (adaptive and synthetic-aperture beam formers) and the practical issues associated with the implementation of these advanced processing schemes in sonar systems. Our intent here is not to be exhaustive but only to be illustrative of how the receiving array, the underwater medium, and the subsequent signal processing influence the performance of a sonar system. Issues of practical importance, related to system-oriented applications, are also addressed, and generic approaches are suggested that could be considered for the development of next-generation sonar signal-processing concepts. These generic approaches are then applied to the central problem that sonar systems deal with, that is, detection and estimation.

Section IV introduces the development of a realizable generic processing scheme that allows the implementation and testing of nonlinear processing techniques in a wide spectrum of real-time sonar systems. The computing architecture requirements for future sonar systems are addressed in the same section. It identifies the matrix operations associated with high-resolution and adaptive signal processing and discusses their numerical stability and implementation requirements. The mapping of matrix operations onto sonar signal processors includes specific topics such as QR decomposition, Cholesky factorization, and singular-value decomposition for solving least-squares and eigensystem problems. Schematic diagrams also illustrate the mapping of the signal-processing flow for the advanced beam formers onto sonar computing architectures.

Last, a concept demonstration of the above developments is presented in Section V, which provides real data outputs from an advanced beam-forming structure incorporating adaptive and synthetic-aperture beam formers.

A. Overview of a Sonar System

To provide a context for the material contained in this paper, it would seem appropriate to review briefly the basic requirements of a high-performance sonar system. A sonar (sound, navigation, and ranging) system is defined as a "method or equipment for determining by underwater sound the presence, location, or nature of objects in the sea" [5]. This is equivalent to detection, localization, and classification.

Fig. 1 shows one possible high-level view of a generic sonar system. It consists of a wet end, a high-speed signal processor to provide mainstream signal processing for detection and initial parameter estimation, a data manager, which supports the data and information processing functionality of the system, and a display subsystem through which the system operator can interact with the data structures in the data manager to make the most effective use of the resources at his command. In this paper, we will be limiting our comments to the dry end, which consists of the algorithms and the processing architectures required for their implementation. The wet end includes devices of varying degrees of complexity that sense the existence of an

Fig. 1. Overview of a generic sonar system. It consists of a wet end, a high-speed signal processor to provide mainstream signal processing for detection and initial parameter estimation, a data manager, which supports the data and information processing functionality of the system, and a display subsystem through which the system operator can interact with the manager to make the most effective use of the information available at his command.

underwater acoustic signal. These devices are hydrophone arrays having cylindrical, spherical, plane, or line geometric configurations that are housed inside domes of naval ships or towed at a depth behind a vessel. Quantitative estimates of the various benefits that result from the deployment of arrays of hydrophones are obtained from the array gain term, which will be the subject of our discussion in Section III. Hydrophone array design concepts, however, are beyond the scope of this paper, and readers interested in sonar transducers can refer to other publications on the topic [6], [7].

The signal processor is probably the single most important component in the dry end of a sonar system. To satisfy the basic requirements, the processor normally incorporates three fundamental operations: beam forming, matched filtering, and data normalization. The first two processes are used to improve both the signal-to-noise ratio (SNR) and parameter estimation capability through spatial and temporal processing techniques.

Data normalization is required to map the resulting data into the dynamic range of the display devices in a manner that provides a CFAR capability across the analysis cells. In what follows, each subsystem, shown in Fig. 1, is examined very briefly by associating the evolution of its functionality and characteristics with the corresponding technological developments.

B. Signal Processor

The implementation of signal-processing concepts in sonar systems is heavily dependent on the characteristics of the sonar computing architecture, and therefore it is limited by the progress made in computing architectures. While the mathematical foundations of the signal-processing algorithms have been known for many years, it was the introduction of the microprocessor and high-speed multiplier-accumulator devices in the early 1970's that heralded the turning point in the development of digital sonars. The first systems were primarily fixed-point machines with limited dynamic range and hence were constrained to use conventional


beam-forming and filtering techniques [1], [3], [8]. As floating-point central processing units (CPU's) and supporting memory devices were introduced in the mid- to late 1970's, multiprocessor digital sonar computing architectures and modern signal-processing algorithms could be considered for implementation in real-time systems. This major breakthrough expanded in the 1980's into massively parallel architectures supporting multisensor requirements.

Recently, new scalable computing architectures, which support both scalar and vector operations satisfying the high input/output bandwidth requirements of large multisensor systems, are becoming available [9]. Announced very recently was the successful development of superscalar and massively parallel signal-processing computers that have throughput capabilities of hundreds of billions of floating-point operations per second [10]. This has resulted in a resurgence of interest in algorithm development of new covariance-based, high-resolution, adaptive [11]–[13] and synthetic-aperture beam-forming algorithms [14]–[20] and time-frequency analysis techniques [21].

In many cases, these new algorithms trade robustness for improved performance [11], [12], [22], [23]. Furthermore, the improvements achieved are generally not uniform across all signal and noise environments and operational scenarios. The challenge is to develop a concept that allows an appropriate mixture of these algorithms to be implemented in practical sonars [22], [23]. The advent of new adaptive processing techniques is only the first step in the utilization of a priori information as well as more detailed environmental information. Of particular interest is the rapidly growing field of matched field processing (MFP) [24], [25]. The use of linear models will also be challenged by techniques that utilize higher order statistics [21], neural networks [26], fuzzy systems [27], chaos, and other nonlinear approaches. These concerns have been discussed very recently [9] in a special issue of the IEEE JOURNAL OF OCEANIC ENGINEERING devoted to sonar system technology. A detailed examination of MFP can also be found in the July 1993 issue of the above journal, which was devoted to detection and estimation in MFP [25].

C. Data Manager and Display Subsystem

Processed acoustic data at the output of the mainstream signal-processing system must be stored in a temporary data base before it is presented to the sonar operator for analysis. Until very recently, owing to the physical size and cost associated with constructing large acoustic data bases, the data manager played a relatively small role in the overall capability of the sonar system. However, with the dramatic drop in the cost of solid-state memories and the introduction of more powerful microprocessors in the 1980's, the role of the data manager has now been expanded to incorporate localization, tracking, and classification functionality in addition to its traditional display data management functions.

Normally, the processing and integration of information from the sensor level to a command and control level includes a few distinct processing steps. Fig. 2 shows a simplified overview of the integration of four different levels of information for a sonar system. These levels consist mainly of:

• navigation and nonacoustic data from receiving sensor arrays;

• environmental information and estimation of propagation characteristics in order to assess the medium's influence on sonar system performance;

• signal processing of received acoustic signals that provides parameter estimation in terms of bearing, range, and temporal spectral estimates for detected signals;

• signal following (tracking) and localization that monitors the time evolution of a detected signal's estimated parameters.

This last tracking and localization capability allows the sonar operator rapidly to assess the data from a multisensor system and carry out the processing required to develop an acoustically based tactical picture for integration into the platform-level command and control system.

To allow the data bases to be searched effectively, a high-performance OMI is required. These interfaces are beginning to draw heavily on modern workstation technology through the use of windows, on-screen menus, etc. Large flat-panel displays driven by graphic engines, which are equally adept at pixel manipulation as they are with three-dimensional object manipulation, will be critical components in future systems. It should be evident by now that the term "data manager" describes a level of functionality that is well beyond simple data management. The data manager facility applies technologies ranging from relational data bases, neural networks [26], and fuzzy systems [27] to expert systems [9], [26]. The problems it addresses can be variously characterized as signal, data, or information processing.

In the past, improving the dry end of a sonar system was synonymous with the development of new signal-processing algorithms and faster hardware. While advances will continue to be made in these areas, future developments in data (contact) management represent one of the most exciting avenues of research in the development of high-performance systems.

One aspect of this development is associated with the operational requirement for the operator to be able to assess numerous detected signals rapidly in terms of localization, tracking, and classification in order to pass the necessary information up through the chain of command so that tactical decisions can be made in a timely manner. Thus, an assigned task for a data manager would be to provide the operator with quick and easy access to both the output of the signal processor, which is called the acoustic display, and the tactical display, which will show localization and tracking information through graphical interaction between the acoustic and tactical displays.

It is apparent from the above that for a next-generation sonar system, emphasis should be placed on the degree of interaction between the operator and the system through


Fig. 2. A simplified overview of the integration of four different levels of information from the sensor level to a command and control level for a sonar system. These levels consist mainly of 1) navigation and nonacoustic data from receiving sensor arrays, 2) environmental information to assess the medium's influence on sonar system performance, 3) signal processing of received acoustic signals that provides parameter estimation in terms of bearing, range, and temporal spectral estimates for detected signals, and 4) signal following (tracking) and localization of detected targets.

an operator-machine interface, as shown schematically in Fig. 1. Through this interface, the operator may selectively proceed with localization, tracking, and classification tasks.

Even though the processing steps of radar and airborne systems associated with localization, tracking, and classification have conceptual similarities with those of a sonar system, the processing techniques that have been successfully applied in airborne systems have not been successful with sonar systems. This is a typical situation that indicates how hostile, in terms of signal propagation characteristics, the underwater environment is with respect to the atmospheric environment. It is our belief that technologies

associated with data fusion, neural networks, knowledge-based systems, and automated parameter estimation will provide solutions to the very difficult operational sonar problem regarding localization, tracking, and classification.

In summary, the main focus of the assigned tasks of a modern sonar system would vary from the detection of signals of interest in the open ocean to the detection of very quiet signals in very cluttered underwater environments, which could be shallow coastal sea areas. These varying degrees of complexity of the above tasks, however, can be grouped together quantitatively, and this will be the topic of our discussion in the following section.


D. The Sonar Problem

A convenient and accurate integration of the wide variety of effects of the underwater environment, the target's characteristics, and the sonar system's design parameters is provided by the sonar equation [8]. Since World War II, the sonar equation has been used extensively to predict detection performance and to assist in the design of sonar systems. It combines, in logarithmic units (i.e., units of decibels relative to the standard reference of the energy flux density of an rms pressure of 1 μPa integrated over a period of one second), the following terms:

(S − TL) − (N − AG) − DT ≥ 0    (1)

where the left-hand side defines the signal excess and:

S    is the source energy flux density at a range of 1 m from the source;
TL   is the propagation loss for the range separating the source and the sonar array receiver; thus, the term (S − TL) expresses the recorded signal energy flux density at the receiving array;
N    is the noise energy flux density at the receiving array;
AG   is the array gain, which provides a quantitative measure of the coherence of the signal of interest with respect to the coherence of the noise across the line array (see Section III-B);
DT   is the detection threshold associated with the decision process, which defines the SNR at the receiver input required for a specified probability of detection and false alarm.

A detailed discussion of the term DT and the associated statistics is given in [8] and [28]–[30]. Very briefly, the parameters that define the detection threshold values for a sonar system are the following.

• The time-bandwidth product, which defines the integration time of the signal processing. This product consists of the time-series length used for coherent processing, such as the fast Fourier transform (FFT), and the incoherent averaging of the power spectra over successive blocks. The reciprocal of the FFT length defines the bandwidth of a single frequency cell. An optimum signal-processing scheme should match the acoustic signal's bandwidth with that of the FFT length in order to achieve the predicted DT values.

• The probabilities of detection and false alarm, which define the confidence that the correct decision has been made.

Improved processing gain can be achieved by incorporating segment overlap, windowing, and FFT zero extension, as discussed by Welch [31] and Harris [32]. In summary, the definition of DT is given by [8]

DT = 10 log(S/N) = 5 log(d · BW / t)    (2)

where N is the noise power in a 1-Hz band, S is the signal power in the bandwidth BW, t is the integration period in displays during which the signal is present, and d is the detection index of the ROC curves defined for specific values of the probabilities of detection and false alarm [8], [28]. Typical values for the above parameters in the term DT that are considered in real-time sonar systems are: … Hz for … and … s.

For values of TL for which (1) becomes an equality, we have

FoM = S − (N − AG) − DT    (3)

where the new term FoM (figure of merit) equals the transmission loss TL and gives an indication of the range at which a sonar can detect its target.
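As a purely illustrative numerical example with hypothetical round numbers (not taken from the paper): if, in the decibel units of (1), S = 130 dB, N = 65 dB, AG = 15 dB, and DT = 10 dB, then (3) gives FoM = 130 − (65 − 15) − 10 = 70 dB; that is, the sonar would be expected to detect the target out to the range at which the propagation loss TL reaches about 70 dB.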

The noise term N in (1) includes the total or composite noise received at the array input of a sonar system and is the linear sum of all the components of the noise processes, which are assumed independent. Detailed discussions of the noise processes related to sonar systems are beyond the scope of this paper, however, and readers interested in these noise processes can refer to other publications on the topic [8], [33]–[39].

When taking the sonar equation as the common guide as to whether the processing concepts of a sonar system will give improved performance against very quiet targets, the following issues become very important and appropriate.

• During sonar operations, the terms S, N, and AG are beyond the sonar operators' control because S and N are given as parameters of the sonar problem and AG is associated mainly with the design of the array receiver. The signal-processing parameters in (2) that influence DT are adjusted by the sonar operators so that DT will have the maximum positive impact in improving the FoM of a sonar system.

• The quantity (N − AG) in (1) and (3), however, provides opportunities for sonar performance improvements by increasing the term AG (e.g., deploying large-size array receivers or using new signal-processing schemes) and by minimizing the term N (e.g., using adaptive processing that takes into consideration the directional characteristics of the noise field and reduces the impact of the sensor array's self-noise levels).

Our emphasis in this paper will be focused on the minimization of the quantity (N − AG). This will result in new signal-processing schemes designed to achieve a desired level of performance improvement for the specific case of a line array sonar system.

II. THEORETICAL REMARKS

Sonar operations can be carried out by a wide variety of naval platforms, as shown in Fig. 3. This includes surface vessels, submarines, and airborne systems, such as airplanes and helicopters. Shown also in Fig. 3 is a schematic representation of active and passive sonar operations in an underwater sea environment. Active sonar


Fig. 3. Schematic representation of active and passive sonar operations for a wide variety of naval platforms in an underwater sea environment.

operations involve the transmission of well-defined acoustic signals, called replicas, which illuminate targets in an underwater sea area. The reflected acoustic energy from a target provides the sonar array receiver with a basis for detection and estimation. Passive sonar operations base their detection and estimation on acoustic sounds that emanate from submarines and ships. Thus, in passive systems, only the receiving sensor array is under the control of the sonar operators. In this case, major limitations in detection and classification result from imprecise knowledge of the characteristics of the target's radiated acoustic sounds.

The passive sonar concept can be made clearer by comparing sonar systems with radars, which are always active. Another major difference between the two systems arises from the fact that sonar system performance is more affected than that of radar systems by the propagation characteristics of the underwater medium. All the above issues have been discussed in several review articles [1]–[4] that form a good basis for interested readers to become familiar with "mainstream" sonar signal-processing developments. Therefore, discussions of issues of conventional sonar signal processing, detection, estimation, and the influence of the medium on sonar system performance are beyond the scope of this paper. Only a very brief overview of the above issues will be highlighted in this section in order to define the basic terminology required for the presentation of the main theme of this paper.

Let us start with a basic system model that reflects the interrelationships between the target, the underwater sea environment (medium), and the receiving sensor array of a sonar system. A schematic diagram of this basic system is shown in Fig. 4, where sonar signal processing is shown to be two dimensional (2-D) [1], [12], [40] in the sense that it involves both temporal and spatial spectral analysis. The temporal processing provides spectral characteristics that are used for target classification, and the spatial processing provides estimates of the directional characteristics (i.e., bearing and possibly range) of a detected signal. Thus, space-time processing is the fundamental processing concept in sonar systems, and it will be the subject of the next section.

A. Space-Time Processing

Let us consider a combination of N equally spaced acoustic transducers in a linear array, which may form a towed or hull-mounted array system that can be used to estimate the directional properties of echoes and acoustic signals. As shown in Fig. 4, a direct analogy between sampling in space and sampling in time is a natural extension of sampling theory in space-time signal representation, and this type of space-time sampling is the basis of the array design that provides a description of a sonar array system's response. When the sensors are arbitrarily distributed, each element will have an added degree of freedom, which is its position along the axis of the array. This is analogous to nonuniform temporal sampling of a signal. In this paper, we restrict our discussion to line array systems. Sources of sound that are of interest in sonar system applications are harmonic narrow-band and broad-band and satisfy the wave equation [1], [40]. Furthermore, their solutions have the property that their associated temporal-spatial characteristics are separable [40]. Therefore, measurements of the pressure field, which is excited by acoustic source signals, provide the spatial-temporal output response of the measurement system; the relevant position vector refers to the source-sensor relative position, and t is the time. The output response is the convolution of the pressure field with the line array system response [40], [41]

(4)

where the operator denotes convolution. Since the pressure field is defined at the input of the receiver, it is the convolution of the source's


Fig. 4. A model of space-time signal processing. It shows that sonar signal processing is two dimensional in the sense that it involves both temporal and spatial spectral analysis. The temporal processing provides characteristics for target classification, and the spatial processing provides estimates of the directional characteristics (bearing and possibly range) of a detected signal.

characteristics with the underwater medium's response

(5)

Fourier transformation of (4) provides

(6)

where the frequency and wave-number parameters define the temporal and spatial spectra of the transform functions in (4) and (5). Signal processing, in terms of beam-forming operations, of the receiver's output provides estimates of the source bearing and possibly of the source range. This is a well-understood concept of the forward problem, which is concerned with determining the parameters of the received signal given that we have information about the other two functions (the source characteristics and the medium's response) [4]. The inverse problem is concerned with determining the parameters of the impulse response of the medium by extracting information from the received signal, assuming that the source function is known [4]. The sonar and radar problems, however, are quite complex and include both forward and inverse problem operations. In particular, detection, estimation, and tracking-localization processes of sonar systems are typical examples of the forward problem, while target classification for passive–active sonars is a typical example of the inverse problem. In general, the inverse problem is a computationally very costly operation. Typical examples in acoustic signal processing are seismic deconvolution and acoustic tomography.
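To make the space-time sampling picture concrete, the minimal Python sketch below computes the relative wavefront delays across a uniformly spaced line array for a plane-wave arrival. The half-wavelength spacing, the 1500 m/s sound speed, and the convention that the angle is measured from the array axis (90 degrees is broadside) are illustrative assumptions, not values taken from the paper.

import numpy as np

# Hypothetical parameters: a 32-element line array designed for 500 Hz,
# i.e., sensors spaced half a wavelength apart to avoid spatial aliasing.
c = 1500.0                 # assumed speed of sound in water (m/s)
f_design = 500.0           # assumed design frequency (Hz)
N = 32                     # assumed number of hydrophones
delta = (c / f_design) / 2.0

def plane_wave_delays(theta_deg, n_sensors=N, spacing=delta, speed=c):
    """Relative time delays between the first and the nth hydrophone for a
    plane wave arriving at angle theta, measured from the array axis."""
    n = np.arange(n_sensors)
    return n * spacing * np.cos(np.radians(theta_deg)) / speed

print(plane_wave_delays(60.0)[:5])   # delays (seconds) for the first 5 sensors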

B. Definition of Basic Parameters

This section outlines the context in which the sonar problem can be viewed in terms of models of acoustic signals and noise fields. The signal-processing concepts that are discussed in this paper have been included in sonar and radar investigations with sensor arrays having circular, planar, cylindrical, and spherical geometric configurations [101]. For geometrical simplicity and without any loss of generality, we consider here an N-hydrophone line array receiver with uniform sensor spacing, as shown in Fig. 4. The output of the nth sensor is a time series whose M samples span the observation interval. The symbol "*" denotes complex conjugate transposition, so that the received N-hydrophone time series can be arranged as a row vector. Each sensor time series is the sum of a signal component and a noise component, and these components can likewise be collected into signal and noise column vectors of the sensor outputs. The Fourier transform of each sensor time series is evaluated at the signal frequency of interest; c is the speed of sound in the underwater medium, and the wavelength is that of the signal frequency. The spatial correlation matrix of the signal vector has its elements expressed by

(7)

where E denotes expectation, and

(8)


is the time delay between the first and the nth hydrophone of the line array for an incoming plane wave with direction of propagation θ, as illustrated in Fig. 4. In the frequency domain, the spatial correlation matrix for the plane wave signal is defined by

(9)

where the power spectral density of the plane wave signal is taken at the given frequency bin and the steering vector has its nth phase term for the plane wave arrival with angle θ expressed by

(10)

where f_s is the sampling frequency. Then the matrix in (9) has its nth row and mth column defined by the corresponding products of the steering-vector phase terms. Moreover, the spatial correlation matrix of the received hydrophone time series has elements indexed over sensor pairs, and the spatial correlation matrix of the noise for the given frequency bin is defined in terms of the power spectral density of the noise. In what is considered an estimation procedure in this paper, the associated problem of detection is defined in the classical sense as a hypothesis test that provides a detection probability and a probability of false alarm. This choice of definition is based on the standard CFAR processor, which is based on the Neyman–Pearson criterion [28]. The CFAR processor provides an estimate of the ambient noise or clutter level so that the threshold can be varied dynamically to stabilize the false alarm rate. Ambient noise estimates for the CFAR processor are provided mainly by noise normalization techniques [42]–[45] that account for the slowly varying changes in the background noise or clutter. The above estimates of the ambient noise are based upon the average value of the received signal, the desired probability of detection, and the probability of false alarms.
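As an illustration of how a spatial correlation matrix of the kind used in (9) might be estimated in practice, the sketch below averages outer products of FFT snapshots across the sensors. The snapshot-averaging estimator, the variable names, and the toy data are assumptions for illustration only, not the paper's implementation.

import numpy as np

def spatial_correlation_matrix(snapshots):
    """Estimate the N x N spatial correlation matrix for one frequency bin
    from K FFT snapshots of the N sensor time series. `snapshots` has shape
    (K, N): one row per snapshot, one column per hydrophone. Averaging over
    snapshots approximates the expectation operator."""
    K = snapshots.shape[0]
    return (snapshots.T @ snapshots.conj()) / K   # Hermitian estimate of E[x x^H]

# Toy usage with simulated complex white-noise snapshots
rng = np.random.default_rng(0)
X = (rng.standard_normal((100, 32)) + 1j * rng.standard_normal((100, 32))) / np.sqrt(2)
R = spatial_correlation_matrix(X)
print(R.shape, np.allclose(R, R.conj().T))   # (32, 32) True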

Furthermore, optimum beam forming, which is basically a spatial filter, requires the beam-forming filter coefficients to be chosen based on the covariance matrix of the data received by the N-sensor array in order to optimize the array response [46], [47]. The family of algorithms for optimum beam forming that use the characteristics of the noise are called adaptive beam formers [2], [11], [12], [46]–[49]. A detailed definition of an adaptation process requires knowledge of the correlated noise's covariance matrix. If the required knowledge of the noise's characteristics is inaccurate, however, the performance of the optimum beam former will degrade dramatically [12], [49]. As an example, the case of cancellation of the desired signal is often typical and significant in adaptive beam-forming applications [12], [50]. This suggests that the implementation of useful adaptive beam formers in real-time operational systems is not a trivial task. The existence of numerous articles on adaptive beam forming suggests the dimensions of the difficulties associated with this kind of implementation.

To minimize the generic nature of the problems associated with adaptive beam forming, the concept of partially adaptive beam-former design was introduced. This concept reduces the degrees of freedom, which results in lowering the computational requirements and often improving the adaptive response time [11], [12]. However, the penalty associated with the reduction of the degrees of freedom in partially adaptive beam formers is that they cannot converge to the same optimum solution as the fully adaptive beam former.

Although a review of the various adaptive beam formers would seem relevant at this point, we believe that this is not necessary since there are excellent review articles [2], [4], [11]–[13], [24] that summarize the points that have been considered for this experimental study. There are two main families of adaptive beam formers: GSC's and LCMV's. A special case of the LCMV is Capon's maximum likelihood method [48], which is called MVDR [11], [12], [48], [49]. This algorithm has proven to be one of the more robust of the adaptive array beam formers and has been used by numerous researchers as a basis to derive other variants of MVDR [12]. In this paper, we will address implementation issues for various partially adaptive variants of the MVDR method and a GSC adaptive beam former [51]–[53], which are discussed in Section III-C.

At this point, a brief discussion on the fundamentals of detection and estimation processes is required in order to address implementation issues of signal-processing schemes in sonar systems.

C. Detection and Estimation

The problem of detection [28]–[30] is much simpler than the problem of estimating one or more parameters of a detected signal. Classical decision theory [8], [28]–[30], [54] treats signal detection and signal estimation as separate and distinct operations. A detection decision as to the presence or absence of the signal is regarded as taking place independently of any signal parameter or waveform estimation that may be indicated as the result of the detection decision. However, interest in joint or simultaneous detection and estimation of signals arises frequently. Middleton and Esposito [55] have formulated the problem of simultaneous optimum detection and estimation of signals in noise by viewing estimation as a generalized detection process. Practical considerations, however, require different cost functions for each process [55]. As a result, it is more effective to retain the usual distinction between detection and estimation.

Estimation, in passive sonar systems, includes both the temporal and spatial structure of an observed signal field. For active systems, correlation processing and Doppler (for moving target indications) are major concerns that define the critical distinction between these two approaches (i.e., passive, active) to sonar processing. In this paper, we restrict our discussion to topics related to spatial signal processing for estimating signal parameters. However, spatial signal processing has a direct representation that is analogous to the frequency-domain representation of temporal signals. Therefore, the spatial signal-processing concepts discussed here have direct applications to temporal spectral analysis.


Typically, the performance of an estimator is represented as the variance in the estimated parameters. Theoretical bounds associated with this performance analysis are specified by the Cramer–Rao bound [28]–[30], and this has led to major research efforts by the sonar signal-processing community to define the idea of an optimum processor for discrete sensor arrays [17], [19], [56]–[60]. If the a priori probability of detection is close to unity, then the minimum variance achievable by any unbiased estimator is provided by the Cramer–Rao lower bound (CRLB) [28], [29], [55]. In this case, if there exists a signal processor that achieves the CRLB, it will be the maximum-likelihood estimation technique. The above requirement on the a priori probability of detection is very essential because if it is less than one, then the estimation is biased and the theoretical CRLB's do not apply. This general framework of optimality is very essential in order to account for Middleton's [29] warning that a system optimized for one function (detection or estimation) may not necessarily be optimized for the other.

For a given model describing the received signal of a sonar system, the CRLB analysis can be used as a tool to define the information inherent in a sonar system. This is an important step related to the development of the signal-processing concept for a sonar system as well as in defining the optimum sensor configuration arrangement under which we can achieve, in terms of system performance, the optimum estimation of the signal parameters of interest. This approach has been applied successfully to various studies related to the present development [17], [19], [56]–[60].

The next question that needs to be addressed concerns the unbiased estimator that can exploit this available information and provide results asymptotically reaching the CRLB's. For each estimator, it is well known that there is a range of SNR in which the variance of the estimates rises very rapidly as SNR decreases. This effect, which is called the threshold effect of the estimator, determines the range of SNR of the received signals for which the parameter estimates can be accepted. In passive sonar systems, the SNR of signals of interest is often quite low and probably below the threshold value of an estimator. In this case, high-frequency resolution in both the time and spatial domains is required for the parameter estimation of narrow-band signals. In other words, the threshold effect of an estimator determines the frequency resolution for processing and the size of the towed array required in order to detect and estimate signals of interest that have very low SNR [17], [18], [53], [61], [62]. The CRLB analysis has been used in the present study to evaluate and compare the performance of the various nonconventional processing schemes [17], [18], [53], [61], [62] that have been considered for implementation in the generic beam-forming structure to be discussed in Section IV-A.

III. OPTIMUM ESTIMATORS FOR SONAR SIGNAL PROCESSING

The purpose of this section is to outline very briefly the processing schemes that have been considered in this experimental study. These schemes are conventional, adaptive, and synthetic-aperture methods, which are introduced in Sections III-A, III-C, III-D, and III-F by outlining their sonar-related implementation considerations. Briefly, these considerations mainly include estimation (after detection) of the source's bearing, which is the main concern in sonar array systems because in most sonar applications the acoustic signal's wave fronts tend to be planar, which assumes distant sources. Passive ranging by measurement of wave-front curvature is not appropriate for the far-field problem. The range estimate of a distant source, in this case, must be determined by the various target-motion analysis methods discussed in Section V-A, which addresses the localization-tracking performance of nonconventional beam formers with real data. In general, sonar array processing includes a large number of algorithms and systems that are quite diverse in concept. There is a basic point that is common to all of them, however, and this is the beam-forming process, which we are going to examine next.

A. Conventional Beam Forming

Previous studies [63] have shown that the conventional beam former (CBF) without shading is the optimum beam former for bearing estimation and that its variance estimates achieve the CRLB. The narrow-band CBF is defined by [12]

(11)

where the nth term of the steering vector for the beam steering direction θ is given by (10). Equation (11) is basically a mathematical interpretation of Fig. 4 and shows that a line array is effectively a spatial filter, because by steering a beam in a particular direction we spatially filter the signal coming from that direction, as illustrated in Fig. 4. On the other hand, (11) is fundamentally a discrete Fourier transform relationship between the hydrophone weightings and the beam pattern of the line array, and as such it is a computationally very efficient operation. However, (11) can be generalized for nonlinear two- and three-dimensional arrays, which is discussed in [101].
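A minimal Python sketch of a narrow-band conventional (delay-and-sum) beam former in the spirit of (10) and (11) is given below. The phase-sign convention, the angle measured from the array axis, and all numerical values are assumptions for illustration, not the paper's exact formulation.

import numpy as np

c = 1500.0   # assumed sound speed (m/s)

def steering_vector(f, theta_deg, n_sensors, spacing):
    """Steering vector whose nth phase term corresponds to the plane-wave
    delay of the nth hydrophone at frequency f and arrival angle theta."""
    n = np.arange(n_sensors)
    tau = n * spacing * np.cos(np.radians(theta_deg)) / c
    return np.exp(-2j * np.pi * f * tau)

def cbf_beam(Xf, f, theta_deg, spacing):
    """Narrow-band conventional beam: inner product of the steering vector
    with the sensor FFT values Xf at frequency f; |.|**2 gives beam power."""
    d = steering_vector(f, theta_deg, Xf.size, spacing)
    return np.vdot(d, Xf)

# Usage sketch: beam power versus steering angle for one frequency bin
rng = np.random.default_rng(1)
N, spacing, f = 32, 1.5, 400.0
Xf = steering_vector(f, 70.0, N, spacing) + 0.1 * (rng.standard_normal(N) + 1j * rng.standard_normal(N))
powers = [abs(cbf_beam(Xf, f, th, spacing))**2 for th in range(0, 181, 2)]
print(int(2 * np.argmax(powers)))   # should peak near 70 degrees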

As an example, let us consider a distant monochromatic source. Then the plane wave signal arriving from direction θ and received by an N-hydrophone line array is expressed by (10). The plane-wave response of this line array steered at direction θ can be written as follows:

(12)

and the beam-power pattern is given by the squared magnitude of the plane-wave response. Then, the beam-power pattern takes the form

(13)


where the spacing between the nth and mth hydrophones enters the phase terms. As a result of (10), the expression for the beam-power pattern is reduced to

(14)

Let us consider, for simplicity, that the source bearing is at array broadside and that the array size is L = (N − 1)δ, where δ is the sensor spacing. Then (11) is modified as [3], [40]

(15)

which is the far-field radiation or directivity pattern of the line array as opposed to near-field regions. The results in (14) and (15) are for a perfectly coherent incident acoustic signal, and an increase in array size L results in additional power output and a reduction in beamwidth. The side-lobe structure of the directivity pattern of a line array, which is expressed by (14), can be suppressed at the expense of a beamwidth increase by applying different weights. The selection of these weights will act as spatial filter coefficients with optimum performance [4], [11], [12]. There are two different approaches to selecting the above weights: pattern optimization and gain optimization. For pattern optimization, the desired array response pattern is selected first. A desired pattern is usually one with a narrow main lobe and low side lobes. The weighting or shading coefficients in this case are real numbers from well-known window functions that modify the array response pattern. Harris' review [32] on the use of windows in discrete Fourier transforms and temporal spectral analysis is directly applicable in this case to spatial spectral analysis for towed line array applications.

Using the approximation of small angles at array broadside, the first null in (12) occurs at an angle of approximately λ/L from broadside. The major conclusion drawn here for line array applications is that [3], [40]

Δθ ≈ λ/L   and   Δf = 1/T    (16)

where T is the hydrophone time-series length. Both of the above relations in (16) express the well-known temporal and spatial resolution limitations in line array applications that form the driving force and motivation for the adaptive and synthetic-aperture signal processing that we will discuss later.
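As a hypothetical numerical illustration (the values are not from the paper): for a 600-Hz signal in water with c = 1500 m/s, the wavelength is λ = c/f = 2.5 m, so a 100-m line array gives an angular resolution of roughly λ/L = 0.025 rad, or about 1.4° at broadside, while a 1-s hydrophone time series gives a frequency resolution of about Δf = 1/T = 1 Hz.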

An additional constraint for sonar applications requires that the frequency resolution of the hydrophone time series for spatial spectral analysis that is based on FFT beam-forming processing must be such that

(17)

in order to satisfy frequency quantization effects associated with discrete frequency-domain beam forming following the FFT of the sensor data [11], [64]. This is because, in conventional beam forming, finite-duration impulse response (FIR) filters are used to provide realizations in designing digital phase shifters for beam steering. Since fast-convolution signal-processing operations are part of the processing flow of a sonar signal processor, the effective beam-forming filter length needs to be considered as the overlap size between successive snapshots. In this way, the overlap process will account for the wraparound errors that arise in the fast-convolution processing [65]–[67]. It has been shown [64] that an approximate estimate of the effective beam-forming filter length is provided by (15) and (17).

Because of the linearity of the conventional beam-forming process, an exact equivalence of the frequency-domain narrow-band beam former with the time-domain beam former for broad-band signals can be derived [64], [68], [69]. Based on the model of Fig. 4, the time-domain beam former is simply a time-delaying [69] and summing process across the hydrophones of the line array, which is expressed by

(18)

Since the beam time series can be recovered by an inverse FFT of the frequency-domain beams, continuous beam-time sequences can be obtained at the output of the frequency-domain beam former by using FFT's and fast-convolution procedures [64]. This is a very useful operation when the implementation of beam-forming processors in sonar systems is considered.
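The following sketch illustrates this frequency-domain route to a broad-band beam time series: FFT of each sensor channel, per-bin phase steering, summation across sensors, and an inverse FFT back to the time domain. The segment handling is deliberately simplified (no overlap bookkeeping against wraparound errors), and the geometry and parameter values are assumptions for illustration only.

import numpy as np

c = 1500.0   # assumed sound speed (m/s)

def fft_beamform_time_series(x, fs, theta_deg, spacing):
    """Broad-band beam time series for steering angle theta. `x` has shape
    (N, M): N hydrophones, M time samples sampled at fs Hz."""
    N, M = x.shape
    freqs = np.fft.rfftfreq(M, d=1.0 / fs)               # one-sided frequency bins
    tau = np.arange(N)[:, None] * spacing * np.cos(np.radians(theta_deg)) / c
    Xf = np.fft.rfft(x, axis=1)                          # per-sensor spectra
    phased = Xf * np.exp(2j * np.pi * freqs[None, :] * tau)   # phase steering per bin
    return np.fft.irfft(phased.sum(axis=0), n=M)         # beam time series

# Toy usage: 16 sensors, 1 s of data at 1 kHz, steered to broadside
rng = np.random.default_rng(2)
b = fft_beamform_time_series(rng.standard_normal((16, 1000)), 1000.0, 90.0, 1.5)
print(b.shape)   # (1000,)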

The beam-forming operation in (18) is not restricted only to plane-wave signals. More specifically, consider an acoustic source in the near field of a line array, defined by its range and bearing. Then the time delay for steering at that range and bearing is

(19)

As a result of (19), the steering vector will include the two parameters of interest, the bearing and the range of the source. In this case, the beam former is called a focused beam former. There are, however, practical considerations restricting the application of the focused beam former in passive sonar line array systems, and these have to do with the fact that effective range focusing by a beam former requires extremely long arrays.
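A rough sketch of focused (near-field) steering delays is given below. It uses plain spherical-wavefront geometry rather than reproducing the paper's (19), so the exact delay expression, the coordinate convention, and the parameter values should all be read as assumptions.

import numpy as np

c = 1500.0   # assumed sound speed (m/s)

def focused_delays(range_m, theta_deg, n_sensors, spacing):
    """Near-field steering delays: the difference between the exact source-to-
    sensor path and the source-to-reference-sensor path, divided by c.
    Sensor n sits at position n*spacing along the array axis; theta is
    measured from that axis."""
    n = np.arange(n_sensors)
    x_src = range_m * np.cos(np.radians(theta_deg))
    y_src = range_m * np.sin(np.radians(theta_deg))
    dist = np.hypot(x_src - n * spacing, y_src)   # exact path to each sensor
    return (dist - dist[0]) / c

print(focused_delays(500.0, 75.0, 32, 1.5)[:5])   # delays (seconds) for the first 5 sensors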

B. Array Gain

It was discussed in Section I-D that the performance of a sonar array receiver for an acoustic signal embedded in a noise field is characterized by the array gain parameter AG, a term in (1). This parameter is defined by

(20)

where the cross-correlation coefficients are given by

(21)


where the mean acoustic intensity of the hydrophone time sequences at the frequency bin appears in the normalization, and the two sets of coefficients denote the normalized cross-correlation coefficients of the signal and noise fields [8], [41], respectively. If the noise field is isotropic, the denominator in (20) is equal to N because the nondiagonal terms of the cross-correlation matrix for the noise field are negligible. For perfect spatial coherence across the line array, the normalized cross-correlation coefficients of the signal are unity, and the expected values of the array gain estimates are 10 log N. For the general case of isotropic noise and for frequencies smaller than the towed array's design frequency, AG is reduced to the quantity called the directivity index, DI = 10 log N.
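For instance, under these idealized assumptions (perfectly coherent signal, isotropic noise), a hypothetical 32-hydrophone line array would provide AG = DI = 10 log 32 ≈ 15 dB, and doubling the number of hydrophones would buy only about 3 dB of additional gain, which illustrates why very long arrays are needed to gain appreciably more from conventional processing alone.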

When conventional beam-forming processing is employed, (16) indicates that the deployment of very long line arrays is required in order to achieve sufficient array gain and angular resolution for precise bearing estimates. Practical deployment considerations, however, usually limit the overall dimensions of a hull-mounted line or towed array. In addition, the medium's spatial coherence [41] sets an upper limit on the effective towed array length; in general, the medium's spatial coherence extends only over a finite length [40], [41], [71], [72]. In addition, very long towed arrays suffer degradation in array gain due to array shape deformation and increased levels of self-noise [34], [73]–[78].

Alternatives to large-aperture sonar arrays are signal-processing schemes discussed in [9]. Theoretical and experimental investigations have shown that bearing resolution and detectability of weak signals in the presence of strong interferences can be improved by applying nonconventional beam formers such as adaptive beam forming [2], [11]–[13], [23], [24], [46]–[54], [79]–[84], high-resolution (linear prediction) techniques [61], [63], [71]–[82], [85]–[96], or acoustic synthetic-aperture processing [14]–[20], [22], [62], [97] to the hydrophone time series of deployed short sonar arrays.

Moreover, a first-order analysis of beam-forming structures usually ignores a whole host of practical issues, which can severely compromise the actually achieved array gain. For the signals of interest, which are embodied in anisotropic noise fields that consist of partially directional correlated noise due to distant shipping, one of the most important issues, shown in (20), is the impact of the correlation properties of the anisotropic noise on the array gain of a sonar system. This impact is quite severe for conventional beam formers because the correlation properties of the noise field are ignored in mainstream sonar processing schemes.

The focus of this implementation study is on defining an integrated advanced beam-forming structure for real-time systems. This processing structure consists of a collection of various nonconventional beam-forming algorithms that recent investigations have shown to provide improved array-gain performance in anisotropic noise fields. In the following two sections, we will discuss very briefly the two major processing schemes, adaptive and acoustic synthetic-aperture schemes, that have played an important role in this investigation.

C. Adaptive Beam Formers

1) MVDR: The goal is to optimize the beam-former response so that the output contains minimal contributions due to noise and signals arriving from directions other than the desired signal direction. For this optimization procedure, it is desired to find a linear filter vector w(f_i, θ), which is a solution to the constrained minimization problem that allows signals from the look direction θ to pass with a specified gain [11], [12].

Minimize  w^H(f_i, θ) R(f_i) w(f_i, θ)

subject to  w^H(f_i, θ) D(f_i, θ) = 1    (22)

where D(f_i, θ) is the conventional steering vector based on (10) and R(f_i) is the cross-spectral density matrix of the received hydrophone data. The solution is given by

w(f_i, θ) = R⁻¹(f_i) D(f_i, θ) / [ D^H(f_i, θ) R⁻¹(f_i) D(f_i, θ) ]    (23)

The above solution provides the adaptive steering vectors for beam forming the signals received by the N-hydrophone line array. Then, in the frequency domain, an adaptive beam at a steering direction θ is defined by

B(f_i, θ) = w^H(f_i, θ) X(f_i)    (24)

and the corresponding conventional beams are provided by (12).
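
A minimal sketch of the MVDR weight computation of (23) is given below, using a linear solve rather than an explicit inverse and a light diagonal load for numerical robustness; the loading level is an assumption of this example, not part of the formulation above.

import numpy as np

def mvdr_weights(R, d, loading=1e-3):
    # R: (N, N) cross-spectral density matrix for one bin; d: (N,) steering vector.
    N = R.shape[0]
    Rl = R + loading * np.real(np.trace(R)) / N * np.eye(N)   # diagonal loading for stability
    Rinv_d = np.linalg.solve(Rl, d)                           # R^{-1} d without forming R^{-1}
    return Rinv_d / (d.conj() @ Rinv_d)                       # adaptive steering vector of (23)

# The adaptive beam of (24) is then B = w.conj() @ X for the snapshot X of that bin.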

2) GSC: The GSC [12] is an alternative approach to the MVDR method. It reduces the adaptive problem to an unconstrained minimization process, and the GSC formulation produces a much less computationally intensive implementation than MVDR, with the cost of both growing with the number of sensors N used in the processing. The basis of the reformulation of the problem is the decomposition of the adaptive filter vector w into two orthogonal components, w_c and −Bw_a, where w_c and Bw_a lie in the range and the null space of the constraint of (22), respectively, such that w = w_c − Bw_a. The matrix B, which is called a signal blocking matrix, may be computed from the orthogonal complement of the presteered constraint vector 1, where 1 is a vector of ones. This matrix B, whose columns form a basis for the null space of the constraint of (22), satisfies B^H 1 = 0, and the adaptive component w_a is defined by (26). The adaptive filter vector may now be defined as w = w_c − Bw_a, which yields the realization shown in Fig. 5. The problem is then reduced to the following.

Minimize  E{ | w_c^H X(f_i) − w_a^H B^H X(f_i) |² }    (25)

which is satisfied by

w_a = [ B^H R(f_i) B ]⁻¹ B^H R(f_i) w_c    (26)

w_a being the value of the weights at convergence. The Griffiths–Jim GSC, in combination with the NLMS adaptive algorithm, has been shown to yield nearly instantaneous convergence [51], [98], [99]. Fig. 5 shows the


Fig. 5. Basic processing structure for the memoryless GSC. The right-hand-side branch is simply the shaded conventional beam former. The left-hand-side branch of this figure is the result of the signal blocking matrix (constraints) applied to presteered hydrophone time series. The output of the signal blocking matrix is the input to the NLMS adaptive filter. Then, the output of this processing scheme is the difference between the adaptive filter and the "conventional" output.

basic structure of the so-called memoryless GSC. The time-delayed hydrophone time series defined by (7), (8), and Fig. 4 form the presteered hydrophone time series, which are denoted by x_n(t_i − τ_n(θ)). In the frequency domain, these presteered sensor data are denoted by X(f_i, θ) and form the input data vector for the adaptive scheme in Fig. 5. On the left-hand-side branch of this figure, the intermediate vector z(f_i) is the result of the signal blocking matrix B being applied to the input X(f_i, θ). Next, the vector z(f_i) is the input to the NLMS adaptive filter. The output of the right-hand branch is simply the shaded conventional output y_c(f_i, θ). Then, the output of this processing scheme is the difference between the adaptive filter output and the "conventional" output

b(f_i, θ) = y_c(f_i, θ) − w_a^H(f_i) z(f_i)    (27)

The adaptive filter, at convergence, reflects the side-lobe structure of any interferers present, and it is removed from the conventional beam-former output. In the case of the NLMS, this adaptation process can be represented by

w_a(k + 1) = w_a(k) + [ μ / ( ε + z^H(k) z(k) ) ] z(k) b*(k)    (28)

where k is the iteration number, b(k) is the GSC output of (27) at iteration k, and ε is a small positive number designed to maintain stability. The parameter μ is the convergence-controlling parameter, or "step size," for the NLMS algorithm.
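
The sketch below performs one iteration of the memoryless GSC of Fig. 5 for a single frequency bin, using a simple first-difference blocking matrix and the NLMS update of (28). The choice of blocking matrix, the uniform conventional shading, and the initial weights are assumptions of this example rather than the exact configuration used in the study.

import numpy as np

def gsc_nlms_step(X, w_a, mu=0.5, eps=1e-6):
    # X: (N,) presteered sensor spectra for the look direction; w_a: (N-1,) adaptive weights.
    N = X.size
    B = np.eye(N - 1, N) - np.eye(N - 1, N, k=1)        # rows sum to zero: blocks the look direction
    y_c = X.mean()                                      # conventional (uniformly shaded) output
    z = B @ X                                           # blocked data vector
    b = y_c - np.vdot(w_a, z)                           # GSC output, as in (27)
    w_a = w_a + mu * z * np.conj(b) / (eps + np.real(np.vdot(z, z)))   # NLMS update (28)
    return b, w_a

Calling this once per snapshot and bin, with w_a carried over between calls, reproduces the iterative behavior evaluated later in Section III-E.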

3) STMV Broad-Band Adaptive: Krolik and Swingler [83] have shown that the convergence time for broad-band source location can be reduced by using the space-time statistic called the STCM. This method achieves significantly shorter convergence times than adaptive algorithms that are based on the narrow-band cross-spectral density matrix (CSDM) [12], without sacrificing spatial resolution. In fact, the number of statistical degrees of freedom available to estimate the STCM is approximately the time-bandwidth product T × BW, as opposed to the observation time T in CSDM methods. This provides an improvement of approximately BW, the size of the broad-band source bandwidth, in convergence time. The conventional beam former's output in the frequency domain is shown by (12). The corresponding time-domain conventional beam-former output b(t, θ) is the weighted sum of the steered sensor outputs, as expressed by (18). Then, the expected broad-band beam power B(θ) is given by

B(θ) = E{ |b(t, θ)|² } = w^H Φ(θ) w    (29)

where the vector w includes the weights for spatial shading, as discussed in Section III-A.

The term

Φ(θ) = E{ x_θ(t) x_θ^H(t) },   x_θ(t) = [ x_1(t − τ_1(θ)), …, x_N(t − τ_N(θ)) ]^T    (30)

is defined as the STCM in the time domain and is assumed to be independent of t under stationary conditions. The name STCM derives from the fact that the matrix is computed by taking the covariance of the presteered time-domain sensor outputs. Suppose X(f_k) is the Fourier transform of the sensor outputs x_n(t), and assume that the sensor outputs are approximately band limited. Under these conditions, the spectra of the steered (or time-delayed) sensor outputs

can be expressed by

X_θ(f_k) = T(f_k, θ) X(f_k)    (31)

where T(f_k, θ) is the diagonal steering matrix in (32), with elements identical to the elements of the conventional steering vector D(f_k, θ)

T(f_k, θ) = diag[ d_1(f_k, θ), d_2(f_k, θ), …, d_N(f_k, θ) ]    (32)

Then it follows directly from the above equations that

Φ(θ) = Σ_{k} T(f_k, θ) R(f_k) T^H(f_k, θ)    (33)


where the index k refers to the frequency bins in a band of interest and R(f_k) is the CSDM for the frequency bin f_k. This suggests that

Φ(θ) in (30) can be estimated from the CSDM, R(f_k), as expressed by (33). In the STMV, the broad-band spatial power spectral estimate is given by [83]

P(θ) = [ 1^H Φ⁻¹(θ) 1 ]⁻¹    (34)

where 1 is an N × 1 vector of ones.

The STMV algorithm differs from the basic MVDR algorithm in that the former yields an STCM that is composed from a band of frequencies, whereas the MVDR algorithm uses a CSDM that is derived from a single frequency bin. Thus, the additional degrees of freedom of the STCM compared to those of the CSDM provide a more robust adaptive process.

However, estimates of P(θ) according to (34) do not provide coherent beam time series, since they represent the broad-band beam-power output of an adaptive process. In this investigation, we have modified the estimation process of the STMV matrix in order to get the complex adaptive beam coefficients for all the frequency bins in the band of interest.

The STMV algorithm may be used in its original form to generate an estimate of Φ(θ) for each frequency band across the band of the received signal. Assuming stationarity across the frequency bins of a band, the estimate of the STCM may be considered to be approximately the same as the narrow-band estimate for the center frequency f_c of the band. In this case, the narrow-band adaptive coefficients may be derived from

w(f_c, θ) = Φ⁻¹(θ) 1 / [ 1^H Φ⁻¹(θ) 1 ]    (35)

The phase variations of the adaptive steering weights w(f_i, θ) across the frequency bins f_i, i = 1, …, K (where K is the number of bins in the band) are modeled by

w_n(f_i, θ) = |w_n(f_c, θ)| exp( −j2π f_i τ'_n(θ) )    (36)

where τ'_n(θ), n = 1, …, N, is a time-delay term derived from

τ'_n(θ) = φ_n(f_c, θ) / (2π f_c)    (37)

where φ_n(f_c, θ) is the phase of the nth element of w(f_c, θ).

Then, by using the adaptive steering weights w(f_i, θ) that are provided by (36), the adaptive beams are formed as shown by (24). Fig. 6 shows the realization of the STMV beam former and provides a schematic representation of the basic processing steps, which include:

1) time-series segmentation, overlap, and FFT, shown by the group of blocks in the top left part of the schematic diagram;

2) formation of the STCM (30), (33), shown by the two blocks in the bottom left-hand side of Fig. 6;

3) inversion of the covariance matrix using Cholesky factorization, estimation of adaptive steering vectors, and formation of adaptive beams in the frequency domain, presented by the middle and bottom blocks on the right-hand side of Fig. 6;

4) formation of adaptive beams in the time domain through inverse FFT (IFFT), discarding of overlap, and concatenation of segments to form continuous-beam time series, shown in the top right-hand block.

The various indexes in Fig. 6 provide details for the implementation of the STMV processing flow in a generic computing architecture. The same figure indicates that estimates of the STCM are based on an exponentially weighted time average of the current and previous STCM, which is discussed in the next section.
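
The following sketch assembles the STCM from the per-bin CSDM's as in (33) and evaluates the broad-band STMV power of (34) for one steering direction. It assumes a uniform line array, adds a small diagonal load so that the inversion is defined even for few snapshots, and uses illustrative names throughout; it is not the exponentially averaged real-time implementation of Fig. 6.

import numpy as np

def stmv_power(snapshots, freqs, spacing, theta, c=1500.0, loading=1e-3):
    # snapshots: (K, N, L) FFT snapshots (K segments, N sensors, L bins in the band).
    K, N, L = snapshots.shape
    tau = np.arange(N) * spacing * np.cos(theta) / c          # presteering delays tau_n(theta)
    Phi = np.zeros((N, N), dtype=complex)
    for l in range(L):
        Xl = snapshots[:, :, l]                               # (K, N) data of bin l
        R = Xl.T @ Xl.conj() / K                              # CSDM R(f_l)
        T = np.diag(np.exp(-2j * np.pi * freqs[l] * tau))     # diagonal steering matrix (32)
        Phi += T @ R @ T.conj().T                             # steered covariance, as in (33)
    Phi += loading * np.real(np.trace(Phi)) / N * np.eye(N)   # keep Phi invertible
    ones = np.ones(N, dtype=complex)
    return 1.0 / np.real(ones @ np.linalg.solve(Phi, ones))   # P(theta) of (34)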

D. Implementation Considerations for Adaptive Processing Schemes

In this implementation study, we form the adaptive beams in the frequency domain, as discussed in the previous sections. Furthermore, the frequency-domain adaptive outputs, shown in Figs. 5, 6, and 12, are made equivalent to the FFT of the time-domain beam-forming outputs with proper selection of beam-forming weights and careful data partitioning. This equivalence corresponds to implementing FIR filters via circular convolution, which has also been discussed in Section III-A.

Matrix inversion is another major implementation issue for the adaptive schemes discussed in this paper. Standard numerical methods for solving systems of linear equations can be applied to solve for the adaptive weights. The range of possible algorithms includes the following.

• Cholesky factorization of R(f_i) and Φ(θ) [11]. This allows the linear system to be solved by back substitution in terms of the received data vector. Note that there is no requirement to estimate the sample covariance matrix itself and that an existing Cholesky factorization can be updated continuously.

• QR decomposition of the received data, which includes the conversion of a matrix to upper triangular form via rotations. The QR decomposition method has better numerical stability than the Cholesky factorization algorithm, but it requires twice as much computational effort as the Cholesky approach.

• The SVD method. This is the most stable factorization technique. It requires, however, about three times the computational effort of the QR decomposition method.

In this implementation study, we have applied the Cholesky factorization and the QR decomposition techniques in order to get solutions for the adaptive weights. Our experience suggests that there are no noticeable differences in performance between the above two methods.
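
For completeness, a sketch of the Cholesky route for the MVDR weights of (23) is given below; SciPy's factorization routines are used here for brevity, which is an implementation choice of this example rather than of the system described above.

import numpy as np
from scipy.linalg import cho_factor, cho_solve

def mvdr_weights_cholesky(R, d, loading=1e-3):
    # R: (N, N) Hermitian covariance estimate; d: (N,) steering vector.
    N = R.shape[0]
    Rl = R + loading * np.real(np.trace(R)) / N * np.eye(N)   # keep the factorization well posed
    factor = cho_factor(Rl)                                   # Rl = L L^H
    Rinv_d = cho_solve(factor, d)                             # forward/back substitution for R^{-1} d
    return Rinv_d / (d.conj() @ Rinv_d)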

The main consideration, however, for implementing adaptive schemes in real-time systems is associated with the requirements derived from (23), (26), and (35), which require knowledge of second-order statistics for the noise field. Although these statistics are usually not known, they can be estimated from the received data [11], [12], [49] by


Fig. 6. Realization of the steered covariance adaptive beam former. The basic processing steps include: 1) time-series segmentation, overlap, and FFT, shown by the group of blocks in the top-left part of the schematic diagram; 2) formation of the STCM, shown by the two blocks on the bottom left-hand side; 3) inversion of the covariance matrix using Cholesky factorization, estimation of adaptive steering vectors, and formation of adaptive beams in the frequency domain (middle and bottom blocks on the right-hand side); and 4) formation of adaptive beams in the time domain through IFFT, discarding of overlap, and concatenation of segments to form continuous-beam time series (top right-hand block). The various indexes provide details for the implementation of the STMV processing flow in a generic computing architecture.

averaging a large number of independent samples of the covariance matrices [CSDM, R(f_i); STCM, Φ(θ)] and by allowing the iteration process of the GSC scheme in (27) to converge. Thus, if L is the effective number of statistically independent samples of the CSDM and STCM, then the variance of the adaptive beam output power estimator detection statistic is inversely proportional to (L − N + 1) [11], [48], where N is the number of array sensors. Theoretical suggestions [49] and our empirical observations indicate that this number L needs to be three to four times the size of N in order to obtain coherent beam time series at the output of the above adaptive schemes. In other words, for arrays with a large number of sensors, the implementation of adaptive schemes as statistically optimum beam formers would require the averaging of a very large number of independent samples of the CSDM and STCM in order to derive an unbiased estimate of the adaptive weights [1]. In practice, this is the most serious problem associated with the implementation of adaptive beam formers in real-time systems.

Owsley [11] has addressed this problem with two very important contributions. His first contribution is associated with the estimation procedure for the CSDM and STCM matrices. His argument is that, in practice, the CSDM and STCM matrices cannot be estimated exactly by time averaging because the received signal vector is never truly stationary and/or ergodic. As a result, the available averaging time is limited. Accordingly, one approach to the time-varying adaptive estimation of R(f_i) and Φ(θ) at time t_k is to compute the exponentially time-averaged estimator at time t_k as

R(f_i, t_k) = α R(f_i, t_{k−1}) + (1 − α) X(f_i, t_k) X^H(f_i, t_k)    (38)

where α is a smoothing factor (0 < α < 1) that implements the exponentially weighted time-averaging operation. The


same principle has also been applied in the GSC scheme, as expressed by (28). The use of this kind of exponential window to update the covariance matrix is a very important factor in the implementation of adaptive algorithms in real-time systems.
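
A one-line realization of the exponential window of (38) is sketched below; the same update applies to the STCM by substituting the presteered snapshot. The smoothing value shown is an illustrative default.

import numpy as np

def update_covariance(R_prev, X, alpha=0.9):
    # R_prev: (N, N) estimate at the previous step (start from zeros);
    # X: (N,) current snapshot for the bin; alpha: smoothing factor, 0 < alpha < 1.
    return alpha * R_prev + (1.0 - alpha) * np.outer(X, X.conj())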

Owsley's [79] second contribution deals with the dynamics of the data statistics during the convergence period of the adaptation process. As mentioned above, the implementation of an adaptive beam former with a large number of adaptive weights in a large-array sonar system requires very long convergence periods, which would prevent the adaptive beam former from tracking the time-varying characteristics of a received signal of interest. A natural way to avoid this kind of temporal stationarity limitation is to reduce the number of required adaptive weights. In the previous section, there was a reference to the partially adaptive beam-former design concept. Several approaches have been introduced that are based on this concept, including the implementation of the MVDR algorithm in a so-called beam space [79], [80] or the processing of a subset of the outputs of the sensors in the array [81]–[83].

The beam-space partially adaptive approach [79], [80] includes the formation of conventional beams that are used as inputs to an MVDR, STMV, or GSC adaptation process, which can be viewed as an adaptive interpolation of existing steered conventional beams. Owsley's partially adaptive beam-former concept is based on a subaperture configuration with a very large percentage of overlap between contiguous subapertures [79]. More specifically, a line array is divided into a number of subarrays that overlap. These subarrays are beam formed using the conventional approach, and this first stage of beam forming generates a number of sets of beams equal to the number of subarrays. The second stage of beam forming includes the implementation of the MVDR on a set of beams that are steered in the same direction in space but each belong to a different subarray. The above two stages of beam-forming operations are illustrated in Fig. 7. Although the above partially adaptive beam-forming designs are in the frequency domain, their implementation includes broad-band as well as narrow-band applications.
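
The first (conventional) stage of the subaperture scheme can be sketched as below: overlapping subapertures are beam formed for one frequency bin, producing, for each steered direction, a short "array of directional sensors" that feeds the second, adaptive stage (for example, the MVDR sketch given earlier). The subaperture size, the overlap, and the names are assumptions of this example.

import numpy as np

def subaperture_beams(X, steering, n_sub, hop):
    # X: (N,) sensor spectra for one bin; steering: (N, S) steering vectors for S directions.
    N = X.size
    first_stage = []
    for j0 in range(0, N - n_sub + 1, hop):               # J overlapping subapertures
        Xj = X[j0:j0 + n_sub]                             # sensors of subaperture j
        Dj = steering[j0:j0 + n_sub, :]                   # matching steering phases
        first_stage.append(Dj.conj().T @ Xj / n_sub)      # S conventional beams of subaperture j
    return np.array(first_stage)                          # (J, S): column s feeds the adaptive stage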

Our implementation scheme for the MVDR, STMV, and GSC adaptive beam formers includes the following variants: their narrow-band implementation 1) in element space [11] and 2) in a subaperture configuration, as suggested by Owsley [79].

Passive broad-band outputs for the above adaptive beam formers are derived from the incoherent summation of the narrow-band adaptive beam-power spectral estimates over all the frequency bins in a wide band of interest. A more detailed discussion of this issue is included in Section IV, which presents various implementation aspects of signal-processing schemes in real-time systems. It is important to note, however, that under this kind of incoherent summation scheme, the broad-band adaptive outputs represent the beam outputs only in the sense of energy content for wide-band signals.
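
The incoherent broad-band output referred to here amounts to a simple summation over bins, for example (a sketch with assumed array shapes):

import numpy as np

def broadband_output(beam_power, band):
    # beam_power: (S, F) narrow-band beam-power estimates for S beams and F bins;
    # band: (lo, hi) bin indices of the wide band of interest.
    lo, hi = band
    return beam_power[:, lo:hi].sum(axis=1)       # energy-content output per beam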

Coherent adaptive beam-forming processing for broad-band signals has been studied by Frost [81]. Other studies [82]–[84], such as the STMV method, have successfully attempted to downsize the computationally demanding processing requirements of Frost's [81] algorithm. In our studies, we have considered the implementation of these relatively new broad-band adaptive processing schemes [82]–[84] in towed-array sonar systems with integrated passive and active capabilities. Based on this experience, it is suggested that a successful implementation of adaptive beam formers should not focus only on algorithm development. Instead, emphasis should be placed on developing configuration schemes equivalent to the subaperture configuration suggested by Owsley [79]. This is because the major problem in real-time adaptive beam forming is the requirement for improving the adaptive response to allow successful handling of the unpredictable dynamics of a real-time sonar system. We intend to address this issue again in Section V, which will present real experimental results from passive narrow- and broad-band and active real-time sonar systems, including adaptive beam formers.

E. Evaluation of Convergence Properties of Adaptive Schemes

To test the convergence properties of the various adaptive beam formers of this study, synthetic data were used that included one CW signal. The frequency of the monochromatic signal was selected to be 330 Hz, and the angle of arrival was selected to be 68.9° to coincide directly with the steering direction of a beam. The SNR of the received synthetic signal was very high, 10 dB at the sensor. By definition, the adaptive beam formers allow signals in the look direction to pass undistorted while minimizing the total output power of the beam former. Therefore, in the ideal case the main-beam output of the adaptive beam former should resemble the main-beam output of the conventional beam former, while the side beams' outputs will be minimized to the noise level. To evaluate the convergence of the beam formers, two measurements were made. From (27), the mean square error (MSE) between the normalized main-beam outputs of the adaptive beam former and the conventional beam former was measured, as was the mean of the normalized output level of the side beam, which is the MSE when compared with zero. The averaging of the errors was done with a sliding window of four snapshots to provide a time-varying average, and the outputs were normalized so that the maximum output of the conventional beam former was unity.
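
The two convergence measurements can be reproduced with a short routine such as the one below, which normalizes by the conventional maximum and applies the four-snapshot sliding average; the array layout is an assumption of this sketch.

import numpy as np

def convergence_curves(adaptive_main, conventional_main, adaptive_side, window=4):
    scale = np.max(conventional_main)                        # normalize: conventional maximum = 1
    a = np.asarray(adaptive_main) / scale
    c = np.asarray(conventional_main) / scale
    s = np.asarray(adaptive_side) / scale
    kernel = np.ones(window) / window                        # sliding window of four snapshots
    main_mse = np.convolve((a - c) ** 2, kernel, mode='valid')
    side_mse = np.convolve(s ** 2, kernel, mode='valid')     # side beam compared with zero
    return main_mse, side_mse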

1) Convergence Characteristics of the GSC and GSC Subaperture (SA) Beam Formers: The GSC/NLMS adaptive algorithm, which was discussed in Section III-C2, and its subaperture configuration, denoted by GSC-SA/NLMS, were compared against each other to determine whether the use of the subaperture configuration produced any improvement in the time required for convergence. The graph in Fig. 8(a) shows the comparison of the MSE of the main beams of both algorithms for the same step size μ, which is defined in (28). The graphs show that the convergence rates


Fig. 7. Concept of subaperture adaptive processing. The schematic diagram shows the basic steps, which include: 1) formation of J subapertures; 2) for each subaperture, formation of S conventional beams; and 3) for a given beam direction θ, formation of line sensor arrays that consist of J directional sensors (beams). The number of line arrays with directional sensors (beams) is equal to the number S of steered conventional beams in each subaperture. For each line array, the directional sensor time series (beams) are provided at the input of an adaptive beam former.

of the main beams are approximately the same for both algorithms, reaching a steady-state value of MSE within a few snapshots. The value of MSE that is achieved is dictated by the misadjustment, which depends on μ. The higher MSE produced by the GSC-SA algorithm indicates that the algorithm exhibits a higher misadjustment.

The graph in Fig. 8(b) shows the output level of an immediate side beam, again for the same step size μ. The side beam was selected as the beam right next to the main beam. The GSC-SA algorithm appears superior at minimizing the output of the side beam. It reaches its convergence level almost immediately, while the GSC

algorithm requires approximately 30 snapshots to reach the same level. This indicates that the GSC-SA algorithm should be superior at canceling time-varying interferers. By selecting a higher value for μ, the time required for convergence will be reduced, but the MSE of the main beam will be higher.

2) Convergence Characteristics of the STMV and STMV-SA Beam Formers: As with the GSC/NLMS and GSC-SA/NLMS beam formers, the STMV and STMV-SA beam formers were compared with each other to determine if there was any improvement in the time required for convergence when using the subaperture configuration.


Fig. 8. (a) MSE of the main beams of the GSC/NLMS and the GSC-SA/NLMS algorithms. (b) Side-beam levels of the above algorithms.

The graph in Fig. 9(a) shows the comparison of the MSE of the main beams of both algorithms. The graph shows that the STMV-SA algorithm reaches a steady-state value of MSE within the first few snapshots.

The STMV algorithm is incapable of producing any output for at least eight snapshots, as tested. Before this time, the matrices that are used to compute the adaptive steering vectors are not invertible. After this initial period, the algorithm has already reached a steady-state value of MSE. Unlike the case of the GSC algorithm, the misadjustment from subaperture processing is smaller.

Fig. 9(b) shows the output level of the side beam for both the STMV and the STMV-SA beam formers. Again, the side beam was selected as the beam right next to the main beam. As before, there is an initial period during which the STMV algorithm is computing an estimate of the STCM and is incapable of producing any output; after that period, the algorithm has reached steady state and produces lower side beams than the subaperture algorithm.

3) Signal Cancellation Effects of the Adaptive Algorithms: Testing of the adaptive algorithms of this study for signal cancellation effects was carried out with simulations that


Fig. 9. (a) MSE of the main beams of the STMV and STMV-SA algorithms. (b) Side-beam levels of the above algorithms.

included two signals arriving from 64° and 69°. All of the parameters of the signals were set to the same values for all the beam formers: conventional, GSC/NLMS, GSC-SA/NLMS, STMV, and STMV-SA. In the narrow-band outputs of the conventional beam former (which can be found in [100]), the signals appear at the frequency and beam at which they were expected. As anticipated, however, the side lobes are visible in a number of other beams. The LOFAR-gram outputs of the GSC/STMV algorithm indicated that there is signal cancellation. In each case, the algorithm failed to detect either of the two CW's. This

suggests that there is a shortcoming in the GSC/NLMS algorithm when there is strong correlation between two signal arrivals received by the line array. The narrow-band outputs of the GSC-SA/NLMS algorithm showed that in this case the signal cancellation effects have been minimized and the two signals were detected only at the expected two beams, with complete cancellation of the side-lobe structure. For the STMV beam former, the LOFAR-grams indicated a strong side-lobe structure in many other beams. However, the STMV-SA beam former successfully suppresses the side-lobe structure that was


present in the case of the STMV beam former. From all these simulations, it was obvious that the STMV-SA beam former, as a broad-band beam former, is not as robust for narrow-band applications as the GSC-SA/NLMS.

F. Acoustic Synthetic-Aperture Processing: ETAM Algorithm

While the concept of synthetic-aperture processing has been successfully applied to aircraft and satellite radar systems, it has had limited success in passive sonar systems and wide acceptance in active side-looking sonars [14]. The realistic conditions for effective acoustic synthetic-aperture processing, which can be viewed as a scheme that converts temporal gain into spatial gain, require that successive snapshots of the acoustic signal have good cross-correlation properties in order to synthesize an extended aperture, and that the tow-speed fluctuations be successfully compensated by means of processing. It has also been suggested [14]–[20], [62] that the prospects for successfully extending the physical aperture of a sonar array require algorithms that are not based on the synthetic-aperture concept used in active radars.

The reported results in [15]–[20] and [62] have shown that the problem of creating an acoustic synthetic aperture is centered on the estimation of a phase-correction factor, which is used to compensate for the phase differences between sequential sonar-array measurements in order to synthesize coherently the spatial information into a synthetic aperture. When the estimates of this phase-correction factor are correct, the information inherent in the synthetic aperture is the same as that of an array with an equivalent physical aperture [17], [20].

Recent theoretical and experimental studies have addressed the above concerns and indicated that the space and time coherence of the acoustic signal in the sea [14], [15], [18], [22], [40], [41], [97] appears to be sufficient to extend the physical aperture of a moving line array. In the above studies, the fundamental questions related to the angular resolution capabilities of a moving line array and the amount of information inherent in a received signal have been addressed. These investigations included the use of CRLB analysis and showed that for long observation intervals, on the order of 100 s, the additional information provided by a moving line array over a stationary array is expressed as a large increase in angular resolution, which is due to the Doppler caused by the movement of the array (see [17, Fig. 3]). A summary of these research efforts has been reported in a special issue of the IEEE JOURNAL OF OCEANIC ENGINEERING [14].

The synthetic-aperture processing scheme of the present development is based on the ETAM algorithm, which was invented by Stergiopoulos and Sullivan [16]. The basic concept of this algorithm is a phase-correction factor that is used to coherently combine successive measurements of the towed array to extend the effective towed-array length.

Fig. 10 shows the experimental implementation of the ETAM algorithm in terms of the towed-array positions as a function of time and space. Between two successive positions of the N-sensor line array with hydrophone spacing δ, there are (N − q) pairs of space samples of the acoustic field that have the same spatial information, their difference being a phase factor [17]–[20], [62] related to the time delay at which these measurements were taken. By cross correlating the (N − q) pairs of hydrophone time series that overlap, the desired phase-correction factor is derived, which compensates for the time delay between these measurements and for the phase fluctuations caused by irregularities of the tow path of the physical array; this is called the overlap correlator. Following the above, the key parameters in the ETAM algorithm are the time increment Δt = qδ/v between two successive sets of measurements, where v is the tow speed and q represents the number of hydrophone positions that the towed array has moved during Δt seconds, or equivalently the number of hydrophones by which the physical aperture of the array is extended at each successive set of measurements. The optimum overlap size (N − q), which is related to the variance of the phase-correction estimates, has been derived in [19]. The total number of sets of measurements required to achieve a desired extended aperture size is then defined by the ratio of the period taken by the towed array to travel a distance equivalent to the desired length of the synthetic aperture to the time increment Δt.

Then, for the frequency bin f_i and between two successive kth and (k + 1)th snapshots, the phase-correction factor estimate is given by

ψ_k(f_i) = arg{ Σ_{n=1}^{N−q} γ_n X_{n+q}^{(k)}(f_i) X_n^{(k+1)*}(f_i) }    (39)

where, for a frequency band with central frequency f_o and observation bandwidth ΔB (i.e., f_m ∈ [f_o − ΔB/2, f_o + ΔB/2]), the coefficients

γ_n = | Σ_{f_m} X_{n+q}^{(k)}(f_m) X_n^{(k+1)*}(f_m) | / √( Σ_{f_m} |X_{n+q}^{(k)}(f_m)|² Σ_{f_m} |X_n^{(k+1)}(f_m)|² )    (40)

are the normalized cross-correlation coefficients, or the coherence estimates, between the pairs of hydrophones that overlap in space. The above coefficients are used as weighting factors in (39) to optimally weight the good against the bad pairs of hydrophones during the estimation process of the phase-correction factor.
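
A hedged sketch of the overlap correlator follows: it estimates a coherence-weighted phase correction per frequency bin from the (N − q) overlapping hydrophone pairs of two successive snapshots, in the spirit of (39) and (40). The exact weighting and bookkeeping of the ETAM algorithm are given in the references; the data layout and names here are assumptions of the example.

import numpy as np

def overlap_correlator(X_prev, X_next, q):
    # X_prev, X_next: (N, L) sensor spectra of two successive snapshots over the processed band;
    # q: number of hydrophone positions the array advanced between the snapshots.
    A = X_prev[q:, :]                                  # sensors n + q of the earlier snapshot
    B = X_next[:-q, :]                                 # sensors n of the later snapshot (same positions in space)
    # Band-averaged coherence of each overlapping pair, used to weight good against bad pairs.
    num = np.abs(np.sum(A * B.conj(), axis=1))
    den = np.sqrt(np.sum(np.abs(A)**2, axis=1) * np.sum(np.abs(B)**2, axis=1))
    gamma = num / np.maximum(den, 1e-12)
    # Coherence-weighted cross spectra of the pairs; their angle is the phase correction per bin.
    cross = np.sum(gamma[:, None] * A * B.conj(), axis=0)
    return np.angle(cross)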

The performance characteristics and expectations from the ETAM algorithm have been evaluated experimentally,


Fig. 10. Concept of the experimental implementation of the ETAM algorithm in terms of towed-array positions as a function of time and space.

and the related results have been reported in [17]–[20], [61], [62]. The main conclusion drawn from these experimental results is that, for narrow-band signals or for frequency-modulated (FM) types of pulses from active sonar systems, the overlap correlator in ETAM successfully compensates for the tow-speed fluctuations and effectively extends the physical aperture of an array by more than eight times. On the other hand, the threshold value of ETAM is 8 dB for a 1-Hz band at the sensor. For values of SNR higher than this threshold, it has been shown that ETAM achieves the theoretical CRLB and has performance comparable to that of the maximum-likelihood estimator [17], [20].

IV. SYSTEM IMPLEMENTATION ASPECTS

A. Definition of a Generic Beam-Forming Structure for Sonar Systems

As stated in Section III-B, the major research effort of this project has been devoted to designing a generic beam-forming structure that will allow the implementation of adaptive, synthetic-aperture, and high-resolution temporal and spatial spectral-analysis techniques in an integrated active–passive sonar system.

The practical implementation of the numerous adaptive and synthetic-aperture processing techniques, however, requires consideration of the characteristics of the signal and noise and the complexity of the ocean environment, as well as the computational difficulty. The discussion in Sections II and III addressed these concerns and prepared the ground for the development of the above generic beam-forming structure.

The major goal here in defining this generic signal-processing scheme is to exploit the great degree of processing-concept similarity existing among the various types of processing techniques. Second is the definition of a computing architecture for the signal processor that will allow an effective implementation of the proposed signal-processing scheme. Third is the concept demonstration of both the technology and the signal-processing concepts, which is proving invaluable in reducing risk and in ensuring that significant innovations occur during the formal development process.

Fig. 11 shows the proposed configuration of the signal-processing flow, which includes the implementation of FIR filters and conventional, adaptive, and synthetic-aperture beam formers. The reconfiguration of the different processing blocks in Fig. 11 allows the application of the proposed configuration to a variety of active and/or passive sonar systems. The shaded blocks in Fig. 11 represent advanced signal-processing concepts of next-generation sonar systems, and this basically differentiates their functionality from that of current operational sonars.

The first point of the proposed processing-flow configuration is that its implementation is in the frequency domain. The second point is that the frequency-domain beam-forming (or spatial filtering) outputs can be made equivalent to the FFT of the broad-band beam-former outputs with proper selection of beam-forming weights and careful data partitioning. This equivalence corresponds to implementing FIR filters via circular convolution. It also allows spatial-temporal processing of narrow-band as well as broad-band signals. As a result, the output of each one of the processing blocks in Fig. 11 provides continuous time series. This modular structure in the signal-processing flow is a very essential processing arrangement, as it allows for the integration of a great variety of processing schemes such as those considered in this study.
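
The circular-convolution mechanism referred to here is the standard overlap-save arrangement; a minimal single-channel sketch is shown below (the FFT size and names are choices of this example). The overlap equals the effective filter length minus one, and the wrapped-around samples of each segment are discarded.

import numpy as np

def overlap_save_fir(x, h, fft_size=1024):
    # x: input time series; h: FIR filter coefficients; fft_size > len(h).
    L = len(h)
    hop = fft_size - (L - 1)                                 # new samples produced per segment
    H = np.fft.fft(h, fft_size)
    x_pad = np.concatenate([np.zeros(L - 1), x, np.zeros(fft_size)])   # prime and flush the overlap
    out = []
    for start in range(0, len(x_pad) - fft_size + 1, hop):
        seg = x_pad[start:start + fft_size]
        y = np.fft.ifft(np.fft.fft(seg) * H)                 # circular (fast) convolution
        out.append(np.real(y[L - 1:]))                       # discard the wraparound portion
    return np.concatenate(out)[:len(x)]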

The details of the proposed generic processing flow, as shown in Fig. 11, are, very briefly, the following.

• The block in this figure named "band formation" includes the partitioning of the time series from the receiving sensor array, their initial spectral FFT, the selection of the signal's frequency band of interest via


Fig. 11. Schematic diagram of a generic signal-processing flow that allows the implementation of nonconventional processing schemes in sonar systems.

bandpass FIR filters, and downsampling [65]–[67]. The output of this block provides continuous time series at a reduced sampling rate.

• The major blocks, including "conventional" spatial FIR filtering and "advanced beam formers" (adaptive and synthetic-aperture FIR filtering), provide continuous directional beam time series by using the FIR implementation scheme of spatial filtering via circular convolution [64]–[67]. The segmentation and overlap of the time series at the input of the beam formers takes care of the wraparound errors that arise in fast-convolution signal-processing operations. The overlap size is equal to the effective FIR filter's length.

• The block named "matched filter" is for the processing of echoes for active sonar applications. The intention here is also to compensate for the time-dispersive properties of the medium by having as an option the inclusion of the medium's propagation characteristics in the replica of the active signal considered in the matched filter, in order to improve detection and gain.

• The blocks "vernier," "narrow-band spectral analysis," and "broad-band spectral analysis" [67] include the final processing steps of a temporal spectral analysis. The inclusion of the vernier here is to allow the option of improved frequency-resolution capabilities, depending on the application.

• The block "display" includes the data normalization [42], [44] in order to map the output results into the dynamic range of the display devices in a manner that provides a CFAR capability.

The strength of this generic implementation scheme is that it permits, under a parallel configuration, the inclusion of nonlinear signal-processing methods, such as the adaptive and synthetic-aperture schemes, as well as the equivalent conventional approach. This permits a very cost-effective evaluation of any type of improvement during the concept-demonstration phase.

All the variations of adaptive processing techniques, while providing good bearing/frequency resolution, are sensitive to the presence of system errors. Thus, the deformation of a towed array, especially during course alterations, can be the source of serious performance degradation for the adaptive beam formers. This performance degradation is worse than that of the conventional beam former. So our concept of the generic beam-forming structure requires the integration of towed-array shape-estimation techniques [73]–[78] in order to minimize the influence of system errors on the adaptive beam formers. Furthermore, the fact that the advanced beam-forming blocks of this generic processing structure provide continuous-beam time series allows the integration of passive and active sonar applications in one signal processor. Although this kind of integration may exist in conventional systems, the integration of adaptive and synthetic-aperture beam formers in one signal processor for active and passive applications has not been reported yet, except for the experimental system of this study, described in Section V. Thus, the beam time series from the output of the conventional and nonconventional beam formers are provided at the input of two different processing blocks, the passive and active processing units, as shown in Fig. 11.

In the passive unit, the use of verniers and temporal spectral analysis (incorporating segment overlap, windowing, and coherent FFT processing [31], [32]) provides the narrow-band results for all the beam time series. Normalization and OR-ing [42], [44] are the final processing steps before displaying the output results. Since a beam time sequence can be treated as a signal from a directional hydrophone having the same array gain and directivity pattern as that of the above beam-forming processing schemes, the display of the narrow-band spectral estimates for all the beams follows the so-called LOFAR presentation arrangements, as shown in Figs. 15–17. This includes the display of the beam-power outputs as a function of time, steering beam (or bearing), and frequency. LOFAR displays are used mainly by sonar operators to detect and classify the narrow-band characteristics of a received signal.

Broad-band outputs in the passive unit are derived from the narrow-band spectral estimates of each beam by means of incoherent summation over all the frequency bins in a wide band of interest. This kind of energy content of the broad-band information is displayed as a function of bearing and time, as shown in Fig. 21.

In the active unit, the application of a matched filter (or replica correlator) to the beam time series provides coherent broad-band processing. This allows detection of


echoes as a function of range and bearing for reference waveforms transmitted by the active transducers of a sonar system. The display arrangements for the correlator's output data are similar to the LOFAR displays and include, as parameters, range as a function of time and bearing, as shown in Fig. 24.

At this point, it is important to note that for active sonar applications, waveform design and matched-filter processing must not only take into account the type of background interference encountered in the medium but should also consider the propagation characteristics (multipath and time dispersion) of the medium and the features of the target to be encountered in a particular underwater environment. Multipath and time dispersion in either deep or shallow water cause energy spreading that distorts the transmitted signals of an active sonar, and this results in a loss of matched-filter processing gain if the replica has the properties of the original pulse [1]–[4], [8], [54], [102]. Results from a very recent study by Hermand and Roderick [103] have shown that the performance of a conventional matched filter can be improved if the reference signal (replica) compensates for the multipath and the time dispersion of the medium. This compensation is a model-based matched-filter operation, consisting of the correlation of the received signal with a reference signal (replica) that is the transmitted signal convolved with the impulse response of the medium. Experimental results have also shown that the model-based matched-filter approach improves performance with respect to the conventional matched-filter approach by as much as 3.6 dB. The above remarks should be considered as supporting arguments for the inclusion of model-based matched-filter processing in the generic signal-processing structure shown in Fig. 11.
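
As a sketch of the replica-correlation step, and of the model-based variant when an estimate of the medium's impulse response is available, consider the following; the normalization and the FFT-based correlation are implementation choices of this example, not the referenced systems.

import numpy as np

def replica_correlator(beam, pulse, channel_ir=None):
    # beam: beam time series; pulse: transmitted waveform;
    # channel_ir: optional estimate of the medium's impulse response (model-based replica).
    replica = pulse if channel_ir is None else np.convolve(pulse, channel_ir)
    replica = replica / np.linalg.norm(replica)               # unit-energy replica
    n = len(beam) + len(replica) - 1
    Y = np.fft.rfft(beam, n) * np.conj(np.fft.rfft(replica, n))
    return np.fft.irfft(Y, n)[:len(beam)]                     # correlation peaks mark echo delays (range)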

We focus our attention now on the processing arrangements for the adaptive subaperture schemes, which are summarized in Figs. 5–7 and 12. The subaperture processing scheme in Fig. 7 provides S sets of beam time series, which are treated as line arrays, each consisting of J directional hydrophones (beams) whose spacing is the displacement between contiguous subapertures. Each set of directional hydrophone time series is provided at the input of the adaptive processor, which is illustrated in Figs. 5, 6, and 12. The mathematical details discussed in Section III-C and the appropriate indexes shown in these figures are self-explanatory in providing the details needed for mapping this computationally very intensive processing scheme in a scalar computing architecture. For adaptive beam formers, Fig. 12 summarizes their basic processing steps, which include:

1) time-series segmentation and overlap, shown by the block in the top right of the schematic diagram of Fig. 12;

2) FFT of the segmented time series, formation of the cross-covariance spectral density matrix, and its inversion using Cholesky factorization, presented by the middle and bottom blocks on the left-hand side of Fig. 12;

3) estimation of adaptive steering vectors and formation of adaptive beams in the frequency domain, shown by the middle and bottom blocks in the middle part of Fig. 12;

4) formation of adaptive beams in the time domain through IFFT, discarding of overlap, and concatenation of segments to form continuous-beam time series, shown in the left-hand block.

The indexes of Fig. 12 provide details for the implementation of the adaptive processing flow in a generic computing architecture.

For the synthetic-aperture processing scheme, however, there is an important detail regarding the segmentation and overlap of the hydrophone time series into sets of discontinuous segments. This segmentation process is associated with the tow speed and the size of the synthetic aperture, as discussed in Section III-F. So, in order to achieve continuous data flow at the output of the overlap correlator, the continuous hydrophone time series are segmented into discontinuous data sets, as shown in Fig. 13. Our implementation scheme in Fig. 13 considers five discontinuous segments in each data set. This arrangement provides at the output of the overlap correlator continuous hydrophone time series for the extended aperture, which are provided at the input of the conventional beam former as if they were the hydrophone time series of an equivalent physical array. Thus, the basic processing steps include time-series segmentation, overlap, and grouping of five discontinuous segments, which are provided at the input of the overlap correlator, as shown by the group of blocks in the top part of Fig. 13. T = M/f_s is the length in seconds of the discontinuous segmented time series, and M defines the size of the FFT. The rest of the blocks provide the indexing details for the formation of the synthetic aperture. These indexes also provide details for the implementation of the segmentation process of the synthetic-aperture flow in a generic computing architecture.

The processing arrangements and the indexes in Fig. 14 provide the details needed for the mapping of this synthetic-aperture processing scheme in a sonar computing architecture. The basic processing steps include the following.

1) Time-series segmentation, overlap, and grouping of five discontinuous segments, which are provided at the input of the overlap correlator, shown by the block in the top part of the schematic diagram. Details of this segmentation process are shown also in Fig. 13.

2) The main block, "ETAM: overlap correlator," provides processing details for the estimation of the phase-correction factor used to form the synthetic aperture, (39) and (40).

3) Formation of the continuous sensor time series of the synthetic aperture is obtained through IFFT, discarding of overlap, and concatenation of segments to form continuous time series, shown in the left-hand block.

It is important to note here that the choice of five discontinuous segments was based on experimental observations [18] regarding the temporal and spatial coherence properties


Fig. 12. Schematic diagram for the processing details of adaptive beam formers. The basic processing steps include: 1) time-series segmentation and overlap (block at the top right of the schematic diagram); 2) FFT of segmented time series, formation of the cross-covariance spectral density matrix, and its inversion using Cholesky factorization (middle and bottom blocks on the left-hand side); 3) estimation of adaptive steering vectors and formation of adaptive beams in the frequency domain (middle and bottom blocks in the middle part of the schematic diagram); and 4) formation of adaptive beams in the time domain through IFFT, discarding of overlap, and concatenation of segments to form continuous beam time series (left-hand block). The various indexes provide details for the implementation of the adaptive processing flow in a generic computing architecture.

of the underwater medium. These issues of coherence are very critical for synthetic-aperture processing and have been addressed in Section III-B.

B. Comments on Computing Architecture Requirements

The implementation of this investigation's nonconventional processing schemes in sonar and radar systems is a nontrivial issue. In addition to the selection of the appropriate algorithms, success is heavily dependent on the availability of suitable computing architectures.

Past attempts to implement matrix-based signal-processing methods, such as the adaptive beam formers reported in this paper, were based on the development of systolic-array hardware, because systolic arrays allow large amounts of parallel computation to be performed efficiently since communications occur locally. None of these ideas is new. Unfortunately, systolic arrays have been much less successful in practice than in theory. The fixed-size problem for which it makes sense to build a specific array is rare. Systolic arrays big enough for real problems cannot fit on one board, much less one chip, and the interconnects present problems. A 2-D systolic-array implementation would be even more difficult. So any new computing-architecture development should provide high throughput for vector- as well as matrix-based processing schemes.

A fundamental question, however, that must be addressed at this point is whether it is worthwhile to attempt to develop a system architecture that can compete with a multiprocessor using stock microprocessors. Although recent microprocessors use advanced architectures, improvement of their performance comes at a heavy cost in design complexity, which grows dramatically with the number of instructions that can be executed concurrently. Moreover, recent microprocessors that claim high performance in terms of peak rates of millions of floating-point operations per second usually have a net throughput that is much lower, and their memory architectures are targeted toward general-purpose code.

The above issues establish the requirement for dedicated architectures, such as in the area of operational sonar systems. Sonar applications are computationally intensive, as shown in Section IV of this paper, and they require


Fig. 13. Schematic diagram of the data flow for the ETAM algorithm and the sensor time-series segmentation into a set of five discontinuous segments for the overlap correlator. The basic processing steps include time-series segmentation, overlap, and grouping of five discontinuous segments, which are provided at the input of the overlap correlator (shown by the group of blocks in the top part of the schematic diagram). T = M/f_s is the length in seconds of the discontinuous segmented time series and M defines the size of the FFT. The rest of the blocks provide the indexing details for the formation of the synthetic aperture. These indexes provide details for the implementation of the segmentation process of the synthetic-aperture flow in a generic computing architecture. The processing flow is shown in Fig. 14.

high throughput on large data sets. It is our understanding that the Canadian Department of National Defence recently supported work for a new sonar computing architecture called the next-generation signal processor (NGSP) [10]. We believe that the NGSP has established the hardware configuration to provide the required processing power for the implementation and real-time testing of nonconventional beam formers such as those reported in this paper.

A detailed discussion about the NGSP, however, is beyond the scope of this paper. A brief overview of this new signal processor can be found in [10]. Other advanced computing architectures that can cover the throughput requirements of computationally intensive signal-processing applications, such as those discussed in this manuscript, have been developed by Mercury Computer Systems [104]. Based on the experience of the author, the suggestion is that implementation efforts of advanced signal-processing concepts should be directed more toward the development of generic signal-processing structures, as in Fig. 11, rather than toward the development of very expensive computing architectures. Moreover, the signal-processing flow of advanced processing schemes that include both scalar and vector operations should be very well defined in order to address practical implementation issues.


Fig. 14. Schematic diagram for the processing arrangements of the ETAM algorithm. The basic processing steps include the following. 1) Time-series segmentation, overlap, and grouping of five discontinuous segments, which are provided at the input of the overlap correlator, shown by the block in the top part of the schematic diagram. Details of the segmentation process are shown also in Fig. 13. 2) The main block, "ETAM: overlap correlator," provides processing details for the estimation of the phase-correction factor used to form the synthetic aperture. 3) Formation of the continuous sensor time series of the synthetic aperture is obtained through IFFT, discarding of overlap, and concatenation of segments to form continuous time series (left-hand block). The various indexes provide details for the implementation of the synthetic processing flow in a generic computing architecture.

In this paper, we have addressed the issue of computing-architecture requirements by defining generic concepts of the signal-processing flow for integrated active–passive sonar systems, including adaptive and synthetic-aperture signal-processing schemes. The schematic diagrams in Figs. 6, 7, and 11–13 that show the signal-processing flow of the advanced processing concepts of this study provide a realistic effort in that regard. Their implementation can be carried out in existing computer architectures [10], [104] as well as in a network of general-purpose computer workstations that support both scalar and vector operations.

V. CONCEPT DEMONSTRATION: EXPERIMENTAL RESULTS

The real data sets that have been used in this study to test the implementation configuration of the above nonconventional processing schemes come from two kinds of experimental setups. The first one included sets of experimental data representing an acoustic field consisting of the tow ship's self-noise and reference narrow-band CW's, as well as broad-band signals such as hyperbolic frequency-modulated (HFM) and pseudorandom transmitted waveforms from a deployed source. The absence of other noise sources, as well as of noise from distant shipping, during these experiments makes this set of experimental data very appropriate for concept demonstration. This is because there are only a few known signals in the received hydrophone time series, and this allows effective testing of the performance of the above generic signal-processing structure by examining various possibilities of artifacts that could be generated by the nonconventional beam formers.


Fig. 15. Conventional beam former's LOFAR narrow-band output. The 25 windows of this display correspond to the 25 steered beams equally spaced in [1, −1] cosine space. The acoustic field included three narrow-band signals. Very weak indications of the CW signal of interest are shown in beams 21–24.

In the second experimental setup, the received hy-drophone data represent an acoustic field consisting ofthe reference CW, HFM, and broad-band signals from thedeployed source that are embodied in a highly correlatedacoustic noise field, including narrow- and broad-bandnoise from heavy shipping traffic. During the experiments,signal conditioning and continuous recording on a high-performance digital recorder was provided by a real-timedata system.

The generic signal-processing structure presented in Fig. 11 and the associated signal-processing algorithms (MVDR, GSC, STMV, ETAM, matched filter) were implemented in a workstation supporting a UNIX operating system and FORTRAN and C compilers.

Although the CPU power of the workstation was not sufficient for real-time signal-processing response, the memory of the workstation supporting the signal-processing structure of Fig. 11 was sufficient to allow processing of continuous hydrophone time series up to three hours long. Thus, the output results of the above generic signal-processing structure were equivalent to those that would have been provided by a real-time system including the implementation of the signal-processing schemes discussed in this paper.

The results presented in this section are divided into two parts. The first part discusses passive narrow- and broad-band towed array sonar applications. The scope here is to evaluate the performance of the adaptive and synthetic-aperture beam-forming techniques and to assess their ability to track and localize narrow- and broad-band signals of interest while suppressing strong interferers. The impact and merits of these techniques will be contrasted with the localization and tracking performance obtained using the conventional beam former.

The second part of this section presents results from active towed array sonar applications. The aim here is to evaluate the performance of the adaptive and synthetic-aperture beam formers in a matched-filter processing environment.

A. Passive Towed Array Sonar Applications

1) Narrow-Band Acoustic Signals: The display of narrow-band bearing estimates according to a LOFAR presentation arrangement is shown in Figs. 15–17. Twenty-five beams equally spaced in [1, −1] cosine space were steered for the conventional, adaptive, and synthetic-aperture beam-forming processes.



Fig. 16. Synthetic-aperture (ETAM algorithm) LOFAR narrow-band output. The processed sensor time series are the same as those of Fig. 15. The basic difference between the LOFAR-gram results of the conventional beam former in Fig. 15 and those of the synthetic-aperture beam former is that the improved directionality (array gain) of the nonconventional beam former localizes the detected narrow-band signals in a smaller number of beams than the conventional beam former. For the synthetic-aperture beam former, this is translated into a better tracking and localization performance for detected narrow-band signals, as shown in Figs. 18 and 19.

The wavelength of the reference CW signal was approximately equal to 1/6 of the aperture size of the deployed line array. The power level of the above CW signal was in the range of 130 dB re 1 μPa, and the distance between the source and the receiver was on the order of O(10) nm. The water depth in the experimental area was 1000 m, and the deployment depths of the source and the array receiver were approximately 100 m.
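As an illustration of how the 25 conventional beams of Fig. 15 can be steered in cosine space, the short fragment below forms frequency-domain beam powers for a line array. The sensor count, spacing, sound speed, and frequency are assumed values for the sketch and not the parameters of the experiment.

```python
import numpy as np

c = 1500.0            # assumed sound speed (m/s)
f = 330.0             # assumed frequency bin of interest (Hz)
n_sensors = 32        # assumed number of hydrophones
d = c / (2.0 * f)     # assumed half-wavelength sensor spacing (m)

cosines = np.linspace(1.0, -1.0, 25)     # 25 beams equally spaced in cosine space
positions = np.arange(n_sensors) * d     # sensor positions along the line array

def conventional_beam_powers(x_f):
    # x_f: complex FFT values of the n_sensors hydrophones at frequency f.
    # Returns one conventional (delay-and-sum) beam power per steering cosine.
    steering = np.exp(-2j * np.pi * f * np.outer(cosines, positions) / c)
    beams = steering.conj() @ x_f / n_sensors
    return np.abs(beams) ** 2
```

Repeating this operation for every frequency bin of interest and displaying the narrow-band beam powers as a function of time yields the LOFAR-gram arrangement of Figs. 15–17.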

Fig. 15 presents the conventional beam former's LOFAR output. At this particular moment, we started to lose detection of the above reference CW signal tonal. Very weak indications of the presence of this CW signal are shown in beams 21–24 of Fig. 15. In Figs. 16 and 17, the LOFAR outputs of the synthetic-aperture and the partially adaptive subaperture MVDR processing schemes are shown for the same set of data as that of Fig. 15. In particular, Fig. 16 shows the synthetic-aperture (ETAM algorithm) LOFAR narrow-band output, which indicates that the basic difference between the LOFAR-gram results of the conventional beam former in Fig. 15 and those of the synthetic-aperture beam former is that the improved directionality (array gain) of the nonconventional beam former localizes the detected narrow-band signals in a smaller number of beams than the conventional beam former. For the synthetic-aperture beam former, this is translated into a better tracking and localization performance for detected narrow-band signals, as shown in Figs. 18 and 19.

Fig. 17 presents the subaperture MVDR beam former's LOFAR narrow-band output. In this case, the processed sensor time series are the same as those of Figs. 15 and 16. However, the sharpness of the adaptive beam former's LOFAR output was not as good as that of the conventional and synthetic-aperture beam former. This indicated loss of temporal coherence in the adaptive beam time series, which was caused by nonoptimum performance and poor convergence of the adaptive algorithm. The end result was poor tracking of detected narrow-band signals by the adaptive schemes, as shown in Fig. 20.

The narrow-band LOFAR results from the subaperture GSC and STMV adaptive schemes were almost identical with those of the subaperture MVDR scheme, shown in Fig. 17.



Fig. 17. Subaperture MVDR beam former's LOFAR narrow-band output. The processed sensor time series are the same as those of Figs. 15 and 16. Even though the angular resolution performance of the subaperture MVDR scheme in this case was better than that of the conventional beam former, the sharpness of the adaptive beam former's LOFAR output was not as good as that of the conventional and synthetic-aperture beam former. This indicated loss of temporal coherence in the adaptive beam time series, which was caused by nonoptimum performance and poor convergence of the adaptive algorithm. The end result was poor tracking of detected narrow-band signals by the adaptive schemes, as shown in Fig. 20.

For the adaptive beam formers, the number of iterations for the exponential averaging of the sample covariance matrix was approximately five to ten snapshots [according to (38)]. Thus, for narrow-band applications, the shortest convergence period of the subaperture adaptive beam formers was on the order of 60–80 s, while for broad-band applications, the convergence period was on the order of 3–5 s.
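The exponentially averaged sample covariance matrix mentioned above can be updated recursively. The sketch below shows a generic form of such an update; the paper's exact recursion is its (38), which is not reproduced here, and the value of the averaging constant is an assumption.

```python
import numpy as np

def update_covariance(R_prev, snapshot, mu=0.15):
    # R_prev   : previous covariance estimate (n x n, complex)
    # snapshot : current sensor (or subaperture) spectral snapshot at the
    #            frequency bin of interest (length n, complex)
    # mu       : averaging constant; the effective memory is roughly 1/mu
    #            snapshots, e.g., mu of about 0.1-0.2 for the 5-10
    #            snapshots quoted above for the subaperture schemes.
    return (1.0 - mu) * R_prev + mu * np.outer(snapshot, snapshot.conj())
```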

Even though the angular resolution performance of the adaptive schemes (MVDR, GSC, STMV) in element space for the above narrow-band signal was better than that of the conventional beam former, the sharpness of the adaptive beam formers' LOFAR output was not as good as that of the conventional and synthetic-aperture beam former. Again, this indicated loss of temporal coherence in the adaptive beam time series, which was caused by nonoptimum performance and poor convergence of the above adaptive schemes when their implementation was in element space.

Loss of coherence is evident in the LOFAR outputs because the generic beam-forming structure in Fig. 11 includes coherent temporal spectral analysis of the continuous-beam time series for narrow-band analysis. For the adaptive schemes implemented in element space, the number of iterations for the adaptive exponential averaging of the sample covariance matrix was 200 snapshots [according to (38)]. In particular, the MVDR element-space method required a very long convergence period on the order of 3000 s. In cases where this convergence period was reduced, the MVDR element-space LOFAR output was populated with artifacts [23]. However, the performance of the adaptive schemes of this study (MVDR, GSC, STMV) improved significantly when their implementation was carried out under the subaperture configuration, shown in Fig. 7.




Fig. 18. Signal following of bearing estimates from the conventional beam-forming LOFAR narrow-band outputs and the synthetic aperture (ETAM algorithm). Solid line shows the true values of the source's bearing. The wavelength of the detected CW was equal to 1/3 of the aperture size L of the deployed array. For reference, the tracking of bearing from conventional beam-forming LOFAR outputs of another CW with wavelength equal to 1/16 of the towed array's aperture is shown in the lower part.

Apart from the presence of the CW signal within the conventional LOFAR display, only two more narrow-band signals, with wavelengths approximately equal to , were detected. No other signals were expected to be present in the acoustic field, and this is confirmed by the conventional narrow-band output of Fig. 15, which has white-noise characteristics. This kind of simplicity in the received data is essential for this kind of demonstration process in order to identify the presence of artifacts that could be produced by the various beam formers.

The narrow-band beam-power maps of the LOFAR-grams in Figs. 15–17 form the basic unit of acoustic information that is provided at the input of the data manager of our system for further information extraction. As discussed in Section I-C, one basic function of the data-management algorithms is to estimate the characteristics of signals that have been detected by the beam-forming and spectral-analysis processing schemes, which are shown in Fig. 11. The data-management processing includes signal following or tracking [105], [106] that provides monitoring of the time evolution of the frequency and the associated bearing of detected narrow-band signals.

If the output results from the nonconventional beam formers exhibit improved array-gain characteristics, this kind of improvement should deliver better system tracking performance over that of the conventional beam former. To investigate the tracking performance improvements of the synthetic-aperture and adaptive beam formers, the deployed source was towed along a straight-line course, while the towing of the line array receiver included a few course alterations over a period of approximately three hours. Fig. 19 illustrates this scenario, showing the constant course of the towed source and the course alterations of the vessel towing the line array receiver.

The parameter estimation process for tracking the bearing of detected sources consisted of peak picking in a region of bearing and frequency space sketched by fixed gate sizes in the LOFAR-gram outputs of the conventional and nonconventional beam formers. Details about this estimation process can be found in [107]. Briefly, the choice of the gate sizes was based on the observed bearing and frequency fluctuations of a detected signal of interest during the experiments. Parabolic interpolation was used to provide refined bearing estimates [108]. For this investigation, the bearings-only tracking process described in [107] was used as a narrow-band tracker, providing unsmoothed time evolution of the bearing estimates to the localization process [105], [109]. The localization process of this study was based on a recursive extended Kalman filter formulated in Cartesian coordinates. Details about this localization process can be found in [107] and [109].
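For reference, the parabolic-interpolation step of the peak-picking process can be sketched as follows. This is the standard three-point refinement and is only meant to illustrate the kind of operation described in [107] and [108], not their exact implementation.

```python
def refine_peak(beam_power, k):
    # beam_power : beam-power values across the steered beams for one
    #              frequency bin and analysis interval (list or array)
    # k          : index of the discrete peak, with 0 < k < len(beam_power) - 1
    # Returns a fractional beam index; mapping it through the beam spacing
    # in cosine space gives the refined bearing estimate passed to the
    # bearings-only tracker.
    y0, y1, y2 = beam_power[k - 1], beam_power[k], beam_power[k + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:
        return float(k)
    return k + 0.5 * (y0 - y2) / denom
```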

The solid line in Fig. 18 shows the expected bearings of a detected CW signal with respect to the towed array receiver. The dots in the same figure represent the tracking results of bearing estimates from LOFAR data provided by the synthetic-aperture and conventional beam formers. The middle part of Fig. 18 illustrates the tracking results of the synthetic-aperture beam former.





Fig. 19. (a) Signal following of bearing estimates from conventional beam-forming and synthetic-aperture (ETAM algorithm) LOFAR narrow-band outputs. Solid line shows the true values of the source's bearing. (b) Localization estimates that were based on the bearing tracking results shown in (a).

In this case, the wavelength of the narrow-band CW signal was approximately equal to 1/3 of the aperture size of the deployed towed array (DI dB). For this very low frequency CW signal, the tracking performance of the conventional beam former was very poor, as shown by the upper part of Fig. 18. To provide a reference, the tracking performance of the conventional beam former for a CW signal having a wavelength approximately equal to 1/16 of the aperture size of the deployed array (DI dB) is shown in the lower part of Fig. 18.

Localization estimates for the acoustic source transmitting the CW signal with wavelength approximately equal to 1/3 of the aperture size were derived only from the synthetic-aperture tracking results, shown in the middle part of Fig. 18. In contrast to these results, the conventional beam former's localization estimates did not converge because the variance of the associated bearing tracking results was very large, as indicated by the results of the upper part of Fig. 18. As expected, the conventional beam former's localization estimates for the higher frequency CW signal converged to the expected solution. This is because the system array gain in this case was higher (DI dB), resulting in better bearing tracking performance with a very small variance in the bearing estimates.



The tracking and localization performance of the synthetic-aperture and conventional beam-forming techniques was also assessed from other sets of experimental data. In this case, the towing of the line array receiver included only one course alteration over a period of approximately 30 min. Fig. 19 presents a summary of the tracking and localization results from this experiment. Fig. 19(a) shows tracking of the bearing estimates provided by the synthetic-aperture and conventional beam-forming LOFAR-gram outputs. Fig. 19(b) presents the localization estimates derived from the corresponding tracking results.

It is apparent from the results of Figs. 18 and 19 that the synthetic-aperture beam former improves the array gain of small-size array receivers, and this improvement is translated into better signal-tracking and target-localization performance than that of the conventional beam former.

With respect to the tracking performance of the narrow-band adaptive beam formers, our experience is that during course alterations, the tracking of bearings from the narrow-band adaptive beam-power outputs was very poor. As an example, Fig. 20(b) shows the subaperture MVDR adaptive beam former's bearing tracking results for the same set of data as in Fig. 19. It is clear in this case that the changes of the towed array's heading are highly correlated with the deviations of the adaptive beam former's bearing tracking results from their expected estimates.

Although the angular resolution performance of the adaptive beam former was better than that of the synthetic-aperture processing, the lack of sharpness and the fuzziness and discontinuity in the adaptive LOFAR-gram outputs prevented the signal-following algorithms from tracking the signal of interest [107]. Thus, the subaperture adaptive algorithm should have provided better bearing estimates than those indicated by the output of the bearing tracker, shown in Fig. 20. To address this point, we plotted the subaperture MVDR bearing estimates as a function of time for all the 25 steered beams equally spaced in [1, −1] cosine space for a frequency bin including the signal of interest. Fig. 20(a) shows a waterfall of these bearing estimates. It is apparent in this case that our bearing tracker failed to follow the narrow-band bearing outputs of the adaptive beam former. Moreover, the results in Fig. 20(a) suggest signal fading and performance degradation for the narrow-band adaptive processing during certain periods of the experiment.

Our explanation for this performance degradation is twofold. First, the drastic changes in the noise field, due to a course alteration, would require a large number of iterations for the adaptive process to converge. Second, since the associated sensor coordinates of the towed array shape deformation had not been considered in the steering vector in (23), (27), and (35), this omission induced erroneous estimates in the noise covariance matrix during the iteration process of the adaptive processing. If a towed array shape estimation algorithm had been included in this case, the adaptive process would have provided better bearing tracking results than those shown in Fig. 20 [77], [107].



Fig. 20. (a) Narrow-band bearing estimates as a function of time for the subaperture MVDR and for a frequency bin including the signal of interest. These narrow-band bearing estimates are for all the 25 steered beams equally spaced in [1, −1] cosine space. (b) Tracking of bearing estimates from subaperture MVDR LOFAR narrow-band outputs. Solid line shows the true values of bearing.

For the broad-band adaptive results, however, the situation is completely different, and this is addressed in the following section.

2) Broad-Band Acoustic Signals: Fig. 21 shows the conventional and subaperture adaptive broad-band bearing estimates as a function of time for a set of data representing an acoustic field consisting of radiated noise from distant shipping in acoustic conditions typical of sea state two to four. The experimental area here is different from that of the processed data presented in Figs. 15–20. The processed frequency regime for the broad-band bearing estimation was the same for both the conventional and the partially adaptive subaperture MVDR, GSC, and STMV processing schemes. Since the beam-forming operations in this study are carried out in the frequency domain, the low-frequency resolution in this case was on the order of O(10). That resulted in very short convergence periods for the partially adaptive beam former, on the order of a few seconds.




Fig. 21. Broad-band bearing estimates for a two-hour-long set of data. (a) Output from conventional beam former. (b) Output from subaperture MVDR beam former. Solid red lines show signal tracking results for the broad-band bearing estimates provided by the conventional and subaperture MVDR beam formers. These results show a superior signal detection and tracking performance for the broad-band adaptive scheme compared with that of the conventional beam former. This performance difference was consistent for a wide variety of real data sets.


Fig. 21(a) shows the conventional broad-band bearing estimates, and Fig. 21(b) shows the partially adaptive broad-band estimates for a two-hour-long set of data. Although the received noise level of a distant vessel was very low, the adaptive beam former detected this target at the time-space position (240°, 6300 s) in Fig. 21, something that the conventional beam former failed to show. In addition, the subaperture adaptive outputs resolved two closely spaced broad-band signal arrivals at the space-time position (340°, 3000 s), while the conventional broad-band output shows only an indication that two targets may be present at this space-time position.

It is evident from these results that the subaperture adaptive schemes of this study provide better detection (than the conventional beam former) of weak signals in the presence of strong signals. For the above set of data, shown in Fig. 21, broad-band bearing tracking results (for a few broad-band signals at bearings 245°, 265°, and 285°) are shown by the red solid lines for both the adaptive and conventional broad-band outputs. As expected, the signal followers of the conventional beam former lost track of the broad-band signal with bearing 240° at the time position (240°, 6300 s). On the other hand, the trackers of the subaperture adaptive beam formers did not lose track of this target, as shown by the results in Fig. 21(b).



At this point, it is important to note that the broad-band outputs of the subaperture MVDR, GSC, and STMV adaptive schemes were almost identical.

It is apparent from these results that the partially adaptive subaperture beam formers have better performance than the conventional beam former in detecting very weak signals. In addition, the subaperture adaptive configuration has demonstrated equivalence to the conventional beam former's dynamic response in tracking targets during the tow vessel's course alterations. For the above set of data, localization estimates based on the broad-band bearing tracking results of Fig. 21 converged to the expected solution for both the conventional and adaptive processing beam outputs.

Given the fact that the broad-band adaptive beam former exhibits better detection performance than the conventional method, as shown by the results of Fig. 21 and other data sets that are not reported here, it is concluded that for broad-band signals, the subaperture adaptive beam formers of this study provide significant improvements in array gain that result in better tracking and localization performance than that of the conventional signal-processing scheme.

At this point, questions may be raised about the differences in bearing tracking performance of the adaptive beam formers for narrow- and broad-band applications. It appears that the broad-band subaperture adaptive beam formers, acting as energy detectors, exhibit very robust performance because the incoherent summation of the beam powers for all the frequency bins in a wide band of interest removes the fuzziness of the narrow-band adaptive LOFAR-gram outputs, shown in Fig. 17. However, a signal follower capable of tracking fuzzy narrow-band signals [27] in LOFAR-gram outputs should remedy the observed instability in the bearing tracking for the adaptive narrow-band beam outputs. In addition, towed array shape estimators should also be included because the convergence period of the narrow-band adaptive processing is of the same order as the period associated with the course alterations of the towed array operations. None of these remedies is required for broad-band adaptive beam formers because of their proven robust performance as energy detectors and the short convergence periods of the adaptation process during course alterations.
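The energy-detector behavior described above amounts to an incoherent summation of the narrow-band beam powers over the processed band. A minimal sketch of this step is given below; the array shapes and names are illustrative assumptions, not the system's data structures.

```python
import numpy as np

def broadband_bearing_estimate(beam_power_fk, cosines):
    # beam_power_fk : narrow-band beam powers of shape (num_freq_bins, num_beams)
    #                 for one analysis interval (conventional or adaptive)
    # cosines       : steering cosines of the beams
    # Incoherent summation over frequency gives one broad-band power per beam;
    # the peak beam provides the broad-band bearing estimate.
    broadband_power = beam_power_fk.sum(axis=0)
    return cosines[int(np.argmax(broadband_power))], broadband_power
```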

B. Active Towed Array Sonar Applications

It was discussed in Section IV-A that the configuration of the generic beam-forming structure to provide continuous-beam time series at the input of a matched filter and a temporal spectral analysis unit forms the basis for integrated passive and active sonar applications. Before the adaptive and synthetic-aperture processing schemes are integrated with a matched filter, however, it is essential to demonstrate that the beam time series from the output of these nonconventional beam formers have sufficient temporal coherence and correlate with the reference signal. For example, if the received signal by a sonar array consists of FM-type pulses with a repetition rate of a few minutes, then questions may be raised about the efficiency of an adaptive beam former to achieve near-instantaneous convergence in order to provide beam time series with coherent content for the FM pulses. This is because partially adaptive processing schemes require at least a few iterations to converge to a suboptimum solution.

To address this question, the matched filter and the nonconventional processing schemes, shown in Fig. 11, were tested with real data sets, including HFM pulses 8 s long with a 100-Hz bandwidth. The repetition rate was 120 s. Although this may be considered a configuration for bistatic active sonar applications, the findings from this experiment can be applied to monostatic active sonar systems as well.

Figs. 22 and 23 present some experimental results from the output of the active unit of the generic signal-processing structure. Fig. 22 shows the output of the replica correlator for the conventional, subaperture MVDR adaptive, and synthetic-aperture beam time series. The horizontal axis in this figure represents range, or time delay, of 0–120 s, which is the repetition rate of the HFM pulses. While the three beam-forming schemes provide artifact-free outputs, it is apparent from the values of the replica correlator output that the conventional beam time series exhibit better temporal coherence properties than the beam time series of the synthetic-aperture and subaperture adaptive beam formers. The significance and a quantitative estimate of this difference can be assessed by comparing the amplitudes of the normalized correlation outputs in Fig. 22. In this case, the amplitudes of the replica correlator outputs are 0.32, 0.28, and 0.29 for the conventional, adaptive, and synthetic-aperture beam formers, respectively.
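To fix ideas, the fragment below builds an HFM replica with the pulse length and bandwidth quoted above and computes a normalized replica correlation of the kind whose peak amplitudes are compared here. The sampling rate, the start frequency of the sweep, and the normalization are assumptions of the sketch rather than the system's actual parameters.

```python
import numpy as np

fs = 2000.0            # assumed sampling rate (Hz)
T = 8.0                # pulse length (s), as quoted above
bw = 100.0             # bandwidth (Hz), as quoted above
f0 = 300.0             # assumed start frequency of the HFM sweep (Hz)
f1 = f0 + bw

t = np.arange(0.0, T, 1.0 / fs)
k = (f1 - f0) / (T * f0 * f1)   # hyperbolic sweep parameter
replica = np.cos(-2.0 * np.pi / k * np.log(1.0 - k * f0 * t))

def normalized_replica_correlation(beam_ts, replica):
    # Sliding normalized correlation (matched filter) of one beam time
    # series against the replica; output values lie between -1 and 1.
    num = np.correlate(beam_ts, replica, mode="valid")
    energy = np.convolve(beam_ts ** 2, np.ones(len(replica)), mode="valid")
    return num / (np.sqrt(energy) * np.linalg.norm(replica) + 1e-12)
```

Applying the same correlation to the conventional, adaptive, and synthetic-aperture beam time series of a given steering direction is what produces the outputs compared in Figs. 22 and 23.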

This difference in performance, however, was expected, because for the synthetic-aperture processing scheme to achieve optimum performance, the reference signal is required to be present in the five discontinuous snapshots that are being used by the overlap correlator to synthesize the synthetic aperture. So if a sequence of five HFM pulses had been transmitted with a repetition rate equal to the time interval between the above discontinuous snapshots, then the coherence of the synthetic-aperture beam time series would have been equivalent to that of the conventional beam former. Normally, this kind of requirement restricts the detection ranges for incoming echoes. To overcome this limitation, a combination of the pulse length, the desired synthetic-aperture size, and the detection ranges should be derived based on the aperture size of the deployed array. A simple application scenario illustrating the concept of this combination is a side-scan sonar system that deals with predefined ranges.

Although for the adaptive beam time series in Fig. 22 a suboptimum convergence was achieved within two to three iterations, the arrangement of the transmitted HFM pulses in this experiment was not an optimum configuration, because the subaperture beam former had to achieve near-instantaneous convergence with a single snapshot. Our simulations suggest that a suboptimum solution for the subaperture MVDR adaptive beam former is possible if the active sonar transmission consists of a continuous sequence of active pulses.






Fig. 22. Output of replica correlator for the beam series of the generic beam-forming structure shown in Fig. 10. The processed hydrophone time series included received HFM pulses transmitted from the acoustic source, 8 s long with 100 Hz bandwidth and 120 s repetition rate. (a) Replica correlator output for conventional beam time series. (b) Replica correlator output for subaperture MVDR beam time series. (c) Replica correlator output for synthetic-aperture (ETAM algorithm) beam time series.

In this case, the number of pulses in a sequence should be a function of the number of subapertures of Fig. 7, and the repetition rate of this group of pulses should be a function of the detection ranges of operational interest.

The near-instantaneous convergence characteristics of the other two adaptive beam formers, however, namely the GSC and the STMV schemes, are better than those of the subaperture MVDR scheme.

Fig. 23. Output of replica correlator for the beam series of the conventional and the subaperture MVDR, GSC, and STMV adaptive schemes of the generic beam-forming structure shown in Fig. 10. The processed hydrophone time series are the same as those of Fig. 22.

Fig. 23 shows the replica correlator output for the same set of data as in Fig. 22 and for the beam series of the conventional and subaperture MVDR, GSC, and STMV adaptive schemes.

Even though the beam-forming schemes of this study provide artifact-free outputs, it is apparent from the values of the replica correlator outputs, shown in Figs. 22 and 23, that the conventional beam time series exhibit better temporal coherence properties than the beam time series of the adaptive beam formers, except for the subaperture STMV scheme. The significance and a quantitative estimate of this difference can be assessed by comparing the amplitudes of the correlation outputs in Fig. 23. In this case, the amplitudes of the replica correlator outputs are 10.51, 9.65, 9.01, and 10.58 for the conventional and adaptive schemes (GSC in element space, GSC-SA, and STMV-SA), respectively. These results show that the beam time series of the STMV subaperture scheme have achieved temporal coherence properties equivalent to those of the conventional beam former, which is the optimum case.

Normalization and OR-ing of matched filter outputs, such as those of Figs. 22 and 23, and their display in a LOFAR-gram arrangement would provide a waterfall display of ranges as a function of beam steering and time. Fig. 24 shows these results for the correlation outputs of the conventional and adaptive beam time series for beam 23. It should be noted that this figure includes approximately two hours of processed data.



Fig. 24. Waterfall display of replica correlator outputs as a function of time for the same conventional and adaptive beam time series as those of Fig. 23. It should be noted that this figure includes approximately two hours of processed data. The detected HFM pulses and their associated ranges are clearly shown in beam 23. A reflection from the side walls of an underwater canyon in the area is visible as a second echo closely spaced with the main arrival.

The detected HFM pulses and their associated ranges are clearly shown in beam 23. A reflection from the side walls of an underwater canyon in the area is visible as a second echo closely spaced with the main arrival.

In summary, the basic difference between the LOFAR-gram results of the adaptive schemes and those of the conventional beam time series is that the improved directionality of the nonconventional beam formers localizes the detected HFM pulses in a smaller number of beams than the conventional beam former. Although we do not present here the LOFAR-gram correlation outputs for all the 25 beams, a picture displaying the 25 beam outputs would confirm the above statement regarding the directionality improvements of the adaptive schemes with respect to the conventional beam former. Moreover, it is anticipated that the directional properties of the nonconventional beam formers would suppress the reverberation levels encountered during active sonar operations. Thus, if there are going to be advantages regarding the implementation of the above nonconventional beam formers in active sonar applications, it is expected that these advantages would include minimization of the impact of reverberation by means of improved directionality. More specifically, the improved directionality of the nonconventional beam formers would restrict the reverberation effects of active sonars to a smaller number of beams than that of the conventional beam former. This improved directionality would enhance the ability of an active sonar system (including nonconventional beam formers) to detect echoes located near the beams that are populated with reverberation effects.

VI. CONCLUSION

The experimental results of this study were derived from a wide variety of CW, broad-band, and HFM-type strong and weak acoustic signals. The fact that the adaptive and synthetic-aperture beam formers provided improved detection and tracking performance (Figs. 15–22) for the above types of signals, under the same real-time data flow as the conventional beam former, demonstrates the merits of these nonconventional processing schemes for sonar applications. In addition, the generic implementation scheme of this study suggests that the design approach to provide synergism between the conventional beam former and the adaptive and synthetic-aperture processing schemes could probably provide some answers to the integrated active and passive sonar problem in the near future.

Although the focus of the implementation effort included only adaptive and synthetic-aperture processing schemes, the consideration of other types of nonlinear processing schemes for real-time sonar and radar applications should not be excluded. The objective here was to demonstrate that nonconventional processing schemes can address some of the challenges that the next-generation active–passive sonars as well as radar systems will have to deal with in the near future. Once a computing architecture and a generic signal-processing structure are established, such as those suggested in this paper, the implementation of a wide variety of nonlinear processing schemes in real-time sonar and radar systems can be achieved with minimum effort.

In conclusion, the above results suggest that the broad-band outputs of the subaperture adaptive processing schemes (Fig. 21) and the narrow-band synthetic-aperture LOFAR-grams (Fig. 16) exhibit very robust performance (under the prevailing experimental conditions) and that their array-gain improvements provide better signal-tracking and target-localization estimates (Figs. 18, 19, and 21) than the conventional processing schemes. It is worth noting also that the reported improvements in performance of the above nonconventional beam formers compared with that of the conventional beam former have been consistent for a wide variety of real data sets. For the implementation configuration of the adaptive schemes in element space, however, the narrow-band adaptive implementation requires very long convergence periods, which makes the application of the adaptive processing schemes in element space impractical. This is because the associated long convergence periods destroy the dynamic response of the beam-forming process, which is very essential during course alterations and for cases that include targets with dynamic changes in their bearings.

Last, the experimental results of this study indicate that the subaperture GSC and STMV adaptive schemes address the practical concerns of near-instantaneous convergence associated with the implementation of adaptive beam formers in integrated active–passive sonar systems.



ACKNOWLEDGMENT

The author wishes to express his appreciation to Dr. P. Bhartia, Director General of Defense Research Establishment Atlantic, for his support and for allowing the use of real experimental data to support the work reported in this paper. The author is also grateful to Computing Devices Canada, MacDonald Dettwiler (Canada), and Array Systems Computing (Canada) for their support and encouragement.

REFERENCES

[1] W. C. Knight, R. G. Pridham, and S. M. Kay, "Digital signal processing for sonar," Proc. IEEE, vol. 69, pp. 1451–1506, Nov. 1981.
[2] B. Widrow, P. E. Mantey, L. J. Griffiths, and B. B. Goode, "Adaptive antenna systems," Proc. IEEE, vol. 55, pp. 2143–2159, Dec. 1967.
[3] A. A. Winder, "Sonar system technology," IEEE Trans. Sonics Ultrason., vol. SU-22, no. 5, pp. 291–332, 1975.
[4] A. B. Baggeroer, "Sonar signal processing," in Applications of Digital Signal Processing, A. V. Oppenheim, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1978.
[5] American Standard Acoustical Terminology S1.1-1960, American Standards Association, New York, NY, May 25, 1960.
[6] D. Stansfield, Underwater Electroacoustic Transducers. Bath, England: Bath University Press and Institute of Acoustics, 1990.
[7] J. M. Powers, "Long range hydrophones," in Applications of Ferroelectric Polymers, T. T. Wang, J. M. Herbert, and A. M. Glass, Eds. New York: Chapman & Hall, 1988.
[8] R. I. Urick, Principles of Underwater Acoustics, 3rd ed. New York: McGraw-Hill, 1983.
[9] S. Stergiopoulos and A. T. Ashley, "Guest editorial," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 361–365, 1993.
[10] R. C. Trider and G. L. Hemphill, "The next generation signal processor: An architecture for the future," Defense Research Establishment Atlantic, Dartmouth, N.S., Canada, Rep. DREA/oral 1994/Hemphill/1, 1994.
[11] N. L. Owsley, "Sonar array processing," in Array Signal Processing, S. Haykin, Ed. Englewood Cliffs, NJ: Prentice-Hall, 1985, p. 123.
[12] B. Van Veen and K. Buckley, "Beamforming: A versatile approach to spatial filtering," IEEE Acoust., Speech, Signal Processing Mag., pp. 4–24, Apr. 1988.
[13] A. H. Sayed and T. Kailath, "A state-space approach to adaptive RLS filtering," IEEE Signal Processing Mag., pp. 18–60, July 1994.
[14] E. J. Sullivan, W. M. Carey, and S. Stergiopoulos, "Editorial," IEEE J. Oceanic Eng., vol. 17, no. 1, pp. 1–7, 1992.
[15] N. C. Yen and W. Carey, "Application of synthetic-aperture processing to towed-array data," J. Acoust. Soc. Amer., vol. 86, pp. 754–765, 1989.
[16] S. Stergiopoulos and E. J. Sullivan, "Extended towed array processing by overlapped correlator," J. Acoust. Soc. Amer., vol. 86, no. 1, pp. 158–171, 1989.
[17] S. Stergiopoulos, "Optimum bearing resolution for a moving towed array and extension of its physical aperture," J. Acoust. Soc. Amer., vol. 87, no. 5, pp. 2128–2140, 1990.
[18] S. Stergiopoulos and H. Urban, "An experimental study in forming a long synthetic aperture at sea," IEEE J. Oceanic Eng., vol. 17, no. 1, pp. 62–72, 1992.
[19] G. S. Edelson and E. J. Sullivan, "Limitations on the overlap-correlator method imposed by noise and signal characteristics," IEEE J. Oceanic Eng., vol. 17, no. 1, pp. 30–39, 1992.
[20] G. S. Edelson and D. W. Tufts, "On the ability to estimate narrow-band signal parameters using towed arrays," IEEE J. Oceanic Eng., vol. 17, no. 1, pp. 48–61, 1992.
[21] C. L. Nikias and J. M. Mendel, "Signal processing with higher-order spectra," IEEE Signal Processing Mag., pp. 10–37, July 1993.
[22] S. Stergiopoulos, R. C. Trider, and A. T. Ashley, "Implementation of a synthetic aperture processing scheme in a towed array sonar system," presented at the 127th ASA Meeting, Cambridge, MA, June 1994.
[23] J. Riley, S. Stergiopoulos, R. C. Trider, A. T. Ashley, and B. Ferguson, "Implementation of adaptive beamforming processing scheme in a towed array sonar system," presented at the 127th ASA Meeting, Cambridge, MA, June 1994.
[24] A. B. Baggeroer, W. A. Kuperman, and P. N. Mikhalevsky, "An overview of matched field methods in ocean acoustics," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 401–424, 1993.
[25] R. D. Doolittle, A. Tolstoy, and E. J. Sullivan, "Editorial," IEEE J. Oceanic Eng., vol. 18, pp. 153–155, 1993.
[26] P. K. Simpson, "Editorial," IEEE J. Oceanic Eng., vol. 17, pp. 297–298, Oct. 1992.
[27] A. Kummert, "Fuzzy technology implemented in sonar systems," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 483–490, 1993.
[28] A. D. Whalen, Detection of Signals in Noise. New York: Academic, 1971.
[29] D. Middleton, Introduction to Statistical Communication Theory. New York: McGraw-Hill, 1960.
[30] H. L. Van Trees, Detection, Estimation and Modulation Theory. New York: Wiley, 1968.
[31] P. D. Welch, "The use of fast Fourier transform for the estimation of power spectra: A method based on time averaging over short, modified periodograms," IEEE Trans. Audio Electroacoust., vol. AU-15, pp. 70–79, 1967.
[32] F. J. Harris, "On the use of windows for harmonic analysis with the discrete Fourier transform," Proc. IEEE, vol. 66, pp. 51–83, Jan. 1978.
[33] W. M. Carey and E. C. Monahan, "Guest editorial," IEEE J. Oceanic Eng., vol. 15, no. 4, pp. 265–267, 1990.
[34] R. A. Wagstaff, "Iterative technique for ambient noise horizontal directionality estimation from towed line array data," J. Acoust. Soc. Amer., vol. 63, no. 3, pp. 863–869, 1978.
[35] R. A. Wagstaff, "A computerized system for assessing towed array sonar functionality and detecting faults," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 529–542, 1993.
[36] S. M. Flatte, R. Dashen, W. H. Munk, K. M. Watson, and F. Zachariasen, Sound Transmission Through a Fluctuating Ocean. New York: Cambridge Univ. Press, 1985.
[37] D. Middleton, "Acoustic scattering from composite wind-wave surfaces in bubble-free regimes," IEEE J. Oceanic Eng., vol. 14, pp. 17–75, 1989.
[38] W. A. Kuperman and F. Ingenito, "Attenuation of the coherent component of sound propagating in shallow water with rough boundaries," J. Acoust. Soc. Amer., vol. 61, pp. 1178–1187, 1977.
[39] B. J. Uscinski, "Acoustic scattering by ocean irregularities: Aspects of the inverse problem," J. Acoust. Soc. Amer., vol. 86, pp. 706–715, 1989.
[40] W. M. Carey and W. B. Moseley, "Space-time processing, environmental-acoustic effects," IEEE J. Oceanic Eng., vol. 16, pp. 285–301, 1991; also in Progress in Underwater Acoustics. New York: Plenum, 1987, pp. 743–758.
[41] S. Stergiopoulos, "Limitations on towed-array gain imposed by a nonisotropic ocean," J. Acoust. Soc. Amer., vol. 90, no. 6, pp. 3161–3172, 1991.
[42] W. A. Struzinski and E. D. Lowe, "A performance comparison of four noise background normalization schemes proposed for signal detection systems," J. Acoust. Soc. Amer., vol. 76, no. 6, pp. 1738–1742, 1984.
[43] S. W. Davies and M. E. Knappe, "Noise background normalization for simultaneous broadband and narrowband detection," in Proc. IEEE-ICASSP 88, vol. U315, 1988, pp. 2733–2736.
[44] S. Stergiopoulos, "Noise normalization technique for beamformed towed array data," J. Acoust. Soc. Amer., vol. 97, no. 4, pp. 2334–2345, 1995.
[45] A. H. Nuttall, "Performance of three averaging methods, for various distributions," in Proc. SACLANTCEN Conf. Underwater Ambient Noise, vol. II, 1982.
[46] H. Cox, R. M. Zeskind, and M. M. Owen, "Robust adaptive beamforming," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-35, no. 10, pp. 1365–1376, 1987.
[47] H. Cox, "Resolving power and sensitivity to mismatch of optimum array processors," J. Acoust. Soc. Amer., vol. 54, no. 3, pp. 771–785, 1973.
[48] J. Capon, "High resolution frequency wavenumber spectral analysis," Proc. IEEE, vol. 57, pp. 1408–1418, Aug. 1969.



[49] T. L. Marzetta, "A new interpretation for Capon's maximum likelihood method of frequency-wavenumber spectra estimation," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, no. 2, pp. 445–449, 1983.
[50] S. Haykin, Adaptive Filter Theory. Englewood Cliffs, NJ: Prentice-Hall, 1986.
[51] S. D. Peters, "Near-instantaneous convergence for memoryless narrowband GSC/NLMS adaptive beamformers," IEEE Trans. Acoust., Speech, Signal Processing, submitted for publication.
[52] S. Stergiopoulos, "Influence of underwater environment's coherence properties on sonar signal processing," in Proc. 3rd European Conf. Underwater Acoustics, FORTH-IACM, Heraklion-Crete, 1996, vol. V-I, pp. 453–458.
[53] A. C. Dhanantwari and S. Stergiopoulos, "Adaptive beamforming with near-instantaneous convergence for matched filter processing," J. Acoust. Soc. Amer., submitted for publication.
[54] R. O. Nielsen, Sonar Signal Processing. Norwood, MA: Artech House, 1991.
[55] D. Middleton and R. Esposito, "Simultaneous optimum detection and estimation of signals in noise," IEEE Trans. Inform. Theory, vol. IT-14, pp. 434–444, 1968.
[56] V. H. MacDonald and P. M. Schultheiss, "Optimum passive bearing estimation in a spatially incoherent noise environment," J. Acoust. Soc. Amer., vol. 46, no. 1, pp. 37–43, 1969.
[57] G. C. Carter, "Coherence and time delay estimation," Proc. IEEE, vol. 75, pp. 236–255, Feb. 1987.
[58] C. H. Knapp and G. C. Carter, "The generalized correlation method for estimation of time delay," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-24, pp. 320–327, 1976.
[59] D. C. Rife and R. R. Boorstyn, "Single-tone parameter estimation from discrete-time observations," IEEE Trans. Inform. Theory, vol. IT-20, pp. 591–598, 1974.
[60] D. C. Rife and R. R. Boorstyn, "Multiple-tone parameter estimation from discrete-time observations," Bell Syst. Tech. J., vol. 20, pp. 1389–1410, 1977.
[61] S. Stergiopoulos and N. Allcott, "Aperture extension for a towed array using an acoustic synthetic aperture or a linear prediction method," in Proc. ICASSP 92, Mar. 1992.
[62] S. Stergiopoulos and H. Urban, "A new passive synthetic aperture technique for towed arrays," IEEE J. Oceanic Eng., vol. 17, no. 1, pp. 16–25, 1992.
[63] W. M. X. Zimmer, "High resolution beamforming techniques, performance analysis," SACLANT Undersea Research Centre, La Spezia, Italy, Rep. SR-104, 1986.
[64] A. Mohammed, "Novel methods of digital phase shifting to achieve arbitrary values of time delays," Defense Research Establishment Atlantic, Dartmouth, N.S., Canada, DREA Rep. 85/106, 1985.
[65] A. Antoniou, Digital Filters: Analysis, Design, and Applications, 2nd ed. New York: McGraw-Hill, 1993.
[66] L. R. Rabiner and B. Gold, Theory and Applications of Digital Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1975.
[67] A. Mohammed, "A high-resolution spectral analysis technique," Defense Research Establishment Atlantic, Dartmouth, N.S., Canada, DREA Memo. 83/D, 1983.
[68] B. G. Ferguson, "Improved time-delay estimates of underwater acoustic signals using beamforming and prefiltering techniques," IEEE J. Oceanic Eng., vol. 14, no. 3, pp. 238–244, 1989.
[69] S. Stergiopoulos and A. T. Ashley, "An experimental evaluation of split-beam processing as a broadband bearing estimator for line array sonar systems," J. Acoust. Soc. Amer., to be published.
[70] G. C. Carter and E. R. Robinson, "Ocean effects on time delay estimation requiring adaptation," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 367–378, 1993.
[71] P. Wille and R. Thiele, "Transverse horizontal coherence of explosive signals in shallow water," J. Acoust. Soc. Amer., vol. 50, pp. 348–353, 1971.
[72] P. A. Bello, "Characterization of randomly time-variant linear channels," IEEE Trans. Commun. Syst., vol. CS-10, pp. 360–393, 1963.
[73] D. A. Gray, B. D. O. Anderson, and R. R. Bitmead, "Towed array shape estimation using Kalman filters—Theoretical models," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 543–556, Oct. 1993.
[74] B. G. Quinn, R. S. F. Barrett, P. J. Kootsookos, and S. J. Searle, "The estimation of the shape of an array using a hidden Markov model," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 557–564, Oct. 1993.
[75] B. G. Ferguson, "Remedying the effects of array shape distortion on the spatial filtering of acoustic data from a line array of hydrophones," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 565–571, Oct. 1993.
[76] J. L. Riley and D. A. Gray, "Towed array shape estimation using Kalman filters—Experimental investigation," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 572–581, Oct. 1993.
[77] B. G. Ferguson, "Sharpness applied to the adaptive beamforming of acoustic data from a towed array of unknown shape," J. Acoust. Soc. Amer., vol. 88, no. 6, pp. 2695–2701, 1990.
[78] F. Lu, E. Milios, and S. Stergiopoulos, "A new towed array shape estimation method for sonar systems," J. Acoust. Soc. Amer., submitted for publication.
[79] N. L. Owsley, "Systolic array adaptive beamforming," Naval Underwater Warfare Center, New London, CT, Rep. 7981, Sept. 1987.
[80] D. A. Gray, "Formulation of the maximum signal-to-noise ratio array processor in beam space," J. Acoust. Soc. Amer., vol. 72, no. 4, pp. 1195–1201, 1982.
[81] O. L. Frost, "An algorithm for linearly constrained adaptive array processing," Proc. IEEE, vol. 60, pp. 926–935, Aug. 1972.
[82] H. Wang and M. Kaveh, "Coherent signal-subspace processing for the detection and estimation of angles of arrival of multiple wideband sources," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-33, pp. 823–831, 1985.
[83] J. Krolik and D. N. Swingler, "Bearing estimation of multiple broadband sources using steered covariance matrices," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-37, pp. 1481–1494, 1989.
[84] J. Krolik and D. N. Swingler, "Focused wideband array processing via spatial resampling," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-38, Feb. 1990.
[85] J. P. Burg, "Maximum entropy spectral analysis," presented at the 37th Meeting of the Society of Exploration Geophysicists, Oklahoma City, OK, 1967.
[86] C. Lanczos, Applied Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1956.
[87] V. E. Pisarenko, "On the estimation of spectra by means of nonlinear functions on the covariance matrix," Geophys. J. Astron. Soc., vol. 28, pp. 511–531, 1972.
[88] A. H. Nuttall, "Spectral analysis of a univariate process with bad data points, via maximum entropy and linear predictive techniques," Naval Underwater Warfare Center, New London, CT, Rep. TR5303, 1976.
[89] R. Kumaresan and D. W. Tufts, "Estimating the angles of arrival of multiple plane waves," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 833–840, 1982.
[90] R. A. Wagstaff and J.-L. Berrou, "Underwater ambient noise: Directionality and other statistics," SACLANT Undersea Research Centre, La Spezia, Italy, Rep. SR-59, 1982.
[91] S. M. Kay and S. L. Marple, "Spectrum analysis—A modern perspective," Proc. IEEE, vol. 69, pp. 1380–1419, 1981.
[92] D. H. Johnson and S. R. Degraaf, "Improving the resolution of bearing in passive sonar arrays by eigenvalue analysis," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-30, pp. 638–647, 1982.
[93] D. W. Tufts and R. Kumaresan, "Estimation of frequencies of multiple sinusoids: Making linear prediction perform like maximum likelihood," Proc. IEEE, vol. 70, pp. 975–989, 1982.
[94] G. Bienvenu and L. Kopp, "Optimality of high resolution array processing using the eigensystem approach," IEEE Trans. Acoust., Speech, Signal Processing, vol. ASSP-31, pp. 1235–1248, 1983.
[95] D. N. Swingler and R. S. Walker, "Linear array beamforming using linear prediction for aperture interpolation and extrapolation," IEEE Trans. Acoust., Speech, Signal Processing, vol. 37, pp. 16–30, 1989.
[96] P. Tomarong and A. El-Jaroudi, "Robust high-resolution direction-of-arrival estimation via signal eigenvector domain," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 491–499, 1993.
[97] J. Fawcett, "Synthetic aperture processing for a towed array and a moving source," J. Acoust. Soc. Amer., vol. 93, pp. 2832–2837, 1993.



[98] L. J. Griffiths and C. W. Jim, "An alternative approach to linearly constrained adaptive beamforming," IEEE Trans. Antennas Propagat., vol. AP-30, no. 1, pp. 27–34, 1982.
[99] D. T. M. Slock, "On the convergence behavior of the LMS and the normalized LMS algorithms," IEEE Trans. Acoust., Speech, Signal Processing, vol. 31, pp. 1811–1825, 1993.
[100] A. C. Dhanantwari, "Adaptive beamforming with near-instantaneous convergence for matched filter processing," Master's thesis, Dept. of Electrical Engineering, Technical University of Nova Scotia, Halifax, N.S., Canada, Sept. 1996.
[101] A. Tawfik and S. Stergiopoulos, "A generic processing structure decomposing the beamforming process of 2-D & 3-D arrays of sensors into sub-sets of coherent processes," in Proc. IEEE-CCECE, St. John's, Nfld., Canada, May 1997.
[102] W. A. Burdic, Underwater Acoustic System Analysis. Englewood Cliffs, NJ: Prentice-Hall, 1984.
[103] J.-P. Hermand and W. I. Roderick, "Acoustic model-based matched filter processing for fading time-dispersive ocean channels," IEEE J. Oceanic Eng., vol. 18, no. 4, pp. 447–465, 1993.
[104] "Mercury news," Mercury Computer Systems, Inc., Chelmsford, MA, Jan. 1997.
[105] Y. Bar-Shalom and T. E. Fortmann, Tracking and Data Association. Boston, MA: Academic, 1988.
[106] S. S. Blackman, Multiple-Target Tracking with Radar Applications. Norwood, MA: Artech House, 1986.
[107] W. Campbell, S. Stergiopoulos, and J. Riley, "Effects of bearing estimation improvements of nonconventional beamformers on bearing-only tracking," in Proc. Oceans'95 MTS/IEEE, San Diego, CA, 1995.
[108] W. A. Roger and R. S. Walker, "Accurate estimation of source bearing from line arrays," in Proc. 13th Biennial Symp. Communications, Kingston, Ont., Canada, 1986.
[109] D. Peters, "Long range towed array target analysis—Principles and practice," Defense Research Establishment Atlantic, Dartmouth, N.S., Canada, DREA Memo. 95/217, 1995.

Stergios Stergiopoulos (Senior Member, IEEE) received the B.S. degree from the University of Athens, Greece, in 1976 and the M.S. and Ph.D. degrees in geophysics from York University, Toronto, Canada, in 1977 and 1982, respectively.

He currently is an Adjunct Professor with the Department of Electrical and Computer Engineering, University of Western Ontario, Canada, and a Senior Defense Scientist with the Defence and Civil Institute of Environmental Medicine of the Canadian Department of National Defence (DND). From 1988 to 1991, he was with the SACLANT Undersea Research Centre (SACLANTCEN) in La Spezia, Italy, where he performed both theoretical and experimental research in sonar signal processing. At SACLANTCEN, he developed (with E. J. Sullivan from the Naval Underwater Warfare Center) an acoustic synthetic-aperture technique that has been patented by the U.S. Navy and the Hellenic Navy. From 1984 to 1988, he developed an underwater fixed-array surveillance system for the Hellenic Navy in Greece and was also appointed Senior Advisor to the Greek Minister of Defense. From 1982 to 1984, he was a Research Associate with York University and collaborated with the U.S. Army Ballistic Research Lab (BRL), Aberdeen, MD, on projects related to the stability of liquid-filled, spin-stabilized projectiles. His current interests are associated with the implementation of nonconventional processing schemes in multidimensional arrays of sensors for sonar, radar, and medical tomography systems. He has been a Consultant to a number of companies, including Atlas Elektronik, Germany, Hellenic Arms Industry, and Hellenic Aerospace Industry.

Dr. Stergiopoulos is a Fellow of the Acoustical Society of America. He is an Associate Guest Editor for the IEEE JOURNAL OF OCEANIC ENGINEERING, for which he has prepared two special issues on acoustic synthetic aperture and sonar system technology. In 1984, he received a U.S. National Research Council Research Fellowship for BRL. He has received grants from the Canadian DND, the Natural Sciences and Engineering Research Council of Canada, the North Atlantic Treaty Organization, and the European Esprit program.
