Yonina C. Eldar and Alan V. Oppenheim

Measurement, Quantization, Consistency

In this article we present a signal processing framework that we refer to as quantum signal processing (QSP) [1] that is aimed at developing new or modifying existing signal processing algorithms by borrowing from the principles of quantum mechanics and some of its interesting axioms and constraints. However, in contrast to such fields as quantum computing and quantum information theory, it does not inherently depend on the physics associated with quantum mechanics. Consequently, in developing the QSP framework we are free to impose quantum mechanical constraints that we find useful and to avoid those that are not. This framework provides a unifying conceptual structure for a variety of traditional processing techniques and a precise mathematical setting for developing generalizations and extensions of algorithms, leading to a potentially useful paradigm for signal processing with applications in areas including frame theory, quantization and sampling methods, detection, parameter estimation, covariance shaping, and multiuser wireless communication systems.

12 IEEE SIGNAL PROCESSING MAGAZINE NOVEMBER 2002 1053-5888/02/$17.00©2002IEEE

This famous strobe photograph taken by Dr. Harold "Doc" Edgerton at MIT in 1957 captures the transition of a milk drop as it splashes from a single drop into a continuous ring and then coalesces into droplets forming the points of a coronet. Copyright Harold and Ester Edgerton Family Foundation. Reproduced with permission.


Fig. 1 presents a general overview of the key elements in quantum physics that provide the basis for the QSP framework and an indication of the key results that have so far been developed within this framework. In the remainder of this article, we elaborate on the various elements in this figure.

Overview

Many new classes of signal processing algorithms have been developed by emulating the behavior of physical systems. There are also many examples in the signal processing literature in which new classes of algorithms have been developed by artificially imposing physical constraints on implementations that are not inherently subject to these constraints. In this article, we survey a new class of algorithms of this type that we refer to as QSP. A fully detailed treatment is contained in [1] as well as in the various journal and conference papers referred to throughout this article. Among the many well-known examples of digital signal processing algorithms that are derived by using physical phenomena and constraints as metaphors are wave digital filters [2]. This class of filters relies on emulating the physical constraints of passivity and energy conservation associated with analog filters to achieve low sensitivity to coefficient variations in digital filters. As another class of examples, the fractal-like aspects of nature [3] and related modeling have inspired interesting signal processing paradigms that are not constrained by the associated physics. These include fractal modulation [4], which emulates the fractal characteristic of nature for communicating over a particular class of unreliable channels, and the various approaches to image compression based on synthetic generation of fractals [5]. Likewise, the chaotic behavior of certain features of nature has inspired new classes of signals for secure communications, remote sensing, and a variety of other signal processing applications [6]-[9]. Other examples of algorithms using physical systems as a simile include solitons [10], genetic algorithms [11], simulated annealing [12], and neural networks [13].

These examples are just several of many that underscore the fact that even in signal processing contexts that are not constrained by the physics, exploiting laws of nature can inspire new methods for algorithm design and may lead to interesting, efficient, and effective processing techniques.

Three fundamental interrelated underlying principles of quantum mechanics that play a major role in QSP as presented here are the concept of a measurement, the principle of measurement consistency, and the principle of quantization of the measurement output. In addition, when using quantum systems in a communication context, other principles arise such as inner product constraints. QSP is based on exploiting these various principles and constraints in the context of signal processing algorithms.

In a broad sense, the terms measurement, measurement consistency, and output quantization are well known in signal processing, although not with the same meaning or precise mathematical interpretation and constraints as in quantum mechanics.

In signal processing, the term measurement can be given a variety of precise or imprecise interpretations. However, as discussed further in this article, in quantum mechanics measurement has a very specific definition and meaning, much of which we carry over to the QSP framework. Similarly, in signal processing, quantization is traditionally thought of in fairly specific terms. In quantum mechanics, quantization of the measurement output is a fundamental underlying principle, and applying this principle, along with the quantum mechanical notions of measurement and consistency, leads to some potentially intriguing generalizations of quantization as it is typically viewed in signal processing.

[Fig. 1. QSP framework, its relationship to quantum physics, and key results; in the figure, CS stands for covariance shaping. The diagram links quantum physics (quantum measurement, generalized quantum measurement, quantum detection) to the QSP elements — QSP measurement, combined QSP measurement, measurement quantization, measurement consistency, measurement vector orthogonality, and inner product constraints — and to key results: subspace coding, probabilistic quantization, the CS matched filter, a general sampling framework, least-squares inner product shaping, MMSE covariance shaping, CS least-squares estimation, CS multiuser detection, randomized algorithms, frames, and subspace detectors.]

Measurement consistency also has a precise meaning in quantum mechanics, specifically that repeated applications of a measurement must yield the same outcome. In signal processing, a similar consistency concept is the basis for a variety of classes of algorithms including signal estimation, interpolation, and quantization methods. Some early examples of consistency as it typically arises in signal processing are the interpolation condition in filter design [14] and the condition for avoiding intersymbol interference in waveforms for pulse amplitude modulation [15]. More recent examples include perfect reconstruction filter banks [16], [17], multiresolution and wavelet approximations [18], [19], and sampling methods in which the perfect reconstruction requirement is replaced by the less stringent consistency requirement [20]-[24]. Here again, viewing measurement consistency in a broader framework motivated by quantum mechanics can lead to some new and interesting signal processing algorithms.

In the next section, we summarize the basic principles of measurement, consistency, and quantization as they relate to quantum mechanics. We also outline the key elements and constraints that are associated with the use of quantum states for communication in the context of what is commonly referred to as the quantum detection problem [25]. We then indicate how these principles and constraints imposed by the physics are emulated and applied in the framework of QSP.

Quantum Systems

In developing the QSP framework, it is convenient to discuss both signal processing and quantum mechanics in a vector space setting. Specifically, we consider an arbitrary Hilbert (inner product) space ℋ with inner product ⟨x, y⟩ for any vectors x, y in ℋ. Typically we will refer to elements of ℋ as vectors or signals interchangeably. In this section we summarize the key elements of physical quantum systems that are emulated in QSP, including those associated with quantum detection.

Measurement

A quantum system in a pure state is characterized by a normalized vector in ℋ. Information about a quantum system is extracted by subjecting the system to a quantum measurement.

A quantum measurement is a nonlinear (probabilistic) mapping that in the simplest case can be described in terms of a set of measurement vectors µ_i that span measurement subspaces 𝒮_i in ℋ. The laws of quantum mechanics impose the constraint that the vectors µ_i must be orthonormal and therefore also linearly independent. A measurement of this form is referred to as a rank-one quantum measurement. In the more general case, the quantum measurement is described in terms of a set of projection operators P_i onto subspaces 𝒮_i of ℋ, where from the laws of quantum mechanics these projections must form a complete set of orthogonal projections. Such a measurement is referred to as a subspace quantum measurement. In quantum mechanics, the outcome of a measurement is inherently probabilistic, with the probabilities of the outcomes of any measurement determined by the vector representing the underlying state of the system at the time of the measurement. The measurement collapses (projects) the state of the quantum system onto a state that is compatible with the measurement outcome. In general the final state of the system is different from the state of the system prior to the measurement.

Measurement consistency is a fundamental postulate of quantum mechanics, i.e., repeated measurements on a system must yield the same outcomes; otherwise we would not be able to confirm the output of a measurement. Therefore the state of the system after a measurement must be such that if we instantaneously remeasure the system in this state, then the final state after this second measurement will be identical to the state after the first measurement.

Quantization of the measurement outcome is a direct consequence of the consistency requirement. Specifically, the consistency requirement leads to a class of states referred to as determinate states of the measurement [26]. These are states of the quantum system for which the measurement yields a known outcome with probability one and are the states that lie completely in one of the measurement subspaces 𝒮_i. Furthermore, even when the state of the system is not one of the determinate states, after performing the measurement the system is quantized to one of these states, i.e., is certain to be in one of these states, where the probability of being in a particular determinate state is a function of the inner products between the state of the system and the determinate states. See "Quantum Measurement" for more details.

As an example of a rank-one quantum measurement, suppose that the measurement is defined by two orthonormal measurement vectors µ_1 and µ_2 and the state of the system is given by x = (1/2)µ_1 + (√3/2)µ_2. As outlined further in "Quantum Measurement," the measurement will project the state x onto one of the measurement vectors µ_1 or µ_2, where the probability of projecting onto µ_1 is p(1) = |⟨µ_1, x⟩|² = 1/4 and the probability of projecting onto µ_2 is p(2) = |⟨µ_2, x⟩|² = 3/4. The measurement process is illustrated in Fig. 2.
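This two-outcome example is easy to check numerically. The following sketch is our own illustration (not from the article): it computes the outcome probabilities p(i) = |⟨µ_i, x⟩|², simulates the probabilistic collapse, and confirms that remeasuring the collapsed state reproduces the same outcome; the basis choice and random seed are incidental.

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthonormal measurement vectors (here the standard basis of R^2).
mu = np.array([[1.0, 0.0],
               [0.0, 1.0]])

# State x = (1/2) mu_1 + (sqrt(3)/2) mu_2 -- normalized, as required.
x = 0.5 * mu[0] + (np.sqrt(3) / 2) * mu[1]

# Outcome probabilities p(i) = |<mu_i, x>|^2.
p = np.abs(mu @ x) ** 2          # approximately [0.25, 0.75]

# One measurement: draw an outcome and collapse the state onto mu_i.
i = rng.choice(len(mu), p=p / p.sum())
post_state = mu[i]               # state of the system after the measurement

# Consistency: remeasuring the collapsed state yields outcome i w.p. 1.
p_again = np.abs(mu @ post_state) ** 2
```

Running this repeatedly, outcome 2 occurs roughly three times as often as outcome 1, matching p(2) = 3/4.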


Quantum Measurement

A standard (von Neumann) measurement in quantum mechanics is defined by a collection of projection operators {P_i, i ∈ ℐ} onto subspaces {𝒮_i ⊆ ℋ, i ∈ ℐ}, where ℐ denotes an index set and the index i ∈ ℐ corresponds to a possible measurement outcome. The laws of quantum mechanics impose the constraint that the operators {P_i, i ∈ ℐ} form a complete set of orthogonal projections so that, with (·)* denoting the adjoint of the corresponding transformation, for any i, k ∈ ℐ,

P_i = P_i*;   (1)

P_i² = P_i;   (2)

P_i P_k = 0, if i ≠ k;   (3)

Σ_{i∈ℐ} P_i = I_ℋ.   (4)

Conditions (3) and (4) imply that the measurement subspaces 𝒮_i are orthogonal and that their direct sum is equal to ℋ.

If the state vector is φ, then from the rules of quantum mechanics, the probability of observing the ith outcome is

p(i) = ⟨P_i φ, φ⟩.   (5)

Since the state is normalized,

Σ_i p(i) = Σ_i ⟨P_i φ, φ⟩ = ⟨φ, φ⟩ = 1.

In the simplest case, the projection operators are rank-one operators P_i = µ_i µ_i* for some nonzero vectors {µ_i ∈ ℋ, i ∈ ℐ}. We refer to such measurements as rank-one quantum measurements. Then (3) and (4) imply that the measurement vectors {µ_i, i ∈ ℐ} form an orthonormal basis for ℋ. If the state vector is φ, then the probability of observing the ith outcome is

p(i) = |⟨µ_i, φ⟩|².   (6)

For a rank-one quantum measurement defined by orthonormal measurement vectors µ_i, it follows from (6) that if φ = µ_i for some i, then p(i) = 1 and outcome i is obtained with probability one (w.p. 1). The states {φ = µ_i} are therefore called the determinate states of the measurement. More generally, the determinate states are the states that lie completely in one of the measurement subspaces 𝒮_i. Indeed, if φ ∈ 𝒮_i, then P_i φ = φ and from (5), p(i) = 1.

The quantum measurement can be formulated in terms of a probabilistic mapping between ℋ and the determinate states. Given a space of states 𝒳 and an observation space 𝒴, a probabilistic mapping from 𝒳 to 𝒴 is a function f: 𝒳 × Ω → 𝒴, where Ω is the sample space of an auxiliary chance variable W(x) = {Ω, p_{W|X}(w|x)} with a probability distribution p_{W|X}(w|x) on Ω that in general depends on x ∈ 𝒳. Note that a deterministic mapping f: 𝒳 → 𝒴 is a special case of a probabilistic mapping in which the auxiliary chance variable is deterministic; i.e., it has one outcome w w.p. 1. In this case the function f is independent of W.

A rank-one quantum measurement corresponding to orthonormal measurement vectors {µ_i ∈ ℋ, i ∈ ℐ} that span subspaces {𝒮_i ⊆ ℋ, i ∈ ℐ} can be viewed as a probabilistic mapping between ℋ and the determinate states that is
1) a deterministic identity mapping for φ ∈ 𝒮_i;
2) a probabilistic mapping for nondeterminate states that maps φ to a normalized vector in the direction of the orthogonal projection P_i φ for some value i ∈ ℐ, where i = f({⟨µ_k, φ⟩, k ∈ ℐ}, w).

Here f: ℋ × Ω → ℐ is a probabilistic mapping between elements φ of ℋ and indices i ∈ ℐ, which depends on a chance variable W with a discrete alphabet Ω = ℐ such that the probability of outcome w_i ∈ Ω depends on the input φ only through the inner products {⟨µ_k, φ⟩, k ∈ ℐ}. Specifically, the probability of outcome w_i is |⟨µ_i, φ⟩|². If w_i is observed, then f({⟨µ_k, x⟩, k ∈ ℐ}, w_i) = i.

A subspace quantum measurement corresponding to a complete set of orthogonal projection operators {P_i, i ∈ ℐ} onto subspaces {𝒮_i ⊆ ℋ, i ∈ ℐ} can be viewed as a probabilistic mapping between ℋ and the determinate states that is
1) a deterministic identity mapping for φ ∈ 𝒮_i;
2) a probabilistic mapping for nondeterminate states that maps φ to a normalized vector in the direction of the orthogonal projection P_i φ for some value i ∈ ℐ, where i = f({⟨P_k φ, P_k φ⟩, k ∈ ℐ}, w).

Here f: ℋ × Ω → ℐ is a probabilistic mapping between elements φ of ℋ and indices i ∈ ℐ that depends on a chance variable W with a discrete alphabet Ω = ℐ such that the probability of outcome w_i ∈ Ω depends on the input φ only through the inner products {⟨P_k φ, P_k φ⟩, k ∈ ℐ}. Specifically, the probability of outcome w_i is ⟨P_i φ, P_i φ⟩. If w_i is observed, then f({⟨P_k φ, P_k φ⟩, k ∈ ℐ}, w_i) = i.
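Conditions (1)-(4) and the probability rule (5) are straightforward to verify numerically. The following sketch is our own illustration, using an arbitrary orthonormal basis of ℝ³ obtained from a QR factorization; the seed and dimension are incidental choices.

```python
import numpy as np

rng = np.random.default_rng(1)

# A rank-one von Neumann measurement in R^3: P_i = mu_i mu_i^* for an
# orthonormal basis {mu_i} (rows of Q^T from a QR factorization).
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
mus = Q.T                                   # rows are orthonormal vectors
P = [np.outer(m, m) for m in mus]           # rank-one projection operators

# Conditions (1)-(4): self-adjoint, idempotent, mutually orthogonal, complete.
assert all(np.allclose(Pi, Pi.T) for Pi in P)        # (1) P_i = P_i^*
assert all(np.allclose(Pi @ Pi, Pi) for Pi in P)     # (2) P_i^2 = P_i
assert np.allclose(P[0] @ P[1], 0)                   # (3) P_i P_k = 0, i != k
assert np.allclose(sum(P), np.eye(3))                # (4) sum_i P_i = I

# Probability rule (5)/(6) for a normalized state phi: the p(i) sum to 1.
phi = rng.standard_normal(3)
phi /= np.linalg.norm(phi)
p = np.array([phi @ Pi @ phi for Pi in P])  # p(i) = <P_i phi, phi>
assert np.allclose(p, (mus @ phi) ** 2)     # equals |<mu_i, phi>|^2 here
```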


Quantum Detection

The constraints imposed by the physics on a quantum measurement lead to some interesting problems within the framework of quantum mechanics. In particular, an interesting problem that arises when using quantum states for communication is a quantum detection problem [25], which we outline here. As we discuss later in a signal processing context, this problem suggests a variety of new or generalized algorithms within the QSP framework based on specific ways of imposing inner product constraints.

In a quantum detection problem a sender conveys classical information to a receiver using a quantum-mechanical channel. The sender represents messages by preparing the quantum channel in a pure quantum state drawn from a collection of known states φ_i. The receiver detects the information by subjecting the channel to a quantum measurement with measurement vectors µ_i that are constrained by the physics to be orthogonal. If the states are not orthogonal, then no measurement can distinguish perfectly between them. In the general context of quantum mechanics, a fundamental problem is to construct measurements optimized to distinguish between a set of nonorthogonal pure quantum states. In the context of a communications system, we would like to choose the measurement vectors to minimize the probability of detection error. In this context this problem is commonly referred to as the quantum detection problem.

Necessary and sufficient conditions for an optimum measurement minimizing the probability of detection error are known [25], [27]-[29]. However, except in some particular cases [25], [30]-[33], obtaining a closed-form analytical expression for the optimal measurement directly from these conditions is a difficult and unsolved problem.

In [32], an alternative approach is taken based on choosing a squared-error criterion and determining a measurement that minimizes this criterion. Specifically, the measurement vectors µ_i are chosen to be orthogonal and closest in a least-squares (LS) sense to the given set of state vectors φ_i, so that the vectors µ_i are chosen to minimize the sum of the squared norms of the error vectors e_i = µ_i − φ_i, as illustrated in Fig. 3. The optimal measurement is referred to as the LS measurement.

The LS measurement problem has a simple closed-form solution with many desirable properties. Its construction is relatively simple; it can be determined directly from the given collection of states; it minimizes the probability of detection error when the states exhibit certain symmetries [32]; it is "pretty good" when the states to be distinguished are equally likely and almost orthogonal [34]; it achieves a probability of error within a factor of two of the optimal probability of error [35]; and it is asymptotically optimal [36].
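The closed form itself is developed in [32]; as a numerical aside (our sketch, not the article's derivation), the LS problem is an orthogonal Procrustes problem: if the states are the columns of Φ with singular value decomposition Φ = UΣV*, the orthonormal set closest to them in the LS sense is given by the columns of UV*, which for linearly independent states equals Φ(Φ*Φ)^(−1/2).

```python
import numpy as np

rng = np.random.default_rng(2)

# Columns of Phi are normalized, generally nonorthogonal state vectors phi_i.
Phi = rng.standard_normal((4, 3))
Phi /= np.linalg.norm(Phi, axis=0)

# LS measurement: orthonormal mu_i minimizing sum_i ||mu_i - phi_i||^2.
# Orthogonal Procrustes solution: with Phi = U S V^*, take M = U V^*.
U, s, Vt = np.linalg.svd(Phi, full_matrices=False)
M = U @ Vt                              # columns are the LS measurement vectors

assert np.allclose(M.T @ M, np.eye(3))  # the mu_i are orthonormal

# Equivalent closed form for linearly independent states:
# M = Phi (Phi^* Phi)^(-1/2), via an eigendecomposition of Phi^* Phi.
w, V = np.linalg.eigh(Phi.T @ Phi)
inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
assert np.allclose(M, Phi @ inv_sqrt)
```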

Thus, in the context of quantum detection the constraints of the physics lead to the interesting problem of choosing an optimal set of orthogonal vectors. Borrowing from quantum detection, a central idea in QSP applications is to impose orthogonality or more general inner product constraints on algorithms and then use the LS measurement and the results derived in the context of quantum detection to design optimal algorithms subject to these constraints.

Quantum Signal Processing

As mentioned previously, the QSP framework draws heavily on the notions of measurement, consistency, and quantization as they relate to quantum systems. Furthermore it borrows from and generalizes the inner product constraint, specifically orthogonality, that quantum physics imposes on measurement vectors. However, the QSP framework is broader and less restrictive than the quantum measurement framework since in designing algorithms we are not constrained by the physical limitations of quantum mechanics.

In quantum mechanics, systems are "processed" by performing measurements on them. In signal processing, signals are processed by applying an algorithm to them. Therefore, to exploit the formalism and rich mathematical structure of quantum physics in the design of algorithms, we first draw a parallel between a quantum mechanical measurement and a signal processing algorithm by associating a signal processing measurement with a signal processing algorithm. We then apply the formalism and fundamental principles of quantum measurement to the definition of the QSP measurement. An algorithm is then described by a QSP measurement, with additional input and output mappings if appropriate. This conceptual framework is illustrated schematically in Fig. 4.

[Fig. 2. Illustration of a rank-one quantum measurement: the original state x = (1/2)µ_1 + (√3/2)µ_2 is projected onto µ_1 with probability p(1) = 1/4 or onto µ_2 with probability p(2) = 3/4, yielding the output state.]

[Fig. 3. Two-dimensional example of the LS measurement. The vectors µ_i are chosen to be orthonormal and to minimize Σ_i ⟨e_i, e_i⟩ = Σ_i ⟨µ_i − φ_i, µ_i − φ_i⟩.]

The QSP framework is primarily concerned with the design of the QSP measurement, borrowing from the principles, axioms, and constraints of quantum physics and a quantum measurement. As we will show, the QSP measurement depends on a specific set of measurement parameters, so that this framework provides a convenient and useful setting for deriving new algorithms by choosing different measurement parameters, borrowing from the ideas of quantum mechanics. Furthermore, since the QSP measurement is defined to have a mathematical structure similar to a quantum measurement, the mathematical constraints imposed by the physics on the quantum measurement can also be imposed on the QSP measurement, leading to some intriguing new signal processing algorithms.

QSP Measurement

The quantum-mechanical principles of measurement, consistency, and quantization lead to the definition of the QSP measurement.

Measurement of a signal in the QSP framework corresponds to applying an algorithm to a signal. As indicated in the previous section and in Fig. 4, one class of algorithms in the QSP framework consists of a QSP measurement (i.e., an algorithm) applied in a signal space that may perhaps be a remapping of the overall input and output signal spaces. For example, if the signal we wish to process is a sequence of scalars, then we may first map the scalar values into vectors in a higher dimensional space, which corresponds to the input mapping in Fig. 4. We then measure the vector representation. The measurement outcome is a signal in the same signal space as the measured signal, which is then mapped to the algorithm output using an output mapping as illustrated in Fig. 4, so that the measurement output represents the output of the algorithm, which in turn may be a signal or any other element. As in quantum mechanics, we require that if we remeasure the outcome signal, then the new outcome will be equal to the original outcome.

In analogy with the measurement in quantum mechanics, a rank-one QSP measurement (ROM) M on ℋ is defined by a set of measurement vectors q_i that span one-dimensional subspaces 𝒮_i in ℋ. Since we are not constrained by the physics of quantum mechanics, these vectors are not constrained to be orthonormal, nor are they constrained to be linearly independent. Nonetheless, in some applications we will find it useful to impose an orthogonality constraint. A subspace QSP measurement on ℋ is defined by a set of projection operators E_i onto subspaces 𝒮_i in ℋ. Here again, since we are not constrained by the physics, the projection operators and the subspaces 𝒮_i are not constrained to be orthogonal. The measurement of a signal x is denoted by M(x).

Measurement consistency in our framework is formulated mathematically as

M(M(x)) = M(x).   (7)

Note that by our definition of measurement, if x is a signal in a signal space ℋ then M(x) is also a signal in ℋ and can therefore be remeasured.

Quantization of the measurement outcome is imposed by requiring that the outcome signal M(x) is one of a set of signals determined by the measurement M. Specifically, in analogy with the quantum mechanical determinate states, we define the set of determinate signals,


[Fig. 4. Illustration of a class of algorithms within the QSP framework. In this framework quantum physics is used as a metaphor to design new signal processing algorithms by drawing a parallel between a signal processing algorithm and a quantum mechanical measurement. One class of algorithms is designed by constructing a QSP measurement borrowing from the principles of a quantum measurement, which is then translated into a signal processing algorithm using appropriate input and output mappings (input mapping → QSP measurement → output mapping).]


which are the signals that lie completely in one of the measurement subspaces 𝒮_i.

The measurement M is then defined to preserve the two fundamental properties of a quantum measurement:
- The measurement outcome is always equal to one of the determinate signals.
- For every input signal x to a QSP measurement, (7) is satisfied.

Rank-One QSP Measurements

A ROM M defined by a set of measurement vectors q_i that span one-dimensional measurement subspaces 𝒮_i in ℋ is in general a nonlinear mapping between ℋ and the set of determinate signals of M. The measurement subspace 𝒮_i is the one-dimensional subspace that contains all vectors that are multiples of q_i. Therefore, the determinate signals in this case are all signals of the form a·q_i for some index i and scalar a. With E_i denoting a projection onto 𝒮_i, the measurement is defined such that if x is a determinate signal then M(x) = E_i x = x, and otherwise M(x) = E_i x where

i = f_M({⟨x, q_k⟩}).   (8)

Here f_M is a mapping between the input signal x and the set of indices that may be probabilistic (see "Quantum Measurement") and that depends on the input x only through the inner products between x and the measurement vectors q_i, which are a subset of the determinate signals. For example, we may choose f_M(x) = arg max_k ⟨x, q_k⟩. As another example, we may choose f_M(x) = arg max_k (⟨x, q_k⟩ − (1/2)⟨q_k, q_k⟩). This mapping f_M chooses the vector q_k that minimizes the distance ‖x − q_k‖ between x and each of the measurement vectors, so that it maps x to the closest vector q_k.

Note that since E_i x is in 𝒮_i for any x, the outcome M(x) is always a determinate signal of M, and since for any determinate signal x, M(x) = x, this definition of a measurement satisfies the required properties.

As an example of a ROM, suppose that the measurement input is x = (1/2)q_1 + (1/2)q_2, where q_1 = [1 0]^T and q_2 = [0.5 0.5]^T, with [·]^T denoting the conjugate transpose, are two measurement vectors. Then the measurement output will be either a vector in the direction of q_1 or a vector in the direction of q_2. The particular output chosen is determined by the mapping f_M that depends on the input x only through the inner products ⟨x, q_1⟩ = 3/4 and ⟨x, q_2⟩ = 1/2. The measurement process is illustrated in Fig. 5.
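This example can be sketched directly in code (our illustration, not from the article), using the deterministic nearest-vector rule described above as f_M; note the measurement vectors are neither orthogonal nor normalized, which a ROM permits.

```python
import numpy as np

# Measurement vectors from the example: q_1 = [1, 0]^T, q_2 = [0.5, 0.5]^T.
q = np.array([[1.0, 0.0],    # q_1
              [0.5, 0.5]])   # q_2

def measure(x):
    """One ROM measurement with a deterministic nearest-vector rule f_M."""
    scores = q @ x - 0.5 * np.sum(q * q, axis=1)  # <x,q_k> - (1/2)<q_k,q_k>
    i = int(np.argmax(scores))                    # ties broken toward q_1
    E = np.outer(q[i], q[i]) / (q[i] @ q[i])      # projection E_i onto S_i
    return E @ x

x = 0.5 * q[0] + 0.5 * q[1]   # the input [0.75, 0.25]
ip = q @ x                     # inner products [3/4, 1/2], as in the text

Mx = measure(x)                            # outcome lies along one of the q_i
assert np.allclose(measure(Mx), Mx)        # consistency (7): M(M(x)) = M(x)
```

Remeasuring the outcome leaves it unchanged because the outcome is already a determinate signal, which is exactly requirement (7).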

Our definition of a ROM is very similar to the definition of a rank-one quantum measurement, with two main differences. First, we allow for an arbitrary mapping f_M in (8); as described in "Quantum Measurement," in quantum mechanics f_M is unique and is the probabilistic mapping in which q_i is chosen with probability |⟨x, q_i⟩|². Second, the measurement vectors are not constrained to be orthonormal, nor are they constrained to be linearly independent, as in quantum mechanics. The properties of a rank-one quantum measurement and a ROM are summarized in Table 1.

Subspace Measurements

The definition of a subspace QSP measurement parallels that of a ROM and borrows from the definition of a subspace quantum measurement.

A subspace QSP measurement M defined by a set of measurement projections E_i that span measurement subspaces 𝒮_i in ℋ is a nonlinear mapping between ℋ and the set of determinate signals of M, where if x is a determinate signal then M(x) = E_i x = x, and otherwise M(x) = E_i x where

i = f_M({⟨E_k x, E_k x⟩}).   (9)

Here f_M is a (possibly probabilistic) mapping between the input signal x and the set of indices that depends on the input x only through the inner products {⟨E_k x, E_k x⟩}.

A special case of a subspace QSP measurement is the case in which the measurement is defined by a single projection. Then M(x) = Ex for all x, and the subspace QSP measurement reduces to a linear projection operator. We refer to such a measurement as a simple subspace measurement.
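As a small numerical sketch (ours, with a hypothetical oblique projection as the example operator), the idempotence of E is exactly what makes a simple subspace measurement consistent in the sense of (7):

```python
import numpy as np

# A simple subspace measurement: a single projection E. Since QSP does not
# require orthogonality, we use the oblique projection onto span{[1,0]}
# along span{[1,1]}.
E = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(E @ E, E)    # E is a projection (idempotent) ...
assert not np.allclose(E, E.T)  # ... but not an orthogonal one

x = np.array([2.0, 3.0])
Mx = E @ x                      # M(x) = Ex for every input x

# Idempotence gives consistency (7) for free: M(M(x)) = M(x).
assert np.allclose(E @ Mx, Mx)
```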


[Fig. 5. Illustration of a ROM: the measurement input x = (1/2)q_1 + (1/2)q_2 is mapped by f_M, acting on the inner products 3/4 and 1/2, to a measurement output in the direction of q_1 or q_2.]

Table 1. Comparison Between Rank-One Measurements.

                           Quantum Measurement        QSP Measurement
Input x                    Vector in ℋ                Vector in ℋ
Measurement vectors q_k    Orthonormal                Any vectors in ℋ
Selection rule             Function of ⟨x, q_k⟩;      Function of ⟨x, q_k⟩;
                           probabilistic              deterministic or probabilistic
Output                     Multiple of q_k for        Multiple of q_k for
                           one value k                one value k


The subspace QSP measurement is very similar to asubspace quantum measurement, with three main differ-ences: we allow for an arbitrary mapping f M , the mea-surement projections are not constrained to beorthogonal, and the measurement subspaces are not con-strained to be orthogonal.

The properties of a subspace quantum measurement and a subspace QSP measurement are summarized in Table 2.

Algorithm Design in the QSP Framework

Within the QSP framework, the QSP measurement plays a central role in the design of signal processing algorithms. In this framework, signals are processed by either subjecting them to a QSP measurement or by using some of the QSP measurement parameters f_M, q_i, and E_i but not directly applying the measurement, as described in the following sections.

Designing Algorithms Based on QSP Measurements

Algorithm Design

To design an algorithm using a QSP measurement we first identify the measurement vectors q_i in a ROM, or the measurement projection operators E_i in a subspace QSP measurement, that specify the possible measurement outcomes. For example, in a detection scenario the measurement vectors may be equal to the transmitted signals or may represent these signals in a possibly different space. As another example, in a scalar quantizer the measurement vectors may be chosen as a set of vectors that represent the scalar quantization levels. In a subspace QSP measurement, the measurement projection operators may be projections onto a set of subspaces used for signaling. We then embed the measurement vectors (projections) in a Hilbert space ℋ. This basic strategy is illustrated in Fig. 6. If the signal x̃ to be processed does not lie in ℋ, then we first map it into a signal x in ℋ using a mapping T_ℋ. To obtain the algorithm output we measure the representation x of the signal to be processed. If x is a determinate signal of M, then the measurement outcome is y = M(x) = x. Otherwise we approximate x by a determinate signal y using a mapping f_M. If appropriate, the measurement outcome y may be mapped to the algorithm output ỹ using a mapping T_𝒪.

By choosing different input and output mappings T_ℋ and T_𝒪, and different measurement parameters f_M, q_i, and E_i, and using the QSP measurement framework of Fig. 6, we can arrive at a variety of new and interesting processing techniques.
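The measurement-based design strategy can be sketched end to end in code. The sketch below is our own illustrative toy (the names `qsp_process` and `make_rom` are hypothetical): the input is embedded into ℋ, measured by a ROM with a deterministic maximum-correlation selection rule, and mapped back out.

```python
import numpy as np

def qsp_process(x_tilde, T_in, measure, T_out):
    """The Fig. 6 pipeline: map into the Hilbert space, measure, map out."""
    x = T_in(x_tilde)      # representation x in H
    y = measure(x)         # QSP measurement (selection rule + vectors)
    return T_out(y)        # algorithm output

def make_rom(Q):
    """ROM whose columns q_i are the measurement vectors; the deterministic
    selection rule f_M picks the q_i with the largest |<x, q_i>|."""
    def measure(x):
        inner = Q.T @ x                          # <x, q_i> for each column
        i = int(np.argmax(np.abs(inner)))
        q = Q[:, i]
        return (inner[i] / np.dot(q, q)) * q     # output: multiple of q_i
    return measure

Q = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]])   # q_1 = e1, q_2 = e2 in R^3
y = qsp_process(np.array([0.2, 0.9]),
                T_in=lambda v: np.array([v[0], v[1], 0.0]),  # embed R^2 in R^3
                measure=make_rom(Q),
                T_out=lambda w: w[:2])
# The ROM snaps the input to a multiple of the closest measurement vector.
```

Swapping the selection rule, the measurement vectors, or the input/output mappings yields different algorithms within the same skeleton, which is the point of the framework.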

Modifying Known Algorithms

As demonstrated in [1], many traditional detection and processing techniques fit naturally into the framework of Fig. 6. Examples include traditional and dithered quantization, sampling methods, matched-filter detection, and multiuser detection. Once an algorithm is described in the form of a QSP measurement, modifications and extensions of the algorithm can be derived by changing the measurement parameters f_M, q_i, and E_i. Thus, the QSP framework provides a unified conceptual structure for a variety of traditional processing techniques and a precise mathematical setting for generating new, potentially effective, and efficient processing methods by modifying the measurement parameters.

To modify an existing algorithm represented by a mapping T using the QSP framework, we first cast the algorithm as a QSP measurement M, i.e., we choose an input mapping T_ℋ and an output mapping T_𝒪 if appropriate, and the measurement parameters f_M, q_i, and E_i. We then systematically change some of these parameters, resulting in a modified measurement M′, which can then be translated into a new signal processing algorithm represented by a mapping T′. The modifications we consider result from either imposing some of the additional constraints of quantum mechanics on the measurement parameters of M or from relaxing some of these constraints, which we do not have to impose in signal processing. These basic steps are summarized in Fig. 7.

Typical modifications of the parameters that we consider include
• using a probabilistic mapping f_M
• imposing inner product constraints on the measurement vectors q_i


Figure 6. Designing algorithms using a QSP measurement: x̃ → T_ℋ → x → M = {f_M, q_i, E_i} → y → T_𝒪 → ỹ.

Table 2. Comparison Between Subspace Measurements.

                               Quantum Measurement         QSP Measurement
  Input x                      Vector in ℋ                 Vector in ℋ
  Measurement projections E_k  Orthogonal                  Can be nonorthogonal
  Measurement subspaces S_k    Orthogonal                  Can be nonorthogonal
  Selection rule               Function of ⟨E_k x, E_k x⟩; Function of ⟨E_k x, E_k x⟩;
                               probabilistic               deterministic or probabilistic
  Output                       Multiple of E_k x           Multiple of E_k x
                               for one value k             for one value k


• using nonorthogonal (oblique) projections E_i in place of orthogonal projections.

Designing Algorithms Using the Measurement Parameters

Another class of algorithms we develop results from processing a signal with some of the measurement parameters and then imposing quantum mechanical constraints directly on these parameters. For example, we may view any linear processing of a signal as processing with a set of measurement vectors and then imposing inner product constraints on these vectors. Using the ideas of quantum detection we may then design linear algorithms that are optimal subject to these inner product constraints.

To generate new algorithms or modify existing ones, we describe the algorithm as processing by one of the measurement parameters and then modify these parameters using one of the three modifications outlined at the end of the previous section.

In the remainder of this section we discuss each of these modifications and indicate how they will be applied to the development of new processing methods.

Probabilistic Mappings

The QSP framework naturally gives rise to probabilistic and randomized algorithms by letting f_M be a probabilistic mapping, emulating the quantum measurement. We expand on this idea below in the context of quantization. Further examples are developed in [1]. However, the full potential benefits of probabilistic algorithms in general resulting from the QSP framework remain an interesting area of future study.

Imposing Inner Product Constraints

One of the important elements of quantum mechanics is that the measurement vectors are constrained to be orthonormal. This constraint leads to some interesting problems such as the quantum detection problem described previously. A fundamental problem in quantum mechanics is to construct optimal measurements subject to this constraint that best represent a given set of state vectors. In analogy to quantum mechanics, an important feature of QSP is that of imposing particular types of constraints on algorithms. The QSP framework provides a systematic method for imposing such constraints: the measurement vectors are restricted to have a certain inner product structure, as in quantum mechanics. However, since we are not limited by physical laws, we are not confined to an orthogonality constraint. As part of the QSP framework, we develop methods for choosing a set of measurement vectors that “best” represent the signals of interest and have a specified inner product structure [37], [1]; these methods rely on ideas and results we obtained in the context of quantum detection [32], which unlike QSP are subject to the constraints of quantum physics. Specifically, we construct measurement vectors q_i with a given inner product structure that are closest in an LS sense to a given set of vectors s_i, so that the vectors q_i are chosen to minimize the sum of the squared norms of the error vectors e_i = q_i − s_i. These techniques are referred to as LS inner product shaping. Further details on LS inner product shaping are summarized in “Least Squares Inner Product Shaping.”

As we show, the concept of LS inner product shaping can be used to develop effective solutions to a variety of problems that result from imposing a deterministic or stochastic inner product constraint on the algorithm and then designing optimal algorithms subject to this constraint. In each of these problems we either describe the algorithm as a QSP measurement and impose an inner product constraint on the corresponding measurement vectors, or we consider linear algorithms on which the inner product constraints can be imposed directly. We demonstrate that, even for problems without inherent inner product constraints, imposing such constraints in combination with LS inner product shaping leads to new processing techniques in diverse areas including frame theory, detection, covariance shaping, linear estimation, and multiuser wireless communication, which often exhibit improved performance over traditional methods.

Oblique Projections

In a quantum measurement defined by a set of projection operators, the rules of quantum mechanics impose the constraint that the projections must be orthogonal. In QSP we may explore more general types of measurements defined by projection operators that are not restricted to be orthogonal, i.e., oblique projections [38]-[40].

An oblique projection is a projection operator E satisfying E² = E that is not necessarily Hermitian, i.e., the range space and null space of E are not necessarily orthogonal spaces. The notation E_SU denotes an oblique projection with range space S and null space U. If U = S⊥, then E_SU is an orthogonal projection onto S. An oblique projection E_SU can be used to decompose x into its components in two disjoint vector spaces S and U that are not constrained to be orthogonal, as illustrated in Fig. 8. (Two subspaces are said to be disjoint if the only vector they have in common is the zero vector.)
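A minimal numerical illustration of these properties (our own construction; `oblique_projection` is a hypothetical helper name): build the projection onto S along U from bases of the two subspaces and check that it is idempotent but, unlike an orthogonal projection, not Hermitian.

```python
import numpy as np

def oblique_projection(S_basis, U_basis):
    """Projection E onto S = range(S_basis) along U = range(U_basis):
    E keeps the S-component of x in the (generally nonorthogonal)
    direct-sum decomposition x = s + u, s in S, u in U."""
    B = np.hstack([S_basis, U_basis])    # basis of S ⊕ U (here, all of R^n)
    k = S_basis.shape[1]
    P = np.zeros((B.shape[1], B.shape[1]))
    P[:k, :k] = np.eye(k)                # keep S-coordinates, zero U-coordinates
    return B @ P @ np.linalg.inv(B)

S_basis = np.array([[1.0], [0.0]])       # S = span{(1, 0)}
U_basis = np.array([[1.0], [1.0]])       # U = span{(1, 1)}, not orthogonal to S
E = oblique_projection(S_basis, U_basis)

assert np.allclose(E @ E, E)             # idempotent: E^2 = E
assert not np.allclose(E, E.T)           # not Hermitian -> an oblique projection
```

Here E = [[1, −1], [0, 0]]: it leaves S invariant, annihilates U, and its range and null space are visibly nonorthogonal.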

Oblique projections are used within the QSP framework to develop new classes of frames, effective subspace


Figure 7. Using the QSP measurement framework to modify existing signal processing algorithms: (1) original algorithm, x̃ → T → ỹ; (2) QSP representation, x̃ → T_ℋ → x → M → y → T_𝒪 → ỹ; (3) modify parameters, M → M′; (4) obtain new algorithm, x̃ → T′ → ỹ.


detectors, and a general sampling framework for sampling and reconstruction in arbitrary spaces.

Applications of Rank-One Measurements

QSP Quantization

By emulating the quantum measurement, in [1] we develop a probabilistic quantizer and show that it can be used to efficiently implement a dithered quantizer.

In dithered quantization a random signal called a dither signal is added to the input signal prior to quantization [41]-[44]. Dithering techniques have become commonplace in applications in which data is quantized prior to storage or transmission. However, the utility of dithering techniques is limited by the computational complexity associated with generating a random process with an arbitrary joint probability distribution.

A probabilistic quantizer can be used to effectively realize a dither signal with an arbitrary joint probability distribution, while requiring only the generation of one uniform random variable per input. By introducing memory into the probabilistic selection rule we derive a probabilistic quantizer that shapes the quantization noise.
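The scalar uniform-dither case can be sketched as randomized rounding. This is our own simplification of the quantizers developed in [1] (the name `probabilistic_quantizer` is hypothetical): rounding up with probability equal to the fractional part reproduces, sample by sample, the output distribution of rounding x + u for nonsubtractive uniform dither u on [−Δ/2, Δ/2), while drawing only one uniform random variable per input.

```python
import numpy as np

def probabilistic_quantizer(x, delta, rng):
    """Randomized rounding to a uniform grid of step delta.

    P(round up) = fractional part of x/delta, so the quantizer is unbiased
    and matches the statistics of uniform nonsubtractive dithered rounding.
    """
    scaled = np.asarray(x) / delta
    low = np.floor(scaled)
    frac = scaled - low
    up = rng.random(size=np.shape(scaled)) < frac   # one uniform RV per sample
    return delta * (low + up)

rng = np.random.default_rng(0)
q = probabilistic_quantizer(np.full(100_000, 0.3), 1.0, rng)
print(q.mean())   # close to 0.3: the randomized quantizer is unbiased
```

Replacing the memoryless probability rule with one that depends on past quantization errors is the hook for the noise-shaping variant mentioned above.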

Covariance Shaping Matched Filter Detection

As an example of the type of procedure we may follow in using the concept of LS inner product shaping and opti-


Least-Squares Inner Product Shaping

Suppose we are given a set of m vectors {s_i, 1 ≤ i ≤ m} in a Hilbert space ℋ, with inner product ⟨x, y⟩ for any x, y ∈ ℋ. The vectors s_i span an n-dimensional subspace. If the vectors are linearly independent, then n = m; otherwise n < m. Our objective is to construct a set of optimal vectors {h_i, 1 ≤ i ≤ m} with a specified inner product structure, from the given vectors {s_i, 1 ≤ i ≤ m}. Specifically, we seek the vectors h_i that are “closest” to the vectors s_i in the LS sense. Thus, the vectors are chosen to minimize

ε_LS = Σ_{i=1}^{m} ⟨s_i − h_i, s_i − h_i⟩,  (10)

subject to the constraint

⟨h_i, h_k⟩ = c² r_ik,  (11)

for some c > 0 and constants r_ik. We may wish to constrain the constant c in (11), or we may choose c such that the LS error ε_LS is minimized. Similarly, with R denoting the matrix with ikth element r_ik, we may wish to constrain the elements r_ik of R, or we may choose R to have a specified structure so that the eigenvectors of R are fixed, but choose the eigenvalues to minimize the LS error.

For example, we may wish to construct a set of orthogonal vectors, so that R is a diagonal matrix with eigenvector matrix equal to I, but choose the norms of the vectors, i.e., the eigenvalues of R, to minimize the LS error. As another example, we may wish to construct a cyclic set h_i so that ⟨h_i, h_k⟩ depends only on (k − i) mod m. In this case {⟨h_i, h_k⟩, 1 ≤ i ≤ m} is a cyclic permutation of {⟨h_1, h_k⟩, 1 ≤ k ≤ m} for all k, and the Gram matrix H*H is a circulant matrix diagonalized by a DFT matrix, so that the eigenvectors of R are fixed. (In [37] we show that a vector set has a circulant Gram matrix if and only if the set is cyclic.) We may then wish to specify the values {⟨h_1, h_k⟩, 1 ≤ k ≤ m} (possibly up to a scale factor), which corresponds to specifying the eigenvalues of R, or we may choose these values, equivalently the eigenvalues of R, so that the LS error is minimized.

In [1] we consider both the case in which R is fixed and the case in which the eigenvalues of R are chosen to minimize the LS error ε_LS. As we show, for fixed R the LS inner product shaping problem has a simple closed-form solution; by contrast, if the eigenvalues of R are not specified, then there is no known analytical solution to the LS inner product shaping problem for arbitrary vectors s_i. An iterative algorithm is developed in [1]. In the simplest case where R is a specified rank-r matrix, with r = n = m, the LS vectors are the “columns” of the (possibly infinite) matrix

H = α̃ S(RS*S)^{-1/2}R = α̃ (SRS*)^{-1/2}SR,  (12)

where S is the matrix of columns s_i. If c in (11) is specified then α̃ = c, and if c is chosen to minimize the LS error then

α̃ = Tr((S*SR)^{1/2}) / Tr(R),  (13)

where Tr(⋅) denotes the trace of the corresponding matrix.

If r = n and 𝒩(S) = 𝒩(R), where 𝒩(⋅) denotes the null space of the corresponding transformation, then

H = α̃ S((RS*S)^{1/2})† R = α̃ ((SRS*)^{1/2})† SR,  (14)

where (⋅)† denotes the Moore-Penrose pseudoinverse.
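For finite-dimensional, invertible S and R, the fixed-R solution (12)-(13) is easy to compute. The sketch below is our own (the helper names are hypothetical); it uses the Hermitian form S R^{1/2}(R^{1/2}S*S R^{1/2})^{-1/2}R^{1/2}, which is algebraically equivalent to (12) and numerically well behaved, and verifies that the resulting Gram matrix is proportional to R.

```python
import numpy as np

def sqrtm_psd(A):
    """Nonnegative-definite square root of a Hermitian PSD matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def inv_sqrtm(A):
    """Inverse square root of a Hermitian positive-definite matrix."""
    w, V = np.linalg.eigh(A)
    return (V / np.sqrt(w)) @ V.conj().T

def ls_inner_product_shaping(S, R):
    """Columns h_i closest in LS to the columns of S subject to the Gram
    constraint H*H = alpha^2 R, with alpha chosen to minimize the LS error
    as in (13); assumes S and R are invertible (r = n = m)."""
    Rh = sqrtm_psd(R)
    A = Rh @ S.conj().T @ S @ Rh
    H_unit = S @ Rh @ inv_sqrtm(A) @ Rh          # solution with alpha = 1
    alpha = np.trace(sqrtm_psd(A)) / np.trace(R) # Tr((S*S R)^{1/2}) / Tr(R)
    return alpha * H_unit

S = np.array([[1.0, 0.9], [0.0, 0.4359]])        # two nearly parallel unit vectors
H = ls_inner_product_shaping(S, np.eye(2))       # closest set with orthogonal Gram

G = H.conj().T @ H
assert np.allclose(G, G[0, 0] * np.eye(2))       # Gram is a multiple of I
```

With R = I this recovers the nearest (scaled) orthonormal set, the construction used repeatedly in the applications below.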


mal QSP measurements to derive new processing methods, in [45] and [46] we consider a generic detection problem where one of a set of signals is transmitted over a noisy channel. When the additive noise is white and Gaussian, it is well known (see, e.g., [15] and [47]) that the receiver that maximizes the probability of correct detection is the matched filter (MF) receiver. If the noise is not Gaussian, then the MF receiver does not necessarily maximize the probability of correct detection. However, it is still used as the receiver of choice in many applications since the optimal detector for non-Gaussian noise is typically nonlinear (see, e.g., [48] and references therein) and depends on the noise distribution, which may not be known.

By describing the MF detector as a QSP measurement and imposing an inner product constraint on the measurement vectors, we derive a new class of receivers consisting of a bank of correlators with correlating signals that are matched to a set of signals with a specified inner product structure R and are closest in an LS sense to the transmitted signals. These receivers depend only on the transmitted signals, so that they do not require knowledge of the noise distribution or the channel signal-to-noise ratio (SNR). We refer to these receivers as covariance shaping MF receivers [1]. In the special case in which R = I, the receiver consists of a bank of correlators with orthogonal correlating signals that are closest in an LS sense to the transmitted signals and is referred to as the orthogonal matched filter (OMF) receiver [45], [46].
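In the R = I case the correlating signals are the orthonormal vectors closest in LS to the transmitted signals, i.e., proportional to S(S*S)^{-1/2}. A small sketch of such a receiver (our own; the signal set and the noise perturbation are made up for illustration):

```python
import numpy as np

def omf_correlators(S):
    """Orthonormal correlating signals closest in LS to the columns of S:
    Q = S (S*S)^{-1/2}, computed via the eigendecomposition of S*S."""
    w, V = np.linalg.eigh(S.conj().T @ S)
    return S @ (V / np.sqrt(w)) @ V.conj().T

def detect(correlators, y):
    """Decide which signal was sent: the index of the largest correlation."""
    return int(np.argmax(correlators.conj().T @ y))

# Two highly correlated unit-norm signals (angle 0.3 rad apart).
S = np.column_stack([[1.0, 0.0], [np.cos(0.3), np.sin(0.3)]])
Q = omf_correlators(S)
assert np.allclose(Q.conj().T @ Q, np.eye(2))    # correlators are orthonormal

i = detect(Q, S[:, 1] + 0.01 * np.array([1.0, -1.0]))  # perturbed copy of s_2
```

Replacing `np.eye`-style orthonormality with a general Gram matrix R (via the LS inner product shaping solution) gives the covariance shaping MF receiver.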

Alternatively, we show that the modified receivers can be implemented as an MF demodulator followed by an optimal covariance shaping transformation that optimally shapes the correlation of the outputs of the MF prior to detection. This equivalent representation leads to the concept of minimum mean-squared error (MMSE) covariance shaping, which we consider in its most general form in [1] and [49]. Simulations presented in [45] and [46] show that when the noise is non-Gaussian this approach can lead to improved performance over conventional MF detection in many cases, with only a minor impact in performance when the noise is Gaussian. These results are encouraging since they suggest that if the receiver is designed to operate in different noise environments or in an unknown noise environment, then we may prefer using the modified detectors, since for certain non-Gaussian noise distributions these detectors may result in an improvement in performance over an MF detector, without significantly degrading the performance if the noise is Gaussian.

In Fig. 9 we illustrate the performance advantage with one simulation from [45]. In this figure we plot the mean


Figure 9. Comparison between the OMF and MF detectors in Beta-distributed noise, as a function of the number of signals in the transmitted constellation (Pd versus constellation sizes 0-30). The parameters of the distribution are a = b = 0.1, and the SNR is 0 dB. The dashed line is the mean Pd using the OMF detector, and the solid line is the mean Pd using the MF detector. The vertical bars indicate the standard deviation of the corresponding Pd.

Figure 8. Decomposition of x into its components in S and in U, given by E_SU x and E_US x, respectively.

Figure 10. Comparison between the OMF and MF detectors for transmitted constellations of seven signals in Gaussian noise, as a function of SNR (Pd versus SNR from −10 to 20 dB). The dashed line is the mean Pd using the OMF detector, and the solid line is the mean Pd using the MF detector. The vertical bars indicate the standard deviation of the corresponding Pd.


and standard deviation of the probability of correct detection Pd for the OMF detector with R = I in Beta-distributed noise, as a function of the number of signals in the transmitted constellation. The dimension m of the signals in the constellation is equal to the number of signals, and the samples of the signals are mutually independent zero-mean Gaussian random variables with variance 1/m, scaled to have norm one. The vertical bars indicate the standard deviation of Pd. The results in the figure were obtained by generating 500 realizations of signals. For each particular signal realization, we determined the probability of correct detection for the detectors in both types of noise by recording the number of successful detections over 500 noise realizations. From the figure it is evident that the OMF detector outperforms the MF detector, particularly when the probability of correct detection with the MF is marginal. The relative improvement in performance of the OMF detector over the MF detector increases for increasing constellation size. In Fig. 10 we plot the mean of Pd for the OMF detector and the MF detector in Gaussian noise, for transmitted constellations of seven signals, as a function of SNR. Again, the vertical bars indicate the standard deviation of Pd.

Optimal Covariance Shaping

Drawing from the quantum detection problem, we can develop new classes of linear algorithms that result from imposing a deterministic or stochastic inner product constraint on the algorithm, i.e., a covariance constraint, and then using the results we obtained in the context of quantum detection to derive optimal algorithms subject to this constraint. In particular, we may extend the concept of LS inner product shaping suggested by the quantum detection framework to develop optimal algorithms that minimize a stochastic mean-squared error (MSE) criterion subject to a covariance constraint.

As an example of this approach, in [1] we exploit the concept of LS inner product shaping to develop a new viewpoint towards whitening and other covariance shaping problems.

Suppose we have a random vector a that lies in an m-dimensional space with covariance C_a, and we want to shape the covariance of the vector a using a shaping transformation T to obtain the random vector b = Ta, where the covariance matrix of b is given by C_b = c²R for some c > 0 and some covariance matrix R. Thus we seek a transformation T such that

C_b = TC_aT* = c²R,  (15)

for some c > 0.

Data shaping arises in a variety of contexts in which it is useful to shape the covariance of a data vector either prior to subsequent processing or to control the spectral shape after processing [50], [51]. As is well known, given a covariance matrix C_a, there are many ways to choose a shaping transformation T satisfying (15). While in some applications certain conditions might be imposed on the transformation such as causality or symmetry, with the exception of the work in [46], [45], [52], [53], [49], and [54], which explicitly relies on the optimality properties developed in [1], there have been no general assertions of optimality for various choices of a linear shaping transformation. In particular, the shaped vector may not be “close” to the original data vector. If this vector undergoes some noninvertible processing or is used as an estimator of some unknown parameters represented by the data, then we may wish to choose the covariance shaping transformation so that the shaped output is close to the original data in some sense.

Building upon the concept of LS inner product shaping, we propose choosing an optimal shaping transformation that results in a shaped vector b that is as close as possible to the original vector a in an MSE sense, which we refer to as MMSE covariance shaping. Specifically, among all possible transformations we seek the one that minimizes the total MSE given by

ε_MSE = Σ_{i=1}^{m} E(|a_i − b_i|²) = E((a − b)*(a − b)),  (16)

subject to (15), where a_i and b_i are the ith components of a and b, respectively.

The solution to the MMSE covariance shaping problem is developed in [1]; further details are summarized in “MMSE Covariance Shaping.” This new concept of MMSE shaping can be useful in a variety of signal processing methods that incorporate shaping transformations, in which we can imagine using an optimal procedure that shapes the data but at the same time minimizes the distortion to the original data.

Linear Estimation

As another example of an algorithm suggested by the quantum detection framework, where we use the ideas of LS inner product shaping to design an optimal linear algorithm subject to a stochastic inner product constraint, in [54] we derive a new linear estimator for the unknown deterministic parameters in a linear model. The estimator is chosen to minimize an MSE criterion, subject to a constraint on the covariance of the estimator. This new estimator is defined as the covariance shaping LS (CSLS) estimator.

Many estimation problems can be represented by the linear model y = Hx + w, where H is a known matrix, x is a vector of unknown deterministic parameters to be estimated, and w is a random vector with covariance C_w. A common approach to estimating the parameters x is to restrict the estimator to be linear in the data y and then find the linear estimate of x that results in an estimated data vector that is as close as possible to the given data vector y in a (weighted) LS sense, so that it minimizes the total squared error in the observations [55]-[58]. It is well



known that among all possible unbiased linear estimators, the LS estimator minimizes the variance [56]. However, this does not imply that the resulting variance or MSE is small, where the MSE of an estimator is the sum of the variance and the squared norm of the bias. In particular, a difficulty often encountered when using the LS estimator to estimate the parameters x is that the error in estimating x can have a large variance and a covariance structure with a very high dynamic range. This is due to the fact that in many cases the data vector y is not very sensitive to changes in x, so that a large error in estimating x may translate into a small error in estimating the data vector y, in which case the LS estimate may result in a poor estimate of x. This effect is especially predominant at low to moderate SNR, where the data vector y is typically affected more by the noise than by changes in x; the exact SNR range will depend on the properties of the model matrix H.

The CSLS estimator, denoted by x̂_CSLS, is a biased estimator directed at improving the performance of the traditional LS estimator at low to moderate SNR by choosing the estimate to minimize the (weighted) total error variance in the observations subject to a constraint on the covariance of the estimation error, so that we control the dynamic range and spectral shape of the covariance of the estimation error. Thus, x̂_CSLS = Gy is chosen to minimize

ε_CSLS = E((y′ − HGy′)* C_w^{-1} (y′ − HGy′)),  (26)

where y′ = y − E(y), subject to the constraint that the covariance of the error in the estimate x̂_CSLS, which is equal to the covariance of the estimate x̂_CSLS, is proportional to a given covariance matrix R. Thus G must satisfy

GC_wG* = c²R,  (27)


MMSE Covariance Shaping

In [1] we show that the MMSE covariance shaping problem can be interpreted as an LS inner product shaping problem, so that the MMSE shaping transformation can be found by applying results derived in that context. In the simplest case in which C_a and R are invertible, the optimal shaping transformation is given by

T = α(RC_a)^{-1/2}R = αR(C_aR)^{-1/2},  (17)

where if c in (15) is specified then α = c, and if c is chosen to minimize the MSE then

α = Tr((C_aR)^{1/2}) / Tr(R).  (18)

Here (⋅)^{1/2} denotes the unique nonnegative-definite square root of the corresponding matrix.

When R = I, so that T is a whitening transformation, (17) reduces to

T = αC_a^{-1/2}  (19)

and (18) becomes

α = (1/m) Tr(C_a^{1/2}).  (20)

It is interesting to note that the MMSE whitening transformation has the additional property that it is the unique symmetric whitening transformation (up to a factor of ±1) [103]. It is also proportional to the Mahalanobis transformation, which is frequently used in signal processing applications incorporating whitening (see, e.g., [57], [50], and [51]).

Another interesting case is when the random vector a is white, so that C_a = I, and the problem is to optimally shape its covariance. The MMSE transformation in this case is

T = αR^{1/2},  (21)

where if c is fixed then α = c, and if c is chosen to minimize the MSE then

α = Tr(R^{1/2}) / Tr(R).  (22)

We may also consider a weighted MMSE covariance shaping problem in which the shaping T is chosen to minimize a weighted MSE. Thus we seek a transformation T such that b = Ta has covariance C_b = c²R for some c > 0 and such that

ε_MSE^w = E((a − b)* A (a − b)),  (23)

is minimized, where A is some nonnegative-definite Hermitian weighting matrix. In the simplest case in which C_a, R, and A are all invertible, the weighted MMSE covariance shaping transformation is

T = α(RAC_aA)^{-1/2}RA,  (24)

where if c is specified then α = c, and if c is chosen to minimize the weighted MSE then

α = Tr((RAC_aA)^{1/2}) / Tr(RA).  (25)
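The closed form (17)-(18) is straightforward to check numerically. The sketch below is our own (the name `mmse_shaping` is hypothetical); it computes T through the Hermitian form R^{1/2}(R^{1/2}C_aR^{1/2})^{-1/2}R^{1/2}, which is algebraically equal to (17), and verifies both the covariance constraint and the symmetry property of the whitening case noted above.

```python
import numpy as np

def mmse_shaping(C_a, R):
    """MMSE covariance shaping transformation of (17)-(18),
    assuming C_a and R are invertible."""
    wR, VR = np.linalg.eigh(R)
    Rh = (VR * np.sqrt(wR)) @ VR.conj().T          # R^{1/2}
    A = Rh @ C_a @ Rh                              # R^{1/2} C_a R^{1/2}
    wA, VA = np.linalg.eigh(A)
    A_inv_sqrt = (VA / np.sqrt(wA)) @ VA.conj().T
    T_unit = Rh @ A_inv_sqrt @ Rh                  # satisfies T C_a T* = R
    alpha = np.trace((VA * np.sqrt(wA)) @ VA.conj().T) / np.trace(R)
    return alpha * T_unit                          # alpha = Tr((C_a R)^{1/2}) / Tr(R)

C_a = np.array([[2.0, 0.5], [0.5, 1.0]])
T = mmse_shaping(C_a, np.eye(2))       # MMSE whitening: T = alpha C_a^{-1/2}
C_b = T @ C_a @ T.T

assert np.allclose(C_b, C_b[0, 0] * np.eye(2))   # output covariance ∝ I
assert np.allclose(T, T.T)                       # symmetric, as noted in the text
```

The same routine with a nonidentity R performs general covariance shaping rather than whitening.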



Covariance Shaping Least-Squares Estimation

If the model matrix H has full column rank, then the covariance shaping LS estimator that minimizes (26) subject to (27) is given by

x̂_CSLS = β(RH*C_w^{-1}H)^{-1/2} RH*C_w^{-1} y,  (28)

where if c in (27) is specified then β = c, and if c is chosen to minimize the error then

β = Tr((RH*C_w^{-1}H)^{1/2}) / Tr(RH*C_w^{-1}H).  (29)

The CSLS estimator can also be expressed as an LS estimator followed by a weighted MMSE covariance shaping transformation. Specifically, suppose we estimate the parameters x using the LS estimate x̂_LS. Since x̂_LS = x + w̃, where w̃ = (H*C_w^{-1}H)^{-1}H*C_w^{-1}w, the covariance of the noise component w̃ in x̂_LS is equal to the covariance of x̂_LS, denoted C_x̂LS, which is given by C_x̂LS = σ²(H*C^{-1}H)^{-1} with C_w = σ²C. To improve the performance of the LS estimator, we may shape the covariance of the noise component in the estimator x̂_LS. Thus we seek a transformation W such that the covariance matrix of x̂ = Wx̂_LS, denoted by C_x̂, satisfies C_x̂ = WC_x̂LS W* = c²R, for some c > 0. To minimize the distortion to the estimator x̂_LS, we choose the transformation W that minimizes the weighted MSE

E((x̂′_LS − Wx̂′_LS)* C_x̂LS^{-1} (x̂′_LS − Wx̂′_LS)).  (30)

As we show in [54], the resulting estimator x̂ is equal to x̂_CSLS, so that the CSLS estimator can be determined by first finding the LS estimator x̂_LS and then optimally shaping its covariance. The CSLS estimator with fixed scaling can also be expressed as a matched correlator estimator followed by MMSE shaping.

If the noise w in the model y = Hx + w is Gaussian with zero mean and covariance C_w, then the CSLS estimator achieves the CRLB for biased estimators with bias B equal to the bias of the CSLS estimator, which is given by

B = (β(RH*C_w^{-1}H)^{1/2} − I)x.  (31)

Thus, from all estimators with bias given by (31) for some β and R, the CSLS estimator minimizes the variance.

While it would be desirable to analyze the MSE of the CSLS estimator for more general forms of bias, we cannot directly evaluate the MSE of the CSLS estimator since the bias, and consequently the MSE, depend explicitly on the unknown parameters x. Instead, we may compare the MSE of the CSLS estimator with the MSE of the LS estimator. The analysis in [54] indicates that there are many cases in which the CSLS estimator performs better than the LS estimator in an MSE sense, for all values of the unknown parameters x. Specifically, for a variety of choices of the output covariance R, there is a threshold SNR such that for SNR values below this threshold the CSLS estimator yields a lower MSE than the LS estimator, for all values of x.

If the output covariance is chosen to be equal to σR, where C_w = σ²C for some covariance matrix C, then it can be shown that there is a threshold SNR value so that for SNR values below this threshold, the MSE of the CSLS estimator is always smaller than the MSE of the LS estimator, regardless of the value of x. Specifically, let ζ = ||x||²/(mσ²) denote the SNR per dimension. Then with B = H*C^{-1}H, γ = arg max_k σ_k, and σ_k denoting the eigenvalues of Q = ((RB)^{1/2} − I)*((RB)^{1/2} − I), the MSE of the CSLS estimator is less than or equal to the MSE of the LS estimator for ζ ≤ ζ̂_WC, where

ζ̂_WC = (Tr(B^{-1}) − Tr(R)) / σ_γ.  (32)

The bound ζ̂_WC is a worst-case bound, since it corresponds to the worst possible choice of parameters, namely when the unknown vector x is in the direction of the eigenvector of Q corresponding to the eigenvalue σ_γ. In practice the CSLS estimator will outperform the LS estimator for higher values of SNR than ζ̂_WC.

Since we have freedom in designing R, we may always choose R so that ζ̂_WC > 0. In this case we are guaranteed that there is a range of SNR values for which the CSLS estimator leads to a lower MSE than the LS estimator for all choices of the unknown parameters x.

For example, suppose we wish to design an estimator with covariance proportional to some given covariance matrix Z, so that R = aZ for some a > 0. If we choose a < Tr(B^{-1})/Tr(Z), then we are guaranteed that there is an SNR range for which the CSLS estimator will have a lower MSE than the LS estimator for all values of x.
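As an illustrative numerical sketch (not from the article), the threshold of (32) can be evaluated directly; the model matrix H, the shape Z, and the scale a below are arbitrary choices satisfying a < Tr(B^{-1})/Tr(Z):

```python
import numpy as np
from scipy.linalg import sqrtm

# Hypothetical model: random H, C = I, and shaping R = a*Z scaled so that
# Tr(R) < Tr(B^{-1}), which forces the threshold zeta_wc to be positive.
rng = np.random.default_rng(0)
n, m = 20, 4
H = rng.standard_normal((n, m))
B = H.T @ H                               # B = H* C^{-1} H with C = I
Z = np.diag(1.0 + np.arange(m))           # desired covariance shape (arbitrary)
a = 0.5 * np.trace(np.linalg.inv(B)) / np.trace(Z)
R = a * Z

# Q = ((RB)^{1/2} - I)* ((RB)^{1/2} - I); sigma_gamma is its largest eigenvalue
RB_half = np.real(sqrtm(R @ B))
D = RB_half - np.eye(m)
sigma_gamma = np.linalg.eigvalsh(D.T @ D).max()

# Worst-case SNR-per-dimension threshold of (32)
zeta_wc = (np.trace(np.linalg.inv(B)) - np.trace(R)) / (m * sigma_gamma)
print(zeta_wc > 0)
```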

In specific applications it may not be obvious how to choose a particular proportionality factor a. In such cases, we may prefer using the CSLS estimator with optimal scaling. In this case, the scaling is a function of R and therefore cannot be chosen arbitrarily, so that in general we can no longer guarantee that there is a positive SNR threshold, i.e., that there is always an SNR range over which the CSLS performs better than the LS estimator. However, as shown in [54], in the special case in which R = I, there is always such an SNR range. Specifically, with {λ_k, 1 ≤ k ≤ m} denoting the eigenvalues of B = H*C^{-1}H, and

α = (Σ_{k=1}^m λ_k^{1/2}) / (Σ_{k=1}^m λ_k),

the MSE of the CSLS estimator is less than or equal to the MSE of the LS estimator for ζ ≤ ζ_WC, where

ζ_WC = ((1/m) Σ_{k=1}^m λ_k^{-1} − α²) / (1 − αλ_γ^{1/2})², (33)

and γ = arg max_k (1 − αλ_k^{1/2})². Here again ζ_WC is a worst-case bound; in practice the CSLS estimator will outperform the LS estimator for higher values of SNR than ζ_WC. Examples presented in [54] indicate that in a variety of applications ζ_WC can be quite large.
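For the R = I case, the quantities in (33) are simple functions of the eigenvalues of B; the following sketch (with an arbitrary random model matrix and C = I) checks that the resulting threshold is positive:

```python
import numpy as np

# Illustrative model: lambda_k are the eigenvalues of B = H* C^{-1} H.
rng = np.random.default_rng(1)
n, m = 30, 5
H = rng.standard_normal((n, m))
lam = np.linalg.eigvalsh(H.T @ H)            # eigenvalues lambda_k of B

alpha = np.sqrt(lam).sum() / lam.sum()       # optimal scaling
gamma = np.argmax((1 - alpha * np.sqrt(lam)) ** 2)
zeta_wc = ((1 / lam).mean() - alpha ** 2) / (1 - alpha * np.sqrt(lam[gamma])) ** 2
print(zeta_wc > 0)                           # an SNR range always exists for R = I
```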


where c > 0 is a constant that is either specified or chosen to minimize the error (26).

The CSLS estimator x̂_CSLS is developed in [1] and [54]. Some of its properties are discussed in "Covariance Shaping Least-Squares Estimation."

Various modifications of the LS estimator under the linear model assumption have been previously proposed in the literature. Among the more prominent alternatives are the ridge estimator [59] (also known as Tikhonov regularization [60]) and the shrunken estimator [61]. In [1] we show that both the ridge estimator and the shrunken estimator can be formulated as CSLS estimators, which allows us to interpret these estimators as the estimators that minimize the total error variance in the observations, from all linear estimators with the same covariance.

As shown in [54], the CSLS estimator has a property analogous to the property of the LS estimator. Specifically, it achieves the Cramer-Rao lower bound (CRLB) for biased estimators [56], [62], [63] when the noise is Gaussian. This implies that for Gaussian noise, there is no linear or nonlinear estimator with a smaller variance, or MSE, and the same bias as the CSLS estimator.

Analysis of the MSE of the CSLS estimator [54] demonstrates that the covariance of the estimation error can be chosen such that over a wide range of SNR, the CSLS estimator results in a lower MSE than the traditional LS estimator, for all values of the unknown parameters. Simulations presented in [1] and [54] strongly suggest that the CSLS estimator can significantly decrease the MSE of the estimation error over the LS estimator for a wide range of SNR values. As an example, in Fig. 11 we plot the MSE in estimating a set of AR parameters in an ARMA model contaminated by white noise, using both the CSLS with R = I and the LS estimators from 20 noisy observations of the channel, averaged over 2000 noise realizations, as a function of −10 log σ², where σ² is the noise variance. As can be seen from the figure, in this example the CSLS estimator significantly outperforms the LS estimator. In general, the performance advantage using the CSLS estimator will depend on the properties of the model matrix H and the noise covariance C_w, as indicated by the analysis in [54] and in "Covariance Shaping Least-Squares Estimation."

Multiuser Detection

Based on the concept of CSLS estimation, we propose a new class of linear receivers for synchronous code-division multiple-access (CDMA) systems, which we refer to as the covariance shaping multiuser (CSMU) receivers. These receivers depend only on the users' signatures and do not require knowledge of the channel parameters. Nonetheless, over a wide range of these parameters the performance of these receivers can approach the performance of the linear MMSE receiver, which is the optimal linear receiver that assumes knowledge of the channel parameters and maximizes the output signal-to-interference ratio. These receivers generalize the recently proposed orthogonal multiuser receiver [52].

Consider the problem of detecting information transmitted by each of the users in an m-user CDMA system. Each user transmits information by modulating a signature sequence. The discrete-time model for the received signal y is given by [64]

y = SAb + w, (34)

where S is the n × m matrix with columns s_k, where s_k is the length-n signature vector of the kth user, A = diag(A_1, ..., A_m) where A_k > 0 is the received amplitude of the kth user's signal, b is a vector of elements b_k where b_k ∈ {−1, 1} is the bit transmitted by the kth user, and w is a white Gaussian noise vector with zero mean and covariance C_w = σ²I.

Given the received signal y, the problem is to detect the information bits b_k of the different users. One approach consists of estimating the vector x = Ab and then detecting the kth symbol as b̂_k = sgn(x_k), where x_k is the kth component of x.

Estimating x using an LS estimator results in the well-known decorrelator receiver, first proposed by Lupas and Verdu [65]. Alternatively, we may estimate x using the CSLS estimator, which results in a new receiver which we refer to as the CSMU receiver. The form of this receiver is discussed in "Covariance Shaping Multiuser Detection."
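To make the distinction concrete, here is a small simulated sketch of the two receivers; the random signatures, unit amplitudes, and high SNR are illustrative choices, and only the R = I shaping is shown:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(2)
n, m = 16, 4
S = rng.standard_normal((n, m))
S /= np.linalg.norm(S, axis=0)        # unit-norm signatures
b = rng.choice([-1, 1], size=m)       # transmitted bits, A = I (power control)
sigma = 0.05                          # high SNR so both receivers succeed
y = S @ b + sigma * rng.standard_normal(n)          # received signal, model (34)

x_dec = np.linalg.solve(S.T @ S, S.T @ y)           # decorrelator (LS estimate)
x_csmu = np.real(sqrtm(np.linalg.inv(S.T @ S))) @ S.T @ y  # CSMU with R = I
print((np.sign(x_dec) == b).all(), (np.sign(x_csmu) == b).all())
```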

From the general properties of the CSLS estimator [1], [54], it follows that the CSMU receiver can be implemented as a decorrelator receiver followed by a (weighted) MMSE covariance shaping transformation that optimally shapes the covariance of the output of the decorrelator. The choice of shaping can be tailored to the specific set of signatures. It can also be shown that this receiver is equivalent to an MF receiver followed by an MMSE covariance


▲ 11. Mean-squared error in estimating the AR parameters using the LS estimator and the CSLS estimator.


shaping transformation. Finally, the CSMU receiver can also be implemented as a correlation demodulator whose correlating signals have inner product matrix R and are closest in an LS sense to the signature vectors.

In the special case in which R = I, the shaping transformation is a whitening transformation, and the CSMU receiver is equivalent to a decorrelator receiver followed by MMSE whitening. This receiver has been referred to as the orthogonal multiuser receiver [22], [53]. By allowing for other choices of R we can improve the performance over both the decorrelator and the orthogonal multiuser receiver for a wide range of channel parameters.

To demonstrate the performance advantage in using the CSLS estimator, we consider the case in which the signature vectors are chosen as pseudonoise sequences corresponding to maximal-length, shift-register sequences [64], [66], so that

s_k*s_l = 1 if l = k, and s_k*s_l = −1/n if l ≠ k. (41)

The shaping R is chosen as a circulant matrix with parameter ρ:

R = [ 1  ρ  ...  ρ
      ρ  1  ...  ρ
      ...
      ρ  ρ  ...  1 ]. (42)
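The shaping matrix of (42) has ones on the diagonal and ρ everywhere else, so it is positive definite whenever −1/(m − 1) < ρ < 1. A quick check, with the illustrative values m = 10 and ρ = 0.35:

```python
import numpy as np

m, rho = 10, 0.35
# The circulant shaping of (42): ones on the diagonal, rho off the diagonal.
R = (1 - rho) * np.eye(m) + rho * np.ones((m, m))
# Its eigenvalues are 1 - rho (multiplicity m-1) and 1 + (m-1)*rho.
print(np.linalg.eigvalsh(R).min() > 0)
```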

Fig. 12 compares the theoretical probability of bit error of the CSMU receiver in the case of ten users with ρ = 0.35 and with accurate power control so that A_m = 1 for all m. The corresponding curves for the decorrelator, MF, and linear MMSE receivers are plotted for comparison. The MMSE receiver is the linear receiver that maximizes the signal-to-interference ratio and, unlike the decorrelator, MF, and CSMU receivers, requires knowledge of the channel parameters. We see that the CSMU receiver performs better than the decorrelator and the MF and performs similarly to the linear MMSE receiver, even though it does not rely on knowledge of the channel parameters.

In [67] we develop methods to analyze the output signal-to-interference+noise ratio (SINR) of the CSMU receiver in the large-system limit. We show that the SINR converges to a deterministic limit and compare this limit to the known SINR limits for the decorrelator, MF, and linear MMSE receivers [67]-[69]. The analysis suggests that this modified receiver can lead to improved performance over the decorrelator and MF receiver and can approach the performance of the linear MMSE receiver over a wide range of channel parameters without requiring knowledge of these parameters.

Applications of Subspace Measurements

Simple Subspace Measurements

A simple subspace QSP measurement is equivalent to a linear projection operator. Numerous signal processing


Covariance Shaping Multiuser Detection

The CSMU detector results from estimating x = Ab in (34) using the CSLS estimator, which in the case of linearly independent signature vectors leads to the estimator

x̂_CSLS = (RS*S)^{-1/2} RS*y, (35)

where R is any positive-definite Hermitian matrix. Note that the scaling of x̂_CSLS will not affect the detector output and therefore can be chosen arbitrarily. The resulting receiver cross-correlates y with each of the columns q_k of Q = SR(S*SR)^{-1/2} to yield the outputs d_k = q_k*y. The kth user's bit is then detected as b̂_k = sgn(d_k).
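The equivalence between the estimator (35) and this correlation form can be checked numerically; the dimensions, shaping R, and data below are arbitrary test values:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(3)
n, m = 12, 3
S = rng.standard_normal((n, m))
R = np.diag([1.0, 2.0, 3.0])          # any positive-definite Hermitian shaping
y = rng.standard_normal(n)

# The estimator of (35), and the bank of correlating signals Q
x_csls = np.real(sqrtm(np.linalg.inv(R @ S.T @ S))) @ R @ S.T @ y
Q = S @ R @ np.real(sqrtm(np.linalg.inv(S.T @ S @ R)))
print(np.allclose(Q.T @ y, x_csls))   # correlating with the q_k gives d = x_hat
```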

In the large-system limit, when both the number of users and the length of the signature vectors go to infinity with fixed ratio β = n/m, it can be shown that the SINR γ at the output of the CSMU detector with R = I converges to a deterministic limit when random Gaussian signatures and accurate power control are used. Specifically, in [53] we show that

γ →m.s. η_2[(η_1 + η_2)E(k) − 2η_1K(k)]² / (9π²β²(1 + ζ) − η_2[(η_1 + η_2)E(k) − 2η_1K(k)]²), with k² = 1 − η_1/η_2, (36)

where 1/ζ is the received SNR,

K(k) = ∫_0^{π/2} dt/√(1 − k² sin²t) = ∫_0^1 dx/√((1 − x²)(1 − k²x²)) (37)

E(k) = ∫_0^{π/2} √(1 − k² sin²t) dt = ∫_0^1 √(1 − k²x²)/√(1 − x²) dx (38)

are the complete elliptic integrals of the first and second kinds, respectively [104], and

η_1 = (1 − √β)² (39)

η_2 = (1 + √β)². (40)

The notation →m.s. denotes convergence in the mean-squared (L2) sense [105].


and detection algorithms based on orthogonal projections have been developed. Algorithms based on oblique projections have received much less attention in the signal processing literature. Recently, oblique projections have been applied to various detection problems [70], [71], to signal and parameter estimation [72], to computation of wavelet transforms [73], and to the formulation of consistent interpolation and sampling methods [20], [74].

In [20] the authors develop consistent reconstruction algorithms for sampled signals, in which the reconstructed signal is in general not equal to the original signal but nonetheless yields the same samples. Using a simple subspace QSP measurement corresponding to an oblique projection operator, in [23] and [24] the results of [20] are extended to a broader framework that can be applied to arbitrary subspaces of an arbitrary Hilbert space, as well as arbitrary input signals. The algorithms developed yield perfect reconstruction for signals in the reconstruction subspace, and consistent reconstruction for arbitrary signals, so that this framework includes the more restrictive perfect reconstruction theories as special cases. This framework leads to some new sampling theorems and can also be used to construct signals from a certain class with prescribed properties [75]. For example, we can use this framework to construct a finite-length signal with specified low-pass coefficients or an odd-symmetric signal with specified local averages.
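A minimal finite-dimensional sketch of this idea (the sampling vectors, reconstruction subspace, and input below are arbitrary): the reconstruction is an oblique projection, so it reproduces the measured samples exactly, and it returns the signal itself whenever the signal already lies in the reconstruction subspace.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m = 8, 3
S = rng.standard_normal((n, m))       # sampling vectors (columns)
W = rng.standard_normal((n, m))       # reconstruction subspace (columns)
x = rng.standard_normal(n)            # arbitrary input signal

c = S.T @ x                           # measured samples
x_hat = W @ np.linalg.solve(S.T @ W, c)   # consistent reconstruction in span(W)
print(np.allclose(S.T @ x_hat, c))    # same samples as the original signal
```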

Subspace Coding and Decoding

Subspace measurements also lead to interesting and potentially useful coding and decoding methods for communication-based applications over a variety of channel models. In particular, in [1] a subspace approach for transmitting information over a noisy channel is proposed, in which the information is encoded in disjoint subspaces. To detect the information, we design a receiver based on a subspace QSP measurement and show that for a certain class of channel models this receiver implements a generalized likelihood ratio test. Although the discussion constitutes a rather preliminary exploration of such coding techniques, it represents an interesting and potentially useful model for communication in many contexts. In particular, decoding methods suggested by the QSP framework may prove useful in the context of recent advances in multiple-antenna coding techniques [76], [77].

Combined Measurements

An interesting class of measurements in quantum mechanics results from restricting measurements to a subspace in which the quantum system is known a priori to lie. This leads to the notion of generalized measurements, or positive operator-valued measures (POVMs) [78], [79]. It can be shown that a generalized measurement on a quantum system can be implemented by performing a standard measurement on a larger system. Alternatively, we can view a generalized quantum measurement as a combination of a standard measurement followed by an orthogonal projection onto a lower space.

Drawing from the quantum mechanical POVM, as part of the QSP framework we also consider combined QSP measurements. The QSP analogue of a quantum POVM is a ROM followed by a simple subspace QSP measurement corresponding to an orthogonal projection operator. Since the QSP framework does not depend on the physics associated with quantum mechanics, we may extend the notion of a (physically realizable) POVM to include other forms of combined QSP measurements, where we perform any two measurements successively. As developed further in [1], such measurements lead to a variety of extensions and rich insights into frames, to new classes of frames, and to the concept of oblique frame expansions. This framework also leads to subspace MF detectors and randomized algorithms for improving worst-case performance.

Combined Measurements and Tight Frames

Emulating the quantum POVM leads to combined measurements where a ROM is followed by an orthogonal projection onto a subspace. Such measurements are characterized by an effective set of measurement vectors. In [80] we show that the family of possible effective measurement vectors in the subspace is equal to the family of rank-one POVMs on the subspace and is precisely the family of (normalized) tight frames for the subspace.

Frames are generalizations of bases which lead to redundant signal expansions [81], [82]. A frame for a Hilbert space is a set of not necessarily linearly independent vectors that spans the space and has some additional properties. Frames were first introduced by Duffin and Schaeffer [81] in the context of nonharmonic Fourier series and play an important role in the theory of nonuniform sampling [81]-[83]. Recent interest in frames has been motivated in part by their utility in analyzing wavelet expansions [84], [85]. A tight frame is a special case of a frame for which the reconstruction formula is particularly simple and is reminiscent of an orthogonal basis expansion, even though the frame vectors in the expansion are linearly dependent.

Exploiting the equivalence between tight frames and quantum POVMs, we develop frame-theoretic analogues of various quantum-mechanical concepts and results [80]. In particular, motivated by the construction of optimal LS quantum measurements [32], we consider the problem of constructing optimal LS tight frames for a subspace from a given set of vectors that span it.

The problem of frame design has received relatively little attention in the frame literature. A popular frame construction from a given set of vectors is the canonical frame [86]-[89], first proposed in the context of wavelets in [90]. The canonical frame is relatively simple to construct, can be determined directly from the given vectors,


and plays an important role in wavelet theory [18], [91], [92]. In [80] we show that the canonical frame vectors are proportional to the LS frame vectors.
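In matrix form the canonical construction is simple: if the given vectors are the columns of F, the canonical tight frame vectors are the columns of (FF*)^{-1/2}F, and their frame operator is the identity on the span. A small numerical sketch with arbitrary random vectors:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
d, N = 3, 5                           # dimension and number of frame vectors
F = rng.standard_normal((d, N))       # columns span R^d for generic draws
T = np.real(sqrtm(np.linalg.inv(F @ F.T))) @ F   # canonical tight frame
print(np.allclose(T @ T.T, np.eye(d)))           # frame operator = identity
```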

This relationship between combined measurements and frames suggests an alternative definition of frames in terms of projections of a set of linearly independent signals in a larger space. This perspective provides additional insights into frames and suggests a systematic approach for generating new classes of frames by changing the properties of the signals or changing the properties of the projection.

Geometrically Uniform Frames

In the context of a single QSP measurement, we have seen that imposing inner product constraints on the measurement vectors of a ROM leads to interesting new processing techniques. Similarly, in the context of combined measurements, imposing such constraints leads to the definition of geometrically uniform frames [93]. This class of frames is highly structured, resulting in nice computational properties, and possesses strong symmetries that may be advantageous in a variety of applications such as channel coding [94]-[96] and multiple description source coding [97]. Further results regarding these frames are developed in [93].

Consistent Sampling and Oblique Dual Frame Vectors

We also explore extensions of frames that result from choosing an oblique projection operator onto the reconstruction subspace. In this case the measurement is described in terms of two sets of effective measurement vectors, where the first set forms a frame for the subspace and the second set forms what we define as an oblique dual frame. The frame operator corresponding to these vectors is referred to as an oblique dual frame operator and is a generalization of the well-known dual frame operator [86]. As we show in [23], these frame vectors have properties that are very similar to those of the conventional dual frame vectors. However, in contrast with the dual frame vectors, they are not constrained to lie in the same space as the original frame vectors. Thus, using oblique dual frame vectors we can extend the notion of a frame expansion to include redundant expansions in which the analysis frame vectors and the synthesis frame vectors lie in different spaces.

Based on the concept of oblique dual frame vectors, in [23] we develop redundant consistent sampling procedures with (almost) arbitrary sampling and reconstruction spaces. By allowing for arbitrary spaces, the sampling and reconstruction algorithms can be greatly simplified in many cases with only a minor increase in approximation error [20], [21], [98]-[101]. By using oblique dual frame vectors, we can further simplify the sampling and reconstruction processes while still retaining the flexibility of choosing the spaces almost arbitrarily, due to the extra degrees of freedom offered by the use of frames that allow us to construct frames with prescribed properties [84], [102]. Furthermore, if the measurements are quantized prior to reconstruction, then as we show, the average power of the reconstruction error using this redundant procedure can be reduced by as much as the redundancy of the frame in comparison with the nonredundant procedure.
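As a rough finite-dimensional sketch of such a redundant consistent procedure (all subspaces and frames below are arbitrary test data, and the pseudoinverse plays the role of the oblique dual frame synthesis):

```python
import numpy as np

rng = np.random.default_rng(6)
n, dim = 10, 3
# Redundant frames of 5 vectors each for 3-dimensional sampling and
# synthesis subspaces of R^10 (generic random choices)
S = rng.standard_normal((n, dim)) @ rng.standard_normal((dim, 5))
W = rng.standard_normal((n, dim)) @ rng.standard_normal((dim, 5))
x = rng.standard_normal(n)

c = S.T @ x                                  # redundant samples
x_hat = W @ np.linalg.pinv(S.T @ W) @ c      # synthesis with dual coefficients
print(np.allclose(S.T @ x_hat, c))           # consistent: reproduces every sample
```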

▲ 12. Probability of bit error with ten users, ρ = 0.35, and accurate power control, as a function of SNR.

Three principles of quantum mechanics that play a major role in QSP are the concept of a measurement, the principle of measurement consistency, and the principle of quantization of the measurement output.

Acknowledgments

We are extremely grateful to Prof. G.D. Forney for his many thoughtful comments and valuable suggestions that greatly influenced the formulation of the QSP framework and many of the specific results that follow from it. This work has been supported in part by the Army Research Laboratory (ARL) Collaborative Technology Alliance through BAE Systems, Inc. subcontract RK7854; BAE Systems Cooperative Agreement RP6891 under Army Research Laboratory Grant DAAD19-01-2-0008; BAE Systems, Inc. through Contract RN5292; Texas Instruments through the TI Leadership University Consortium; and by IBM through an IBM Fellowship.

Yonina C. Eldar received the B.Sc. degree in physics (summa cum laude) in 1995 and the B.Sc. degree in electrical engineering (summa cum laude) in 1996, both from Tel-Aviv University (TAU), Tel-Aviv, Israel, and the Ph.D. degree in electrical engineering and computer science in 2001 from the Massachusetts Institute of Technology (MIT), Cambridge. From January 2002 to July 2002 she was a postdoctoral fellow at the Digital Signal Processing Group at MIT. She is currently a senior lecturer in the Department of Electrical Engineering at the Technion-Israel Institute of Technology, Haifa. Her current research interests are in the general areas of quantum detection and signal processing. In 1998 she held the Rosenblith Fellowship for study in electrical engineering at MIT, and in 2000 she held an IBM Research Fellowship.

Alan V. Oppenheim received the S.B. and S.M. degrees in 1961 and the Sc.D. degree in 1964, all in electrical engineering, from the Massachusetts Institute of Technology (MIT). He is also the recipient of an honorary doctorate from Tel Aviv University in 1995. In 1964, he joined the faculty at MIT, where he currently is Ford Professor of Engineering and a MacVicar Faculty Fellow. His research interests are in the general area of signal processing and its applications. He is coauthor of the textbooks Discrete-Time Signal Processing and Signals and Systems. He is also editor of several advanced books on signal processing. He is a member of the National Academy of Engineering, a Fellow of the IEEE, and a member of Sigma Xi and Eta Kappa Nu. He has been a Guggenheim Fellow and a Sackler Fellow at Tel Aviv University. He has also received a number of awards for outstanding research and teaching, including the IEEE Education Medal, the IEEE Centennial Award, and the Society Award, Technical Achievement Award, and Senior Award of the IEEE Society on Acoustics, Speech and Signal Processing. He has also received a number of awards at MIT for excellence in teaching, including the Bose Award and the Everett Moore Baker Award.

References

[1] Y.C. Eldar, "Quantum signal processing," Ph.D. dissertation, Mass. Inst. Technol., Dec. 2001. Available: http://allegro.mit.edu/dspg/publications/TechRep/index.html

[2] A. Fettweis, "Wave digital filters: Theory and practice," Proc. IEEE, vol. 74, pp. 270-316, Feb. 1986.

[3] B.B. Mandelbrot, The Fractal Geometry of Nature. San Francisco, CA: Freeman, 1982.

[4] G.W. Wornell and A.V. Oppenheim, "Wavelet-based representations for a class of self-similar signals with application to fractal modulation," IEEE Trans. Inform. Theory, vol. 38, pp. 785-800, Mar. 1992.

[5] M.F. Barnsley and A.D. Sloan, "A better way to compress images," BYTE, vol. 1, no. 12, pp. 215-223, Jan. 1988.

[6] T.L. Carroll and L.M. Pecora, "Synchronizing chaotic circuits," IEEE Trans. Circuits Syst., vol. 38, pp. 453-456, Apr. 1991.

[7] L.M. Pecora and T.L. Carroll, "Driving systems with chaotic signals," Phys. Rev. A, vol. 44, pp. 2374-2383, Aug. 1991.

[8] K.M. Cuomo, A.V. Oppenheim, and S.H. Strogatz, "Synchronization of Lorenz-based chaotic circuits with applications to communications," IEEE Trans. Circuits Syst. II, vol. 40, pp. 626-639, Oct. 1993.

[9] "Special issue on applications of chaos in modern communications systems," IEEE Trans. Circuits Syst. I, vol. 48, Dec. 2001.

[10] A.C. Singer, A.V. Oppenheim, and G.W. Wornell, "Detection and estimation of multiplexed soliton signals," IEEE Trans. Signal Processing, vol. 47, pp. 2768-2782, Oct. 1999.

[11] K.S. Tang, K.F. Man, S. Kwong, and Q. He, "Genetic algorithms and their applications," IEEE Signal Processing Mag., pp. 22-37, Nov. 1996.

[12] M. Pirlot and R.V.V. Vidal, "Simulated annealing: A tutorial," Contr. Cybern., vol. 25, no. 1, pp. 9-31, 1996.

[13] S.S. Haykin, Neural Networks: A Comprehensive Foundation, 2nd ed. Upper Saddle River, NJ: Prentice-Hall, 1999.

[14] H.W. Schüssler and P. Steffen, "Some advanced topics in filter design," in Advanced Topics in Signal Processing, J.S. Lim and A.V. Oppenheim, Eds. Englewood Cliffs, NJ: Prentice-Hall, 1988.

[15] J.G. Proakis, Digital Communications, 3rd ed. New York: McGraw-Hill, 1995.

[16] P.P. Vaidyanathan, Multirate Systems and Filter Banks. Englewood Cliffs, NJ: Prentice-Hall, 1992.

[17] G. Strang and T. Nguyen, Wavelets and Filter Banks. Wellesley, MA: Wellesley-Cambridge, 1996.

[18] S.G. Mallat, "A theory of multiresolution signal decomposition: The wavelet representation," IEEE Trans. Pattern Anal. Mach. Intell., vol. 11, pp. 674-693, 1989.

[19] M. Vetterli and J. Kovacevic, Wavelets and Subband Coding. Englewood Cliffs, NJ: Prentice-Hall, 1995.

[20] M. Unser and A. Aldroubi, "A general sampling theory for nonideal acquisition devices," IEEE Trans. Signal Processing, vol. 42, pp. 2915-2925, Nov. 1994.

[21] M. Unser, "Sampling—50 years after Shannon," Proc. IEEE, vol. 88, pp. 569-587, Apr. 2000.

[22] Y.C. Eldar and A.V. Oppenheim, "Nonredundant and redundant sampling with arbitrary sampling and reconstruction spaces," in Proc. 2001 Workshop on Sampling Theory and Applications, SampTA'01, May 2001, pp. 229-234.

[23] Y.C. Eldar, "Sampling and reconstruction in arbitrary spaces and oblique dual frame vectors," J. Fourier Anal. Appl., to be published.

[24] Y.C. Eldar, "Sampling without input constraints: Consistent reconstruction in arbitrary spaces," in Sampling, Wavelets and Tomography, A. Zayed and J.J. Benedetto, Eds., to be published.

[25] C.W. Helstrom, Quantum Detection and Estimation Theory. New York: Academic, 1976.

[26] D.J. Griffiths, Introduction to Quantum Mechanics. Upper Saddle River, NJ: Prentice-Hall, 1995.

[27] A.S. Holevo, "Statistical decisions in quantum theory," J. Multivar. Anal., vol. 3, pp. 337-394, Dec. 1973.

[28] H.P. Yuen, R.S. Kennedy, and M. Lax, "Optimum testing of multiple hypotheses in quantum detection theory," IEEE Trans. Inform. Theory, vol. IT-21, pp. 125-134, Mar. 1975.


[29] Y.C. Eldar, A. Megretski, and G.C. Verghese, "Designing optimal quantum detectors via semidefinite programming," IEEE Trans. Inform. Theory, to be published.

[30] M. Charbit, C. Bendjaballah, and C.W. Helstrom, "Cutoff rate for the M-ary PSK modulation channel with optimal quantum detection," IEEE Trans. Inform. Theory, vol. 35, pp. 1131-1133, Sept. 1989.

[31] M. Ban, K. Kurokawa, R. Momose, and O. Hirota, "Optimum measurements for discrimination among symmetric quantum states and parameter estimation," Int. J. Theor. Phys., vol. 36, pp. 1269-1288, 1997.

[32] Y.C. Eldar and G.D. Forney, Jr., "On quantum detection and the square-root measurement," IEEE Trans. Inform. Theory, vol. 47, pp. 858-872, Mar. 2001.

[33] Y.C. Eldar, A. Megretski, and G.C. Verghese, "Optimal detection of symmetric mixed quantum states," in preparation.

[34] P. Hausladen and W.K. Wootters, "A 'pretty good' measurement for distinguishing quantum states," J. Mod. Opt., vol. 41, pp. 2385-2390, 1994.

[35] H. Barnum and E. Knill, "Reversing quantum dynamics with near-optimal quantum and classical fidelity," [Online]. Available: http://xxx.lanl.gov/abs/quant-ph/0004088

[36] P. Hausladen, R. Josza, B. Schumacher, M. Westmoreland, and W.K. Wootters, "Classical information capacity of a quantum channel," Phys. Rev. A, vol. 54, pp. 1869-1876, Sept. 1996.

[37] Y.C. Eldar, "Least-squares inner product shaping," Linear Algebra Applicat., vol. 348, pp. 153-174, May 2002.

[38] K. Hoffman and R. Kunze, Linear Algebra, 2nd ed. Englewood Cliffs, NJ: Prentice-Hall, 1971.

[39] A. Aldroubi, "Oblique projections in atomic spaces," Proc. Amer. Math. Soc., vol. 124, no. 7, pp. 2051-2060, 1996.

[40] S. Kayalar and H.L. Weinert, "Oblique projections: Formulas, algorithms, and error bounds," Math. Contr. Signals Syst., vol. 2, no. 1, pp. 33-45, 1989.

[41] S.P. Lipshitz, R.A. Wannamaker, and J. Vanderkooy, "Quantization and dither: A theoretical survey," J. Audio Eng. Soc., vol. 40, no. 5, pp. 355-375, May 1992.

[42] R.A. Wannamaker, S.P. Lipshitz, J. Vanderkooy, and J.N. Wright, "A theory of nonsubtractive dither," IEEE Trans. Signal Processing, vol. 48, pp. 499-516, Feb. 2000.

[43] R.A. Wannamaker, "The theory of dithered quantization," Ph.D. dissertation, Univ. Waterloo, Waterloo, Canada, 1997.

[44] R.M. Gray and T.G. Stockham, "Dithered quantizers," IEEE Trans. Inform. Theory, vol. 39, pp. 805-812, May 1993.

[45] Y.C. Eldar, A.V. Oppenheim, and D. Egnor, "Orthogonal and projected orthogonal matched filter detection," Signal Processing, June 2002, submitted.

[46] Y.C. Eldar and A.V. Oppenheim, "Orthogonal matched filter detection," in Proc. Int. Conf. Acoust., Speech, Signal Processing (ICASSP-2001), Salt Lake City, UT, 2001, vol. 5, pp. 2837-2840.

[47] L.L. Scharf, Statistical Signal Processing: Detection, Estimation and Time Series Analysis. Reading, MA: Addison-Wesley, 1991.

[48] T. Kailath and V. Poor, "Detection of stochastic processes," IEEE Trans. Inform. Theory, vol. 44, pp. 2230-2259, Oct. 1998.

[49] Y.C. Eldar and A.V. Oppenheim, "MMSE whitening and subspace whitening," submitted for publication.

[50] E.M. Friel and K.M. Pasala, "Direction finding with compensation for a near field scatterer," in Proc. Int. Symp. Ant. Propagat. Soc., 1995, pp. 106-109.

[51] R.J. Piechocki, N. Canagarajah, and J.P. McGeehan, "Improving the direction-of-arrival resolution via double code filtering in WCDMA," in Proc. First Int. Conf. 3G Mobile Commun. Technol., Mar. 2000, pp. 204-207.

[52] Y.C. Eldar and A.V. Oppenheim, "Orthogonal multiuser detection," Signal Processing, vol. 82, pp. 321-325, 2002.

[53] Y.C. Eldar and A.M. Chan, "An optimal whitening approach to linear multiuser detection," submitted for publication.

[54] Y.C. Eldar and A.V. Oppenheim, "Covariance shaping least-squares estimation," IEEE Trans. Signal Processing, to be published.

[55] T. Kailath, Lectures on Linear Least-Squares Estimation. Wien, NY: Springer, 1976.

[56] S.M. Kay, Fundamentals of Statistical Signal Processing: Estimation Theory. Upper Saddle River, NJ: Prentice-Hall, 1993.

[57] C.W. Therrien, Discrete Random Signals and Statistical Signal Processing. Englewood Cliffs, NJ: Prentice-Hall, 1992.

[58] J.A. Cadzow, "Signal processing via least squares modeling," IEEE ASSP Mag., pp. 12-31, Oct. 1990.

[59] A.E. Hoerl and R.W. Kennard, "Ridge regression: Biased estimation for nonorthogonal problems," Technometrics, vol. 12, pp. 55-67, Feb. 1970.

[60] A.N. Tikhonov and V.Y. Arsenin, Solution of Ill-Posed Problems. Washington, DC: Winston, 1977.

[61] L.S. Mayer and T.A. Willke, "On biased estimation in linear models," Technometrics, vol. 15, pp. 497-508, Aug. 1973.

[62] C.R. Rao, “Minimum variance and the estimation of several parameters,”Proc. Cambridge Phil. Soc., pp. 280-283, 1946.

[63] H.L. Van Trees, Detection, Estimation, and Modulation Theory, Part I. NewYork: Wiley, 1968.

[64] S. Verdu, Multiuser Detection, Cambridge, U.K.: Cambridge Univ. Press,1998.

[65] R. Lupas and S. Verdu, “Linear multiuser detectors for synchronouscode-division multiple-access channels,” IEEE Trans. Inform. Theory, vol.35, pp. 123-136, Jan. 1989.

[66] S. Ulukus and R.D. Yates, “Optimum multiuser detection is tractable forsynchronous CDMA systems using m-sequences,” IEEE Commun. Lett.,vol. 2, pp. 89-91, Apr. 1998.

[67] Y.C. Eldar and A.M. Chan, “On the asymptotic large system performanceof the decorrelator,” IEEE Trans. Inform. Theory, to be published.

[68] D.N.C. Tse and S.V. Hanly, “Linear multiuser receivers: Effective inter-ference, effective bandwidth and user capacity,” IEEE Trans. Inform. Theory,vol. 45, pp. 641-657, Mar. 1999.

[69] S. Verdu and S. Shamai (Shitz), “Spectral efficiency of CDMA with ran-dom spreading,” IEEE Trans. Inform. Theory, vol. 45, pp. 622-640, Mar.1999.

[70] R.T. Behrens and L.L. Scharf, “Signal processing applications of obliqueprojection operators,” IEEE Trans. Signal Processing, vol. 42, pp.1413-1424, June 1994.

[71] Y.C. Eldar, “On geometrical properties of the decorrelator,” IEEECommun. Lett., vol. 6, pp. 16-18, Jan. 2002.

[72] R.T. Behrens and L.L. Scharf, “Parameter estimation in the presence oflow-rank noise,” in Proc. 22nd Asilomar Conf. Signals Syst. Comput., PacificGrove, CA, Nov. 1988.

[73] M.J. Vrhel, C. Lee, and M. Unser, “Rapid computation of the continuouswavelet transform by oblique projections,” IEEE Trans. Signal Processing,vol. 45, pp. 891-900, Apr. 1997.

[74] C. Lee, M. Eden, and M. Unser, “High-quality image resizing usingoblique projection operators,” IEEE Trans. Image Processing, vol. 7, pp.670-692, May 1998.

[75] Y.C. Eldar, “Constructing signals with prescribed properties,” IEEE SignalProcessing Lett., to be published.

[76] B.M. Hochwald and W. Sweldens, “Space-time modulation for unknownfading,” in Digital Wireless Communication, R.M. Rao, S.A. Dianat, andM.D. Zoltowski, Eds., SPIE Proc., vol. 3708, June 1999.


[77] B.M. Hochwald and T.L. Marzetta, “Unitary space-time modulation for multiple-antenna communications in Rayleigh flat fading,” IEEE Trans. Inform. Theory, vol. 46, pp. 543-564, 2000.

[78] M.A. Neumark, “On a representation of additive operator set functions,” C. R. (Doklady) Acad. Sci. URSS (N.S.), vol. 41, pp. 359-361, 1943.

[79] A. Peres, “Neumark’s theorem and quantum inseparability,” Found. Phys., vol. 20, no. 12, pp. 1441-1453, 1990.

[80] Y.C. Eldar and G.D. Forney, Jr., “Optimal tight frames and quantum measurement,” IEEE Trans. Inform. Theory, vol. 48, pp. 599-610, Mar. 2002.

[81] R.J. Duffin and A.C. Schaeffer, “A class of nonharmonic Fourier series,” Trans. Amer. Math. Soc., vol. 72, pp. 314-366, 1952.

[82] R.M. Young, An Introduction to Nonharmonic Fourier Series. New York: Academic, 1980.

[83] J.J. Benedetto, “Irregular sampling and frames,” in Wavelets—A Tutorial in Theory and Applications, C.K. Chui, Ed. Boca Raton, FL: CRC, 1992, pp. 445-507.

[84] C.E. Heil and D.F. Walnut, “Continuous and discrete wavelet transforms,” SIAM Rev., vol. 31, no. 4, pp. 628-666, Dec. 1989.

[85] I. Daubechies, “The wavelet transform, time-frequency localization and signal analysis,” IEEE Trans. Inform. Theory, vol. 36, pp. 961-1005, Sept. 1990.

[86] I. Daubechies, Ten Lectures on Wavelets. Philadelphia, PA: SIAM, 1992.

[87] H. Bölcskei, “Oversampled filter banks and predictive subband coders,” Ph.D. dissertation, Vienna Univ. Technol., Nov. 1997.

[88] H. Bölcskei and A.J.E.M. Janssen, “Gabor frames, unimodularity, and window decay,” J. Fourier Analys. Applicat., vol. 6, no. 3, pp. 255-276, 2000.

[89] A.J.E.M. Janssen and H. Bölcskei, “Equivalence of two methods for constructing tight Gabor frames,” IEEE Signal Processing Lett., vol. 7, pp. 79-82, Apr. 2000.

[90] Y. Meyer, “Ondelettes et fonctions splines,” Dec. 1986, Seminaire EDP, Ecole Polytechn., Paris, France.

[91] I. Daubechies, “Orthonormal bases of compactly supported wavelets,” Comm. Pure Appl. Math., pp. 909-996, 1988.

[92] M. Unser and A. Aldroubi, “Families of multiresolution and wavelet spaces with optimal properties,” Numer. Funct. Anal. Optimiz., vol. 14, pp. 417-446, 1993.

[93] Y.C. Eldar and H. Bölcskei, “Geometrically uniform frames,” IEEE Trans. Inform. Theory, to be published.

[94] G.D. Forney, Jr., “Geometrically uniform codes,” IEEE Trans. Inform. Theory, vol. 37, pp. 1241-1260, Sept. 1991.

[95] G. Ungerboeck, “Channel coding with multilevel/phase signals,” IEEE Trans. Inform. Theory, vol. 28, pp. 55-67, 1982.

[96] D. Raphaeli, “On multidimensional coded modulations having uniform error property for generalized decoding and flat-fading channels,” IEEE Trans. Commun., vol. 46, pp. 34-40, 1998.

[97] V.K. Goyal, J. Kovačević, and J.A. Kelner, “Quantized frame expansions with erasures,” Appl. Comp. Harmonic Anal., vol. 10, pp. 203-233, May 2001.

[98] M. Unser, “Splines: A perfect fit for signal and image processing,” IEEE Signal Processing Mag., pp. 22-38, Nov. 1999.

[99] M. Unser and J. Zerubia, “Generalized sampling: Stability and performance analysis,” IEEE Trans. Signal Processing, vol. 45, pp. 2941-2950, Dec. 1997.

[100] T. Blu and M. Unser, “Quantitative Fourier analysis of approximation techniques: Part I—Interpolators and projectors,” IEEE Trans. Signal Processing, vol. 47, pp. 2783-2795, Oct. 1999.

[101] T. Blu and M. Unser, “Quantitative Fourier analysis of approximation techniques: Part II—Wavelets,” IEEE Trans. Signal Processing, vol. 47, pp. 2796-2806, Oct. 1999.

[102] A. Aldroubi, “Portraits of frames,” Proc. Amer. Math. Soc., vol. 123, pp. 1661-1668, 1995.

[103] J.J. Atick and A.N. Redlich, “Convergent algorithm for sensory receptive field development,” Neural Comp., vol. 5, 1993.

[104] I.S. Gradshteyn and I.M. Ryzhik, Table of Integrals, Series and Products. New York: Academic, 1980.

[105] K.L. Chung, A Course in Probability Theory. San Diego, CA: Academic, 2001.