
Section for Cognitive Systems, Informatics and Mathematical Modelling, Technical University of Denmark

Correlation Functions and Power Spectra

Jan Larsen

8th Edition
© 1997–2009 by Jan Larsen


Contents

Preface

1 Introduction
2 Aperiodic Signals
3 Periodic Signals
4 Random Signals
  4.1 Stationary Signals
  4.2 Ergodic Signals
  4.3 Sampling of Random Signals
  4.4 Discrete-Time Systems and Power/Cross-Power Spectra
5 Mixing Random and Deterministic Signals
  5.1 Ergodicity Result
  5.2 Linear Mixing of Random and Periodic Signals
A Appendix: Properties of Correlation Functions and Power Spectra
  A.1 Definitions
  A.2 Definitions of Correlation Functions and Power Spectra
  A.3 Properties of Autocorrelation Functions
  A.4 Properties of Power Spectra
  A.5 Properties of Crosscorrelation Functions
  A.6 Properties of Cross-Power Spectra

Bibliography


Preface

The present note is a supplement to the textbook Digital Signal Processing [5, 6] used in the DTU course 02451 (former 04361) Digital Signal Processing (Digital Signalbehandling).

The note addresses correlation functions and power spectra and extends the material in Ch. 12 of [5, 6].

Parts of the note are based on material by Peter Koefoed Møller used in the former DTU Course 4232 Digital Signal Processing.

The 6th edition provides an improvement of Example 3.2, for which Olaf Peter Strelcyk is acknowledged.

In the 7th edition small errors and references are corrected. In the 8th edition a few topics are elaborated.

Jan Larsen
Kongens Lyngby, November 2009

The manuscript was typeset in 11-point Times Roman using LaTeX 2ε.


1 Introduction

The definitions of correlation functions and spectra for discrete-time and continuous-time (analog) signals are very similar. Consequently, we confine the discussion mainly to real discrete-time signals. The Appendix contains detailed definitions and properties of correlation functions and spectra for analog as well as discrete-time signals.

It is possible to define correlation functions and associated spectra for aperiodic, periodic and random signals, although the interpretation is different. Moreover, we will discuss correlation functions when mixing these basic signal types.

In addition, the note includes several examples for the purpose of illustrating the discussed methods.

2 Aperiodic Signals

The crosscorrelation function for two aperiodic, real¹, finite-energy discrete-time signals x_a(n), y_a(n) is given by:

r_{x_a y_a}(m) = Σ_{n=−∞}^{∞} x_a(n) y_a(n−m) = x_a(m) ∗ y_a(−m)    (1)

Note that r_{x_a y_a}(m) is also an aperiodic signal. The autocorrelation function is obtained by setting x_a(n) = y_a(n). The associated cross-energy spectrum is given by

S_{x_a y_a}(f) = Σ_{m=−∞}^{∞} r_{x_a y_a}(m) e^{−j2πfm} = X_a(f) Y_a^*(f)    (2)

The energy of x_a(n) is given by

E_{x_a} = ∫_{−1/2}^{1/2} S_{x_a x_a}(f) df = r_{x_a x_a}(0)    (3)
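As a small illustration of Eqs. (1) and (3), the sums can be evaluated directly for short finite-energy sequences. The following is a sketch in Python; the signals are arbitrary examples, not taken from the note:

```python
def xcorr_aperiodic(x, y, m):
    """r_xy(m) = sum_n x(n) y(n - m), Eq. (1), for finite-energy
    sequences assumed zero outside their listed support."""
    return sum(x[n] * y[n - m] for n in range(len(x))
               if 0 <= n - m < len(y))

x = [1.0, 2.0, 3.0]
y = [1.0, -1.0]

# Eq. (3): the autocorrelation at lag 0 equals the energy of the signal.
energy = xcorr_aperiodic(x, x, 0)   # 1 + 4 + 9 = 14
r1 = xcorr_aperiodic(x, y, 1)       # x(1)y(0) + x(2)y(1) = 2 - 3 = -1
```

The shifted-and-summed form mirrors the convolution identity x_a(m) ∗ y_a(−m) directly.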

3 Periodic Signals

The crosscorrelation function for two periodic, real, finite-power discrete-time signals x_p(n), y_p(n) with a common period N is given by:

r_{x_p y_p}(m) = (1/N) Σ_{n=0}^{N−1} x_p(n) y_p((n−m))_N = x_p(m) ∗ y_p(−m)    (4)

Note that r_{x_p y_p}(m) is a periodic signal with period N. The associated cross-power spectrum is given by:

S_{x_p y_p}(k) = (1/N) Σ_{m=0}^{N−1} r_{x_p y_p}(m) e^{−j2π(k/N)m} = X_p(k) Y_p^*(k)    (5)

where X_p(k), Y_p(k) are the spectra of x_p(n), y_p(n)². The spectrum is discrete with components at frequencies f = k/N, k = 0, 1, …, N−1, or F = kF_s/N where F_s is the sampling frequency. Further, the spectrum is periodic: S_{x_p y_p}(k) = S_{x_p y_p}(k + N).

¹ In the case of complex signals the crosscorrelation function is defined by r_{x_a y_a}(m) = x_a(m) ∗ y_a^*(−m), where y_a^*(m) is the complex conjugate.
² Note that the definition of the spectrum follows [5, 6, Ch. 4.2] and differs from the definition of the DFT in [5, 6, Ch. 7]. The relation is: DFT{x_p(n)} = N · X_p(k).


The power of x_p(n) is given by

P_{x_p} = ∫_{−1/2}^{1/2} S_{x_p x_p}(f) df = r_{x_p x_p}(0)    (6)
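Eqs. (4) and (5) can be checked numerically by brute force. The sketch below (Python, with arbitrary example signals) computes the circular crosscorrelation and verifies that its spectrum equals X_p(k) Y_p^*(k), using the spectrum normalization of footnote 2 (DFT divided by N):

```python
import cmath

def circ_xcorr(x, y):
    """r_xy(m) = (1/N) sum_n x(n) y((n - m) mod N), Eq. (4)."""
    N = len(x)
    return [sum(x[n] * y[(n - m) % N] for n in range(N)) / N
            for m in range(N)]

def spectrum(x):
    """X_p(k) = (1/N) sum_n x(n) e^{-j 2 pi k n / N} (DFT scaled by 1/N)."""
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N)
                for n in range(N)) / N for k in range(N)]

x = [1.0, 2.0, 0.0, -1.0]
y = [0.5, 0.0, -0.5, 1.0]
N = len(x)

S_from_r = spectrum(circ_xcorr(x, y))                     # LHS of Eq. (5)
Sx, Sy = spectrum(x), spectrum(y)
S_direct = [Sx[k] * Sy[k].conjugate() for k in range(N)]  # RHS of Eq. (5)
err = max(abs(a - b) for a, b in zip(S_from_r, S_direct))
```

The agreement (err at floating-point level) confirms that the 1/N factors in Eqs. (4) and (5) are consistent with the spectrum definition.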

Example 3.1 Determine the autocorrelation function and power spectrum of the tone signal:

x_p(n) = a cos(2πf_x n + θ)

with frequency 0 ≤ f_x ≤ 1/2. The necessary requirement for x_p(n) to be periodic is that the fundamental integer period N is chosen according to Nf_x = q, where q is an integer. That means f_x has to be a rational number. If f_x = A/B is an irreducible fraction we choose N_min = B. Of course, any N = N_min·ℓ, ℓ = 1, 2, 3, …, is a valid choice. Consequently, using Euler's formula with q = Aℓ gives:

x_p(n) = a cos(2π(q/N)n + θ) = (a/2)(e^{jθ} e^{j2π(q/N)n} + e^{−jθ} e^{−j2π(q/N)n})

Thus, since x_p(n) = Σ_{k=0}^{N−1} X_p(k) e^{j2π(k/N)n}, the spectrum is:

X_p(k) = (a/2)e^{jθ} δ(k − q) + (a/2)e^{−jθ} δ(k + q)

where δ(n) is the Kronecker delta function. The power spectrum is then found as:

S_{x_p x_p}(k) = X_p(k) X_p^*(k) = (a²/4)(δ(k − q) + δ(k + q))

Using the inverse Fourier transform,

r_{x_p x_p}(m) = (a²/2) cos(2π(q/N)m)

When mixing periodic signals with signals which have continuous spectra, it is necessary to determine the spectrum S_{x_p x_p}(f), where −1/2 ≤ f ≤ 1/2 is the continuous frequency. Using the fact that the constant (DC) signal (a/2)e^{±jθ} has the spectrum (a/2)e^{±jθ} δ(f), where δ(f) is the Dirac delta function, and employing the frequency-shift property, we get:

a cos(2πf_x n + θ) = (a/2)e^{jθ} e^{j2πf_x n} + (a/2)e^{−jθ} e^{−j2πf_x n}

That is,

X_p(f) = (a/2)e^{jθ} δ(f − f_x) + (a/2)e^{−jθ} δ(f + f_x)

and thus

S_{x_p x_p}(f) = (a²/4)(δ(f − f_x) + δ(f + f_x))


Example 3.2 Consider two periodic discrete-time signals x_p(n), y_p(n) with fundamental frequencies 0 ≤ f_x ≤ 1/2 and 0 ≤ f_y ≤ 1/2, respectively. Give conditions for which the cross-power spectrum vanishes.

Let us first consider finding a common period N, i.e., we have the requirements Nf_x = p_x and Nf_y = p_y, where p_x, p_y are integers. It is possible to fulfill these requirements only if both f_x and f_y are rational numbers. Suppose that f_x = A_x/B_x and f_y = A_y/B_y, where A_x, B_x, A_y, B_y are integers; then the minimum common period is N_min = lcm(B_x, B_y), where lcm(·, ·) is the least common multiple³. If N is chosen as N = ℓN_min, where ℓ = 1, 2, 3, …, the signals will be periodic, and x_p(n) has potential components at k_x = ℓp_x q_x, where p_x = N_min f_x and q_x = 0, 1, 2, …, ⌈1/f_x⌉ − 1. Similarly, y_p(n) has potential components at k_y = ℓp_y q_y, where p_y = N_min f_y and q_y = 0, 1, 2, …, ⌈1/f_y⌉ − 1. The cross-power spectrum does not vanish if k_x = k_y occurs for some choice of q_x, q_y. Suppose that we choose a common period N = ℓN_min = B_xB_y; then k_x = Nf_x q_x = B_yA_x q_x and k_y = Nf_y q_y = B_xA_y q_y. Now, if x_p(n) has a non-zero component at q_x = B_xA_y and y_p(n) has a non-zero component at q_y = B_yA_x, then k_x = k_y and the cross-power spectrum does not vanish⁴. Otherwise, the cross-power spectrum will vanish. If N is not chosen as N = ℓN_min, the cross-power spectrum does generally not vanish.

Let us illustrate the ideas by considering x_p(n) = cos(2πf_x n) and y_p(n) = cos(2πf_y n).

Case 1: In the first case we choose f_x = 4/33 and f_y = 2/27. B_x = 3·11 = 33 and B_y = 3³ = 27, i.e., N_min = lcm(B_x, B_y) = 3³·11 = 297. Choosing N = N_min, x_p(n) has components at k_x = 36 and k_x = 297 − 36 = 261; y_p(n) has components at k_y = 22 and k_y = 297 − 22 = 275. The cross-power spectrum thus vanishes.

Case 2: In this case we choose f_x = 1/3, f_y = 1/4 and N = 10, whereas N_min = lcm(B_x, B_y) = lcm(3, 4) = 12. Since N is not ℓN_min, the stated result above does not apply. In fact, the cross-power spectrum does not vanish, as shown in Fig. 1.
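Case 1 can be verified numerically. A brute-force sketch in Python (O(N²) spectrum evaluation, which is acceptable for N = 297):

```python
import math, cmath

# Case 1 of Example 3.2: f_x = 4/33, f_y = 2/27, common period N = N_min = 297.
N = 297
x = [math.cos(2 * math.pi * 4 / 33 * n) for n in range(N)]
y = [math.cos(2 * math.pi * 2 / 27 * n) for n in range(N)]

def spec(s, k):
    """X_p(k) = (1/N) sum_n s(n) e^{-j 2 pi k n / N}."""
    return sum(s[n] * cmath.exp(-2j * math.pi * k * n / N)
               for n in range(N)) / N

# x_p has components only at k = 36, 261 and y_p only at k = 22, 275,
# so the cross-power spectrum X_p(k) Y_p^*(k) should vanish for all k.
max_cross = max(abs(spec(x, k) * spec(y, k).conjugate()) for k in range(N))
peak_x = abs(spec(x, 36))   # amplitude a/2 = 0.5 expected at k = 36
```

Up to floating-point rounding, max_cross is zero while the individual spectra retain their a/2 peaks, matching the argument above.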

4 Random Signals

A random signal or stochastic process X(n) has random amplitude values, i.e., for all time indices X(n) is a random variable. A particular realization of the random signal is x(n). The random signal is characterized by its probability density function (PDF)⁵ p(x_n), where x_n is a particular value of the signal. As an example, p(x_n) could be Gaussian with zero mean, μ = E[x_n] = ∫ x_n p(x_n) dx_n = 0, and variance σ_x² = E[(x_n − μ)²] = ∫ (x_n − μ)² p(x_n) dx_n. That is, the PDF is

p(x_n) = (1/(√(2π) σ_x)) e^{−x_n²/(2σ_x²)}, ∀n    (7)

Fig. 2 shows three different realizations, x(n, ξ), ξ = 1, 2, 3, of the random signal. The family of different realizations is denoted the ensemble. Note that, e.g., for n = n₁ the outcomes of

³ In order to find the least common multiple of A, B, we first prime-number factorize A and B. Then lcm(A, B) is the product of these prime factors raised to the greatest power in which they appear.
⁴ If N = N_min then this situation happens for q_x = B_xA_y/ℓ and q_y = B_yA_x/ℓ.
⁵ Also referred to as the first-order distribution.


Figure 1: Magnitude spectra |X_p(k)|, |Y_p(k)| and magnitude cross-power spectrum |S_{x_p y_p}(k)| = |X_p(k) Y_p^*(k)|.

Figure 2: Three different realizations x(n, ξ), ξ = 1, 2, 3 of a random signal, with the time instances n₁ and n₂ indicated.

x(n₁) are different for different realizations. If one generated an infinite number of realizations,


ξ = 1, 2, …, then these will reflect the distribution⁶, as shown by

P(x_n) = Prob{X(n) ≤ x_n} = lim_{K→∞} K⁻¹ Σ_{ξ=1}^{K} μ(x_n − x(n, ξ))    (8)

where P(x_n) is the distribution function and μ(·) is the step function, which is zero if the argument is negative and one otherwise.

Random signals can be classified according to the taxonomy in Fig. 3.

Figure 3: Taxonomy of random signals: a random signal is either stationary or non-stationary; ergodic signals form a subset of the stationary signals, and cyclostationary signals are a special case among the non-stationary signals. Stationary signals are treated in Sec. 4.1, ergodic signals in Sec. 4.2, and cyclostationary signals are briefly mentioned in Sec. 5.2.

4.1 Stationary Signals

In general we consider the k'th order joint probability densities associated with the signal x(n), defined by p(x_{n_1}, x_{n_2}, …, x_{n_k}), i.e., the joint probability of the x(n_i), i = 1, 2, …, k. A signal is strictly stationary if

p(x_{n_1}, x_{n_2}, …, x_{n_k}) = p(x_{n_1+ℓ}, x_{n_2+ℓ}, …, x_{n_k+ℓ}), ∀ ℓ, k    (9)

That is, for any k the k'th order probability density does not change over time, i.e., it is invariant to any time shift ℓ.

Normally we consider only wide-sense stationarity⁷, in which the random signal is characterized by its time-invariant mean value and autocorrelation function. The mean value is defined by:

m_x = E[x(n)] = ∫_{−∞}^{∞} x_n p(x_n) dx_n    (10)

where E[·] is the expectation operator. The autocorrelation function is defined by:

γ_xx(m) = E[x(n)x(n−m)] = ∫_{−∞}^{∞} ∫_{−∞}^{∞} x_n x_{n−m} p(x_n, x_{n−m}) dx_n dx_{n−m}    (11)

⁶ The density function p(x_n) is the derivative of the (cumulative) distribution function, ∂P(x_n)/∂x_n.
⁷ Also known as second-order stationarity or weak stationarity.


Since the 2nd-order probability density p(x_n, x_{n−m}) is invariant to time shifts for wide-sense stationary processes, the autocorrelation function is a function of m only.

The covariance function is closely related to the autocorrelation function and is defined by:

c_xx(m) = E[(x(n) − m_x) · (x(n−m) − m_x)] = γ_xx(m) − m_x²    (12)

For two signals x(n), y(n) we further define the crosscorrelation and crosscovariance functions as:

γ_xy(m) = E[x(n)y(n−m)]    (13)

c_xy(m) = E[(x(n) − m_x) · (y(n−m) − m_y)] = γ_xy(m) − m_x m_y    (14)

If γ_xy(m) = m_x · m_y, i.e., c_xy(m) = 0 for all m, the signals are said to be uncorrelated, and if γ_xy(m) = 0 they are said to be orthogonal.

The power spectrum and cross-power spectrum are defined as the Fourier transforms of the autocorrelation and crosscorrelation functions, respectively, i.e.,

Γ_xx(f) = Σ_{m=−∞}^{∞} γ_xx(m) e^{−j2πfm}    (15)

Γ_xy(f) = Σ_{m=−∞}^{∞} γ_xy(m) e^{−j2πfm}    (16)

The power of x(n) is given by

P_x = ∫_{−1/2}^{1/2} Γ_xx(f) df = γ_xx(0) = E[x²(n)]    (17)

The inverse Fourier transforms read:

γ_xx(m) = ∫_{−1/2}^{1/2} Γ_xx(f) e^{j2πfm} df    (18)

γ_xy(m) = ∫_{−1/2}^{1/2} Γ_xy(f) e^{j2πfm} df    (19)

Example 4.1 Let x(n) be a white noise signal with power P_x, i.e., the power spectrum is flat (white): Γ_xx(f) = P_x. By inverse Fourier transform, the associated autocorrelation function is γ_xx(m) = P_x · δ(m). Note that, according to the properties of autocorrelation functions (Appendix A.3), lim_{m→∞} γ_xx(m) = m_x². That is, the mean value of a white noise signal is m_x = 0.

Example 4.2 Evaluate the autocorrelation function and power spectrum for the signal z(n) = ax(n) + by(n) + c, where a, b, c are constants and x(n), y(n) are stationary signals with means m_x, m_y, autocorrelation functions γ_xx(m), γ_yy(m), and crosscorrelation function γ_xy(m). Using the definition Eq. (11), the fact that the mean value operator is linear, i.e., E[ax + by] =


aE[x] + bE[y], and the symmetry property of the crosscorrelation function, γ_xy(−m) = γ_yx(m), we get:

γ_zz(m) = E[z(n)z(n−m)]
       = E[(ax(n) + by(n) + c) · (ax(n−m) + by(n−m) + c)]
       = E[a²x(n)x(n−m) + b²y(n)y(n−m) + ab x(n)y(n−m) + ab y(n)x(n−m) + ac x(n) + ac x(n−m) + bc y(n) + bc y(n−m) + c²]
       = a²γ_xx(m) + b²γ_yy(m) + ab(γ_xy(m) + γ_xy(−m)) + 2ac m_x + 2bc m_y + c²    (20)

According to Eqs. (15) and (16) the power spectrum becomes:

Γ_zz(f) = a²Γ_xx(f) + b²Γ_yy(f) + ab(Γ_xy(f) + Γ_xy^*(f)) + (2ac m_x + 2bc m_y + c²)δ(f)
        = a²Γ_xx(f) + b²Γ_yy(f) + 2ab·Re[Γ_xy(f)] + (2ac m_x + 2bc m_y + c²)δ(f)    (21)

Note that the power spectrum is the sum of a continuous part and a delta function at f = 0.
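Eq. (20) can be sanity-checked by simulation at lag m = 0. The following is a sketch in Python; the constants and the use of independent white-noise signals (so that γ_xy(m) = m_x m_y) are arbitrary test assumptions, not part of the example:

```python
import random

random.seed(0)
N = 100_000
a, b, c = 2.0, -1.0, 0.5
mx, my = 0.3, -0.2

# Independent unit-variance white noises with means mx, my, so that
# gamma_xx(0) = 1 + mx^2, gamma_yy(0) = 1 + my^2, gamma_xy(m) = mx*my.
x = [random.gauss(mx, 1.0) for _ in range(N)]
y = [random.gauss(my, 1.0) for _ in range(N)]
z = [a * xi + b * yi + c for xi, yi in zip(x, y)]

# Time-average estimate of gamma_zz(0), relying on ergodicity (Eq. (23)).
gzz0_est = sum(v * v for v in z) / N

# Eq. (20) evaluated at m = 0:
gzz0 = (a**2 * (1 + mx**2) + b**2 * (1 + my**2)
        + 2 * a * b * mx * my + 2 * a * c * mx + 2 * b * c * my + c**2)
```

With N = 100 000 samples the two quantities agree to within a few percent, as expected from the estimator variance discussed in Sec. 4.2.1.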

4.2 Ergodic Signals

Assuming a wide-sense stationary signal to be ergodic means that expectations - or ensemble averages - involved in determining the mean or correlation functions can be substituted by time averages. For example,

m_x = ⟨x(n)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)    (22)

γ_xy(m) = ⟨x(n)y(n−m)⟩ = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)y(n−m)    (23)

If Eq. (22) holds, the signal is said to be mean ergodic, and if Eq. (23) holds, the signals are said to be correlation ergodic; see further [4], [5, 6, Ch. 12]. Most physical processes are mean and correlation ergodic, and we will in general tacitly assume ergodicity.

4.2.1 Correlation Function Estimates

Suppose that r_xy(m) is an estimate of γ_xy(m) based on N samples of x(n) and y(n). The estimate r_xy(m) is itself a random signal, since it is a function of the random signals x(n) and y(n). In order to assess the quality of an estimate we normally consider the bias B[r_xy(m)], the variance V[r_xy(m)], and the mean square error MSE[r_xy(m)], defined by:

B[r_xy(m)] = E[r_xy(m)] − γ_xy(m)    (24)

V[r_xy(m)] = E[(r_xy(m) − E[r_xy(m)])²]    (25)

MSE[r_xy(m)] = E[(r_xy(m) − γ_xy(m))²] = B²[r_xy(m)] + V[r_xy(m)]    (26)

Note that the variance and mean square error are positive quantities.

Suppose that x(n), y(n) are correlation ergodic random signals and we have collected N samples of each signal for n = 0, 1, …, N − 1. Using a truncated version of Eq. (23), an estimate becomes:

r'_xy(m) = (1/(N − m)) Σ_{n=m}^{N−1} x(n)y(n−m),   for m = 0, 1, …, N − 1    (27)


For 0 ≤ m ≤ N − 1, the bias is assessed by evaluating

E[r'_xy(m)] = E[(1/(N − m)) Σ_{n=m}^{N−1} x(n)y(n−m)]
           = (1/(N − m)) Σ_{n=m}^{N−1} E[x(n)y(n−m)]
           = (1/(N − m)) Σ_{n=m}^{N−1} γ_xy(m) = γ_xy(m)    (28)

That is, B[r'_xy(m)] = 0, and the estimator is said to be unbiased. The variance is more complicated to evaluate. An approximate expression is given by (see also [5, Ch. 14]):

V[r'_xy(m)] = (N/(N − m)²) Σ_{n=−∞}^{∞} (γ_xx(n)γ_yy(n) + γ_xy(n−m)γ_yx(n+m))    (29)

Provided the sum is finite (which is the case for correlation ergodic signals), the variance vanishes for N → ∞, and consequently lim_{N→∞} r'_xy(m) = γ_xy(m). The estimate is thus referred to as a consistent estimate. However, notice that V[r'_xy(m)] = O(1/(N − m)), i.e., for m close to N the variance becomes very large.

An alternative estimator is given by:

r_xy(m) = (1/N) Σ_{n=m}^{N−1} x(n)y(n−m),   for m = 0, 1, …, N − 1    (30)

For 0 ≤ m ≤ N − 1, the bias is evaluated by considering

E[r_xy(m)] = (1/N) Σ_{n=m}^{N−1} E[x(n)y(n−m)] = ((N − m)/N) γ_xy(m) = (1 − m/N) γ_xy(m)    (31)

That is, the bias is B[r_xy(m)] = E[r_xy(m)] − γ_xy(m) = −mγ_xy(m)/N. r_xy(m) is thus a biased estimate, but the bias vanishes as N → ∞, for which reason the estimator is referred to as asymptotically unbiased. The variance can be approximated by

V[r_xy(m)] = (1/N) Σ_{n=−∞}^{∞} (γ_xx(n)γ_yy(n) + γ_xy(n−m)γ_yx(n+m))    (32)

Thus, generally lim_{N→∞} V[r_xy(m)] = 0. Moreover, V[r_xy(m)] = O(1/N), which means that the variance does not increase tremendously when m is close to N, as was the case for r'_xy(m). The improvement in variance is achieved at the expense of increased bias. This phenomenon is known as the bias-variance dilemma, which is illustrated in Fig. 4. If the objective is to find an estimator which has minimum mean square error, this is achieved by optimally trading off bias and variance according to Eq. (26)⁸.

In most situations the r_xy(m) estimator has the smallest MSE and is therefore preferable.

⁸ The MSE is the sum of the variance and the squared bias.
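The two estimators Eqs. (27) and (30) differ only in normalization, which is easy to see in code. A sketch in Python with a white-noise test signal (an arbitrary choice, for which γ_xx(m) = δ(m)):

```python
import random

def xcorr_unbiased(x, y, m):
    """r'_xy(m), Eq. (27): normalization 1/(N - m); unbiased."""
    N = len(x)
    return sum(x[n] * y[n - m] for n in range(m, N)) / (N - m)

def xcorr_biased(x, y, m):
    """r_xy(m), Eq. (30): normalization 1/N; biased but lower variance."""
    N = len(x)
    return sum(x[n] * y[n - m] for n in range(m, N)) / N

random.seed(1)
N = 5000
x = [random.gauss(0.0, 1.0) for _ in range(N)]

r0 = xcorr_biased(x, x, 0)            # close to gamma_xx(0) = 1
r10 = xcorr_biased(x, x, 10)          # close to gamma_xx(10) = 0
# At m = N - 2 only two products survive: the biased estimate damps them
# by 2/N, whereas the unbiased one divides by 2 and fluctuates wildly.
r_far = xcorr_biased(x, x, N - 2)
rp_far = xcorr_unbiased(x, x, N - 2)  # equals r_far * N / 2
```

The large-lag behavior makes the variance discussion concrete: r'_xy(N−2) is an average of only two noisy products, while r_xy(N−2) is the same sum shrunk toward zero.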


Figure 4: The bias/variance dilemma: the unbiased estimate r'_xy(m) is centered on γ_xy(m) = E[r'_xy(m)] but has the larger spread √V[r'_xy(m)], whereas the biased estimate r_xy(m) is centered on E[r_xy(m)] ≠ γ_xy(m) with the smaller spread √V[r_xy(m)].

4.3 Sampling of Random Signals

4.3.1 Sampling theorem for random signals

Following [4]: Suppose that x_a(t) is a real stationary random analog signal with power density spectrum Γ_{x_a x_a}(F) which is band-limited by B_x, i.e., Γ_{x_a x_a}(F) = 0 for |F| > B_x. By sampling with a frequency F_s = 1/T > 2B_x, x_a(t) can be reconstructed from the samples x(n) = x_a(nT) by the usual reconstruction formula

x̂_a(t) = Σ_{n=−∞}^{∞} x(n) · sin((π/T)(t − nT)) / ((π/T)(t − nT))    (33)

The reconstruction x̂_a(t) equals x_a(t) in the mean square sense⁹, i.e.,

E[(x̂_a(t) − x_a(t))²] = 0    (34)

The autocorrelation function γ_{x_a x_a}(τ) is a non-random function of time; hence, it is an ordinary aperiodic continuous-time signal with spectrum Γ_{x_a x_a}(F). As a consequence, when the sampling theorem is fulfilled, then as usual [5, 6, Ch. 6.1]:

γ_xx(m) = γ_{x_a x_a}(mT)    (35)

Γ_xx(f) = F_s Σ_{k=−∞}^{∞} Γ_{x_a x_a}((f − k)F_s)    (36)

⁹ Convergence in the mean square sense does not imply convergence everywhere; however, the details are subtle and normally of little practical interest. For further reading on the differences between convergence concepts, see [3, Ch. 8-4].


4.3.2 Equivalence of Correlation Functions

In order to further study the equivalence between correlation functions for analog and discrete-time random signals, suppose that x_a(t) and y_a(t) are correlation ergodic random analog signals with power density spectra Γ_{x_a x_a}(F) and Γ_{y_a y_a}(F), band-limited by B_x and B_y, respectively¹⁰. The crosscorrelation function is defined as:

γ_{x_a y_a}(τ) = E[x_a(t) y_a(t − τ)] = lim_{T_i→∞} (1/T_i) ∫_{−T_i/2}^{T_i/2} x_a(t) y_a(t − τ) dt    (37)

That is, γ_{x_a y_a}(τ) can be interpreted as the integration of the product signal z_a(t) = x_a(t) y_a(t − τ) for a given fixed τ. The analog integrator is defined as the filter with impulse and frequency responses:

h_int(t) = 1/T_i for |t| < T_i/2, and 0 otherwise;   H_int(F) = sin(πT_iF)/(πT_iF)    (38)

Thus γ_{x_a y_a}(τ) = lim_{T_i→∞} z_a(t) ∗ h_int(t)|_{t=0}. The question is: what is the required sampling frequency in order to obtain a discrete-time equivalent γ_xx(n) of γ_{x_a x_a}(τ)? Suppose that X(F), Y(F) are the Fourier transforms of realizations of x_a(t), y_a(t) for |t| < T_i/2. Then, since z_a(t) is a product of the two signals, the corresponding Fourier transform is:

Z(F) = X(F) ∗ Y(F) e^{−j2πFτ}    (39)

Thus, Z(F) will generally have spectral components for |F| < B_x + B_y. Sampling z_a(t) in accordance with the sampling theorem thus requires F_s > 2(B_x + B_y). The power spectrum Γ_zz(f), f = F/F_s, of the discrete-time signal z(n) is sketched in Fig. 5. Notice that in principle we

Figure 5: The power spectrum Γ_zz(f) of the sampled z(n), with possible δ-functions located at f + k, k = 0, ±1, ±2, ….

can perform extreme subsampling with F_s arbitrarily close to zero. This causes aliasing; however, since the purpose of the integrator is to pick out the possible DC component, the aliasing does not introduce error. The drawback is that it is necessary to use a large integration time, T_i, i.e., the

10That is, Γxaxa(F ) = 0 for |F | > Bx and Γyaya(F ) = 0 for |F | > By .


signals need to be observed for a long time. Secondly, we are normally not content with a digital determination of the crosscorrelation for a single lag τ. Often the goal is to determine spectral properties by Fourier transformation of the discrete-time crosscorrelation function. That is, we want γ_{x_a y_a}(τ) for lags τ = m/F_s, where F_s ≥ 2B_xy and B_xy is the band-limit of Γ_{x_a y_a}(F). That is, x_a(t), y_a(t) are sampled with F_s > 2B_xy. According to the table in Sec. A.6, |Γ_{x_a y_a}(F)|² ≤ Γ_{x_a x_a}(F) Γ_{y_a y_a}(F), which means that the band-limit B_xy ≤ min(B_x, B_y). In consequence, x_a(t) and/or y_a(t) are allowed to be under-sampled when considering the crosscorrelation function¹¹.

4.4 Discrete-Time Systems and Power/Cross-Power Spectra

4.4.1 Useful Power/Cross-Power Expressions

Suppose that the real random stationary signals x(n) and y(n) are observed in the interval 0 ≤ n ≤ N − 1. Now, perform the Fourier transforms of the signals, as shown by:

X(f) = Σ_{n=0}^{N−1} x(n) e^{−j2πfn},   Y(f) = Σ_{n=0}^{N−1} y(n) e^{−j2πfn}    (40)

Note that X(f) and Y(f) are also (complex) random variables, since they are sums of random variables times deterministic complex exponentials.

The intention is to show that the power and cross-power spectra can be expressed as:

Γ_xx(f) = lim_{N→∞} (1/N) E[|X(f)|²] = lim_{N→∞} (1/N) E[X(f) X^*(f)]    (41)

Γ_xy(f) = lim_{N→∞} (1/N) E[X(f) Y^*(f)]    (42)

Here we only give the proof of Eq. (42), since the proof of Eq. (41) is similar; see also [1, Ch. 5], [7, Ch. 11]. We start by evaluating

X(f) Y^*(f) = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} x(n) y(k) e^{−j2πf(n−k)}    (43)

Next, performing the expectation E[·] gives¹²

E[X(f) Y^*(f)] = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} E[x(n)y(k)] e^{−j2πf(n−k)} = Σ_{n=0}^{N−1} Σ_{k=0}^{N−1} γ_xy(n − k) e^{−j2πf(n−k)}    (44)

Let m = n − k and notice that −(N − 1) ≤ m ≤ N − 1. In the summation w.r.t. n and k it is easy to verify that a particular value of m appears N − |m| times. Replacing the summation w.r.t. n and k by a summation w.r.t. m:

(1/N) E[X(f) Y^*(f)] = (1/N) Σ_{m=−(N−1)}^{N−1} γ_xy(m) (N − |m|) e^{−j2πfm}
                     = Σ_{m=−(N−1)}^{N−1} γ_xy(m) (1 − |m|/N) e^{−j2πfm}    (45)

¹¹ When considering second-order correlation functions and spectra, it suffices to study linear mixing of random signals. Suppose that x_a(t) = g_1(t) + g_2(t) and y_a(t) = g_2(t) + g_3(t), where the signals g_i(t) are all mutually orthogonal with band-limits B_i. Then B_x = max(B_1, B_2) and B_y = max(B_2, B_3). Since γ_{x_a y_a}(τ) = γ_{g_2 g_2}(τ), B_xy = B_2. Accordingly, B_xy ≤ min(B_x, B_y).
¹² Note that the expectation of a sum is the sum of expectations.

Defining the signal v(m) = 1 − |m|/N, N⁻¹E[X(f)Y^*(f)] is seen to be the Fourier transform of the product γ_xy(m) · v(m). That is,

(1/N) E[X(f) Y^*(f)] = V(f) ∗ Σ_{m=−(N−1)}^{N−1} γ_xy(m) e^{−j2πfm}    (46)

where ∗ denotes convolution and V (f) is the spectrum of v(m) given by

V(f) = (1/N) · sin²(πfN) / sin²(πf)    (47)

which tends to a Dirac delta function V (f) → δ(f) as N → ∞. Consequently,

lim_{N→∞} (1/N) E[X(f) Y^*(f)] = Σ_{m=−∞}^{∞} γ_xy(m) e^{−j2πfm} = Γ_xy(f)    (48)

Sufficient conditions are that the crosscovariance c_xy(m) = γ_xy(m) − m_x m_y obeys

lim_{N→∞} Σ_{m=−N}^{N} |c_xy(m)| < ∞   or   lim_{m→∞} c_xy(m) = 0    (49)

These conditions are normally fulfilled and imply that the process is mean ergodic [4].

Eqs. (41) and (42) are very useful for determining various power and cross-power spectra in connection with linear time-invariant systems. The examples below show the methodology.

Example 4.3 Find the power spectrum Γ_yy(f) and the cross-power spectrum Γ_xy(f), where x(n) is a random input signal to an LTI system with impulse response h(n) and output y(n) = h(n) ∗ x(n). Suppose that finite realizations of length N of x(n) and y(n) are given, and denote by X(f) and Y(f) the associated Fourier transforms, which are related as Y(f) = H(f)X(f), where H(f) ↔ h(n) is the frequency response of the filter. In order to find the cross-power spectrum we evaluate

X(f) Y^*(f) = X(f) H^*(f) X^*(f) = H^*(f) X(f) X^*(f)    (50)

Since H(f) is deterministic, the expectation becomes

E[X(f) Y^*(f)] = H^*(f) E[X(f) X^*(f)]    (51)

Dividing by N and performing the limit operation yields:

Γ_xy(f) = lim_{N→∞} (1/N) E[X(f) Y^*(f)] = H^*(f) Γ_xx(f)    (52)

Since Γ_yx(f) = Γ_xy^*(f) we further have the relation

Γ_yx(f) = H(f) Γ_xx(f)    (53)


In the time domain, this corresponds to the convolution

γ_yx(m) = h(m) ∗ γ_xx(m)    (54)

The output spectrum is found by evaluating

Y(f) Y^*(f) = H(f) X(f) H^*(f) X^*(f) = |H(f)|² |X(f)|²    (55)

Proceeding as above:

Γ_yy(f) = |H(f)|² Γ_xx(f)    (56)

In the time domain:

γ_yy(m) = r_hh(m) ∗ γ_xx(m) = h(m) ∗ h(−m) ∗ γ_xx(m)    (57)
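The input/output relations of Example 4.3 can be checked by simulation. The sketch below in Python filters white noise (for which γ_xx(m) = δ(m)) through a short FIR filter h(n) chosen arbitrarily, so Eq. (54) predicts γ_yx(m) = h(m):

```python
import random

random.seed(2)
N = 100_000
h = [1.0, 0.5, -0.25]        # example FIR impulse response (arbitrary)

x = [random.gauss(0.0, 1.0) for _ in range(N)]           # white input
y = [sum(h[k] * x[n - k] for k in range(len(h)) if n - k >= 0)
     for n in range(N)]                                  # y = h * x

def gamma_est(u, v, m):
    """Time-average (ergodic) estimate of gamma_uv(m) = E[u(n) v(n-m)]."""
    return sum(u[n] * v[n - m] for n in range(m, len(u))) / len(u)

# Eq. (54): gamma_yx(m) = h(m) * gamma_xx(m) = h(m) for unit-power white input.
g = [gamma_est(y, x, m) for m in range(3)]   # should approximate h
```

This "white-noise probing" of a system is the time-domain counterpart of Eq. (53): with Γ_xx(f) = 1, the crosscorrelation directly reveals the impulse response.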

Example 4.4 Suppose that a signal source g(n) and a noise source s(n) in Fig. 6 are fully orthogonal. Find the power spectra Γ_{x_1 x_1}(f), Γ_{x_2 x_2}(f) and the cross-power spectrum Γ_{x_2 x_1}(f).

Figure 6: Two microphones x_1(n), x_2(n) record signals from a noise source s(n) and a signal source g(n): the noise reaches x_1(n) through the filter h_1(n) and x_2(n) through h_2(n), while the signal reaches x_1(n) directly and x_2(n) through h_3(n).

Since s(n) and g(n) are fully orthogonal, the superposition principle is applicable. Using the results of Example 4.3 we find:

Γ_{x_1 x_1}(f) = Γ_gg(f) + Γ_ss(f) |H_1(f)|²    (58)

Γ_{x_2 x_2}(f) = Γ_gg(f) |H_3(f)|² + Γ_ss(f) |H_2(f)|²    (59)

In order to determine Γ_{x_2 x_1}(f) we use Eq. (42); hence, we evaluate

X_2(f) X_1^*(f) = (H_3(f) G(f) + H_2(f) S(f)) · (G^*(f) + H_1^*(f) S^*(f))
               = |G(f)|² H_3(f) + G^*(f) S(f) H_2(f) + G(f) S^*(f) H_1^*(f) H_3(f) + |S(f)|² H_1^*(f) H_2(f)    (60)


Performing the expectation, dividing by N, and finally carrying out the limit operation gives

Γ_{x_2 x_1}(f) = Γ_gg(f) H_3(f) + Γ_ss(f) H_1^*(f) H_2(f)    (61)

Here we used Γ_gs(f) = 0, due to the fact that g(n) and s(n) are fully orthogonal.

4.4.2 Some Properties

Following [1, Ch. 5.2.4]: suppose that the cross-power spectrum is expressed by its magnitude and phase, i.e.,

Γ_xy(f) = |Γ_xy(f)| e^{−jθ_xy(f)}    (62)

As in Section 4.4.1, consider length-N realizations of x(n) and y(n) with Fourier transforms X(f), Y(f), and form a signal z(n) via the frequency-domain relation

Z(f) = aX(f) + Y(f) e^{−jθ_xy(f)}    (63)

where a is a real constant. Notice that X(f), Y(f) and Z(f) are random variables whereas θ_xy(f) is not. Using Eq. (63) and the fact that |Z(f)|² ≥ 0 gives:

a²|X(f)|² + aX(f)Y^*(f)e^{jθ_xy(f)} + aX^*(f)Y(f)e^{−jθ_xy(f)} + |Y(f)|² ≥ 0    (64)

Proceeding as in Section 4.4.1 – in particular using Eq. (41) – the previous equation leads to

a²Γ_xx(f) + aΓ_xy(f)e^{jθ_xy(f)} + aΓ_yx(f)e^{−jθ_xy(f)} + Γ_yy(f) ≥ 0    (65)

Now Γ_xy(f)e^{jθ_xy(f)} = |Γ_xy(f)| and Γ_yx(f) = Γ_xy^*(f). Consequently,

a²Γ_xx(f) + 2a|Γ_xy(f)| + Γ_yy(f) ≥ 0    (66)

Note that Eq. (66) is a quadratic inequality in a, which can hold for all real a only if the discriminant is negative or zero, i.e.,

4|Γ_xy(f)|² − 4Γ_xx(f)Γ_yy(f) ≤ 0    (67)

or equivalently

|Γ_xy(f)|² ≤ Γ_xx(f)Γ_yy(f)    (68)

Evaluating Eq. (66) for a = −1 results in

2|Γ_xy(f)| ≤ Γ_xx(f) + Γ_yy(f)    (69)

Further properties are listed in the Appendix.

5 Mixing Random and Deterministic Signals

Signals do not normally appear in their basic "pure" types like random, periodic or aperiodic. Often it is necessary to be able to handle the mixing of the basic signal types. It is very important to notice that, in general, the mixing of a stationary random signal and a deterministic signal will result in a non-stationary signal.


In order to handle this problem one can adopt the framework of [2, Ch. 2.3]. Suppose that x(n) is a mixed random/deterministic signal. x(n) is said to be quasi-stationary if the following conditions hold:

(i) m_x(n) = E[x(n)], ∀n, |m_x(n)| < ∞    (70)

(ii) γ_xx(n_1, n_2) = E[x(n_1)x(n_2)], ∀n_1, n_2, |γ_xx(n_1, n_2)| < ∞, and

γ_xx(m) = lim_{N→∞} (1/N) Σ_{n_1=0}^{N−1} γ_xx(n_1, n_1 − m) = ⟨γ_xx(n_1, n_1 − m)⟩, ∀m    (71)

Here ⟨·⟩ denotes time average and the expectation E[·] is carried out w.r.t. the random components of the signal. If x(n) is a pure random stationary signal, then conditions (i) and (ii) are trivially fulfilled. If x(n) is a pure deterministic signal, condition (ii) gives

γ_xx(m) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)x(n−m) = ⟨x(n)x(n−m)⟩    (72)

which coincides with the expression Eq. (4) for periodic signals¹³. When aperiodic components are present, γ_xx(m), as defined in Eq. (72), will normally be equal to zero.

For a general signal g(n), we will introduce the notation

〈E[g(n)]〉 = Ē[g(n)] = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} E[g(n)] (73)

to denote that both ensemble and time averages are carried out.

5.1 Ergodicity Result

[2, Theorem 2.3] states that if x(n) is a quasi-stationary signal with mean mx(n) = E[x(n)] fulfilling

x(n) − mx(n) = Σ_{q=0}^{∞} hn(q)e(n − q) (74)

where 1) e(n) is a sequence of independent random variables with zero mean, finite variances σ²e(n), and bounded fourth-order moments, and 2) hn(q), n = 1, 2, · · · are uniformly stable filters. That is, the random part of x(n) can be described as filtered white noise. Then, with probability one, as N → ∞,

(1/N) Σ_{n=0}^{N−1} x(n)x(n − m) → Ē[x(n)x(n − m)] = γxx(m) (75)

In summary, the result is identical to the standard result for stationary random signals, Eq. (22), (23): the time average equals the joint time-ensemble average Ē[·].

The framework addressed autocorrelation functions; however, it can easily be adapted to crosscorrelation functions as well.
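As an added illustration (a sketch under assumed parameters, not from [2]), take x(n) to be an AR(1) process, a special case of Eq. (74) with hn(q) = a^q and unit-variance e(n); then γxx(m) = a^|m|/(1 − a²), and the time-average autocorrelation should converge to it:

```python
import numpy as np

rng = np.random.default_rng(1)
N, a = 200_000, 0.8
e = rng.standard_normal(N)          # e(n): zero-mean, unit-variance white noise

# AR(1) filtered white noise: x(n) = a*x(n-1) + e(n),
# stationary autocorrelation gamma_xx(m) = a^|m| / (1 - a^2)
x = np.empty(N)
x[0] = e[0]
for n in range(1, N):
    x[n] = a * x[n - 1] + e[n]

def time_acf(sig, m):
    """Time-average estimate of gamma_xx(m): mean of sig(n)*sig(n-m)."""
    return np.mean(sig[m:] * sig[:len(sig) - m]) if m else np.mean(sig * sig)

gamma = lambda m: a ** abs(m) / (1 - a ** 2)
est = np.array([time_acf(x, m) for m in range(5)])
```

For this record length the time averages match the ensemble values to within a few percent, as the theorem predicts.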

¹³ It does not matter whether the average is over one or an infinite number of periods.


5.2 Linear Mixing of Random and Periodic Signals

The above framework is applied to a simple example of linearly mixing a random and a periodic signal. Suppose that x(n) is given by x(n) = p(n) + s(n), where p(n) is a periodic signal with period M, i.e., p(n) = p(n + M), and s(n) is a stationary, correlation ergodic random signal with mean ms.

Consider first the formal definition of the mean of x(n):

E[x(n)] = E[p(n) + s(n)] = p(n) + E[s(n)] (76)

Thus the mean is time-varying; hence, the process is non-stationary. However, it is easy to verify that the mean is periodic with M, as E[x(n + M)] = p(n + M) + E[s(n)] = E[x(n)]. This is known as cyclostationarity [3, Ch. 9]. The autocorrelation function of x(n) is:

γxx(n1, n2) = E[(p(n1) + s(n1)) · (p(n2) + s(n2))]

= p(n1)p(n2) + E[s(n1)]p(n2) + E[s(n2)]p(n1) + E[s(n1)s(n2)]

= p(n1)p(n2) +ms(p(n1) + p(n2)) + γss(n1 − n2) (77)

Again, it is easy to verify that γxx(n1 + M, n2 + M) = γxx(n1, n2). Thus x(n) is wide-sense cyclostationary. Moreover, x(n) is a quasi-stationary signal, cf. Eq. (70), (71), since E[x(n)] is bounded and

〈γxx(n1, n1 −m)〉 = 〈 p(n1)p(n1 −m) 〉+ 2ms〈 p(n1) 〉+ γss(m)

= rpp(m) + 2msmp + γss(m) (78)

is a function of m only. Here mp is the time average of p(n).

Next we will show that x(n) is ergodic according to the definition in Section 5.1. Consider the ergodic formulation of the autocorrelation function of x(n):

rxx(m) = 〈x(n)x(n − m)〉 = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)x(n − m) (79)

Substituting the expression for x(n) gives

rxx(m) = 〈(p(n) + s(n)) · (p(n − m) + s(n − m))〉
       = 〈p(n)p(n − m)〉 + 〈p(n)s(n − m)〉 + 〈s(n)p(n − m)〉 + 〈s(n)s(n − m)〉
       = rpp(m) + rps(m) + rsp(m) + γss(m) (80)

rpp(m) and γss(m) are the usual autocorrelation functions for periodic and random signals, respectively. The crucial object is the crosscorrelation function rps(m) = rsp(−m). Focus on

rps(m) = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} p(n)s(n − m) (81)

If we use the periodicity of p(n) we can rewrite the sum of products by considering first all products which involve p(0) = p(kM), next all products which involve p(1) = p(kM + 1), and so on. Assuming¹⁴ N = KM, we can write:

rps(m) = lim_{N→∞} (1/N) [ p(0) Σ_{k=0}^{K−1} s(kM − m)
                         + p(1) Σ_{k=0}^{K−1} s(kM − m + 1)
                         + · · ·
                         + p(M − 1) Σ_{k=0}^{K−1} s(kM − m + M − 1) ]

       = lim_{N→∞} (1/N) Σ_{q=0}^{M−1} p(q) Σ_{k=0}^{K−1} s(kM − m + q)

       = lim_{N→∞} Σ_{q=0}^{M−1} (p(q)/M) · (M/N) Σ_{k=0}^{K−1} s(kM − m + q) (82)

¹⁴ It is pretty easy to verify that this restriction is not crucial to the subsequent arguments.

Since M is a constant we can instead perform the limit operation w.r.t. K, that is,

rps(m) = (1/M) Σ_{q=0}^{M−1} p(q) · lim_{K→∞} [ (1/K) Σ_{k=0}^{K−1} s(kM − m + q) ] (83)

Define m̂s = (1/K) Σ_{k=0}^{K−1} s(kM − m + q); then E[m̂s] = ms = E[s(n)] and the variance

V[m̂s] = E[m̂s²] − ms²

       = (1/K²) Σ_{k=0}^{K−1} Σ_{ℓ=0}^{K−1} E[s(kM − m + q)s(ℓM − m + q)] − ms²

       = (1/K) Σ_{p=−(K−1)}^{K−1} (1 − |p|/K) γss(pM) − ms² (84)

Under the standard mean ergodic condition, lim_{K→∞} V[m̂s] = 0, thus lim_{K→∞} m̂s = ms. From Eq. (83) we then conclude that

rps(m) = 〈 p(n) 〉 · E[s(n)], ∀ 0 ≤ m ≤ N − 1 (85)

where

〈p(n)〉 = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} p(n) = (1/M) Σ_{n=0}^{M−1} p(n) (86)

In conclusion, a periodic and a random signal are fully uncorrelated, and the crosscorrelation function is the product of the time average of the periodic signal and the mean value of the random signal. If either the random signal has zero mean or the periodic signal has zero time average, the signals are fully orthogonal, i.e., rps(m) = 0.
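This conclusion is easy to check numerically. The following sketch (signals and parameters are illustrative choices added here) estimates rps(m) for a periodic signal with time average 〈p(n)〉 = 1 and white noise with mean ms = 0.5, so Eq. (85) predicts rps(m) = 0.5 at every lag:

```python
import numpy as np

rng = np.random.default_rng(2)
M, K = 8, 50_000
N = M * K
p = 1.0 + np.cos(2 * np.pi * np.arange(N) / M)   # periodic with period M, <p(n)> = 1
s = 0.5 + rng.standard_normal(N)                 # random with E[s(n)] = 0.5

def r_ps(m):
    """Time-average crosscorrelation estimate, cf. Eq. (81)."""
    return np.mean(p[m:] * s[:N - m]) if m else np.mean(p * s)

r = np.array([r_ps(m) for m in range(2 * M)])    # expected: 0.5 for all m
```

The estimates scatter tightly around 〈p(n)〉·E[s(n)] = 0.5, independent of the lag, i.e., the two signals are uncorrelated but not orthogonal.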

Example 5.1 This example considers the determination of a weak periodic signal contaminated by strong noise. Suppose that s(n) is a random noise signal which is generated by a first-order AR process, cf. [5, 6, Ch. 12.2]. The autocorrelation function is given by

γss(m) = σ2 · a|m| (87)

The power of s(n) is Ps = γss(0) = σ², and the squared mean value is ms² = lim_{m→∞} γss(m) = 0. Further, assume that p(n) is a periodic signal given by p(n) = b·cos(2πn/N0) with period N0. The autocorrelation function is

rpp(m) = p(m) ∗ p(−m) = (b²/2)·cos(2πm/N0) (88)


the power is Pp = b²/2, and we assume that Pp ≪ Ps.

The autocorrelation function of x(n) = s(n) + p(n) is given by Eq. (80). Using the fact that rsp(m) = 0, as E[s(n)] = 0, we have

rxx(m) = σ²·a^|m| + (b²/2)·cos(2πm/N0) (89)

Thus for |m| large the first term vanishes and we can employ the approximation

rxx(m) ≈ (b²/2)·cos(2πm/N0), for |m| → ∞ (90)

The maximum of rxx(m) for m large is then b²/2 = Pp, and the period N0 can be determined as the spacing between two consecutive maxima.
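A simulation sketch of Example 5.1 (the parameter values below are illustrative choices added here, not from the text): generate AR(1) noise with γss(m) = σ²·a^|m|, add a weaker cosine, and read Pp and the period off the tail of the estimated autocorrelation:

```python
import numpy as np

rng = np.random.default_rng(3)
N, a, sigma = 400_000, 0.6, 1.0
N0, b = 25, 0.5                        # P_p = b^2/2 = 0.125 < P_s = sigma^2 = 1

# AR(1) driven by noise of variance sigma^2*(1-a^2) so that gamma_ss(0) = sigma^2
e = rng.normal(0.0, sigma * np.sqrt(1 - a ** 2), N)
s = np.empty(N)
s[0] = e[0]
for n in range(1, N):
    s[n] = a * s[n - 1] + e[n]

x = s + b * np.cos(2 * np.pi * np.arange(N) / N0)

def r_xx(m):
    """Time-average autocorrelation estimate of the mixed signal."""
    return np.mean(x[m:] * x[:N - m]) if m else np.mean(x * x)

lags = np.arange(40, 65)               # a^40 is negligible, only the cosine remains
tail = np.array([r_xx(m) for m in lags])
m_peak = lags[np.argmax(tail)]         # expect a maximum near 2*N0 = 50
```

In the tail the AR term σ²·a^|m| has died out, so the estimated rxx(m) oscillates with amplitude b²/2 = Pp and the maxima sit N0 lags apart, as Eq. (90) predicts.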


A Appendix: Properties of Correlation Functions and Power Spectra

A.1 Definitions

xa(t): Aperiodic analog real signal with finite energy; lim_{t→∞} xa(t) = xa∞.
xa(n): Aperiodic discrete-time real signal with finite energy; lim_{n→∞} xa(n) = xa∞.
Xa(F), Xa(f): Fourier transforms of xa(t) and xa(n).
xp(t): Periodic analog real signal with finite power and period Tp, i.e., xp(t + Tp) = xp(t).
xp(n): Periodic discrete-time real signal with finite power and period N, i.e., xp(n + N) = xp(n).
Xp(k): Fourier transform of xp(n) or xp(t).
x(t): Stationary, ergodic random real analog signal with mean E[x(t)] = mx.
x(n): Stationary, ergodic random real discrete-time signal with mean E[x(n)] = mx.
X(f): Fourier transform of x(n), where n ∈ [0; N − 1].
γxx(τ), γxy(τ): Auto- and crosscorrelation functions of random signals.
rxx(τ), rxy(τ): Auto- and crosscorrelation functions for aperiodic and periodic signals.
Γxx(f), Γxy(f): Power and cross-power spectra for random signals.
Sxx(f), Sxy(f): Power (energy) and cross-power (energy) spectra for periodic (aperiodic) signals.
E[x(n)]: Mean value operator, E[x(n)] = mx.
V[x(n)]: Variance operator, V[x(n)] = E[x²(n)] − E²[x(n)].
P: Power, P = PAC + PDC.
E: Energy, E = EAC + EDC.

A.2 Definitions of Correlation Functions and Power Spectra

A.2.1 Analog Random Signals

γxy(τ) = E[x(t)y(t − τ)] = lim_{Tp→∞} (1/Tp) ∫_{−Tp/2}^{Tp/2} x(t)y(t − τ) dt (91)

Γxy(F) = ∫_{−∞}^{∞} γxy(τ)e^(−j2πFτ) dτ (92)

γxy(τ) = ∫_{−∞}^{∞} Γxy(F)e^(j2πFτ) dF (93)

Remark: Γxy(F ) is a continuous cross-power density spectrum.

A.2.2 Discrete-time Random Signals

γxy(m) = E[x(n)y(n − m)] = lim_{N→∞} (1/N) Σ_{n=0}^{N−1} x(n)y(n − m) (94)


Γxy(f) = Σ_{m=−∞}^{∞} γxy(m)e^(−j2πfm) (95)

γxy(m) = ∫_{−1/2}^{1/2} Γxy(f)e^(j2πfm) df (96)

Remark: Γxy(f) is a continuous cross-power spectrum, replicated periodically with f = 1 corresponding to the sampling frequency Fs.

A.2.3 Analog Periodic Signals

rxpyp(τ) = (1/Tp) ∫_{−Tp/2}^{Tp/2} xp(t)yp(t − τ) dt = xp(τ) ∗ yp(−τ) (97)

Sxpyp(k) = (1/Tp) ∫_{−Tp/2}^{Tp/2} rxpyp(τ)e^(−j2π(k/Tp)τ) dτ (98)

rxpyp(τ) = Σ_{k=−∞}^{∞} Sxpyp(k)e^(j2π(k/Tp)τ) (99)

Remark: Discrete cross-power spectrum at frequencies F = k/Tp.

A.2.4 Discrete-time Periodic Signals

rxpyp(m) = (1/N) Σ_{n=0}^{N−1} xp(n)yp(n − m) = xp(m) ∗ yp(−m) (100)

Sxpyp(k) = (1/N) Σ_{m=0}^{N−1} rxpyp(m)e^(−j2π(k/N)m) (101)

rxpyp(m) = Σ_{k=0}^{N−1} Sxpyp(k)e^(j2π(k/N)m) (102)

Remark: Discrete cross-power spectrum at frequencies f = k/N or F = kFs/N, replicated periodically with k = N corresponding to the sampling frequency Fs. Further, note that the pair of Fourier transforms defined in Eq. (101) and (102) corresponds to the definitions in [5, 6, Ch. 4.2]. The Discrete Fourier Transform defined in [5, 6, Ch. 7] is related by: DFT{rxpyp(m)} = N · Sxpyp(k).
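The DFT relation in the remark can be verified numerically; the sketch below (added here for illustration, with an arbitrary test period) computes the circular autocorrelation of Eq. (100) directly and compares its DFT with N·Sxpxp(k):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 16
xp = rng.standard_normal(N)            # one period of a periodic signal

A = np.fft.fft(xp)                     # DFT{xp}(k) = sum_n xp(n) e^(-j2*pi*k*n/N)
Xp = A / N                             # Fourier coefficients as in Eq. (101)-(102)
S = np.abs(Xp) ** 2                    # S_xpxp(k) = |Xp(k)|^2

# circular autocorrelation of Eq. (100): r(m) = (1/N) sum_n xp(n) xp(n-m)
r = np.array([np.mean(xp * np.roll(xp, m)) for m in range(N)])
```

For this construction DFT{r} equals N·S to machine precision, and r(0) = Σk S(k) recovers the power, in agreement with the remark and Parseval's relation.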

A.2.5 Analog Aperiodic Signals

rxaya(τ) = ∫_{−∞}^{∞} xa(t)ya(t − τ) dt = xa(τ) ∗ ya(−τ) (103)

Sxaya(F) = ∫_{−∞}^{∞} rxaya(τ)e^(−j2πFτ) dτ (104)

rxaya(τ) = ∫_{−∞}^{∞} Sxaya(F)e^(j2πFτ) dF (105)

Remark: Sxaya(F ) is a continuous cross-energy density spectrum.


A.2.6 Discrete-time Aperiodic Signals

rxaya(m) = Σ_{n=−∞}^{∞} xa(n)ya(n − m) = xa(m) ∗ ya(−m) (106)

Sxaya(f) = Σ_{m=−∞}^{∞} rxaya(m)e^(−j2πfm) (107)

rxaya(m) = ∫_{−1/2}^{1/2} Sxaya(f)e^(j2πfm) df (108)

Remark: Sxaya(f) is a continuous cross-energy spectrum, replicated periodically with the sampling frequency Fs.
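As an added illustration of Eq. (106)-(108), the linear crosscorrelation of two finite-energy sequences can be computed either directly or through a sufficiently zero-padded DFT of the cross-energy spectrum Sxaya(f) = Xa(f)·Ya*(f); the signals below are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(5)
L = 32
xa = rng.standard_normal(L)            # finite-energy aperiodic signals
ya = rng.standard_normal(L)

# r_xaya(m) = sum_n xa(n) ya(n-m) for m = -(L-1)..(L-1), Eq. (106)
r = np.correlate(xa, ya, mode="full")

# same result via the spectrum: nfft >= 2L-1 avoids circular wrap-around
nfft = 2 * L - 1
S = np.fft.fft(xa, nfft) * np.conj(np.fft.fft(ya, nfft))   # Xa(f) Ya*(f)
r2 = np.fft.ifft(S).real                                   # lags 0..L-1, then negative
r2 = np.concatenate([r2[-(L - 1):], r2[:L]])               # reorder to -(L-1)..(L-1)
```

Both routes give the same sequence, and the lag-zero value of the autocorrelation reproduces the energy E = rxaxa(0) = Σ xa²(n).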

A.3 Properties of Autocorrelation Functions

Random                              Periodic                              Aperiodic

γxx(τ) = γxx(−τ)                    rxpxp(τ) = rxpxp(−τ)                  rxaxa(τ) = rxaxa(−τ)

|γxx(τ)| ≤ γxx(0)                   |rxpxp(τ)| ≤ rxpxp(0)                 |rxaxa(τ)| ≤ rxaxa(0)

lim_{τ→∞} γxx(τ) = mx² = PDC        rxpxp(τ) = rxpxp(τ + Tp)              lim_{τ→∞} rxaxa(τ) = xa∞²

P = γxx(0) = mx² + V[x(n)]          P = rxpxp(0)                          E = rxaxa(0)

γyy(τ) = rhh(τ) ∗ γxx(τ)            rypyp(τ) = rhh(τ) ∗ rxpxp(τ)          ryaya(τ) = rhh(τ) ∗ rxaxa(τ)

Note: Similar properties exist for discrete-time signals.
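The "Random" column can be spot-checked by simulation. The sketch below (an illustrative signal chosen here: a moving average of white noise plus a DC offset, so mx = 0.4 and V[x(n)] = 0.2) verifies |γxx(τ)| ≤ γxx(0), P = γxx(0) = mx² + V[x(n)], and lim γxx(τ) = mx² = PDC:

```python
import numpy as np

rng = np.random.default_rng(6)
N, mx = 200_000, 0.4
# moving average of 5 white-noise samples (variance 5*(1/5)^2 = 0.2) plus DC
x = mx + np.convolve(rng.standard_normal(N), np.ones(5) / 5.0, mode="same")
var_x = 5 * (1 / 5.0) ** 2             # V[x(n)] for this filter

def gamma_xx(m):
    """Time-average autocorrelation estimate."""
    return np.mean(x[m:] * x[:N - m]) if m else np.mean(x * x)

g = np.array([gamma_xx(m) for m in range(40)])
```

Beyond the filter length (lag 5) the estimates settle at mx² = PDC, while g[0] gives the total power mx² + V[x(n)].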


A.4 Properties of Power Spectra

Random                                          Periodic                              Aperiodic

Γxx(F) = Γxx(−F)                                Sxpxp(k) = Sxpxp(−k)                  Sxaxa(F) = Sxaxa(−F)

P = ∫_{−∞}^{∞} Γxx(F) dF                        P = Σ_{k=−∞}^{∞} Sxpxp(k)             E = ∫_{−∞}^{∞} Sxaxa(F) dF

P = ∫_{−1/2}^{1/2} Γxx(f) df                    PDC = Sxpxp(0)

Γxx(F) = lim_{Tp→∞} (1/Tp) E[|X(F)|²] ≥ 0,      Sxpxp(k) = |Xp(k)|² ≥ 0               Sxaxa(F) = |Xa(F)|² ≥ 0
  X(F) = ∫_0^{Tp} x(t)e^(−j2πFt) dt

Cont. real spect. + mx²δ(F)                     Real discrete spect. at F = k/Tp      Cont. real spect. + xa∞²δ(F)

Γyy(F) = |H(F)|²Γxx(F)                          Sypyp(k) = |H(k/Tp)|²Sxpxp(k)         Syaya(F) = |H(F)|²Sxaxa(F)


A.5 Properties of Crosscorrelation Functions

Random                                  Periodic                                        Aperiodic

γxy(τ) = γyx(−τ)                        rxpyp(τ) = rypxp(−τ)                            rxaya(τ) = ryaxa(−τ)

γxy²(τ) ≤ γxx(0)γyy(0)                  rxpyp²(τ) ≤ rxpxp(0)rypyp(0)                    rxaya²(τ) ≤ rxaxa(0)ryaya(0)

2|γxy(τ)| ≤ γxx(0) + γyy(0)             2|rxpyp(τ)| ≤ rxpxp(0) + rypyp(0)               2|rxaya(τ)| ≤ rxaxa(0) + ryaya(0)

lim_{τ→∞} γxy(τ) = mx·my                rxpyp(τ) = rxpyp(τ + Tp)                        lim_{τ→∞} rxaya(τ) = xa∞·ya∞

γxy(τ) = mx·my if uncorrelated          rxpyp(τ) = 0, ∀τ, if no common frequencies      rxaya(τ) = xa∞·ya∞ if uncorrelated

γyx(τ) = h(τ) ∗ γxx(τ)                  rypxp(τ) = h(τ) ∗ rxpxp(τ)                      ryaxa(τ) = h(τ) ∗ rxaxa(τ)


A.6 Properties of Cross-Power Spectra

Random                                          Periodic                                        Aperiodic

Γxy(F) = Γ*yx(F)                                Sxpyp(k) = S*ypxp(k)                            Sxaya(F) = S*yaxa(F)

Γxy(F) = Γ*xy(−F)                               Sxpyp(k) = S*xpyp(−k)                           Sxaya(F) = S*xaya(−F)

2|Γxy(F)| ≤ Γxx(F) + Γyy(F)                     2|Sxpyp(k)| ≤ Sxpxp(k) + Sypyp(k)               2|Sxaya(F)| ≤ Sxaxa(F) + Syaya(F)

|Γxy(F)|² ≤ Γxx(F)Γyy(F)                        Sxpyp(k) = 0, ∀k, if no common frequencies      |Sxaya(F)|² = Sxaxa(F)Syaya(F)

Γxy(F) = lim_{Tp→∞} (1/Tp) E[X(F)Y*(F)],        Sxpyp(k) = Xp(k)Y*p(k)                          Sxaya(F) = Xa(F)Y*a(F)
  X(F) = ∫_0^{Tp} x(t)e^(−j2πFt) dt

Compl. cont. spectrum + mx·my·δ(F)              Compl. discrete spectrum at F = k/Tp            Compl. cont. spectrum + xa∞·ya∞·δ(F)

Γyx(F) = H(F)Γxx(F)                             Sypxp(k) = H(k/Tp)Sxpxp(k)                      Syaxa(F) = H(F)Sxaxa(F)


References

[1] J.S. BENDAT & A.G. PIERSOL: Random Data: Analysis and Measurement Procedures, New York, NY: John Wiley & Sons, 1986.

[2] L. LJUNG: System Identification: Theory for the User, Englewood Cliffs, NJ: Prentice-Hall, 1987.

[3] A. PAPOULIS: Probability, Random Variables and Stochastic Processes, 2nd edition, New York, NY: McGraw-Hill, 1984.

[4] A. PAPOULIS: Signal Analysis, 2nd edition, New York, NY: McGraw-Hill, 3rd printing, 1987.

[5] J.G. PROAKIS & D.G. MANOLAKIS: Digital Signal Processing: Principles, Algorithms and Applications, 4th edition, Upper Saddle River, NJ: Pearson Prentice Hall, hardcover version ISBN 0-13-187374-1, 2007.

[6] J.G. PROAKIS & D.G. MANOLAKIS: Digital Signal Processing: Principles, Algorithms and Applications, 4th edition, Upper Saddle River, NJ: Pearson Prentice Hall, paperback version ISBN 0-13-228731-5, 2007.

[7] G. ZELNIKER & F.J. TAYLOR: Advanced Digital Signal Processing: Theory and Application, New York, NY: Marcel Dekker, 1994.
