
Digital Communication Exercises

Contents

1 Converting a Digital Signal to an Analog Signal

2 Decision Criteria and Hypothesis Testing

3 Generalized Decision Criteria

4 Vector Communication Channels

5 Signal Space Representation

6 Optimal Receiver for the Waveform Channel

7 The Probability of Error

8 Bit Error Probability

9 Connection with the Concept of Capacity


1 Converting a Digital Signal to an Analog Signal

1. [1, Problem 4.15]. Consider a four-phase PSK signal represented by the equivalent lowpass signal
$$u(t) = \sum_n I_n g(t - nT)$$
where $I_n$ takes on one of the four possible values $\sqrt{1/2}\,(\pm 1 \pm j)$ with equal probability. The sequence of information symbols $\{I_n\}$ is statistically independent (i.i.d.).

(a) Determine the power density spectrum of u(t) when

$$g(t) = \begin{cases} A, & 0 \le t \le T,\\ 0, & \text{otherwise.} \end{cases}$$

(b) Repeat (1a) when

$$g(t) = \begin{cases} A\sin(\pi t/T), & 0 \le t \le T,\\ 0, & \text{otherwise.} \end{cases}$$

(c) Compare the spectra obtained in (1a) and (1b) in terms of the 3dB bandwidth and the bandwidth to the first spectral zero. Here you may find the frequency numerically.

Solution:

We have that $S_U(f) = \frac{1}{T}|G(f)|^2 \sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT}$, with $E(I_n) = 0$ and $E(|I_n|^2) = 1$, hence
$$C_I(m) = \begin{cases} 1, & m = 0,\\ 0, & m \ne 0, \end{cases}$$
therefore $\sum_{m=-\infty}^{\infty} C_I(m)e^{-j2\pi fmT} = 1 \Rightarrow S_U(f) = \frac{1}{T}|G(f)|^2$.

(a) For the rectangular pulse:
$$G(f) = AT\,\frac{\sin \pi fT}{\pi fT}\,e^{-j2\pi fT/2} \;\Rightarrow\; |G(f)|^2 = A^2T^2\,\frac{\sin^2 \pi fT}{(\pi fT)^2}$$
where the factor $e^{-j2\pi fT/2}$ is due to the $T/2$ shift of the rectangular pulse from the center. Hence:
$$S_U(f) = A^2T\,\frac{\sin^2 \pi fT}{(\pi fT)^2}$$

(b) For the sinusoidal pulse: $G(f) = \int_0^T A\sin(\pi t/T)\exp(-j2\pi ft)dt$. By using the trigonometric identity $\sin x = \frac{\exp(jx)-\exp(-jx)}{2j}$ it is easily shown that:
$$G(f) = \frac{2AT}{\pi}\,\frac{\cos \pi Tf}{1 - 4T^2f^2}\,e^{-j2\pi fT/2} \;\Rightarrow\; |G(f)|^2 = \left(\frac{2AT}{\pi}\right)^2\frac{\cos^2 \pi Tf}{(1-4T^2f^2)^2}$$
Hence:
$$S_U(f) = \left(\frac{2A}{\pi}\right)^2 T\,\frac{\cos^2 \pi Tf}{(1-4T^2f^2)^2}$$


(c) The 3dB frequency for (1a) is found from
$$\frac{\sin^2 \pi f_{3dB}T}{(\pi f_{3dB}T)^2} = \frac12 \;\Rightarrow\; f_{3dB} \cong \frac{0.44}{T}$$
(this solution is obtained graphically or numerically), while the 3dB frequency for the sinusoidal pulse in (1b) is $f_{3dB} \cong \frac{0.59}{T}$. The rectangular pulse spectrum has its first spectral null at $f = 1/T$, whereas the spectrum of the sinusoidal pulse has its first null at $f = 3/2T$. Clearly the spectrum of the rectangular pulse has a narrower main lobe; however, it has higher sidelobes.
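Part (c) explicitly allows a numerical search. Below is a minimal Python sketch (our own helper functions, with $T$ normalized to 1) that bisects both normalized spectra for their half-power frequencies:

```python
import math

T = 1.0  # normalized symbol period

def rect_spec(f):   # sin^2(pi f T) / (pi f T)^2, normalized to 1 at f = 0
    x = math.pi * f * T
    return 1.0 if x == 0 else (math.sin(x) / x) ** 2

def sine_spec(f):   # cos^2(pi T f) / (1 - 4 T^2 f^2)^2, normalized to 1 at f = 0
    u = T * f
    d = 1.0 - 4.0 * u * u
    if abs(d) < 1e-12:                    # removable singularity at f = 1/(2T)
        return (math.pi / 4.0) ** 2
    return (math.cos(math.pi * u) / d) ** 2

def half_power_freq(spec, lo=0.0, hi=1.0, tol=1e-9):
    """Bisect for the f where the (monotonically decreasing) spectrum hits 1/2."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if spec(mid) > 0.5 else (lo, mid)
    return 0.5 * (lo + hi)

print(half_power_freq(rect_spec))   # ~0.443/T
print(half_power_freq(sine_spec))   # ~0.594/T, i.e. ~0.59/T
```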

2. [1, Problem 4.21]. The lowpass equivalent representation of a PAM signal is
$$u(t) = \sum_n I_n g(t - nT)$$
Suppose $g(t)$ is a rectangular pulse and
$$I_n = a_n - a_{n-2}$$
where $\{a_n\}$ is a sequence of uncorrelated ($E\{a_na_m\} = 0$ for $n \ne m$) binary $(1, -1)$ random variables that occur with equal probability.

(a) Determine the autocorrelation function of the sequence $\{I_n\}$.

(b) Determine the power density spectrum of $u(t)$.

(c) Repeat (2b) if the possible values of $a_n$ are $(0, 1)$.

Solution:

(a)
$$C_I(m) = E\{I_{n+m}I_n\} = E\{(a_{n+m}-a_{n+m-2})(a_n-a_{n-2})\} = \begin{cases} 2, & m = 0,\\ -1, & m = \pm2,\\ 0, & \text{otherwise} \end{cases} = 2\delta(m) - \delta(m-2) - \delta(m+2)$$

(b) $S_U(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT}$, where
$$\sum_{m=-\infty}^{\infty}C_I(m)e^{-j2\pi fmT} = 4\sin^2 2\pi fT,$$
and
$$|G(f)|^2 = (AT)^2\left(\frac{\sin \pi fT}{\pi fT}\right)^2.$$
Therefore:
$$S_U(f) = 4A^2T\left(\frac{\sin \pi fT}{\pi fT}\right)^2\sin^2 2\pi fT$$


(c) If $\{a_n\}$ takes the values $(0, 1)$ with equal probability then $E\{a_n\} = 1/2$ and $E\{a_{n+m}a_n\} = \frac14[1 + \delta(m)]$. Then:
$$C_I(m) = \frac14[2\delta(m) - \delta(m-2) - \delta(m+2)] \;\Rightarrow\; \Phi_{ii}(f) = \sin^2 2\pi fT$$
$$S_U(f) = A^2T\left(\frac{\sin \pi fT}{\pi fT}\right)^2\sin^2 2\pi fT$$
Thus, we obtain the same result as in (2b), but the magnitude of the various quantities is reduced by a factor of 4.
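A quick empirical sanity check of the autocorrelation in (2a) — a sketch that simulates the $\pm1$ case with numpy:

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.choice([-1.0, 1.0], size=1_000_000)   # uncorrelated +/-1 symbols
I = a[2:] - a[:-2]                            # I_n = a_n - a_{n-2}

for m in range(5):                            # estimate C_I(m) = E{I_{n+m} I_n}
    c = np.mean(I[m:] * I[:len(I) - m])
    print(m, round(float(c), 3))              # expect 2, 0, -1, 0, 0
```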

3. [2, Problem 1.16]. A zero-mean stationary process $x(t)$ is applied to a linear filter whose impulse response is defined by a truncated exponential:
$$h(t) = \begin{cases} ae^{-at}, & 0 \le t \le T,\\ 0, & \text{otherwise.} \end{cases}$$
Show that the power spectral density of the filter output $y(t)$ is defined by
$$S_Y(f) = \frac{a^2}{a^2 + 4\pi^2f^2}\left(1 - 2\exp(-aT)\cos 2\pi fT + \exp(-2aT)\right)S_X(f)$$
where $S_X(f)$ is the power spectral density of the filter input.

Solution:

The frequency response of the filter is:
$$H(f) = \int_{-\infty}^{\infty}h(t)\exp(-j2\pi ft)dt = \int_0^T a\exp(-at)\exp(-j2\pi ft)dt = a\int_0^T\exp(-(a + j2\pi f)t)dt = \frac{a}{a + j2\pi f}\left[1 - e^{-aT}(\cos 2\pi fT - j\sin 2\pi fT)\right].$$
The squared magnitude response is:
$$|H(f)|^2 = \frac{a^2}{a^2 + 4\pi^2f^2}\left(1 - 2e^{-aT}\cos 2\pi fT + e^{-2aT}\right)$$
and the required PSD follows from $S_Y(f) = |H(f)|^2S_X(f)$.
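A numerical cross-check of $|H(f)|^2$ against the closed form — a sketch with arbitrary test values $a$, $T$, $f$; the Fourier integral is approximated by a Riemann sum:

```python
import numpy as np

a, T, f = 2.0, 1.5, 0.7                     # arbitrary test values
t = np.linspace(0.0, T, 200_001)
dt = t[1] - t[0]

h = a * np.exp(-a * t)                      # truncated exponential on [0, T]
H = np.sum(h * np.exp(-2j * np.pi * f * t)) * dt

closed = a**2 / (a**2 + 4 * np.pi**2 * f**2) * (
    1 - 2 * np.exp(-a * T) * np.cos(2 * np.pi * f * T) + np.exp(-2 * a * T))
print(abs(H) ** 2, closed)                  # the two values agree
```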

4. [1, Problem 4.32]. The information sequence $\{a_n\}$ is a sequence of i.i.d. random variables, each taking values $+1$ and $-1$ with equal probability. This sequence is to be transmitted at baseband by a biphase coding scheme, described by
$$s(t) = \sum_n a_n g(t - nT)$$
where $g(t)$ is defined by
$$g(t) = \begin{cases} 1, & 0 \le t \le T/2,\\ -1, & T/2 \le t \le T. \end{cases}$$


(a) Find the power spectral density of s(t).

(b) Assume that it is desirable to have a zero in the power spectrum at $f = 1/T$. To this end we use a precoding scheme, introducing $b_n = a_n + ka_{n-1}$, where $k$ is some constant, and then transmit the $\{b_n\}$ sequence using the same $g(t)$. Is it possible to choose $k$ to produce a frequency null at $f = 1/T$? If yes, what are the appropriate value and the resulting power spectrum?

(c) Now assume we want to have zeros at all multiples of $f_0 = 1/4T$. Is it possible to obtain these zeros with an appropriate choice of $k$ in the previous part? If not, what kind of precoding do you suggest to obtain the desired nulls?

Solution:

(a) Since $\mu_a = 0$, $\sigma_a^2 = 1$, we have $S_S(f) = \frac{1}{T}|G(f)|^2$.
$$G(f) = \frac T2\,\frac{\sin(\pi fT/2)}{\pi fT/2}\,e^{-j2\pi fT/4} - \frac T2\,\frac{\sin(\pi fT/2)}{\pi fT/2}\,e^{-j2\pi f\,3T/4} = \frac T2\,\frac{\sin(\pi fT/2)}{\pi fT/2}\,e^{-j\pi fT}\left(2j\sin(\pi fT/2)\right) = jT\,\frac{\sin^2(\pi fT/2)}{\pi fT/2}\,e^{-j\pi fT}$$
$$\Rightarrow\; |G(f)|^2 = T^2\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2, \qquad S_S(f) = T\left(\frac{\sin^2(\pi fT/2)}{\pi fT/2}\right)^2$$

(b) For a non-independent information sequence the power spectrum of $s(t)$ is given by $S_S(f) = \frac{1}{T}|G(f)|^2\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}$.
$$C_B(m) = E\{b_{n+m}b_n\} = E\{a_{n+m}a_n\} + kE\{a_{n+m-1}a_n\} + kE\{a_{n+m}a_{n-1}\} + k^2E\{a_{n+m-1}a_{n-1}\} = \begin{cases} 1+k^2, & m = 0,\\ k, & m = \pm1,\\ 0, & \text{otherwise.} \end{cases}$$
Hence:
$$\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi fT$$
We want:
$$S_S(1/T) = 0 \;\Rightarrow\; \left.\sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT}\right|_{f=1/T} = 0 \;\Rightarrow\; 1 + k^2 + 2k = 0 \;\Rightarrow\; k = -1$$
and the resulting power spectrum is:
$$S_S(f) = 4T\left(\frac{\sin^2 \pi fT/2}{\pi fT/2}\right)^2\sin^2 \pi fT$$


(c) The requirement for zeros at $f = l/4T$, $l = \pm1, \pm2, \ldots$ means $1 + k^2 + 2k\cos(\pi l/2) = 0$, which cannot be satisfied for all $l$. We can avoid this by using precoding of the form $b_n = a_n + ka_{n-4}$. Then
$$C_B(m) = \begin{cases} 1+k^2, & m = 0,\\ k, & m = \pm4,\\ 0, & \text{otherwise,} \end{cases} \qquad \sum_{m=-\infty}^{\infty}C_B(m)e^{-j2\pi fmT} = 1 + k^2 + 2k\cos 2\pi f\,4T$$
and the value $k = -1$ nulls this spectrum at all multiples of $1/4T$.

5. [1, Problem 4.29]. Show that 16-QAM on $\{\pm1,\pm3\}\times\{\pm1,\pm3\}$ can be represented as a superposition of two 4PSK signals where each component is amplified separately before summing, i.e., let
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $\{A_n\}$, $\{B_n\}$, $\{C_n\}$ and $\{D_n\}$ are statistically independent binary sequences with elements from the set $\{+1, -1\}$ and $G$ is the amplifier gain. You need to show that $s(t)$ can also be written as
$$s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$$
and determine $I_n$ and $Q_n$ in terms of $A_n$, $B_n$, $C_n$ and $D_n$.

Solution:

The 16-QAM signal is represented as $s(t) = I_n\cos 2\pi ft + Q_n\sin 2\pi ft$ where $I_n \in \{\pm1,\pm3\}$, $Q_n \in \{\pm1,\pm3\}$. A superposition of two 4-QAM (4-PSK) signals is:
$$s(t) = G[A_n\cos 2\pi ft + B_n\sin 2\pi ft] + [C_n\cos 2\pi ft + D_n\sin 2\pi ft]$$
where $A_n, B_n, C_n, D_n \in \{\pm1\}$. Clearly $I_n = GA_n + C_n$ and $Q_n = GB_n + D_n$. From these equations it is easy to see that $G = 2$ gives the required equivalence, since $\{2A_n + C_n\}$ ranges exactly over $\{\pm1,\pm3\}$.


2 Decision Criteria and Hypothesis Testing

Remark 1. Hypothesis testing is another common name for a decision problem: you have to decide between two or more hypotheses, say $H_0, H_1, H_2, \ldots$, where $H_i$ can be interpreted as "the unknown parameter has value $i$". Decoding a constellation with $K$ symbols can be interpreted as selecting the correct hypothesis from $H_0, H_1, \ldots, H_{K-1}$, where $H_i$ is the hypothesis that $S_i$ was transmitted.

1. Consider an equal-probability binary source $p(0) = p(1) = 1/2$, and a continuous-output channel:
$$f_{R|M}(r|\text{"1"}) = ae^{-ar}, \quad r \ge 0$$
$$f_{R|M}(r|\text{"0"}) = be^{-br}, \quad r \ge 0, \quad b > a > 0$$

(a) Find a constant $K$ such that the optimal decision rule is $r \underset{0}{\overset{1}{\gtrless}} K$.

(b) Find the respective error probability.

Solution:

(a) Optimal decision rule:

p(0)fR|M (r|”0”)0≷1p(1)fR|M (r|”1”)

Using the defined channel distributions:

be−br0≷1

ae−ar

10≷1

a

be−(a−b)r

00≷1

ln(a

b) + (b− a)r

r1≷0

ln(ab )

a− b= K

(b)
$$p(e) = p(0)\Pr\{r > K|0\} + p(1)\Pr\{r < K|1\} = \frac12\left[\int_K^{\infty}be^{-bt}dt + \int_0^K ae^{-at}dt\right] = \frac12\left[e^{-bK} + 1 - e^{-aK}\right]$$
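Both the threshold and the closed-form error probability can be sanity-checked by simulation — a sketch with arbitrary rates $b > a > 0$:

```python
import math
import random

a, b = 1.0, 3.0                                  # any b > a > 0
K = math.log(a / b) / (a - b)                    # threshold from part (a)

random.seed(1)
trials, errors = 200_000, 0
for _ in range(trials):
    bit = random.randint(0, 1)
    rate = a if bit == 1 else b                  # f(r|1) = a e^{-ar}, f(r|0) = b e^{-br}
    r = random.expovariate(rate)
    errors += ((1 if r > K else 0) != bit)

print(errors / trials)                                    # empirical error rate
print(0.5 * (math.exp(-b * K) + 1 - math.exp(-a * K)))    # closed form, ~0.31 here
```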

2. Consider a binary source with $\Pr\{x = -2\} = 2/3$, $\Pr\{x = 1\} = 1/3$, and the following channel:
$$y = A\cdot x, \quad A \sim N(1, 1)$$
where $x$ and $A$ are independent.


(a) Find the optimal decision rule.

(b) Calculate the respective error probability.

Solution:

(a) First we find the conditional distribution of $y$ given $x$:
$$(Y|-2) \sim N(-2, 4), \qquad (Y|1) \sim N(1, 1)$$
Hence the decision rule will be:
$$\frac23\,\frac{1}{\sqrt{8\pi}}\exp\left(-\frac{(y+2)^2}{8}\right) \underset{1}{\overset{-2}{\gtrless}} \frac13\,\frac{1}{\sqrt{2\pi}}\exp\left(-\frac{(y-1)^2}{2}\right)$$
$$-\frac{(y+2)^2}{8} \underset{1}{\overset{-2}{\gtrless}} -\frac{(y-1)^2}{2} \;\Rightarrow\; 3y(y-4) \underset{1}{\overset{-2}{\gtrless}} 0$$
$$\Rightarrow\; \hat x(y) = \begin{cases} -2, & y < 0 \text{ or } y > 4,\\ 1, & \text{otherwise.} \end{cases}$$

(b)
$$p(e) = \frac23\int_0^4 f(y|-2)dy + \frac13\left[\int_{-\infty}^0 f(y|1)dy + \int_4^{\infty}f(y|1)dy\right]$$
$$= \frac23\left[Q\left(\frac{0+2}{2}\right) - Q\left(\frac{4+2}{2}\right)\right] + \frac13\left[1 - Q\left(\frac{0-1}{1}\right) + Q\left(\frac{4-1}{1}\right)\right] = Q(1) - \frac13Q(3) \cong 0.15821
$$
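The final number follows directly from the complementary error function — a one-line check:

```python
from math import erfc, sqrt

Q = lambda x: 0.5 * erfc(x / sqrt(2))   # Gaussian tail function
print(Q(1) - Q(3) / 3)                  # ~0.158205
```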

3. Decision rules for binary channels.

(a) The Binary Symmetric Channel (BSC) has binary (0 or 1) inputs and outputs. It outputs each bit correctly with probability $1-p$ and incorrectly with probability $p$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BSC when $p < \frac12$. How are the decision rules different when $p > \frac12$?

(b) The Binary Erasure Channel (BEC) has binary inputs as with the BSC. However, there are three possible outputs. Given an input of 0, the output is 0 with probability $1-p_1$ and 2 with probability $p_1$. Given an input of 1, the output is 1 with probability $1-p_2$ and 2 with probability $p_2$. Assume 0 and 1 are equally likely inputs. State the MAP and ML decision rules for the BEC when $p_1 < p_2 < \frac12$. How are the decision rules different when $p_2 < p_1 < \frac12$?

Solution:

(a) For equally likely inputs the MAP and ML decision rules are identical. In each case we wishto maximize py|x(y|xi) over the possible choices for xi. The decision rules are shown below,

p <1

2⇒ X = Y

p >1

2⇒ X = 1− Y


(b) Again, since we have equiprobable signals, the MAP and ML decision rules are the same. The decision rules are:
$$p_1 < p_2 < \tfrac12 \Rightarrow \hat X = \begin{cases} Y, & Y = 0, 1,\\ 1, & Y = 2, \end{cases} \qquad p_2 < p_1 < \tfrac12 \Rightarrow \hat X = \begin{cases} Y, & Y = 0, 1,\\ 0, & Y = 2. \end{cases}$$

4. In a binary hypothesis testing problem, the observation $Z$ is Rayleigh distributed under both hypotheses, with different parameters; that is,
$$f(z|H_i) = \frac{z}{\sigma_i^2}\exp\left(-\frac{z^2}{2\sigma_i^2}\right), \quad z \ge 0, \; i = 0, 1$$
You need to decide whether the observed variable $Z$ was generated with $\sigma_0^2$ or with $\sigma_1^2$, namely choose between $H_0$ and $H_1$.

(a) Obtain the decision rule for the minimum probability of error criterion. Assume that $H_0$ and $H_1$ are equiprobable.

(b) Extend your results to $N$ independent observations, and derive the expressions for the resulting probability of error.
Note: If $R \sim \mathrm{Rayleigh}(\sigma)$ then $\sum_{i=1}^N R_i^2$ has a gamma distribution with parameters $N$ and $2\sigma^2$: $Y = \sum_{i=1}^N R_i^2 \sim \Gamma(N, 2\sigma^2)$.

Solution:

(a)
$$\log f(z|H_i) = \log z - \log\sigma_i^2 - \frac{z^2}{2\sigma_i^2}$$
$$\Rightarrow\; \log f(z|H_1) - \log f(z|H_0) = \log\frac{\sigma_0^2}{\sigma_1^2} + z^2\left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right) \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
$$\Rightarrow\; z^2 \underset{H_0}{\overset{H_1}{\gtrless}} 2\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\cdot\left(\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\right) = \gamma$$
Since $z \ge 0$ (and assuming $\sigma_1 > \sigma_0$), the following decision rule is obtained:
$$\hat H = \begin{cases} H_1, & z \ge \sqrt\gamma,\\ H_0, & z < \sqrt\gamma. \end{cases}$$

(b) Denote the likelihood ratio $\mathrm{LRT} \triangleq \frac{f(z|H_1)}{f(z|H_0)}$; hence $\log\mathrm{LRT} = \log f(z|H_1) - \log f(z|H_0)$.

For $N$ i.i.d. observations:
$$\log f(\mathbf z|H_i) = \sum_{n=0}^{N-1}\log f(z_n|H_i) = \sum_{n=0}^{N-1}\left(\log z_n - \log\sigma_i^2 - \frac{z_n^2}{2\sigma_i^2}\right) = -N\log\sigma_i^2 + \sum_{n=0}^{N-1}\left(\log z_n - \frac{z_n^2}{2\sigma_i^2}\right)$$
The log-LRT will be:
$$-N\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right) + \left(\frac{1}{2\sigma_0^2} - \frac{1}{2\sigma_1^2}\right)\sum_{n=0}^{N-1}z_n^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; 0$$
$$\Rightarrow\; \sum_{n=0}^{N-1}z_n^2 \;\underset{H_0}{\overset{H_1}{\gtrless}}\; 2N\log\left(\frac{\sigma_1^2}{\sigma_0^2}\right)\cdot\left(\frac{\sigma_1^2\sigma_0^2}{\sigma_1^2 - \sigma_0^2}\right) = \gamma$$
Define $Y = \sum_{n=0}^{N-1}z_n^2$; then $Y|H_i \sim \Gamma(N, 2\sigma_i^2)$.
$$P_{FA} = \Pr\{\text{deciding }H_1\text{ when }H_0\text{ is true}\} = \Pr\{Y > \gamma|H_0\} = 1 - \frac{\gamma(N, \gamma/2\sigma_0^2)}{\Gamma(N)}$$
$$P_M = \Pr\{\text{deciding }H_0\text{ when }H_1\text{ is true}\} = 1 - \Pr\{Y > \gamma|H_1\} = \frac{\gamma(N, \gamma/2\sigma_1^2)}{\Gamma(N)}$$
where $\gamma(s, x) = \int_0^x t^{s-1}e^{-t}dt$ is the lower incomplete gamma function.


3 Generalized Decision Criteria

1. Bayes decision criteria. Consider an equiprobable binary symmetric source $m \in \{0, 1\}$. For the observation $R$, the conditional probability density functions are
$$f_{R|M}(r|M=0) = \begin{cases} \frac12, & |r| < 1,\\ 0, & \text{otherwise,} \end{cases} \qquad f_{R|M}(r|M=1) = \frac12 e^{-|r|}$$

(a) Obtain the decision rule for the minimum probability of error criterion and the corresponding minimal probability of error.

(b) For the cost matrix $C = \begin{bmatrix} 0 & 2\alpha\\ \alpha & 0 \end{bmatrix}$, obtain the optimal generalized decision rule and the error probability.

Solution:

(a)
$$|r| > 1: \; f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat m = 1.$$
$$|r| < 1: \; \frac{\frac12 e^{-|r|}}{\frac12} \underset{0}{\overset{1}{\gtrless}} 1 \;\Rightarrow\; -|r| \underset{0}{\overset{1}{\gtrless}} 0 \;\Rightarrow\; \hat m = 0$$
The probability of error:
$$p(e) = p(0)\cdot 0 + p(1)\cdot\int_{-1}^1\frac12 e^{-|r|}dr = \frac12\left[1 - e^{-1}\right]$$

(b) The decision rule:
$$\frac{f_{R|M}(r|M=1)}{f_{R|M}(r|M=0)} \underset{0}{\overset{1}{\gtrless}} \frac{p(0)}{p(1)}\cdot\frac{C_{10}-C_{00}}{C_{01}-C_{11}} = \frac{\alpha}{2\alpha} = \frac12$$
$$|r| > 1: \; f_{R|M}(r|M=0) = 0 \;\Rightarrow\; \hat m = 1$$
$$|r| < 1: \; \frac{\frac12 e^{-|r|}}{\frac12} \underset{0}{\overset{1}{\gtrless}} \frac12 \;\Rightarrow\; -|r| \underset{0}{\overset{1}{\gtrless}} -\ln 2$$
$$\Rightarrow\; \hat m = \begin{cases} 1, & |r| < \ln 2 \text{ or } |r| > 1,\\ 0, & \ln 2 < |r| < 1. \end{cases}$$
Probability of error:
$$P_{FA} = \Pr\{\hat m = 1|m = 0\} = \int_{-\ln 2}^{\ln 2}\frac12 dr = \ln 2$$
$$P_M = \Pr\{\hat m = 0|m = 1\} = \int_{\ln 2 < |r| < 1}\frac12 e^{-|r|}dr = e^{-\ln 2} - e^{-1} = \frac12 - e^{-1} \cong 0.13$$
$$p(e) = p(0)P_{FA} + p(1)P_M = \frac12\left[\ln 2 + \frac12 - e^{-1}\right]$$


2. Non-Gaussian additive noise.

Consider the source $m \in \{1, -1\}$ with $\Pr\{m = 1\} = 0.9$, $\Pr\{m = -1\} = 0.1$. The observation $y$ obeys
$$y = m + N, \quad N \sim U[-2, 2]$$

(a) Obtain the decision rule for the minimum probability of error criterion and the minimal probability of error.

(b) For the cost matrix $C = \begin{bmatrix} 0 & 1\\ 100 & 0 \end{bmatrix}$, obtain the optimal Bayes decision rule and the error probability.

Solution:

(a)
$$f(y|1) = \begin{cases} \frac14, & -1 < y < 3,\\ 0, & \text{otherwise,} \end{cases} \qquad f(y|-1) = \begin{cases} \frac14, & -3 < y < 1,\\ 0, & \text{otherwise.} \end{cases}$$
In the overlap region $-1 < y < 1$ the MAP rule compares $p(1)f(y|1) = 0.9/4$ with $p(-1)f(y|-1) = 0.1/4$ and decides $\hat m = 1$; hence
$$\hat m = \begin{cases} -1, & -3 < y < -1,\\ 1, & -1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(1)\cdot 0 + p(-1)\cdot\int_{-1}^1\frac14 dy = 0.05$$

(b) The decision rule:
$$\frac{f(y|1)}{f(y|-1)} \underset{-1}{\overset{1}{\gtrless}} \frac{p(-1)}{p(1)}\cdot\frac{100}{1} \;\Rightarrow\; p(1)f(y|1) \underset{-1}{\overset{1}{\gtrless}} 100\,p(-1)f(y|-1)$$
In the overlap region $-1 < y < 1$ we now compare $0.9/4$ with $100\cdot 0.1/4$ and decide $\hat m = -1$; hence
$$\hat m = \begin{cases} -1, & -3 < y < 1,\\ 1, & 1 < y < 3. \end{cases}$$
The probability of error:
$$p(e) = p(-1)\cdot 0 + p(1)\cdot\int_{-1}^1\frac14 dy = 0.45$$


4 Vector Communication Channels

Remark 2. Vectors are denoted with boldface letters, e.g. x, y.

1. General Gaussian vector channel. Consider the Gaussian vector channel with the sources $p(m_0) = q$, $p(m_1) = 1-q$, $s_0 = [1, 1]^T$, $s_1 = [-1, -1]^T$. For sending $m_0$ the transmitter sends $s_0$ and for sending $m_1$ the transmitter sends $s_1$. The observation $\mathbf r$ obeys
$$\mathbf r = s_i + \mathbf n, \quad \mathbf n = [n_1, n_2], \quad \mathbf n \sim N(0, \Lambda_n), \quad \Lambda_n = \begin{bmatrix} \sigma_1^2 & 0\\ 0 & \sigma_2^2 \end{bmatrix}$$
The noise vector $\mathbf n$ and the messages $m_i$ are independent.

(a) Obtain the optimal decision rule using the MAP criterion, and examine it for the following cases:

i. $q = \frac12$, $\sigma_1 = \sigma_2$.

ii. $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$.

iii. $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$.

(b) Derive the error probability for the obtained decision rule.

Solution:

(a) The conditional probability distribution function is $\mathbf R|s_i \sim N(s_i, \Lambda_n)$:
$$f(\mathbf r|s_i) = \frac{1}{\sqrt{(2\pi)^2\det\Lambda_n}}\exp\left\{-\frac12(\mathbf r - s_i)^T\Lambda_n^{-1}(\mathbf r - s_i)\right\}$$
The MAP optimal decision rule:
$$p(m_0)f(\mathbf r|s_0) \underset{m_1}{\overset{m_0}{\gtrless}} p(m_1)f(\mathbf r|s_1)$$
$$q\exp\left\{-\frac12(\mathbf r - s_0)^T\Lambda_n^{-1}(\mathbf r - s_0)\right\} \underset{m_1}{\overset{m_0}{\gtrless}} (1-q)\exp\left\{-\frac12(\mathbf r - s_1)^T\Lambda_n^{-1}(\mathbf r - s_1)\right\}$$
$$(\mathbf r - s_1)^T\Lambda_n^{-1}(\mathbf r - s_1) - (\mathbf r - s_0)^T\Lambda_n^{-1}(\mathbf r - s_0) \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1-q}{q}$$
Assigning $\mathbf r^T = [x, y]$:
$$\frac{(x+1)^2}{\sigma_1^2} + \frac{(y+1)^2}{\sigma_2^2} - \frac{(x-1)^2}{\sigma_1^2} - \frac{(y-1)^2}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} 2\ln\frac{1-q}{q} \;\Rightarrow\; \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2} \underset{m_1}{\overset{m_0}{\gtrless}} \frac12\ln\frac{1-q}{q}$$


i. For the case $q = \frac12$, $\sigma_1 = \sigma_2$ the decision rule becomes
$$x + y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
ii. For the case $q = \frac12$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} 0$$
iii. For the case $q = \frac13$, $\sigma_1^2 = 2\sigma_2^2$ the decision rule becomes (multiplying by $\sigma_1^2 = 2\sigma_2^2$)
$$x + 2y \underset{m_1}{\overset{m_0}{\gtrless}} \sigma_2^2\ln 2$$

(b) Denote $K \triangleq \frac12\ln\frac{1-q}{q}$, define $z = \frac{x}{\sigma_1^2} + \frac{y}{\sigma_2^2}$, and let $v \triangleq \frac{\sigma_1^2 + \sigma_2^2}{\sigma_1^2\sigma_2^2}$. The conditional distribution of $Z$ is
$$Z|s_i \sim N\left((-1)^i v,\; v\right), \quad i = 0, 1$$
The decision rule in terms of $z$ and $K$:
$$z \underset{m_1}{\overset{m_0}{\gtrless}} K$$
The error probability:
$$p(e) = p(m_0)\Pr\{z < K|m_0\} + p(m_1)\Pr\{z > K|m_1\}$$
Assigning the conditional distribution:
$$\Pr\{z < K|m_0\} = 1 - Q\left(\frac{K - v}{\sqrt v}\right), \qquad \Pr\{z > K|m_1\} = Q\left(\frac{K + v}{\sqrt v}\right)$$
For the case $q = \frac12$, $\sigma_1 = \sigma_2$ the error probability equals $Q\left(\sqrt{\frac{2}{\sigma_1^2}}\right)$.

2. Non-Gaussian additive vector channel. Consider a binary hypothesis testing problem in which the sources $s_0 = [1, 2, 3]$, $s_1 = [1, -1, -3]$ are equiprobable. The observation $\mathbf r$ obeys
$$\mathbf r = s_i + \mathbf n, \quad \mathbf n = [n_0, n_1, n_2]$$
where the elements of $\mathbf n$ are i.i.d. with the probability density function
$$f_{N_k}(n_k) = \frac12 e^{-|n_k|}$$


Obtain the optimal decision rule using the MAP criterion.

Solution:

The optimal decision rule using the MAP criterion:
$$p(s_0)f(\mathbf r|s_0) \underset{1}{\overset{0}{\gtrless}} p(s_1)f(\mathbf r|s_1) \;\Rightarrow\; f(\mathbf r|s_0) \underset{1}{\overset{0}{\gtrless}} f(\mathbf r|s_1)$$
The conditional probability distribution function:
$$f(\mathbf r|s_i) = f_{\mathbf N}(\mathbf r - s_i) = \prod_{k=0}^{2}f_N(n_k = r_k - s_{i,k}) = \frac12 e^{-|r_0 - s_{i,0}|}\cdot\frac12 e^{-|r_1 - s_{i,1}|}\cdot\frac12 e^{-|r_2 - s_{i,2}|} = \frac18 e^{-[|r_0 - s_{i,0}| + |r_1 - s_{i,1}| + |r_2 - s_{i,2}|]}$$
An assignment of the $s_i$ elements yields
$$|r_0 - 1| + |r_1 - 2| + |r_2 - 3| \underset{0}{\overset{1}{\gtrless}} |r_0 - 1| + |r_1 + 1| + |r_2 + 3|$$
$$|r_1 - 2| + |r_2 - 3| \underset{0}{\overset{1}{\gtrless}} |r_1 + 1| + |r_2 + 3|$$
Note that this decision rule compares absolute-value ($\ell_1$) distances to the two signal points, unlike the Gaussian vector channel, in which the Euclidean distance is compared.

3. Gaussian two-channel. Consider the following two-channel problem, in which the observations under the two hypotheses are
$$H_0: \begin{bmatrix} Z_1\\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1\\ V_2 \end{bmatrix} + \begin{bmatrix} -1\\ -\frac12 \end{bmatrix}, \qquad H_1: \begin{bmatrix} Z_1\\ Z_2 \end{bmatrix} = \begin{bmatrix} 1 & 0\\ 0 & \frac12 \end{bmatrix}\begin{bmatrix} V_1\\ V_2 \end{bmatrix} + \begin{bmatrix} 1\\ \frac12 \end{bmatrix}$$
where $V_1$ and $V_2$ are independent, zero-mean Gaussian variables with variance $\sigma^2$.

(a) Find the minimum probability of error receiver if both hypotheses are equally likely. Simplify the receiver structure.

(b) Find the minimum probability of error.

Solution:

Let $\mathbf Z = \begin{bmatrix} Z_1\\ Z_2 \end{bmatrix}$. The conditional distribution of $\mathbf Z$ is
$$\mathbf Z|H_0 \sim N(\mu_0, \Lambda), \quad \mathbf Z|H_1 \sim N(\mu_1, \Lambda), \qquad \mu_0 = \begin{bmatrix} -1\\ -\frac12 \end{bmatrix}, \; \mu_1 = \begin{bmatrix} 1\\ \frac12 \end{bmatrix}, \; \Lambda = \sigma^2\begin{bmatrix} 1 & 0\\ 0 & \frac14 \end{bmatrix}$$


(a) The decision rule:
$$\frac{f(\mathbf z|H_1)}{f(\mathbf z|H_0)} \underset{H_0}{\overset{H_1}{\gtrless}} \frac{p(H_0)}{p(H_1)} = 1 \;\Rightarrow\; \log f(\mathbf z|H_1) - \log f(\mathbf z|H_0) \underset{H_0}{\overset{H_1}{\gtrless}} 0$$
$$\frac{2}{\sigma^2}(z_1 + 2z_2) \underset{H_0}{\overset{H_1}{\gtrless}} 0 \;\Rightarrow\; z_1 + 2z_2 \underset{H_0}{\overset{H_1}{\gtrless}} 0$$

(b) Define $X = Z_1 + 2Z_2$. Since $V_1, V_2$ are independent, $Z_1, Z_2$ are independent as well. Being a linear combination of independent Gaussians, $X$ is a Gaussian R.V. with parameters
$$E\{X|H_0\} = -2, \quad E\{X|H_1\} = 2, \quad \mathrm{Var}\{X|H_0\} = \mathrm{Var}\{X|H_1\} = 2\sigma^2$$
The probabilities of the error events are
$$P_{FA} = \Pr\{\hat H = H_1|H_0\} = \int_0^{\infty}f(x|H_0)dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right), \qquad P_M = \Pr\{\hat H = H_0|H_1\} = \int_{-\infty}^0 f(x|H_1)dx = Q\left(\frac{2}{\sqrt{2\sigma^2}}\right)$$
so the minimum probability of error is $p(e) = \frac12(P_{FA} + P_M) = Q\left(\frac{\sqrt2}{\sigma}\right)$.


5 Signal Space Representation

1. [1, Problem 4.9]. Consider a set of $M$ orthogonal signal waveforms $s_m(t)$, $1 \le m \le M$, $0 \le t \le T$ (i.e., $\langle s_j(t), s_k(t)\rangle = 0$ for all $j \ne k$), all of which have the same energy $\varepsilon = \int_{-\infty}^{\infty}|s_m(t)|^2dt$. Define a new set of waveforms as
$$s'_m(t) = s_m(t) - \frac1M\sum_{k=1}^{M}s_k(t), \quad 1 \le m \le M, \; 0 \le t \le T$$
Show that the $M$ signal waveforms $\{s'_m(t)\}$ have equal energy, given by
$$\varepsilon' = \frac{(M-1)\varepsilon}{M}$$
and are equally correlated, with correlation coefficient
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_0^T s'_m(t)s'_n(t)dt = -\frac{1}{M-1}$$

Solution:

The energy of the signal waveform $s'_m(t)$ is:
$$\varepsilon' = \int_{-\infty}^{\infty}|s'_m(t)|^2dt = \int_{-\infty}^{\infty}\left|s_m(t) - \frac1M\sum_{k=1}^M s_k(t)\right|^2dt$$
$$= \int_{-\infty}^{\infty}s_m^2(t)dt + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\int_{-\infty}^{\infty}s_k(t)s_l(t)dt - \frac2M\sum_{k=1}^M\int_{-\infty}^{\infty}s_m(t)s_k(t)dt$$
$$= \varepsilon + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\varepsilon\delta_{kl} - \frac2M\varepsilon = \varepsilon + \frac1M\varepsilon - \frac2M\varepsilon = \frac{M-1}{M}\varepsilon$$
The correlation coefficient is given by ($m \ne n$):
$$\rho_{mn} = \frac{1}{\varepsilon'}\int_{-\infty}^{\infty}s'_m(t)s'_n(t)dt = \frac{1}{\varepsilon'}\int_{-\infty}^{\infty}\left(s_m(t) - \frac1M\sum_{k=1}^M s_k(t)\right)\left(s_n(t) - \frac1M\sum_{l=1}^M s_l(t)\right)dt$$
$$= \frac{1}{\varepsilon'}\left(\int_{-\infty}^{\infty}s_m(t)s_n(t)dt + \frac{1}{M^2}\sum_{k=1}^M\sum_{l=1}^M\int_{-\infty}^{\infty}s_k(t)s_l(t)dt - \frac2M\sum_{k=1}^M\int_{-\infty}^{\infty}s_m(t)s_k(t)dt\right)$$
$$= \frac{\frac{1}{M^2}M\varepsilon - \frac2M\varepsilon}{\frac{M-1}{M}\varepsilon} = -\frac{1}{M-1}$$

2. [1, Problem 4.10].


Consider the following three waveforms
$$f_1(t) = \begin{cases} \frac12, & 0 \le t < 2,\\ -\frac12, & 2 \le t < 4,\\ 0, & \text{otherwise,} \end{cases} \quad f_2(t) = \begin{cases} \frac12, & 0 \le t < 4,\\ 0, & \text{otherwise,} \end{cases} \quad f_3(t) = \begin{cases} \frac12, & 0 \le t < 1 \text{ or } 2 \le t < 3,\\ -\frac12, & 1 \le t < 2 \text{ or } 3 \le t < 4,\\ 0, & \text{otherwise.} \end{cases}$$

(a) Show that these waveforms are orthonormal.

(b) Check whether you can express $x(t)$ as a weighted linear combination of $f_n(t)$, $n = 1, 2, 3$, if
$$x(t) = \begin{cases} -1, & 0 < t < 1,\\ 1, & 1 \le t < 3,\\ -1, & 3 \le t < 4,\\ 0, & \text{otherwise,} \end{cases}$$
and if so determine the weighting coefficients; otherwise explain why not.

Solution:

(a) To show that the waveforms $f_n(t)$, $n = 1, 2, 3$ are orthogonal we have to prove that
$$\int_{-\infty}^{\infty}f_n(t)f_m(t)dt = 0, \quad m \ne n.$$
For $n = 1$, $m = 2$:
$$\int_{-\infty}^{\infty}f_1(t)f_2(t)dt = \int_0^2 f_1(t)f_2(t)dt + \int_2^4 f_1(t)f_2(t)dt = \frac14\int_0^2 dt - \frac14\int_2^4 dt = 0$$
For $n = 1$, $m = 3$:
$$\int_{-\infty}^{\infty}f_1(t)f_3(t)dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt - \frac14\int_2^3 dt + \frac14\int_3^4 dt = 0$$
For $n = 2$, $m = 3$:
$$\int_{-\infty}^{\infty}f_2(t)f_3(t)dt = \frac14\int_0^1 dt - \frac14\int_1^2 dt + \frac14\int_2^3 dt - \frac14\int_3^4 dt = 0$$


Thus, the signals $f_n(t)$ are orthogonal. It is also straightforward to verify that the signals have unit energy:
$$\int_{-\infty}^{\infty}|f_n(t)|^2dt = 1, \quad n = 1, 2, 3.$$
Hence, they are orthonormal.

(b) We first determine the weighting coefficients
$$x_n = \int_{-\infty}^{\infty}x(t)f_n(t)dt, \quad n = 1, 2, 3$$
$$x_1 = \int_0^4 x(t)f_1(t)dt = -\frac12\int_0^1 dt + \frac12\int_1^2 dt - \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
$$x_2 = \int_0^4 x(t)f_2(t)dt = \frac12\int_0^4 x(t)dt = 0$$
$$x_3 = \int_0^4 x(t)f_3(t)dt = -\frac12\int_0^1 dt - \frac12\int_1^2 dt + \frac12\int_2^3 dt + \frac12\int_3^4 dt = 0$$
As observed, $x(t)$ is orthogonal to the signal waveforms $f_n(t)$, $n = 1, 2, 3$, and thus it cannot be represented as a linear combination of these functions.

3. [1, Problem 4.11]. Consider the following four waveforms
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1,\\ -1, & 1 \le t < 4,\\ 0, & \text{otherwise,} \end{cases} \quad s_2(t) = \begin{cases} -2, & 0 \le t < 1,\\ 1, & 1 \le t < 3,\\ 0, & \text{otherwise,} \end{cases}$$
$$s_3(t) = \begin{cases} 1, & 0 \le t < 1 \text{ or } 2 \le t < 3,\\ -1, & 1 \le t < 2 \text{ or } 3 \le t < 4,\\ 0, & \text{otherwise,} \end{cases} \quad s_4(t) = \begin{cases} 1, & 0 \le t < 1,\\ -2, & 1 \le t < 3,\\ 2, & 3 \le t < 4,\\ 0, & \text{otherwise.} \end{cases}$$

(a) Determine the dimensionality of the waveforms and a set of basis functions.

(b) Use the basis functions to present the four waveforms by vectors s1, s2, s3 and s4.

(c) Determine the minimum distance between any pair of vectors.

Solution:

(a) As an orthonormal set of basis functions we consider the set
$$f_1(t) = \begin{cases} 1, & 0 \le t < 1,\\ 0, & \text{otherwise,} \end{cases} \quad f_2(t) = \begin{cases} 1, & 1 \le t < 2,\\ 0, & \text{otherwise,} \end{cases} \quad f_3(t) = \begin{cases} 1, & 2 \le t < 3,\\ 0, & \text{otherwise,} \end{cases} \quad f_4(t) = \begin{cases} 1, & 3 \le t < 4,\\ 0, & \text{otherwise.} \end{cases}$$
In matrix notation, the four waveforms can be represented as
$$\begin{bmatrix} s_1(t)\\ s_2(t)\\ s_3(t)\\ s_4(t) \end{bmatrix} = \begin{bmatrix} 2 & -1 & -1 & -1\\ -2 & 1 & 1 & 0\\ 1 & -1 & 1 & -1\\ 1 & -2 & -2 & 2 \end{bmatrix}\begin{bmatrix} f_1(t)\\ f_2(t)\\ f_3(t)\\ f_4(t) \end{bmatrix}$$
Note that the rank of the transformation matrix is 4 and therefore the dimensionality of the waveforms is 4.

(b) The representation vectors are
$$s_1 = \begin{bmatrix} 2 & -1 & -1 & -1 \end{bmatrix}, \quad s_2 = \begin{bmatrix} -2 & 1 & 1 & 0 \end{bmatrix}, \quad s_3 = \begin{bmatrix} 1 & -1 & 1 & -1 \end{bmatrix}, \quad s_4 = \begin{bmatrix} 1 & -2 & -2 & 2 \end{bmatrix}$$

(c) The distance between the first and the second vector is:
$$d_{1,2} = \sqrt{|s_1 - s_2|^2} = \sqrt{\left|\begin{bmatrix} 4 & -2 & -2 & -1 \end{bmatrix}\right|^2} = \sqrt{25}$$
Similarly we find that:
$$d_{1,3} = \sqrt{\left|\begin{bmatrix} 1 & 0 & -2 & 0 \end{bmatrix}\right|^2} = \sqrt5, \quad d_{1,4} = \sqrt{\left|\begin{bmatrix} 1 & 1 & 1 & -3 \end{bmatrix}\right|^2} = \sqrt{12}, \quad d_{2,3} = \sqrt{\left|\begin{bmatrix} -3 & 2 & 0 & 1 \end{bmatrix}\right|^2} = \sqrt{14}$$
$$d_{2,4} = \sqrt{\left|\begin{bmatrix} -3 & 3 & 3 & -2 \end{bmatrix}\right|^2} = \sqrt{31}, \quad d_{3,4} = \sqrt{\left|\begin{bmatrix} 0 & 1 & 3 & -3 \end{bmatrix}\right|^2} = \sqrt{19}$$
Thus, the minimum distance between any pair of vectors is $d_{min} = \sqrt5$.

4. [2, Problem 5.4].

(a) Using the Gram-Schmidt orthogonalization procedure, find a set of orthonormal basis functions to represent the following signals
$$s_1(t) = \begin{cases} 2, & 0 \le t < 1,\\ 0, & \text{otherwise,} \end{cases} \quad s_2(t) = \begin{cases} -4, & 0 \le t < 2,\\ 0, & \text{otherwise,} \end{cases} \quad s_3(t) = \begin{cases} 3, & 0 \le t < 3,\\ 0, & \text{otherwise.} \end{cases}$$

(b) Express each of the signals si(t), i = 1, 2, 3 in terms of the basis functions found in (4a).

Solution:

(a) The energy of $s_1(t)$ and the first basis function are
$$E_1 = \int_0^1|s_1(t)|^2dt = \int_0^1 2^2dt = 4 \;\Rightarrow\; \phi_1(t) = \frac{s_1(t)}{\sqrt{E_1}} = \begin{cases} 1, & 0 \le t < 1,\\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{21} = \int_0^3 s_2(t)\phi_1(t)dt = \int_0^1 (-4)\cdot 1\,dt = -4$$
$$g_2(t) = s_2(t) - s_{21}\phi_1(t) = \begin{cases} -4, & 1 \le t < 2,\\ 0, & \text{otherwise.} \end{cases}$$
Hence, the second basis function is
$$\phi_2(t) = \frac{g_2(t)}{\sqrt{\int_0^3 g_2^2(t)dt}} = \begin{cases} -1, & 1 \le t < 2,\\ 0, & \text{otherwise.} \end{cases}$$
Define
$$s_{31} = \int_0^3 s_3(t)\phi_1(t)dt = \int_0^1 3\cdot 1\,dt = 3, \qquad s_{32} = \int_0^3 s_3(t)\phi_2(t)dt = \int_1^2 3\cdot(-1)dt = -3$$
$$g_3(t) = s_3(t) - s_{31}\phi_1(t) - s_{32}\phi_2(t) = \begin{cases} 3, & 2 \le t < 3,\\ 0, & \text{otherwise.} \end{cases}$$
Hence, the third basis function is
$$\phi_3(t) = \frac{g_3(t)}{\sqrt{\int_0^3 g_3^2(t)dt}} = \begin{cases} 1, & 2 \le t < 3,\\ 0, & \text{otherwise.} \end{cases}$$

(b)
$$s_1(t) = 2\phi_1(t), \qquad s_2(t) = -4\phi_1(t) + 4\phi_2(t), \qquad s_3(t) = 3\phi_1(t) - 3\phi_2(t) + 3\phi_3(t)$$
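The whole procedure discretizes cleanly — a sketch of Gram-Schmidt on sampled versions of $s_1, s_2, s_3$, with inner products approximated by Riemann sums (the helper function is ours):

```python
import numpy as np

dt = 0.01
t = np.arange(0.0, 3.0, dt)
s1 = np.where(t < 1, 2.0, 0.0)
s2 = np.where(t < 2, -4.0, 0.0)
s3 = np.full_like(t, 3.0)

def gram_schmidt(signals, dt):
    basis = []
    for s in signals:
        g = s.copy()
        for phi in basis:
            g -= np.sum(s * phi) * dt * phi       # remove projection on phi
        basis.append(g / np.sqrt(np.sum(g * g) * dt))
    return basis

phis = gram_schmidt([s1, s2, s3], dt)
for s in (s1, s2, s3):                            # expansion coefficients s_ij
    print([round(float(np.sum(s * phi) * dt), 2) for phi in phis])
# expect rows [2, 0, 0], [-4, 4, 0], [3, -3, 3]
```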

5. Optimum receiver. Suppose one of $M$ equiprobable signals $x_i(t)$, $i = 0, \ldots, M-1$ is to be transmitted during a period of time $T$ over an AWGN channel. Moreover, each signal is identical to all others in the subinterval $[t_1, t_2]$, where $0 < t_1 < t_2 < T$.

(a) Show that the optimum receiver may ignore the subinterval $[t_1, t_2]$.

(b) Equivalently, show that if $x_0, \ldots, x_{M-1}$ all have the same projection in one dimension (i.e., for the vectors $x_i^T = [x_{i1}\; x_{i2}\; \ldots\; x_{iN}]$ of length $N$ there exists $k$ such that $x_{ik} = x_{jk}$ for all $i, j \in \{0, \ldots, M-1\}$), then this dimension may be ignored.

(c) Does this result necessarily hold true if the noise is Gaussian but not white? Explain.

Solution:

(a) The data signals $x_i(t)$ being equiprobable, the optimum decision rule is the Maximum Likelihood (ML) rule, given (in vector form) by $\min_i\|y - x_i\|^2$. From the invariance of the inner product, the ML rule is equivalent to
$$\min_i\int_0^T|y(t) - x_i(t)|^2dt$$


The integral is then written as a sum of three integrals:
$$\int_0^T|y(t) - x_i(t)|^2dt = \int_0^{t_1}|y(t) - x_i(t)|^2dt + \int_{t_1}^{t_2}|y(t) - x_i(t)|^2dt + \int_{t_2}^T|y(t) - x_i(t)|^2dt$$
Since the second integral, over the interval $[t_1, t_2]$, is constant as a function of $i$, the optimum decision rule reduces to
$$\min_i\left\{\int_0^{t_1}|y(t) - x_i(t)|^2dt + \int_{t_2}^T|y(t) - x_i(t)|^2dt\right\}$$
and therefore the optimum receiver may ignore the interval $[t_1, t_2]$.

(b) In an appropriate orthonormal basis of dimension $N \le M$, the vectors $x_i$ and $y$ are given by
$$x_i^T = \begin{bmatrix} x_{i1} & x_{i2} & \ldots & x_{iN} \end{bmatrix}, \qquad y^T = \begin{bmatrix} y_1 & y_2 & \ldots & y_N \end{bmatrix}$$
Assume that $x_{im} = x_{1m}$ for all $i$; the optimum decision rule becomes
$$\min_i\sum_{k=1}^N|y_k - x_{ik}|^2 \;\Leftrightarrow\; \min_i\sum_{k=1, k\ne m}^N|y_k - x_{ik}|^2 + |y_m - x_{im}|^2$$
Since $|y_m - x_{im}|^2$ is constant for all $i$, the optimum decision rule becomes
$$\min_i\sum_{k=1, k\ne m}^N|y_k - x_{ik}|^2$$
Therefore, the projection $x_m$ may be ignored by the optimum receiver.

(c) The result does not hold true if the noise is colored Gaussian noise. This is due to the fact that the noise along the common component is correlated with the noise along the other components, and hence that component might not be irrelevant; in such a case, all components turn out to be relevant. Equivalently, by duality, the same conclusion holds for the common subinterval in the time domain.


6 Optimal Receiver for the Waveform Channel

1. [1, Problem 5.4]. A binary digital communication system employs the signals
$$s_0(t) = \begin{cases} 0, & 0 \le t < T,\\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} A, & 0 \le t < T,\\ 0, & \text{otherwise,} \end{cases}$$
for transmitting the information. This is called on-off signaling. The demodulator cross-correlates the received signal $r(t)$ with $s_i(t)$, $i = 0, 1$ and samples the output of the correlator at $t = T$.

(a) Determine the optimum detector for an AWGN channel and the optimum threshold, assuming that the signals are equally probable.

(b) Determine the probability of error as a function of the SNR. How does on-off signaling compare with antipodal signaling?

Solution:

(a) The correlation-type demodulator employs a filter:
$$f(t) = \begin{cases} \frac{1}{\sqrt T}, & 0 \le t < T,\\ 0, & \text{otherwise.} \end{cases}$$
Hence, the sampled outputs of the cross-correlators are:
$$r = s_i + n, \quad i = 0, 1$$
where $s_0 = 0$, $s_1 = A\sqrt T$, and the noise term $n$ is a zero-mean Gaussian random variable with variance $\sigma_n^2 = \frac{N_0}{2}$. The probability density function for the sampled output is:
$$f(r|s_0) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}, \qquad f(r|s_1) = \frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt T)^2}{N_0}}$$
The minimum-error decision rule is:
$$\frac{f(r|s_1)}{f(r|s_0)} \underset{s_0}{\overset{s_1}{\gtrless}} 1 \;\Rightarrow\; r \underset{s_0}{\overset{s_1}{\gtrless}} \frac12 A\sqrt T$$

(b) The average probability of error is:
$$p(e) = \frac12\int_{\frac12 A\sqrt T}^{\infty}f(r|s_0)dr + \frac12\int_{-\infty}^{\frac12 A\sqrt T}f(r|s_1)dr$$
$$= \frac12\int_{\frac12 A\sqrt T}^{\infty}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{r^2}{N_0}}dr + \frac12\int_{-\infty}^{\frac12 A\sqrt T}\frac{1}{\sqrt{\pi N_0}}e^{-\frac{(r - A\sqrt T)^2}{N_0}}dr$$
$$= \frac12\int_{\frac12\sqrt{\frac{2}{N_0}}A\sqrt T}^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx + \frac12\int_{-\infty}^{-\frac12\sqrt{\frac{2}{N_0}}A\sqrt T}\frac{1}{\sqrt{2\pi}}e^{-\frac{x^2}{2}}dx = Q\left(\frac12\sqrt{\frac{2}{N_0}}A\sqrt T\right) = Q\left(\sqrt{\mathrm{SNR}}\right)$$
where
$$\mathrm{SNR} = \frac{A^2T}{2N_0}$$

Thus, on-off signaling requires a factor of two more energy to achieve the same probability of error as antipodal signaling.

2. [2, Problem 5.11]. Consider the optimal detection of the sinusoidal signal
$$s(t) = \sin\left(\frac{8\pi t}{T}\right), \quad 0 \le t \le T$$
in additive white Gaussian noise.

(a) Determine the correlator output (at $t = T$) assuming a noiseless input.

(b) Determine the corresponding matched filter output, assuming that the filter includes a delay $T$ to make it causal.

(c) Hence show that these two outputs are the same at the time instant $t = T$.

Solution:

For the noiseless case, the received signal r(t) = s(t), 0 ≤ t ≤ T .

(a) The correlator output is:
$$y(T) = \int_0^T r(\tau)s(\tau)d\tau = \int_0^T s^2(\tau)d\tau = \int_0^T\sin^2\left(\frac{8\pi\tau}{T}\right)d\tau = \frac T2$$

(b) The matched filter is defined by the impulse response $h(t) = s(T - t)$. Its output is
$$y(t) = \int_{-\infty}^{\infty}r(\lambda)h(t - \lambda)d\lambda = \int_{-\infty}^{\infty}s(\lambda)s(T - t + \lambda)d\lambda$$
For $0 \le t \le T$ the two pulses overlap for $0 \le \lambda \le t$, so
$$y(t) = \int_0^t\sin\left(\frac{8\pi\lambda}{T}\right)\sin\left(\frac{8\pi(T - t + \lambda)}{T}\right)d\lambda = \frac12\int_0^t\cos\left(\frac{8\pi(T-t)}{T}\right)d\lambda - \frac12\int_0^t\cos\left(\frac{8\pi(T - t + 2\lambda)}{T}\right)d\lambda$$
$$= \frac t2\cos\left(\frac{8\pi t}{T}\right) - \frac{T}{16\pi}\sin\left(\frac{8\pi t}{T}\right).$$

(c) When the matched filter output is sampled at $t = T$, we get
$$y(T) = \frac T2$$
which is exactly the same as the correlator output determined in item (2a).

3. SNR Maximization with a Matched Filter. Prove the following theorem: For the real system shown in Figure 1, the filter $h(t)$ that maximizes the signal-to-noise ratio at sample time $T_s$ is given by the matched filter $h(t) = x(T_s - t)$.


Figure 1: SNR maximization by matched filter ($x(t)$ plus noise $n(t)$ enters the filter $h(t)$, which is sampled at $t = T_s$ to give $y(T_s)$).

Solution:

Compute the SNR at sample time $t = T_s$ as follows:
$$\text{Signal Energy} = \left[x(t)*h(t)\big|_{t=T_s}\right]^2 = \left[\int_{-\infty}^{\infty}x(t)h(T_s - t)dt\right]^2 = \left[\langle x(t), h(T_s - t)\rangle\right]^2$$
The sampled noise at the matched filter output has energy, or mean-square,
$$\text{Noise Energy} = E\left\{\int_{-\infty}^{\infty}n(t)h(T_s - t)dt\int_{-\infty}^{\infty}n(s)h(T_s - s)ds\right\} = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty}\frac{N_0}{2}\delta(t - s)h(T_s - t)h(T_s - s)dtds = \frac{N_0}{2}\int_{-\infty}^{\infty}h^2(T_s - t)dt = \frac{N_0}{2}\|h\|^2$$
The signal-to-noise ratio, defined as the ratio of the signal power to the noise power, equals
$$\mathrm{SNR} = \frac{2}{N_0}\,\frac{[\langle x(t), h(T_s - t)\rangle]^2}{\|h\|^2}$$
The Cauchy-Schwarz inequality states that
$$[\langle x(t), h(T_s - t)\rangle]^2 \le \|x\|^2\|h\|^2$$
with equality if and only if $x(t) = kh(T_s - t)$, where $k$ is some arbitrary constant. Thus, by inspection, the SNR is maximized over all choices of $h(t)$ when $h(t) = x(T_s - t)$. The filter $h(t)$ is matched to $x(t)$, and the corresponding maximum SNR (for any $k$) is
$$\mathrm{SNR}_{max} = \frac{2}{N_0}\|x\|^2$$

4. The optimal receiver. Consider the signals $s_0(t)$, $s_1(t)$ with the respective probabilities $p_0$, $p_1$:
$$s_0(t) = \begin{cases} \sqrt{\frac ET}, & 0 \le t < aT,\\ -\sqrt{\frac ET}, & aT \le t < T,\\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\left(\frac{2\pi t}{T}\right), & 0 \le t < T,\\ 0, & \text{otherwise.} \end{cases}$$


The observation, $r(t)$, obeys
$$r(t) = s_i(t) + n(t), \quad i = 0, 1$$
$$E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t - \tau), \qquad n(t) \sim N\left(0, \frac{N_0}{2}\delta(t - \tau)\right).$$

(a) Find the optimal receiver for the above two signals; write the solution in terms of $s_0(t)$ and $s_1(t)$.

(b) Find the error probability of the optimal receiver for equiprobable signals.

(c) Find the parameter a, which minimizes the error probability.

Solution:

(a) We will use a type II receiver, which uses filters matched to the signals $s_i(t)$, $i = 0, 1$. The optimal receiver is depicted in Figure 2.

Figure 2: Optimal receiver (type II): $r(t)$ feeds two matched filters $h_0(t)$ and $h_1(t)$, each sampled at $t = T$; the bias $\frac{N_0}{2}\ln p_i - \frac E2$ is added to branch $i$, producing $y_0$ and $y_1$, and the maximum is selected.

where h0(t) = s0(T − t), h1(t) = s1(T − t).

The Max block in Figure 2 can be implemented as follows:
$$y = y_0 - y_1 \underset{s_1(t)}{\overset{s_0(t)}{\gtrless}} 0$$

The R.V. $y$ obeys
$$y = [h_0(t)*r(t)]\big|_{t=T} + \frac{N_0}{2}\ln p_0 - \frac E2 - [h_1(t)*r(t)]\big|_{t=T} - \frac{N_0}{2}\ln p_1 + \frac E2 = \frac{N_0}{2}\ln\frac{p_0}{p_1} + [(h_0(t) - h_1(t))*r(t)]\big|_{t=T}$$
Hence the optimal receiver can be implemented using one convolution operation instead of two, as depicted in Figure 3.

(b) For an equiprobable binary constellation in an AWGN channel, the probability of error is given by
$$p(e) = Q\left(\frac{d/2}{\sigma}\right), \qquad d = \|s_0 - s_1\|, \qquad d^2 = \|s_0 - s_1\|^2 = \|s_0\|^2 + \|s_1\|^2 - 2\langle s_0, s_1\rangle$$


Figure 3: Simplified optimal receiver: $r(t)$ passes through the single filter $h_0(t) - h_1(t)$, sampled at $t = T$; the bias $\frac{N_0}{2}\ln\frac{p_0}{p_1}$ is added and the result feeds the decision rule.

where σ2 is the noise variance.

The correlation coefficient between the two signals, $\rho$, equals
$$\rho = \frac{\langle s_0, s_1\rangle}{\|s_0\|\|s_1\|} = \frac{\langle s_0, s_1\rangle}{E}$$
and for equal-energy signals
$$d^2 = 2E - 2\langle s_0, s_1\rangle \;\Rightarrow\; d = \sqrt{2E(1 - \rho)} \;\Rightarrow\; p(e) = Q\left(\sqrt{\frac{E(1-\rho)}{N_0}}\right)$$

〈s0, s1〉 =

∫ T

0

s0(t)s1(t)dt

=

∫ aT

0

√E

T

√2E

Tcos

2πt

Tdt−

∫ T

aT

√E

E

√2E

Ecos

2πt

Tdt

=√

2E

2πsin 2πa+

√2E

2πsin 2πa

⇒ ρ =

√2

πsin 2πa

⇒ p(e) = Q

(√E(1−

√2π sin 2πa)

N0

)In order to minimize the probability of error, we will maximize the Q function argument:

sin 2πa = −1

⇒ a =3

4


7 The Probability of Error

1. [1, Problem 5.10]. A ternary communication system transmits one of three signals, $s(t)$, $0$, or $-s(t)$, every $T$ seconds. The received signal is either $r(t) = s(t) + z(t)$, $r(t) = z(t)$, or $r(t) = -s(t) + z(t)$, where $z(t)$ is white Gaussian noise with $E\{z(t)\} = 0$ and $\Phi_{zz}(\tau) = \frac12 E\{z(t)z^*(\tau)\} = N_0\delta(t - \tau)$. The optimum receiver computes the correlation metric
$$U = \mathrm{Re}\left\{\int_0^T r(t)s^*(t)dt\right\}$$
and compares $U$ with a threshold $A$ and a threshold $-A$. If $U > A$ the decision is made that $s(t)$ was sent. If $U < -A$, the decision is made in favor of $-s(t)$. If $-A \le U \le A$, the decision is made in favor of $0$.

(a) Determine the three conditional probabilities of error $p(e|s(t))$, $p(e|0)$ and $p(e|-s(t))$.

(b) Determine the average probability of error $p(e)$ as a function of the threshold $A$, assuming that the three symbols are equally probable a priori.

(c) Determine the value of $A$ that minimizes $p(e)$.

Solution:

(a) $U = \mathrm{Re}\left\{\int_0^T r(t)s^*(t)dt\right\}$, where
$$r(t) = \begin{cases} s(t) + z(t)\\ -s(t) + z(t)\\ z(t) \end{cases}$$
depending on which signal was sent. If we assume that $s(t)$ was sent:
$$U = \mathrm{Re}\left\{\int_0^T s(t)s^*(t)dt\right\} + \mathrm{Re}\left\{\int_0^T z(t)s^*(t)dt\right\} = 2E + N$$
where $E = \frac12\int_0^T s(t)s^*(t)dt$ is a constant, and $N = \mathrm{Re}\left\{\int_0^T z(t)s^*(t)dt\right\}$ is a Gaussian random variable with zero mean and variance $2EN_0$. Hence, given that $s(t)$ was sent, the probability of error is:
$$p_1(e) = \Pr\{N < A - 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
When $-s(t)$ is transmitted: $U = -2E + N$, and the corresponding conditional error probability is:
$$p_2(e) = \Pr\{N > -A + 2E\} = Q\left(\frac{2E - A}{\sqrt{2EN_0}}\right)$$
and finally, when $0$ is transmitted: $U = N$, and the corresponding error probability is:
$$p_3(e) = \Pr\{N > A \text{ or } N < -A\} = 2Q\left(\frac{A}{\sqrt{2EN_0}}\right)$$

p(e) =1

3[p1(e) + p2(e) + p3(e)] =

2

3

[Q

(2E −A√

2EN0

)+Q

(A√

2EN0

)]


(c) In order to minimize $p(e)$:
$$\frac{dp(e)}{dA} = 0 \;\Rightarrow\; A = E$$
where we differentiate $Q(x) = \int_x^{\infty}\frac{1}{\sqrt{2\pi}}e^{-\frac{t^2}{2}}dt$ with respect to $x$ using the Leibniz rule: $\frac{d}{dx}\left(\int_{f(x)}^{\infty}g(a)da\right) = -\frac{df}{dx}g(f(x))$.
Using this threshold:
$$p(e) = \frac43 Q\left(\sqrt{\frac{E}{2N_0}}\right)$$

2. [1, Problem 5.19]. Consider a signal detector with an input
$$r = \pm A + n, \quad A > 0$$
where $+A$ and $-A$ occur with equal probability and the noise variable $n$ is characterized by the Laplacian p.d.f.:
$$f(n) = \frac{1}{\sqrt2\sigma}e^{-\frac{\sqrt2|n|}{\sigma}}$$

(a) Determine the probability of error as a function of the parameters $A$ and $\sigma$.

(b) Determine the SNR required to achieve an error probability of $10^{-5}$. How does the SNR compare with the result for a Gaussian p.d.f.?

Solution:

(a) Let $\lambda = \frac{\sqrt2}{\sigma}$. The optimal receiver uses the criterion:
$$\frac{f(r|A)}{f(r|-A)} = e^{-\lambda[|r-A| - |r+A|]} \underset{-A}{\overset{A}{\gtrless}} 1 \;\Rightarrow\; r \underset{-A}{\overset{A}{\gtrless}} 0$$
The average probability of error is:
$$p(e) = \frac12\Pr\{\text{error}|A\} + \frac12\Pr\{\text{error}|-A\} = \frac12\int_{-\infty}^0 f(r|A)dr + \frac12\int_0^{\infty}f(r|-A)dr$$
$$= \frac12\int_{-\infty}^0\frac\lambda2 e^{-\lambda|r-A|}dr + \frac12\int_0^{\infty}\frac\lambda2 e^{-\lambda|r+A|}dr = \frac\lambda4\int_{-\infty}^{-A}e^{-\lambda|x|}dx + \frac\lambda4\int_A^{\infty}e^{-\lambda|x|}dx = \frac12 e^{-\lambda A} = \frac12 e^{-\frac{\sqrt2 A}{\sigma}}$$

(b) The variance of the noise is $\sigma^2$; hence the SNR is:
$$\mathrm{SNR} = \frac{A^2}{\sigma^2}$$


and the probability of error is given by:
$$p(e) = \frac12 e^{-\sqrt{2\,\mathrm{SNR}}}$$
For $p(e) = 10^{-5}$ we obtain:
$$\ln(2\cdot10^{-5}) = -\sqrt{2\,\mathrm{SNR}} \;\Rightarrow\; \mathrm{SNR} = 17.674\ \text{dB}$$
If the noise were Gaussian, the probability of error for antipodal signaling would be
$$p(e) = Q\left(\sqrt{\mathrm{SNR}}\right)$$
where SNR is the signal-to-noise ratio at the output of the matched filter. With $p(e) = 10^{-5}$ we find $\sqrt{\mathrm{SNR}} = 4.26$ and therefore SNR = 12.594 dB. Thus the required signal-to-noise ratio is 5 dB less when the additive noise is Gaussian.
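Both SNR figures can be reproduced in a few lines — a sketch; the Gaussian case uses scipy's normal quantile:

```python
import math
from scipy.stats import norm

pe = 1e-5
snr_laplace = math.log(2 * pe) ** 2 / 2        # from p(e) = 0.5 exp(-sqrt(2 SNR))
snr_gauss = norm.isf(pe) ** 2                  # from p(e) = Q(sqrt(SNR))

print(10 * math.log10(snr_laplace))            # ~17.67 dB
print(10 * math.log10(snr_gauss))              # ~12.59 dB
```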

3. [1, Problem 5.38]. The discrete sequence
$$r_k = \sqrt{E_b}\,c_k + n_k, \quad k = 1, 2, \ldots, n$$
represents the output sequence of samples from a demodulator, where $c_k = \pm1$ are elements of one of two possible code words, $C_1 = [1\ 1\ \ldots\ 1]$ and $C_2 = [1\ 1\ \ldots\ 1\ {-1}\ \ldots\ {-1}]$. The code word $C_2$ has $w$ elements that are $+1$ and $n - w$ elements that are $-1$, where $w$ is a positive integer. The noise sequence $\{n_k\}$ is white Gaussian with variance $\sigma^2$.

(a) What is the optimum ML detector for the two possible transmitted signals?

(b) Determine the probability of error as a function of the parameters $\sigma^2$, $E_b$, $w$.

(c) What is the value of $w$ that minimizes the error probability?

Solution:

(a) The optimal ML detector selects the sequence $C_i$ that minimizes the quantity:
$$D(r, C_i) = \sum_{k=1}^n\left(r_k - \sqrt{E_b}c_{ik}\right)^2$$
The metrics of the two possible transmitted sequences are
$$D(r, C_1) = \sum_{k=1}^w\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^n\left(r_k - \sqrt{E_b}\right)^2$$
$$D(r, C_2) = \sum_{k=1}^w\left(r_k - \sqrt{E_b}\right)^2 + \sum_{k=w+1}^n\left(r_k + \sqrt{E_b}\right)^2$$
Since the first term on the right side is common to the two equations, we conclude that the optimal ML detector can base its decision only on the last $n - w$ received elements of $r$. That is,
$$\sum_{k=w+1}^n\left(r_k - \sqrt{E_b}\right)^2 - \sum_{k=w+1}^n\left(r_k + \sqrt{E_b}\right)^2 \underset{C_1}{\overset{C_2}{\gtrless}} 0$$
or equivalently
$$\sum_{k=w+1}^n r_k \underset{C_2}{\overset{C_1}{\gtrless}} 0$$


(b) Since $r_k = \sqrt{E_b}c_{ik} + n_k$, the probability of error $\Pr\{\text{error}|C_1\}$ is
$$\Pr\{\text{error}|C_1\} = \Pr\left\{\sqrt{E_b}(n - w) + \sum_{k=w+1}^n n_k < 0\right\} = \Pr\left\{\sum_{k=w+1}^n n_k < -(n - w)\sqrt{E_b}\right\}$$
The R.V. $u = \sum_{k=w+1}^n n_k$ is zero-mean Gaussian with variance $\sigma_u^2 = (n - w)\sigma^2$. Hence
$$\Pr\{\text{error}|C_1\} = \frac{1}{\sqrt{2\pi\sigma_u^2}}\int_{-\infty}^{-(n-w)\sqrt{E_b}}\exp\left(-\frac{x^2}{2\sigma_u^2}\right)dx = Q\left(\sqrt{\frac{E_b(n - w)}{\sigma^2}}\right)$$
Similarly we find that $\Pr\{\text{error}|C_2\} = \Pr\{\text{error}|C_1\}$, and since the two sequences are equiprobable:
$$p(e) = Q\left(\sqrt{\frac{E_b(n - w)}{\sigma^2}}\right)$$

(c) The probability of error $p(e)$ is minimized when $\frac{E_b(n-w)}{\sigma^2}$ is maximized, that is, for $w = 0$. This implies that $C_1 = -C_2$, and thus the distance between the two sequences is the maximum possible.

4. Sub-optimal receiver. Consider a binary system transmitting the signals $s_0(t)$, $s_1(t)$ with equal probability:
$$s_0(t) = \begin{cases} \sqrt{\frac{2E}{T}}\sin\frac{2\pi t}{T}, & 0 \le t \le T,\\ 0, & \text{otherwise,} \end{cases} \qquad s_1(t) = \begin{cases} \sqrt{\frac{2E}{T}}\cos\frac{2\pi t}{T}, & 0 \le t \le T,\\ 0, & \text{otherwise.} \end{cases}$$
The observation, $r(t)$, obeys
$$r(t) = s_i(t) + n(t), \quad i = 0, 1$$
where $n(t)$ is white Gaussian noise with $E\{n(t)\} = 0$ and $E\{n(t)n(\tau)\} = \frac{N_0}{2}\delta(t - \tau)$.

(a) Sketch an optimal and efficient (in the sense of a minimal number of filters) receiver. What is the error probability when this receiver is used?

(b) What is the error probability of the following receiver?
$$\int_0^{T/2}r(t)dt \underset{s_1}{\overset{s_0}{\gtrless}} 0$$

(c) Consider the following receiver:
$$\int_0^{aT}r(t)dt \underset{s_1}{\overset{s_0}{\gtrless}} K, \quad 0 \le a \le 1$$
where $K$ is the optimal threshold for $\int_0^{aT}r(t)dt$. Find the $a$ which minimizes the probability of error. A numerical solution may be used.


Figure 4: Optimal receiver (type II): $r(t)$ feeds the matched filters $s_0(T - t)$ and $s_1(T - t)$, both sampled at $t = T$, followed by a maximum selection.

Solution:

(a) The signals are equiprobable and have equal energy. We will use a type II receiver, depicted in Figure 4.

The distance between the signals is
$$d^2 = \int_0^T\frac{2E}{T}\left(\sin\left(\frac{2\pi t}{T}\right) - \cos\left(\frac{2\pi t}{T}\right)\right)^2dt = 2E \;\Rightarrow\; d = \sqrt{2E}$$

The receiver depicted in Figure 4 is equivalent to the following (and more efficient) receiver, depicted in Figure 5.

Figure 5: Efficient optimal receiver: $r(t)$ passes through the single filter $s_0(T - t) - s_1(T - t)$, sampled at $t = T$; the sample is compared against a zero threshold (decide $s_0$ if positive, $s_1$ if negative).

For a binary system with equiprobable signals $s_0(t)$ and $s_1(t)$ the probability of error is given by
$$p(e) = Q\left(\frac{d/2}{\sigma}\right) = Q\left(\frac{d}{2\sqrt{N_0/2}}\right) = Q\left(\frac{d}{\sqrt{2N_0}}\right)$$
where $d$, the distance between the signals, is given by $d = \|s_0(t) - s_1(t)\| = \|s_0 - s_1\|$. Hence, the probability of error is
$$p(e) = Q\left(\frac{d}{\sqrt{2N_0}}\right) = Q\left(\sqrt{\frac{E}{N_0}}\right)$$

(b) Let us define the random variable $Y = \int_0^{T/2}r(t)dt$. $Y$ obeys
$$Y|s_0 = \int_0^{T/2}s_0(t)dt + \int_0^{T/2}n(t)dt, \qquad Y|s_1 = \int_0^{T/2}s_1(t)dt + \int_0^{T/2}n(t)dt$$


Let us define the random variable $N = \int_0^{T/2}n(t)dt$. $N$ is a zero-mean Gaussian random variable with variance
$$\mathrm{Var}\{N\} = E\left\{\int_0^{T/2}\int_0^{T/2}n(\tau)n(\lambda)d\tau d\lambda\right\} = \int_0^{T/2}\int_0^{T/2}\frac{N_0}{2}\delta(\tau - \lambda)d\tau d\lambda = \frac{N_0T}{4}$$
$Y|s_i$ is a Gaussian random variable (note that $Y$ itself is not Gaussian, but a Gaussian mixture!) with mean:
$$E\{Y|s_0\} = \int_0^{T/2}s_0(t)dt = \frac{\sqrt{2ET}}{\pi}, \qquad E\{Y|s_1\} = \int_0^{T/2}s_1(t)dt = 0$$
The variance of $Y|s_i$ is identical in both cases, and equal to the variance of $N$. For the given decision rule the error probability is:
$$p(e) = p(s_0)\Pr\{Y < 0|s_0\} + p(s_1)\Pr\{Y > 0|s_1\} = \frac12 Q\left(\frac2\pi\sqrt{\frac{2E}{N_0}}\right) + \frac14$$

(c) We use the same derivation procedure as in the previous item. Define the random variables $Y$, $N$ as follows:
$$Y = \int_0^{aT}r(t)dt, \qquad N = \int_0^{aT}n(t)dt, \qquad E\{N\} = 0, \quad \mathrm{Var}\{N\} = \frac{aTN_0}{2}$$
$$E\{Y|s_0\} = \sqrt{\frac{2E}{T}}\int_0^{aT}\sin\frac{2\pi t}{T}dt = \frac{\sqrt{2ET}}{2\pi}(1 - \cos 2\pi a)$$
$$E\{Y|s_1\} = \sqrt{\frac{2E}{T}}\int_0^{aT}\cos\frac{2\pi t}{T}dt = \frac{\sqrt{2ET}}{2\pi}\sin 2\pi a$$
$$\mathrm{Var}\{Y|s_0\} = \mathrm{Var}\{Y|s_1\} = \mathrm{Var}\{N\}$$
The distance between the means of $Y|s_0$ and $Y|s_1$ equals
$$d = \left|\frac{\sqrt{2ET}}{2\pi}\left(1 - \cos(2\pi a) - \sin(2\pi a)\right)\right|$$
For an optimal decision rule the probability of error equals $Q\left(\frac{d}{2\sigma}\right)$. Hence the probability of error equals
$$p(e) = Q\left(\frac{1}{2\pi}\sqrt{\frac{E}{N_0}}\cdot\frac{1}{\sqrt a}\left|1 - \cos(2\pi a) - \sin(2\pi a)\right|\right)$$
which is minimized when $\frac{1}{\sqrt a}\left|1 - \cos 2\pi a - \sin 2\pi a\right|$ is maximized. Let $a_{opt}$ denote the $a$ which maximizes the above expression. Numerical solution yields
$$a_{opt} \cong 0.5885$$
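The numerical maximization behind $a_{opt}$ is a one-liner grid search — a sketch:

```python
import numpy as np

a = np.linspace(1e-6, 1.0, 200_001)
g = np.abs(1 - np.cos(2 * np.pi * a) - np.sin(2 * np.pi * a)) / np.sqrt(a)
print(a[np.argmax(g)])   # ~0.5885
```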


8 Bit Error Probability

1. [3, Example 6.2]. Compare the probability of bit error for 8PSK and 16PSK, in an AWGN channel, assuming $\gamma_b = \frac{E_b}{N_0} = 15$ dB and equal a priori probabilities. Use the following approximations:

• The nearest-neighbor approximation given in class.

• $\gamma_b \approx \frac{\gamma_s}{\log_2 M}$.

• The approximation for $P_{e,\mathrm{bit}}$ given in class.

Solution:

The nearest-neighbor approximation for the probability of error, in an AWGN channel, for an M-PSK constellation is
$$P_e \approx 2Q\left(\sqrt{2\gamma_s}\sin\left(\frac\pi M\right)\right).$$
The approximation for $P_{e,\mathrm{bit}}$ (under Gray mapping, at high enough SNR) is
$$P_{e,\mathrm{bit}} \approx \frac{P_e}{\log_2 M}.$$
For 8PSK we have $\gamma_s = (\log_2 8)\cdot 10^{15/10} = 94.87$. Hence
$$P_e \approx 2Q\left(\sqrt{189.74}\sin(\pi/8)\right) = 1.355\cdot10^{-7}$$
and, using the approximation for $P_{e,\mathrm{bit}}$,
$$P_{e,\mathrm{bit}} = \frac{P_e}{3} = 4.52\cdot10^{-8}.$$
For 16PSK we have $\gamma_s = (\log_2 16)\cdot10^{15/10} = 126.49$. Hence
$$P_e \approx 2Q\left(\sqrt{252.98}\sin(\pi/16)\right) = 1.916\cdot10^{-3}$$
and, using the approximation for $P_{e,\mathrm{bit}}$,
$$P_{e,\mathrm{bit}} = \frac{P_e}{4} = 4.79\cdot10^{-4}.$$

Note that $P_{e,\mathrm{bit}}$ is much larger for 16PSK than for 8PSK at the same $\gamma_b$. This result is expected, since 16PSK packs more bits per symbol into a given constellation, so for a fixed energy-per-bit the minimum distance between constellation points will be smaller.
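The four numbers are easy to reproduce — a sketch of the nearest-neighbor approximation:

```python
from math import erfc, log2, pi, sin, sqrt

def Q(x):
    return 0.5 * erfc(x / sqrt(2))

gamma_b = 10 ** (15 / 10)                    # 15 dB per bit
for M in (8, 16):
    gamma_s = log2(M) * gamma_b              # energy per symbol
    Pe = 2 * Q(sqrt(2 * gamma_s) * sin(pi / M))
    print(M, Pe, Pe / log2(M))               # ~1.36e-7/4.5e-8 and ~1.9e-3/4.8e-4
```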

2. Bit error probability for a rectangular constellation. Let $p_0(t)$ and $p_1(t)$ be two orthonormal functions, different from zero in the time interval $[0, T]$. The equiprobable signals defined in Figure 6 are transmitted through a zero-mean AWGN channel with noise PSD equal to $N_0/2$.

(a) Calculate $P_e$ for the optimal receiver.

(b) Calculate $P_{e,\mathrm{bit}}$ for the optimal receiver (optimal in the sense of minimal $P_e$).

(c) Approximate $P_{e,\mathrm{bit}}$ for high SNR ($\frac d2 \gg \sqrt{\frac{N_0}{2}}$). Explain.


Figure 6: 8 signals in a rectangular constellation. The points form a $4\times2$ grid in the $(p_0, p_1)$ plane, with $p_0$-coordinates $-\frac{3d}{2}, -\frac d2, \frac d2, \frac{3d}{2}$ and $p_1$-coordinates $\pm\frac d2$. From left to right, the top row is labeled $(010), (011), (001), (000)$ and the bottom row $(110), (111), (101), (100)$.

Solution:

Let $n_0$ denote the noise projection on $p_0(t)$ and $n_1$ the noise projection on $p_1(t)$. Clearly $n_i \sim N(0, N_0/2)$, $i = 0, 1$.

(a) Let $P_c$ denote the probability of a correct symbol decision; hence $P_e = 1 - P_c$. For the four corner points,
$$\Pr\{\text{correct decision}|(000)\} = \left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)^2 \overset{(a)}{=} \Pr\{\text{correct decision}|(100)\} \overset{(b)}{=} \Pr\{\text{correct decision}|(010)\} \overset{(c)}{=} \Pr\{\text{correct decision}|(110)\} = P_1,$$
where (a), (b) and (c) are due to the constellation symmetry. For the four inner points,
$$\Pr\{\text{correct decision}|(001)\} = \left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)\left(1 - 2Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right) = \Pr\{\text{correct decision}|(101)\} = \Pr\{\text{correct decision}|(011)\} = \Pr\{\text{correct decision}|(111)\} = P_2.$$


Hence
$$P_c = \frac12\left(\left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)^2 + \left(1 - Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)\left(1 - 2Q\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right)\right)$$
$$P_e = 1 - P_c \;\Rightarrow\; P_e = \frac12\left(5Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - 3Q^2\left(\frac{d/2}{\sqrt{N_0/2}}\right)\right).$$

(b) Let $b_0$ denote the MSB, $b_2$ the LSB and $b_1$ the middle bit; for the top-left constellation point in Figure 6, $(b_0, b_1, b_2) = (010)$. Let $b_i(s)$, $i = 0, 1, 2$, denote the $i$th bit of the constellation point $s$.
$$\Pr\{\text{error in }b_2|(000)\} = \sum_{s:\,b_2(s)\ne0}\Pr\{s\text{ decided}|(000)\text{ transmitted}\} = \Pr\left\{-\frac{5d}{2} < n_0 < -\frac d2\right\} = \Pr\left\{\frac d2 < n_0 < \frac{5d}{2}\right\}$$
$$= Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) - Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right) \overset{(a)}{=} \Pr\{\text{error in }b_2|(100)\} \overset{(b)}{=} \Pr\{\text{error in }b_2|(010)\} \overset{(c)}{=} \Pr\{\text{error in }b_2|(110)\} = P_1,$$
where (a), (b) and (c) are due to the constellation symmetry.

where (a), (b) and (c) are due to the constellation symmetry.

Pr{error in b2|(001) was transmitted} =∑

s:b2(s) 6=1

Pr{s was received|(001) was transmitted}

= Pr

{N0 < −

3d

2

}+ Pr

{d

2< N0

}= Q

(d/2√N0/2

)+Q

(3d/2√N0/2

)= Pr{error in b2|(101) was transmitted}= Pr{error in b2|(011) was transmitted}= Pr{error in b2|(111) was transmitted}= P2.

7For the top left constellation point in Figure 6 (b0, b1, b2) = (010).


Using similar arguments we can calculate the bit error probability for $b_1$:
$$\Pr\{\text{error in }b_1|(000)\} = Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in }b_1|(100)\} = \Pr\{\text{error in }b_1|(010)\} = \Pr\{\text{error in }b_1|(110)\} = P_3,$$
$$\Pr\{\text{error in }b_1|(001)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = \Pr\{\text{error in }b_1|(101)\} = \Pr\{\text{error in }b_1|(011)\} = \Pr\{\text{error in }b_1|(111)\} = P_4.$$
The bit error probability for $b_0$ equals
$$\Pr\{\text{error in }b_0|(000)\} = Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) = P_5.$$
Due to the constellation symmetry and the bit mapping, the bit error probability for $b_0$ is equal for all the constellation points.

Let $P_{e,b_i}$, $i = 0, 1, 2$, denote the averaged (over all signals) bit error probability of the $i$th bit; then
$$P_{e,b_0} = P_5, \qquad P_{e,b_1} = \frac12(P_3 + P_4), \qquad P_{e,b_2} = \frac12(P_1 + P_2).$$
The averaged bit error probability, $P_{e,\mathrm{bit}}$, is given by
$$P_{e,\mathrm{bit}} = \frac13\sum_{i=0}^{2}P_{e,b_i} = \frac56 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) + \frac13 Q\left(\frac{3d/2}{\sqrt{N_0/2}}\right) - \frac16 Q\left(\frac{5d/2}{\sqrt{N_0/2}}\right)$$

(c) For $\frac d2 \gg \sqrt{\frac{N_0}{2}}$:
$$P_{e,\mathrm{bit}} \cong \frac56 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right), \qquad P_e \cong \frac52 Q\left(\frac{d/2}{\sqrt{N_0/2}}\right) \;\Rightarrow\; P_{e,\mathrm{bit}} \to \frac{P_e}{3}.$$


Note that $\frac{P_e}{\log_2 M}$ is the lower bound for $P_{e,\mathrm{bit}}$.

is the lower bound for Pe,bit.


9 Connection with the Concept of Capacity

1. [2, Problem 9.29]. A voice-grade channel of the telephone network has a bandwidth of 3.4 kHz. Assume real-valued symbols.

(a) Calculate the capacity of the telephone channel for a signal-to-noise ratio of 30 dB.

(b) Calculate the minimum signal-to-noise ratio required to support information transmission through the telephone channel at the rate of 4800 $\left[\frac{\text{bits}}{\text{sec}}\right]$.

Solution:

(a) The channel bandwidth is $W = 3.4$ kHz. The received signal-to-noise ratio is $\mathrm{SNR} = 10^3$ (30 dB). Hence the channel capacity is
$$C = W\log_2(1 + \mathrm{SNR}) = 3.4\cdot10^3\cdot\log_2(1 + 10^3) = 33.9\cdot10^3\;\left[\frac{\text{bits}}{\text{sec}}\right].$$

(b) The required SNR is the solution of the following equation:
$$4800 = 3.4\cdot10^3\cdot\log_2(1 + \mathrm{SNR}) \;\Rightarrow\; \mathrm{SNR} = 2^{4800/3400} - 1 \cong 1.66 = 2.2\ \text{dB}.$$
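Both parts in code — a short check of the Shannon capacity formula:

```python
from math import log2, log10

W = 3.4e3                                # bandwidth in Hz
print(W * log2(1 + 1e3))                 # (a): ~33.9e3 bits/sec
snr = 2 ** (4800 / W) - 1                # (b): invert C = W log2(1 + SNR)
print(snr, 10 * log10(snr))              # ~1.66, ~2.2 dB
```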

2. [1, Problem 7.17]. Channel $C_1$ is an additive white Gaussian noise channel with bandwidth $W$, average transmitter power $P$, and noise power spectral density $\frac{N_0}{2}$. Channel $C_2$ is an additive Gaussian noise channel with the same bandwidth and average power as channel $C_1$ but with noise power spectral density $S_n(f)$. It is further assumed that the total noise power for both channels is the same; that is,
$$\int_{-W}^{W}S_n(f)df = \int_{-W}^{W}\frac{N_0}{2}df = N_0W.$$
Which channel do you think has the larger capacity? Give an intuitive reasoning.

Solution:

The capacity of the additive white Gaussian channel is:

C = W log2

(1 +

P

N0W

)For the nonwhite Gaussian noise channel, although the noise power is equal to the noise power inthe white Gaussian noise channel, the capacity is higher. The reason is that since noise samplesare correlated, knowledge of the previous noise samples provides partial information on the futurenoise samples and therefore reduces their effective variance.

3. Capacity of ISI channel. Consider a channel with Inter-Symbol Interference (ISI) defined as follows:
$$y_k = \sum_{i=0}^{L-1}h_ix_{k-i} + z_k.$$
The channel input obeys an average power constraint $E\{x_k^2\} \le P$, and the noise $z_k$ is i.i.d. Gaussian distributed: $z_k \sim N(0, \sigma_z^2)$. Assume that $H(e^{j2\pi f})$ has no zeros and show that the channel capacity is
$$C = \frac12\int_{-W}^{W}\log\left\{1 + \frac{\left[\Delta - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right\}df,$$
where $\Delta$ is a constant selected such that
$$\int_{-W}^{W}\left[\Delta - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+df = P.$$

You may use the following theorem.

Theorem 1. Let the transmitter have a maximum average power constraint of $P$ [Watts]. The capacity of an additive Gaussian noise channel with noise power spectrum $N(f)$ $\left[\frac{\text{Watts}}{\text{Hz}}\right]$ is given by
$$C = \frac12\int_{-\pi}^{\pi}\log_2\left\{1 + \frac{\left[\nu - N(f)\right]^+}{N(f)}\right\}df \quad \left[\frac{\text{bits}}{\text{sec}}\right],$$
where $\nu$ is chosen so that $\int\left[\nu - N(f)\right]^+df = P$.

Solution:

Since $H(e^{j2\pi f})$ has no zeros, the ISI "filter" is invertible. Inverting the channel results in
$$\tilde Y(e^{j2\pi f}) = \frac{Y(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \frac{Z(e^{j2\pi f})}{H(e^{j2\pi f})} = X(e^{j2\pi f}) + \tilde Z(e^{j2\pi f}).$$
This is a problem of a colored Gaussian channel with no ISI. The noise PSD is
$$S_{\tilde Z\tilde Z}(e^{j2\pi f}) = \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}.$$
The capacity of this channel, using Theorem 1, is given by
$$C = \frac12\int_{-W}^{W}\log\left\{1 + \frac{\left[\Delta - \sigma_z^2/|H(e^{j2\pi f})|^2\right]^+}{\sigma_z^2/|H(e^{j2\pi f})|^2}\right\}df,$$
where $\Delta$ is a constant selected such that
$$\int_{-W}^{W}\left[\Delta - \frac{\sigma_z^2}{|H(e^{j2\pi f})|^2}\right]^+df = P.$$


References

[1] J. G. Proakis, Digital Communications, 4th Edition, John Wiley and Sons, 2000.

[2] S. Haykin, Communication Systems, 4th Edition, John Wiley and Sons, 2000.

[3] A. Goldsmith, Wireless Communications, Cambridge University Press, 2006.
