
ECEN 5682 Theory and Practice of Error Control Codes

Convolutional Code Performance

Peter Mathys
University of Colorado
Spring 2007


Performance Measures

Definition: A convolutional encoder which maps one or more data sequences of infinite weight into code sequences of finite weight is called a catastrophic encoder.

Example: Encoder #5. The binary R = 1/2, K = 3 convolutional encoder with transfer function matrix

G(D) = [1 + D    1 + D²]

has the encoder state diagram shown in Figure 15, with states S0 = 00, S1 = 10, S2 = 01, and S3 = 11. Both generator polynomials share the common factor 1 + D, so the infinite-weight data sequence u(D) = 1/(1 + D) = 1 + D + D² + · · · produces the finite-weight code sequences u(D)(1 + D) = 1 and u(D)(1 + D²) = 1 + D; the encoder is therefore catastrophic.

Fig. 15: Encoder state diagram for the catastrophic R = 1/2, K = 3 encoder. States S0 = 00, S1 = 10, S2 = 01, S3 = 11; branches are labeled input/output (1/11, 1/01, 0/11, 0/01, 0/10, 1/10, 0/00, 1/00). Note the self-loop at S3 with label 1/00: an all-ones input circulates in S3 without adding code weight.
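To see the catastrophic behavior concretely, here is a minimal Python sketch (an added illustration, not from the slides; the helper name encode is hypothetical) that runs encoder #5 on a prefix of the infinite-weight all-ones input:

```python
# Encoder #5, G(D) = [1 + D, 1 + D^2]: feed a prefix of the all-ones input
# and observe that the total code weight stays finite (here, 3).
def encode(bits):
    """R = 1/2, K = 3 encoder with generators 1 + D and 1 + D^2."""
    s1 = s2 = 0                        # state: u[t-1], u[t-2]
    out = []
    for u in bits:
        out.append((u ^ s1, u ^ s2))   # c1 = u + u[t-1], c2 = u + u[t-2]
        s2, s1 = s1, u
    return out

code = encode([1] * 20)                 # truncated infinite-weight input
print(code[:4])                         # [(1, 1), (0, 1), (0, 0), (0, 0)]: stuck in S3
print(sum(c1 + c2 for c1, c2 in code))  # 3: finite code weight
```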

Fig. 16: A detour of weight w = 7 and i = 3, starting at time t = 0 (trellis diagram over states S0, S1, S2, S3; the detour leaves the all-zero path at S0 and follows branches with output labels 11, 01, 00, 10, 10, 11 before merging back into S0).


Definition: The complete weight distribution {A(w, i, ℓ)} of a convolutional code is defined as the number of detours (or codewords), beginning at time 0 in the all-zero state S0 of the encoder, returning again for the first time to S0 after ℓ time units, and having code (Hamming) weight w and data (Hamming) weight i.

Definition: The extended weight distribution {A(w, i)} of a convolutional code is defined by

A(w, i) = ∑_{ℓ=1}^∞ A(w, i, ℓ) .

That is, {A(w, i)} is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w and corresponding data sequence (Hamming) weight i.


Definition: The weight distribution {Aw} of a convolutional code is defined by

Aw = ∑_{i=1}^∞ A(w, i) .

That is, {Aw} is the number of detours (starting at time 0) from the all-zero path with code sequence (Hamming) weight w.
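These distributions can be tabulated mechanically by searching the state diagram. The sketch below (an added illustration, assuming plain Python; the helper names step and extended_weight_distribution are hypothetical) enumerates all detours of the R = 1/2, K = 3 encoder with G(D) = [1 + D²  1 + D + D²], the encoder used in the examples later in these slides, up to a maximum detour length:

```python
from collections import Counter

def step(state, u):
    """One trellis step for G(D) = [1 + D^2, 1 + D + D^2]; state = (u[t-1], u[t-2]).
    Returns (next_state, output weight)."""
    s1, s2 = state
    c1 = u ^ s2               # generator 1 + D^2
    c2 = u ^ s1 ^ s2          # generator 1 + D + D^2
    return (u, s1), c1 + c2

def extended_weight_distribution(ell_max):
    """Count A[(w, i)] over all detours of length <= ell_max frames."""
    A = Counter()
    s0, w0 = step((0, 0), 1)            # a detour must leave S0 with input 1
    frontier = [(s0, w0, 1, 1)]         # (state, code weight w, data weight i, length ell)
    while frontier:
        state, w, i, ell = frontier.pop()
        if state == (0, 0):             # first return to S0: detour complete
            A[(w, i)] += 1
        elif ell < ell_max:
            for u in (0, 1):
                ns, dw = step(state, u)
                frontier.append((ns, w + dw, i + u, ell + 1))
    return A

A = extended_weight_distribution(12)
Aw = Counter()
for (w, i), n in A.items():
    Aw[w] += n
print(sorted(Aw.items())[:4])  # low-weight terms; for this code A5 = 1, A6 = 2, A7 = 4, ...
```

The truncation at ell_max only affects high-weight terms: every cycle avoiding S0 in this (non-catastrophic) encoder has positive code weight, so long detours necessarily have large w.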

Theorem: The probability of an error event (or decoding error) PE for a convolutional code with weight distribution {Aw}, decoded by a ML decoder, at any given time t (measured in frames) is upper bounded by

PE ≤ ∑_{w=dfree}^∞ Aw Pw(E) ,

where

Pw(E) = P{ML decoder makes detour with weight w} .

Theorem: On a memoryless BSC with transition probability ε < 0.5, the probability of error Pd(E) between two detours or codewords distance d apart is given by

Pd(E) = ∑_{e=(d+1)/2}^{d} C(d, e) ε^e (1−ε)^{d−e} ,   d odd ,

Pd(E) = (1/2) C(d, d/2) ε^{d/2} (1−ε)^{d/2} + ∑_{e=d/2+1}^{d} C(d, e) ε^e (1−ε)^{d−e} ,   d even ,

where C(d, e) denotes the binomial coefficient.

Proof: Under the Hamming distance measure, an error between two binary codewords distance d apart is made if more than d/2 of the bits in which the codewords differ are in error. If d is even and exactly d/2 bits are in error, then an error is made with probability 1/2. QED
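The theorem translates directly into a few lines of Python (a sketch using only the standard library; pd_error is a hypothetical name):

```python
from math import comb

def pd_error(d, eps):
    """Pairwise error probability between two codewords at Hamming distance d
    on a BSC with crossover probability eps < 0.5, per the theorem above."""
    if d % 2 == 1:                        # d odd: more than d/2 of the d bits in error
        return sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
                   for e in range((d + 1) // 2, d + 1))
    # d even: a tie (exactly d/2 errors) causes an error with probability 1/2
    tie = 0.5 * comb(d, d // 2) * (eps * (1 - eps))**(d // 2)
    return tie + sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
                     for e in range(d // 2 + 1, d + 1))

print(pd_error(5, 0.01))                  # pairwise error probability at d = dfree = 5
```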

Note: A somewhat simpler but less tight bound is obtained by dropping the factor of 1/2 in the first term for d even as follows:

Pd(E) ≤ ∑_{e=⌈d/2⌉}^{d} C(d, e) ε^e (1−ε)^{d−e} .

A much simpler, but often also much looser, bound is the Bhattacharyya bound

Pd(E) ≤ (1/2) [4ε(1−ε)]^{d/2} .
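A quick numerical comparison of the three expressions (sketch; reuses pd_error from the previous sketch):

```python
from math import ceil, comb

def pd_simple(d, eps):
    """The simpler bound: drop the factor 1/2 on the tie term."""
    return sum(comb(d, e) * eps**e * (1 - eps)**(d - e)
               for e in range(ceil(d / 2), d + 1))

def pd_bhattacharyya(d, eps):
    return 0.5 * (4 * eps * (1 - eps))**(d / 2)

eps = 0.01
for d in (5, 6, 7):
    print(d, pd_error(d, eps), pd_simple(d, eps), pd_bhattacharyya(d, eps))
# The exact expression is the smallest; the Bhattacharyya bound is the loosest.
```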

Probability of Symbol Error. Suppose now that Aw = ∑_{i=1}^∞ A(w, i) is substituted in the bound for PE. Then

PE ≤ ∑_{w=dfree}^∞ ∑_{i=1}^∞ A(w, i) Pw(E) .

Multiplying A(w, i) by i and summing over all i then yields the total number of data symbol errors that result from all detours of weight w as ∑_{i=1}^∞ i A(w, i). Dividing by k, the number of data symbols per frame, thus leads to the following theorem.

Theorem: The probability of a symbol error Ps(E) at any given time t (measured in frames) for a convolutional code with rate R = k/n and extended weight distribution {A(w, i)}, when decoded by a ML decoder, is upper bounded by

Ps(E) ≤ (1/k) ∑_{w=dfree}^∞ ∑_{i=1}^∞ i A(w, i) Pw(E) ,

where Pw(E) is the probability of error between the all-zero path and a detour of weight w.
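Combining the pieces gives a computable bound. The sketch below (illustration; assumes pd_error and extended_weight_distribution from the earlier sketches are in scope, with a truncated spectrum) evaluates the bound for the R = 1/2, K = 3 code, where k = 1:

```python
def ps_bound(eps, A, k=1):
    """Union bound on Ps(E) from a (truncated) extended weight distribution
    A mapping (w, i) -> A(w, i)."""
    return sum(i * n * pd_error(w, eps) for (w, i), n in A.items()) / k

A = extended_weight_distribution(16)   # detours up to 16 frames (truncation)
print(ps_bound(0.01, A))               # bound on the bit error probability at eps = 0.01
```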

The graph on the next slide shows different bounds for the probability of a bit error on a BSC for a binary rate R = 1/2, K = 3 convolutional encoder with transfer function matrix

G(D) = [1 + D²    1 + D + D²] .

Figure: Binary R = 1/2, K = 3, dfree = 5 convolutional code, bit error probability Pb(E) versus log10(ε); curves: Pb(E) BSC (union bound), Pb(E) BSC (Bhattacharyya bound), Pb(E) AWGN soft decisions.

Figure: Upper bounds on Pb(E) for convolutional codes on the BSC (hard decisions), versus log10(ε); curves: R = 1/2, K = 3, dfree = 5; R = 2/3, K = 3, dfree = 5; R = 3/4, K = 3, dfree = 5; R = 1/2, K = 5, dfree = 7; R = 1/2, K = 7, dfree = 10.


Transmission Over AWGN Channel

The following figure shows a “one-shot” model for transmitting a data symbol with value a0 over an additive Gaussian noise (AGN) waveform channel using pulse amplitude modulation (PAM) of a pulse p(t) and a matched filter (MF) receiver. The main reason for using a “one-shot” model for performance evaluation with respect to channel noise is that it avoids intersymbol interference (ISI).

Figure: One-shot PAM/MF model. The channel adds noise n(t) with PSD Sn(f) to the transmitted signal s(t) = a0 p(t), giving r(t); the receiver filters r(t) with hR(t) and samples the filter output b(t) at t = 0 to obtain b0.

If the noise is white with power spectral density (PSD) Sn(f) = N0/2 for all f, the channel model is called the additive white Gaussian noise (AWGN) model. In this case the matched filter (which maximizes the SNR at its output at t = 0) is

hR(t) = p*(−t) / ∫_{−∞}^{∞} |p(μ)|² dμ   ⟺   HR(f) = P*(f) / ∫_{−∞}^{∞} |P(ν)|² dν ,

where * denotes complex conjugation. If the PAM pulse p(t) is normalized so that Ep = ∫_{−∞}^{∞} |p(μ)|² dμ = 1, then the symbol energy at the input of the MF is

Es = E[ ∫_{−∞}^{∞} |s(μ)|² dμ ] = E[ |a0|² ] ,

where the expectation is necessary since a0 is a random variable.
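The one-shot model is easy to simulate. The sketch below (an added illustration, assuming numpy and a unit-energy rectangular pulse on [0, T]; all names are hypothetical) checks that the sampled MF output has mean a0 and noise variance N0/2:

```python
import numpy as np

rng = np.random.default_rng(0)
T, dt, N0, a0 = 1.0, 1e-3, 0.5, 1.0
t = np.arange(0, T, dt)
p = np.ones_like(t) / np.sqrt(T)               # Ep = integral of |p|^2 = 1

trials = 20000
b0 = np.empty(trials)
for m in range(trials):
    n = rng.normal(0.0, np.sqrt(N0 / (2 * dt)), size=t.size)  # white noise, PSD N0/2
    r = a0 * p + n
    b0[m] = np.sum(r * p) * dt                 # matched filter output sampled at t = 0
print(b0.mean(), b0.var())                     # approximately a0 = 1.0 and N0/2 = 0.25
```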

When the AWGN model with Sn(f) = N0/2 is used and a0 = α is transmitted, the received symbol b0 at the sampler after the output of the MF is a Gaussian random variable with mean α and variance σb² = N0/2. For antipodal binary signaling (e.g., using BPSK), a0 ∈ {−√Es, +√Es}, where Es is the (average) energy per symbol. Thus, b0 is characterized by the conditional pdf's

fb0(β | a0 = −√Es) = e^{−(β+√Es)²/N0} / √(πN0) ,

and

fb0(β | a0 = +√Es) = e^{−(β−√Es)²/N0} / √(πN0) .

These pdf's are shown graphically on the following slide.

Figure: The conditional pdf's fb0(β | a0 = −√Es) and fb0(β | a0 = +√Es), plotted versus β; the two densities are centered at −√Es and +√Es and separated by 2√Es.

If the two values of a0 are equally likely or if a ML decoding rule is used, then the (hard) decision threshold per symbol is to decide a0 = +√Es if β > 0 and a0 = −√Es otherwise.

The probability of a symbol error when hard decisions are used is

P(E | a0 = −√Es) = (1/√(πN0)) ∫_{0}^{∞} e^{−(β+√Es)²/N0} dβ = (1/2) erfc(√(Es/N0)) ,

where erfc(x) = (2/√π) ∫_{x}^{∞} e^{−γ²} dγ ≈ e^{−x²}. Because of the symmetry of antipodal signaling, the same result is obtained for P(E | a0 = +√Es), and thus a BSC derived from an AWGN channel used with antipodal signaling has transition probability

ε = (1/2) erfc(√(Es/N0)) ,

where Es is the energy received per transmitted symbol.

To make a fair comparison in terms of signal-to-noise ratio (SNR) of the transmitted information symbols between coded and uncoded systems, the energy per code symbol of the coded system needs to be scaled by the rate R of the code. Thus, when hard decisions and coding are used in a binary system, the transition probability of the BSC model becomes

εc = (1/2) erfc(√(R Es/N0)) ,

where R = k/n is the rate of the code.

The figure on the next slide compares Pb(E) versus Eb/N0 for an uncoded and a coded binary system. The coded system uses a R = 1/2, K = 3 convolutional encoder with G(D) = [1 + D²    1 + D + D²].
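In code, the two BSC models differ only in the rate scaling (sketch, standard library only; bsc_eps is a hypothetical helper):

```python
from math import erfc, sqrt

def bsc_eps(es_over_n0, R=1.0):
    """BSC crossover probability for antipodal signaling on the AWGN channel;
    R < 1 models the energy-per-code-symbol scaling of a rate-R coded system."""
    return 0.5 * erfc(sqrt(R * es_over_n0))

snr = 10 ** (6 / 10)                # Es/N0 = 6 dB
print(bsc_eps(snr))                 # uncoded epsilon
print(bsc_eps(snr, R=1 / 2))        # coded epsilon_c for R = 1/2: larger, as expected
```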

Figure: Binary R = 1/2, K = 3, dfree = 5 convolutional code, hard decisions on the AWGN channel; Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: Pb(E) uncoded, Pb(E) union bound, Pb(E) Bhattacharyya.

Definition: Coding Gain. Coding gain is defined as the reduction in Es/N0 permissible for a coded communication system to obtain the same probability of error (Ps(E) or Pb(E)) as an uncoded system, both using the same average energy per transmitted information symbol.

Definition: Coding Threshold. The value of Es/N0 (where Es is the energy per transmitted information symbol) for which the coding gain becomes zero is called the coding threshold.

The graphs on the following slide show Pb(E) (computed using the union bound) versus Eb/N0 for a number of different binary convolutional encoders.

Figure: Upper bounds on Pb(E) for convolutional codes on the AWGN channel, hard decisions; Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: uncoded; R = 1/2, K = 3, dfree = 5; R = 2/3, K = 3, dfree = 5; R = 3/4, K = 3, dfree = 5; R = 1/2, K = 5, dfree = 7; R = 1/2, K = 7, dfree = 10.

Soft Decisions and AWGN Channel

Assuming a memoryless channel model used without feedback, the ML decoding rule after the MF and the sampler is: Output code sequence estimate c = ci iff i maximizes

fb(β | a = ci) = ∏_{j=0}^{N−1} fbj(βj | aj = cij) ,

over all code sequences ci = (ci0, ci1, ci2, . . .) for i = 0, 1, 2, . . ..

If the mapping 0 → −1 and 1 → +1 is used so that cij ∈ {−1, +1}, then fbj(βj | aj = cij) can be written as

fbj(βj | aj = cij) = e^{−(βj − cij √Es)²/N0} / √(πN0) .

Taking (natural) logarithms and defining vj = βj/√Es yields

ln fb(β | a = ci) = ln ∏_{j=0}^{N−1} fbj(βj | aj = cij) = ∑_{j=0}^{N−1} ln fbj(βj | aj = cij)

  = −∑_{j=0}^{N−1} (βj − cij √Es)²/N0 − (N/2) ln(πN0)

  = −(Es/N0) ∑_{j=0}^{N−1} (vj² − 2 vj cij + cij²) − (N/2) ln(πN0)

  = (2Es/N0) ∑_{j=0}^{N−1} vj cij − ( (|β|² + N Es)/N0 + (N/2) ln(πN0) ) = K1 ∑_{j=0}^{N−1} vj cij − K2 ,

where K1 and K2 are constants independent of the codeword ci and thus irrelevant for ML decoding.
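The practical consequence is that ML decoding reduces to maximizing the correlation ∑ vj cij. The sketch below (illustration only) checks numerically that this picks the same codeword as minimizing the Euclidean distance, since the discarded terms K1 and K2 are the same for every candidate:

```python
v = [0.9, -1.2, 0.3, 0.1]                        # v_j = beta_j / sqrt(Es)
candidates = [(+1, -1, +1, +1), (+1, -1, -1, -1), (-1, +1, +1, -1)]

by_corr = max(candidates, key=lambda c: sum(vj * cj for vj, cj in zip(v, c)))
by_dist = min(candidates, key=lambda c: sum((vj - cj) ** 2 for vj, cj in zip(v, c)))
print(by_corr == by_dist)                        # True: the two rules agree
```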

Example: Suppose the convolutional encoder with G(D) = [1    1 + D] is used and the received data is

v = −0.4, −1.7, 0.1, 0.3, −1.1, 1.2, 1.2, 0.0, 0.3, 0.2, −0.2, 0.7, . . .
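One way to carry out this example is a soft-decision Viterbi search with the correlation metric just derived. The sketch below (an added illustration, not the slides' worked solution; helper names are hypothetical) decodes the six complete frames shown for this two-state (K = 2) encoder:

```python
v = [-0.4, -1.7, 0.1, 0.3, -1.1, 1.2, 1.2, 0.0, 0.3, 0.2, -0.2, 0.7]

def outputs(state, u):
    """Antipodal (+/-1) code bits for input u from state = previous input bit."""
    c1, c2 = u, u ^ state                 # generators 1 and 1 + D
    return 2 * c1 - 1, 2 * c2 - 1

metric = {0: 0.0, 1: float("-inf")}       # start in the all-zero state
path = {0: [], 1: []}
for f in range(len(v) // 2):
    v1, v2 = v[2 * f], v[2 * f + 1]
    new_metric, new_path = {}, {}
    for nxt in (0, 1):                    # next state equals the current input bit
        best_m, best_p = float("-inf"), None
        for s in (0, 1):                  # add-compare-select over predecessors
            x1, x2 = outputs(s, nxt)
            m = metric[s] + v1 * x1 + v2 * x2
            if m > best_m:
                best_m, best_p = m, path[s] + [nxt]
        new_metric[nxt], new_path[nxt] = best_m, best_p
    metric, path = new_metric, new_path

winner = max((0, 1), key=lambda s: metric[s])
print(path[winner], metric[winner])       # ML data bits and their correlation metric
```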

Soft Decisions versus Hard Decisions

To compare the performance of coded binary systems on an AWGN channel when the decoder performs either hard or soft decisions, the energy Ec per coded bit is fixed and Pb(E) is plotted versus ε of the hard decision BSC model, where ε = (1/2) erfc(√(Ec/N0)) as before. For soft decisions the expression

Pw(E) = (1/2) erfc(√(w Ec/N0))

is used for the probability that the ML decoder makes a detour with weight w from the correct path. Thus, for soft decisions with fixed SNR per code symbol,

Pb(E) ≤ (1/(2k)) ∑_{w=dfree}^∞ Dw erfc(√(w Ec/N0)) ,

where Dw = ∑_{i=1}^∞ i A(w, i) is the total data weight of all detours with code weight w.

Examples are shown on the next slide.
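Evaluating this bound only needs the total data weights Dw, which can be taken from the detour enumeration sketched earlier (illustration; assumes extended_weight_distribution is in scope and the spectrum is truncated):

```python
from collections import Counter
from math import erfc, sqrt

A = extended_weight_distribution(16)
D = Counter()
for (w, i), n in A.items():
    D[w] += i * n                          # D_w = sum_i i * A(w, i)

def pb_soft(ec_over_n0, k=1):
    """Soft-decision union bound on Pb(E) at fixed SNR per code symbol."""
    return sum(Dw * erfc(sqrt(w * ec_over_n0)) for w, Dw in D.items()) / (2 * k)

print(pb_soft(10 ** (3 / 10)))             # e.g., Ec/N0 = 3 dB
```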

Figure: Upper bounds on Pb(E) for convolutional codes with soft decisions (dashed: hard decisions), versus log10(ε) for the BSC; curves: R = 1/2, K = 3, dfree = 5; R = 2/3, K = 3, dfree = 5; R = 3/4, K = 3, dfree = 5; R = 1/2, K = 5, dfree = 7; R = 1/2, K = 7, dfree = 10.

Coding Gain for Soft Decisions

To compare the performance of uncoded and coded binary systems with soft decisions on an AWGN channel, the energy Eb per information bit is fixed and Pb(E) is plotted versus the signal-to-noise ratio (SNR) Eb/N0. For an uncoded system

Pb(E) = (1/2) erfc(√(Eb/N0))   (uncoded) .

For a coded system with soft decision ML decoding on an AWGN channel

Pb(E) ≤ (1/(2k)) ∑_{w=dfree}^∞ Dw erfc(√(w R Eb/N0)) ,

where R = k/n is the rate of the code.

Examples are shown in the graph on the next slide.
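The coding gain at a target error rate can then be read off numerically, e.g. by bisecting each curve for the required Eb/N0 (a sketch under the same truncated-spectrum assumption; required_db is a hypothetical helper and the 1e-5 target is arbitrary):

```python
from math import erfc, sqrt

def pb_uncoded(eb_over_n0):
    return 0.5 * erfc(sqrt(eb_over_n0))

def pb_coded(eb_over_n0, R=0.5, k=1):      # soft-decision union bound
    return sum(Dw * erfc(sqrt(w * R * eb_over_n0)) for w, Dw in D.items()) / (2 * k)

def required_db(pb, target=1e-5, lo=0.0, hi=20.0):
    """Smallest Eb/N0 (in dB) with pb(Eb/N0) <= target; pb must be decreasing."""
    for _ in range(60):                    # bisection on Eb/N0 in dB
        mid = (lo + hi) / 2
        if pb(10 ** (mid / 10)) > target:
            lo = mid
        else:
            hi = mid
    return hi

print(required_db(pb_uncoded) - required_db(pb_coded))  # coding gain in dB at 1e-5
```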

Figure: Upper bounds on Pb(E) for convolutional codes on the AWGN channel, soft decisions; Pb(E) versus Eb/N0 [dB] (Eb: info bit energy); curves: uncoded; R = 1/2, K = 3, dfree = 5; R = 2/3, K = 3, dfree = 5; R = 3/4, K = 3, dfree = 5; R = 1/2, K = 5, dfree = 7; R = 1/2, K = 7, dfree = 10.