
Fu-hua Huang Chapter 4. Iterative Turbo Code Decoder 38

Chapter 4. Iterative Turbo Code Decoder

This chapter describes the basic turbo code decoder. The turbo code decoder is based on a modified Viterbi algorithm that incorporates reliability values to improve decoding performance. First, this chapter introduces the concept of reliability for Viterbi decoding. Then, the metric that will be used in the modified Viterbi algorithm for turbo code decoding is described. Finally, the decoding algorithm and implementation structure for a turbo code are presented.

4.1 Principle of the General Soft-Output Viterbi Decoder

The Viterbi algorithm produces the ML output sequence for convolutional codes. This algorithm provides optimal sequence estimation for single-stage convolutional codes. For concatenated (multistage) convolutional codes, there are two main drawbacks to conventional Viterbi decoders. First, the inner Viterbi decoder produces bursts of bit errors, which degrade the performance of the outer Viterbi decoders [Hag89]. Second, the inner Viterbi decoder produces hard-decision outputs, which prevent the outer Viterbi decoders from deriving the benefits of soft decisions [Hag89]. Both of these drawbacks can be reduced, and the performance of the overall concatenated decoder can be significantly improved, if the Viterbi decoders are able to produce reliability (soft-output) values [Ber93a]. The reliability values are passed on to subsequent Viterbi decoders as a-priori information to improve decoding performance. This modified Viterbi decoder is referred to as the soft-output Viterbi algorithm (SOVA) decoder. Figure 4.1 shows a concatenated SOVA decoder.

[Figure: the received values y and L=0 enter SOVA 1, which outputs u1 and L1; these feed SOVA 2, which outputs u2 and L2. The two decoders together form the overall concatenated SOVA decoder.]

Figure 4.1: A concatenated SOVA decoder where y represents the received channel values, u represents the hard decision output values, and L represents the associated reliability values.


4.2 Reliability of the General SOVA Decoder

The reliability of the SOVA decoder is calculated from the trellis diagram as shown in Figure 4.2.

[Figure: a 4-state trellis over memorization levels MEM = 0-5 (times t through t-5), with the survivor path (accumulated metric Vs(S1,t)) and the competing path (accumulated metric Vc(S1,t)) merging at node S1,t; path branches are labeled with their estimated bits {0,1}.]

Figure 4.2: Example of survivor and competing paths for reliability estimation at time t [Ber93a].

In Figure 4.2, a 4-state trellis diagram is shown. The solid line indicates the survivor path (assumed here to be part of the final ML path) and the dashed line indicates the competing (concurrent) path at time t for state 1. For the sake of brevity, survivor and competing paths for other nodes are not shown. The label S1,t represents state 1 at time t. Also, the labels {0,1} shown on each path indicate the estimated binary decisions for the paths. The survivor path for this node is assigned an accumulated metric Vs(S1,t) and the competing path for this node is assigned an accumulated metric Vc(S1,t). The fundamental information for assigning a reliability value L(t) to node S1,t's survivor path is the absolute difference between the two accumulated metrics, L(t) = |Vs(S1,t) - Vc(S1,t)| [Ber93a]. The greater this difference, the more reliable is the survivor path. For this reliability calculation, it is assumed that the survivor accumulated metric is always "better" than the competing accumulated metric. Furthermore, to reduce complexity, the reliability values only need to be calculated for the ML survivor path (assumed known for now) and are unnecessary for the other survivor paths since they will be discarded later.


To illustrate the concept of reliability, two examples are given below. In these examples, the Viterbi algorithm selects the survivor path as the path with the smaller accumulated metric. In the first example, assume that at node S1,t the accumulated survivor metric Vs(S1,t)=50 and that the accumulated competing metric Vc(S1,t)=100. The reliability value associated with the selection of this survivor path is L(t)=|50-100|=50. In the second example, assume that the accumulated survivor metric does not change, Vs(S1,t)=50, and that the accumulated competing metric Vc(S1,t)=75. The resulting reliability value is L(t)=|50-75|=25. Although in both of these examples the survivor path has the same accumulated metric, the reliability value associated with the survivor path is different. The reliability value in the first example provides more confidence (twice as much confidence) in the selection of the survivor path than the value in the second example.
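The two examples above can be reproduced in a few lines of code (a minimal sketch; the function name is ours, not from the text):

```python
def reliability(v_survivor, v_competing):
    """Reliability of a survivor-path decision: L(t) = |Vs - Vc|."""
    return abs(v_survivor - v_competing)

# First example:  Vs=50, Vc=100 -> L(t) = 50
# Second example: Vs=50, Vc=75  -> L(t) = 25
print(reliability(50, 100))
print(reliability(50, 75))
```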

Figure 4.3 illustrates a problem with the use of the absolute difference between accumulated survivor and competing metrics as a measure of the reliability of the decision.

[Figure: the same 4-state trellis, with the survivor and competing paths diverging at time t-5. Annotations: Vs(S1,t)=100 and Vc(S1,t)=100, so L(t)=0 where the paths merge; Vs(S0,t-2)=50, Vc(S0,t-2)=75, giving L(t-2)=25; Vs(S0,t-4)=10, Vc(S0,t-4)=20, giving L(t-4)=10.]

Figure 4.3: Example that shows the weakness of reliability assignment using metric values directly.


In Figure 4.3, the survivor and competing paths at S1,t have diverged at time t-5. The survivor and competing paths produce opposite estimated binary decisions at times t, t-2, and t-4, as shown in bold labels. For the purpose of illustration, let us suppose that the survivor and competing accumulated metrics at S1,t are equal, Vs(S1,t) = Vc(S1,t) = 100. This means that both the survivor and competing paths have the same probability of being the ML path. Furthermore, let us assume that the survivor accumulated metric is "better" than the competing accumulated metric at times t-2 and t-4, as shown in Figure 4.3. To reduce the figure complexity, the competing paths for times t-2 and t-4 are not shown. From this argument, it can be seen that the reliability value assigned to the survivor path at time t is L(t)=0, which means that there is no reliability associated with the selection of the survivor path. At times t-2 and t-4, the reliability values assigned to the survivor path were greater than zero (L(t-2)=25 and L(t-4)=10) as a result of the "better" accumulated metrics of the survivor path. However, at time t, the competing path could also have been the survivor path because they have the same metric. Thus, there could have been opposite estimated binary decisions at times t, t-2, and t-4 without reducing the associated reliability values along the survivor path.

To improve the reliability values of the survivor path, a trace back operation to update the reliability values has been suggested [Hag89], [Ber93a]. This updating procedure is integrated into the Viterbi algorithm as follows [Hag89]. For node Sk,t in the trellis diagram (corresponding to state k at time t):

1. Store L(t) = |Vs(Sk,t) - Vc(Sk,t)|. (This is also denoted as ∆ in other papers.) If there is more than one competing path, then multiple reliability values must be calculated and the smallest reliability value is then set to L(t).
2. Initialize the reliability value of Sk,t to +∞ (most reliable).
3. Compare the survivor and competing paths at Sk,t and store the memorization levels (MEMs) where the estimated binary decisions of the two paths differ.
4. Update the reliability values at these MEMs with the following procedure:
   a. Find the lowest MEM>0, denoted as MEMlow, whose reliability value has not been updated.
   b. Update MEMlow's reliability value L(t-MEMlow) by assigning the lowest reliability value between MEM=0 and MEM=MEMlow.

Continuing from the example, the opposite bit estimations between the survivor and competing bit paths for S1,t are located and stored as MEM={0, 2, 4}. With this MEM information, the reliability updating process is accomplished as shown in Figure 4.4 and Figure 4.5. In Figure 4.4, the first reliability update is shown. The lowest MEM>0 whose reliability value has not been updated is determined to be MEMlow=2. The lowest reliability value between MEM=0 and MEM=MEMlow=2 is found to be L(t)=0. Thus, the associated reliability value is updated from L(t-2)=25 to L(t-2)=L(t)=0. The next lowest MEM>0 whose reliability value has not been updated is determined to be MEMlow=4. The lowest reliability value between MEM=0 and MEM=MEMlow=4 is found to be L(t)=L(t-2)=0. Thus, the associated reliability value is updated from L(t-4)=10 to L(t-4)=L(t)=L(t-2)=0. Figure 4.5 shows the second reliability update.
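The updating procedure can be sketched as follows (an illustrative rendering of steps 3-4 above; the list layout and names are our own, not from the thesis):

```python
def update_reliabilities(L, differing_mems):
    """L[mem] is the reliability at memorization level mem (L[0] = L(t)).
    differing_mems lists the MEM>0 positions where the survivor and
    competing bits differ. Each such position is lowered to the minimum
    reliability seen between MEM=0 and that position."""
    for mem in sorted(differing_mems):
        L[mem] = min(L[k] for k in range(mem + 1))
    return L

# Example from the text: L(t)=0, L(t-1)=300, L(t-2)=25, L(t-3)=200,
# L(t-4)=10, L(t-5)=100; the bits differ at MEM = 2 and MEM = 4.
print(update_reliabilities([0, 300, 25, 200, 10, 100], [2, 4]))
# -> [0, 300, 0, 200, 0, 100]
```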


[Figure: the same 4-state trellis with reliability values along the survivor path: L(t)=0, L(t-1)=300, L(t-2)=25 updated to 0, L(t-3)=200, L(t-4)=10, L(t-5)=100.]

Figure 4.4: Updating process for time t-2 (MEMlow=2).


[Figure: the same 4-state trellis with reliability values L(t)=0, L(t-1)=300, L(t-2)=0, L(t-3)=200, L(t-4)=10 updated to 0, L(t-5)=100.]

Figure 4.5: Updating process for time t-4 (MEMlow=4).

It has been suggested that the final reliability values should be "normalized" or logarithmically compressed before being passed to the next concatenated decoder to offset possible defects of this updating operation [Ber93a].

4.3 Introduction to SOVA for Turbo Codes

The SOVA for turbo codes is implemented with a modified Viterbi metric. A close examination of log-likelihood algebra and soft channel outputs is required before attempting to derive this modified Viterbi metric. Figure 4.6 shows the system model that is used to describe these concepts.

[Figure: u → Channel Encoder → x → Channel → y → Channel Decoder → u'.]

Figure 4.6: System model for SOVA derivation.


4.3.1 Log-Likelihood Algebra [Hag94], [Hag95], [Hag96]

The log-likelihood algebra used for SOVA decoding of turbo codes is based on a binary random variable u in GF(2) with elements {+1, -1}, where +1 is the logic 0 element ("null" element) and -1 is the logic 1 element under ⊕ (modulo 2) addition. Table 4.1 shows the outcome of adding two binary random variables under these conventions.

Table 4.1: Outcome of Adding Two Binary Random Variables u1 and u2

u1⊕u2   | u2=+1 | u2=-1
u1=+1   |  +1   |  -1
u1=-1   |  -1   |  +1

The log-likelihood ratio L(u) for a binary random variable u is defined to be

L(u) = \ln \frac{P(u=+1)}{P(u=-1)}    (4.1)

L(u) is often denoted as the "soft" value or L-value of the binary random variable u. The sign of L(u) is the hard decision of u and the magnitude of L(u) is the reliability of this decision. Table 4.2 shows the characteristics of the log-likelihood ratio L(u).

Table 4.2: Characteristics of the Log-likelihood Ratio L(u)

P(u=+1)   P(u=-1)    L(u)
1         0          +∞
0.9999    0.0001     9.2102
0.9       0.1        2.1972
0.6       0.4        0.4055
0.5       0.5        0
0.4       0.6       -0.4055
0.1       0.9       -2.1972
0.0001    0.9999    -9.2102
0         1          -∞

Clearly from Table 4.2, as L(u) increases toward +∞, the probability of u=+1 also increases. Furthermore, as L(u) decreases toward -∞, the probability of u=-1 increases. As can be seen, L(u) provides a form of reliability for u. This will be exploited for SOVA decoding as described later in the chapter.
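Several rows of Table 4.2 can be reproduced directly from definition (4.1); a minimal sketch (function name ours):

```python
import math

def l_value(p_plus):
    """L(u) = ln( P(u=+1) / P(u=-1) ) with P(u=-1) = 1 - P(u=+1)."""
    return math.log(p_plus / (1.0 - p_plus))

for p in (0.9999, 0.9, 0.6, 0.5, 0.4, 0.1):
    print(f"P(u=+1) = {p}:  L(u) = {l_value(p):+.4f}")
```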

The probability of the random variable u may be conditioned on another random variable z. This forms the conditioned log-likelihood ratio L(u|z), defined to be


L(u|z) = \ln \frac{P(u=+1|z)}{P(u=-1|z)}    (4.2)

The probability of the sum of two binary random variables, say P(u1⊕u2=+1), is found from

P(u_1 \oplus u_2 = +1) = P(u_1=+1)P(u_2=+1) + P(u_1=-1)P(u_2=-1)    (4.3)

With the following relation,

P(u=-1) = 1 - P(u=+1)    (4.4)

the probability P(u1⊕u2=+1) becomes

P(u_1 \oplus u_2 = +1) = P(u_1=+1)P(u_2=+1) + (1 - P(u_1=+1))(1 - P(u_2=+1))    (4.5)

Using the following relation shown in [Hag96]

P(u=+1) = \frac{e^{L(u)}}{1 + e^{L(u)}}    (4.6)

it can be shown that

P(u_1 \oplus u_2 = +1) = \frac{1 + e^{L(u_1)} e^{L(u_2)}}{(1 + e^{L(u_1)})(1 + e^{L(u_2)})}    (4.7)

The probability P(u1⊕u2=-1) can then be calculated as

P(u_1 \oplus u_2 = -1) = 1 - P(u_1 \oplus u_2 = +1)    (4.8)

= \frac{e^{L(u_1)} + e^{L(u_2)}}{(1 + e^{L(u_1)})(1 + e^{L(u_2)})}    (4.9)

From the definition of the log-likelihood ratio (4.1), it follows directly that

L(u_1 \oplus u_2) = \ln \frac{P(u_1 \oplus u_2 = +1)}{P(u_1 \oplus u_2 = -1)}    (4.10)

Using (4.7) and (4.9), L(u1⊕u2) is found to be

L(u_1 \oplus u_2) = \ln \frac{1 + e^{L(u_1)} e^{L(u_2)}}{e^{L(u_1)} + e^{L(u_2)}}    (4.11)

This result is approximated in [Hag94] as

L(u_1 \oplus u_2) \approx \mathrm{sign}(L(u_1)) \cdot \mathrm{sign}(L(u_2)) \cdot \min(|L(u_1)|, |L(u_2)|)    (4.12)

Table 4.3 shows the accuracy of this approximation compared to the exact solution. From Table 4.3, it can be seen that the deviation between the exact and the approximated solutions becomes larger as the reliability of the decision approaches zero for both variables.
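The exact expression (4.11) and the approximation (4.12) can be compared numerically; a sketch (the exact form overflows for very large |L|, so only moderate values are used here):

```python
import math

def boxplus_exact(l1, l2):
    """Exact L(u1 XOR u2), eq. (4.11)."""
    return math.log((1 + math.exp(l1 + l2)) / (math.exp(l1) + math.exp(l2)))

def boxplus_approx(l1, l2):
    """Sign-min approximation, eq. (4.12)."""
    sign = 1.0 if (l1 >= 0) == (l2 >= 0) else -1.0
    return sign * min(abs(l1), abs(l2))

print(boxplus_exact(1, -10), boxplus_approx(1, -10))  # ≈ -0.9999, -1.0
print(boxplus_exact(1, -1),  boxplus_approx(1, -1))   # ≈ -0.4338, -1.0
```

The two printed rows match the corresponding entries of Table 4.3.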


Table 4.3: Comparison of L(u1⊕u2) Between Exact and Approximated Solutions

L(u1)   L(u2)    Exact (4.11)   Approximated (4.12)
0       -1000       0              0
0       -100        0              0
0       -10         0              0
0       -1          0              0
0        0          0              0
0        1          0              0
0        10         0              0
0        100        0              0
0        1000       0              0
1       -1000      -1             -1
1       -100       -1             -1
1       -10        -0.9999        -1
1       -1         -0.4338        -1
1        0          0              0
1        1          0.4338         1
1        10         0.9999         1
1        100        1              1
1        1000       1              1
10      -1000     -10            -10
10      -100      -10            -10
10      -10        -9.3069       -10
10      -1         -0.9999        -1
10       0          0              0
10       1          0.9999         1
10       10         9.3069        10
10       100       10             10
10       1000      10             10
100     -1000    -100           -100
100     -100      -99.3069      -100
100     -10       -10            -10
100     -1         -1             -1
100      0          0              0
100      1          1              1
100      10        10             10
100      100       99.3069       100
100      1000     100            100
1000    -1000    -999.3069     -1000
1000    -100     -100           -100
1000    -10       -10            -10
1000    -1         -1             -1
1000     0          0              0
1000     1          1              1
1000     10        10             10
1000     100      100            100
1000     1000     999.3069      1000

The addition of two "soft" or L-values is denoted by [+] and is defined as

L(u_1) \,[+]\, L(u_2) = L(u_1 \oplus u_2)    (4.13)

with the following three properties


L(u) \,[+]\, \infty = L(u)    (4.14)

L(u) \,[+]\, (-\infty) = -L(u)    (4.15)

L(u) \,[+]\, 0 = 0    (4.16)

By induction, it can be shown that

\sum_{j=1}^{J}{}^{[+]} L(u_j) = L\!\left( \bigoplus_{j=1}^{J} u_j \right)    (following 4.13)    (4.17)

= \ln \frac{P\!\left( \bigoplus_{j=1}^{J} u_j = +1 \right)}{P\!\left( \bigoplus_{j=1}^{J} u_j = -1 \right)}    (following 4.10)    (4.18)

= \ln \frac{\prod_{j=1}^{J} \left( e^{L(u_j)} + 1 \right) + \prod_{j=1}^{J} \left( e^{L(u_j)} - 1 \right)}{\prod_{j=1}^{J} \left( e^{L(u_j)} + 1 \right) - \prod_{j=1}^{J} \left( e^{L(u_j)} - 1 \right)}    (4.19)

By using the relation

\tanh\!\left(\frac{x}{2}\right) = \frac{e^{x} - 1}{e^{x} + 1}    (4.20)

the induction can be simplified to

\sum_{j=1}^{J}{}^{[+]} L(u_j) = \ln \frac{1 + \prod_{j=1}^{J} \tanh(L(u_j)/2)}{1 - \prod_{j=1}^{J} \tanh(L(u_j)/2)}    (4.21)

= 2 \tanh^{-1}\!\left( \prod_{j=1}^{J} \tanh(L(u_j)/2) \right)    (4.22)

This value is very tedious to compute. Thus, it can be approximated as before to

\sum_{j=1}^{J}{}^{[+]} L(u_j)    (4.23)

\approx \prod_{j=1}^{J} \mathrm{sign}(L(u_j)) \cdot \min_{j=1,\ldots,J} |L(u_j)|    (following 4.12)    (4.24)


It can be seen from (4.24) that the reliability of the sum of "soft" or L-values is mainly determined by the smallest "soft" or L-value of the terms.
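Approximation (4.24) extends directly to J terms; a minimal sketch (the function name is ours):

```python
def boxplus_many(l_values):
    """Approximate [+]-sum of several L-values per (4.24): the product
    of the signs times the smallest magnitude."""
    sign = 1.0
    for l in l_values:
        sign *= 1.0 if l >= 0 else -1.0
    return sign * min(abs(l) for l in l_values)

print(boxplus_many([4.0, -2.5, 7.0]))   # -> -2.5
```

As the text notes, the one negative term flips the overall sign and the smallest magnitude (2.5) dominates the result.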

4.3.2 Soft Channel Outputs [Hag95], [Hag96]

From the system model in Figure 4.6, the information bit u is mapped to the encoded bits x. The encoded bits x are transmitted over the channel and received as y. From this system model, the log-likelihood ratio of x conditioned on y is calculated as

L(x|y) = \ln \frac{P(x=+1|y)}{P(x=-1|y)}    (4.25)

By using Bayes' Theorem, this log-likelihood ratio is equivalent to

L(x|y) = \ln \frac{p(y|x=+1)\,P(x=+1)}{p(y|x=-1)\,P(x=-1)}    (4.26)

= \ln \frac{p(y|x=+1)}{p(y|x=-1)} + \ln \frac{P(x=+1)}{P(x=-1)}    (4.27)

The channel model is assumed to be flat fading with Gaussian noise. By using the Gaussian pdf f(z),

f(z) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-\frac{(z-m)^2}{2\sigma^2}}    (4.28)

where m is the mean and σ² is the variance, it can be shown that

\ln \frac{p(y|x=+1)}{p(y|x=-1)} = \ln \frac{e^{-\frac{E_b}{N_o}(y-a)^2}}{e^{-\frac{E_b}{N_o}(y+a)^2}}    (4.29)

= \ln \frac{e^{2\frac{E_b}{N_o} a y}}{e^{-2\frac{E_b}{N_o} a y}}    (4.30)

= 4 \frac{E_b}{N_o}\, a\, y    (4.31)

where E_b/N_o is the signal to noise ratio per bit (directly related to the noise variance) and a is the fading amplitude. For a nonfading Gaussian channel, a=1.

The log-likelihood ratio of x conditioned on y, L(x|y), is equivalent to

L(x|y) = L_c\, y + L(x)    (following 4.27 and 4.31)    (4.32)

where L_c is defined to be the channel reliability

L_c = 4 \frac{E_b}{N_o}\, a    (4.33)


Thus, L(x|y) is just the weighted received value (Lc y) summed with the log-likelihood value of x, L(x).
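A small numerical sketch of (4.32)-(4.33); the function name and the channel values below are invented for illustration:

```python
def soft_input(y, ebno_linear, a=1.0, l_x=0.0):
    """Soft decoder input per (4.32): L(x|y) = Lc*y + L(x),
    with channel reliability Lc = 4*a*Eb/No per (4.33)."""
    lc = 4.0 * a * ebno_linear
    return lc * y + l_x

# Nonfading channel (a=1), Eb/No = 1 (0 dB), received value y = 0.8:
print(soft_input(0.8, 1.0))   # -> 3.2
```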

4.4 SOVA Component Decoder for a Turbo Code [Hag94], [Hag95], [Hag96]

The SOVA component decoder estimates the information sequence using one of the two encoded streams produced by the turbo code encoder. Figure 4.7 shows the inputs and outputs of the SOVA component decoder.

[Figure: a SOVA block with inputs L(u) and Lcy and outputs u' and L(u').]

Figure 4.7: SOVA component decoder.

The SOVA component decoder processes the (log-likelihood ratio) inputs L(u) and Lcy, where L(u) is the a-priori sequence for the information sequence u and Lcy is the weighted received sequence. The sequence y is received from the channel. However, the sequence L(u) is produced and obtained from the preceding SOVA component decoder. If there is no preceding SOVA component decoder, then there are no a-priori values. Thus, the L(u) sequence is initialized to the all-zero sequence. A similar concept is also shown at the beginning of the chapter in Figure 4.1. The SOVA component decoder produces u' and L(u') as outputs, where u' is the estimated information sequence and L(u') is the associated log-likelihood ratio ("soft" or L-value) sequence.

The SOVA component decoder operates similarly to the Viterbi decoder except that the ML sequence is found by using a modified metric. This modified metric, which incorporates the a-priori value, is derived below.

The fundamental Viterbi algorithm searches for the state sequence \mathbf{S}^{(m)}, or the information sequence \mathbf{u}^{(m)}, that maximizes the a-posteriori probability P(\mathbf{S}^{(m)}|\mathbf{y}). For binary (k=1) trellises, m can be either 1 or 2, denoting the survivor and the competing paths respectively. By using Bayes' Theorem, the a-posteriori probability can be expressed as

P(\mathbf{S}^{(m)}|\mathbf{y}) = \frac{p(\mathbf{y}|\mathbf{S}^{(m)})\, P(\mathbf{S}^{(m)})}{p(\mathbf{y})}    (4.34)

Since the received sequence y is fixed for metric computation and does not depend on m, it can be discarded. Thus, the maximization reduces to


\max_m \; p(\mathbf{y}|\mathbf{S}^{(m)})\, P(\mathbf{S}^{(m)})    (4.35)

The probability of a state sequence terminating at time t is P(\mathbf{S}_t). This probability can be calculated as

P(\mathbf{S}_t) = P(\mathbf{S}_{t-1})\, P(S_t)    (4.36)

= P(\mathbf{S}_{t-1})\, P(u_t)    (4.37)

where P(S_t) and P(u_t) denote the probability of the state and the bit at time t respectively. The maximization can then be expanded to

\max_m \; p(\mathbf{y}|\mathbf{S}^{(m)})\, P(\mathbf{S}^{(m)}) = \max_m \; \prod_{i=0}^{t} p(\mathbf{y}_i | S_{i-1}^{(m)}, S_i^{(m)})\, P(u_i^{(m)})    (4.38)

where (S_{i-1}^{(m)}, S_i^{(m)}) denotes the state transition between time i-1 and time i, and \mathbf{y}_i denotes the associated received channel values for the state transition.

After substituting and rearranging,

\max_m \; p(\mathbf{y}|\mathbf{S}^{(m)})\, P(\mathbf{S}^{(m)}) = \max_m \; \left[ P(\mathbf{S}_{t-1}^{(m)}) \prod_{i=0}^{t-1} p(\mathbf{y}_i | S_{i-1}^{(m)}, S_i^{(m)}) \right] P(u_t^{(m)})\, p(\mathbf{y}_t | S_{t-1}^{(m)}, S_t^{(m)})    (4.39)

Note that

p(\mathbf{y}_t | S_{t-1}^{(m)}, S_t^{(m)}) = \prod_{j=1}^{N} p(y_{t,j} | x_{t,j}^{(m)})    (4.40)

Thus, the maximization becomes

max ( ) ( | , ) ( ) ( | )( ) ( ) ( ) ( ), ,

( )

mtm

ii

t

im

im

tm

t j t jm

j

N

P p S S P u p y xS y−=

−=

∏ ∏

10

1

11

(4.41)

This maximization is not changed if the logarithm is applied to the whole expression, the result is multiplied by 2, and two constants that are independent of m are added. This leads to

\max_m M_t^{(m)} = \max_m \left\{ M_{t-1}^{(m)} + \left[ 2 \ln P(u_t^{(m)}) - C_u \right] + \sum_{j=1}^{N} \left[ 2 \ln p(y_{t,j}|x_{t,j}^{(m)}) - C_y \right] \right\}    (4.42)

where

M_{t-1}^{(m)} = 2 \ln \left[ P(\mathbf{S}_{t-1}^{(m)}) \prod_{i=0}^{t-1} p(\mathbf{y}_i | S_{i-1}^{(m)}, S_i^{(m)}) \right]    (4.43)

and, for convenience, the two constants are

C_u = \ln \left[ P(u_t=+1)\, P(u_t=-1) \right]    (4.44)

C_y = \ln \left[ p(y_{t,j}|x_{t,j}=+1)\, p(y_{t,j}|x_{t,j}=-1) \right]    (4.45)

After substitution of these two constants, the SOVA metric is obtained as

M_t^{(m)} = M_{t-1}^{(m)} + \sum_{j=1}^{N} x_{t,j}^{(m)} \ln \frac{p(y_{t,j}|x_{t,j}=+1)}{p(y_{t,j}|x_{t,j}=-1)} + u_t^{(m)} \ln \frac{P(u_t=+1)}{P(u_t=-1)}    (4.46)

and is reduced to

M_t^{(m)} = M_{t-1}^{(m)} + \sum_{j=1}^{N} x_{t,j}^{(m)} L_c\, y_{t,j} + u_t^{(m)} L(u_t)    (4.47)


For systematic codes, this can be modified to become

M_t^{(m)} = M_{t-1}^{(m)} + u_t^{(m)} L_c\, y_{t,1} + \sum_{j=2}^{N} x_{t,j}^{(m)} L_c\, y_{t,j} + u_t^{(m)} L(u_t)    (4.48)

As seen from (4.47) and (4.48), the SOVA metric incorporates values from the past metric, the channel reliability, and the source reliability (a-priori value).

Figure 4.8 shows the source reliability as used in SOVA metric computation.

[Figure: a two-state trellis segment (states Sa, Sb) between time t-1 and time t. Solid transitions, which produce ut=+1, add (+1)(L(ut)) to the metric; dashed transitions, which produce ut=-1, add (-1)(L(ut)).]

Figure 4.8: Source reliability for SOVA metric computation.

Figure 4.8 shows a trellis diagram with two states Sa and Sb and a transition period between time t-1 and time t. The solid line indicates that the transition will produce an information bit ut=+1 and the dashed line indicates that the transition will produce an information bit ut=-1. The source reliability L(ut), which may be either a positive or a negative value, is from the preceding SOVA component decoder. The "add on" value is incorporated into the SOVA metric to provide a more reliable decision on the estimated information bit. For example, if L(ut) is a "large" positive number, then it would be relatively more difficult to change the estimated bit decision from +1 to -1 between decoding stages (based on assigning \max_m M_t^{(m)} to the survivor path). However, if L(ut) is a "small" positive number, then it would be relatively easier to change the estimated bit decision from +1 to -1 between decoding stages. Thus, L(ut) acts like a buffer which tries to prevent the decoder from choosing the opposite bit decision to the preceding decoder.

Figure 4.9 shows the weighting properties of the SOVA metric.


[Figure: a balance weighing the channel reliability Lc against the source reliability L(u); the old metric (time t-1) combines with both to form the new metric (time t).]

Figure 4.9: Weighting properties of the SOVA metric.

As illustrated in Figure 4.9, the balance between channel and source reliability is very important for the SOVA metric. This does not mean that the channel and source reliability values should have the same magnitude, but rather that their relative values should reflect the channel and source conditions. For instance, if the channel is very good, Lc will be larger than |L(u)| and decoding relies mostly on the received channel values. However, if the channel is very bad, decoding relies mostly on the a-priori information L(u). If this balance is not achieved, catastrophic effects may result and degrade the performance of the channel decoder.

At time t, the reliability value (magnitude of the log-likelihood ratio) assigned to a node in the trellis is determined from

\Delta_t^0 = \frac{1}{2} \left| M_t^{(1)} - M_t^{(2)} \right|    (4.49)

where \Delta_t^{MEM} denotes the reliability value at memorization level MEM relative to time t. This notation is similar to the notation L(t-MEM) used before and is shown in Figure 4.10 for discussion.


[Figure: a 4-state trellis showing the survivor metric Mt(1) and competing metric Mt(2) at node S1,t, with reliability values Δt0 through Δt5 along memorization levels MEM = 0-5 (times t through t-5); updates are required at MEM=2 and MEM=4.]

Figure 4.10: Example of SOVA survivor and competing paths for reliability estimation.

The probability of path m at time t and the SOVA metric are stated in [Hag94] to be related as

P(\mathrm{path}\; m) = P(\mathbf{S}_t^{(m)})    (4.50)

= e^{M_t^{(m)}/2}    (4.51)

At time t, let us suppose that the survivor metric of a node is denoted as M_t^{(1)} and the competing metric is denoted as M_t^{(2)}. Thus, the probability of selecting the correct survivor path is

P(\mathrm{correct}) = \frac{P(\mathrm{path}\;1)}{P(\mathrm{path}\;1) + P(\mathrm{path}\;2)}    (4.52)

= \frac{e^{M_t^{(1)}/2}}{e^{M_t^{(1)}/2} + e^{M_t^{(2)}/2}}    (4.53)

= \frac{e^{\Delta_t^0}}{1 + e^{\Delta_t^0}}    (4.54)

The reliability of this path decision is calculated as


\ln \frac{P(\mathrm{correct})}{1 - P(\mathrm{correct})} = \ln \frac{e^{\Delta_t^0} / (1 + e^{\Delta_t^0})}{1 / (1 + e^{\Delta_t^0})}    (4.55)

= \Delta_t^0    (4.56)

The reliability values along the survivor path for a particular node at time t are denoted as \Delta_t^{MEM}, where MEM = 0, ..., t. For this node at time t, if the bit on the survivor path at MEM=k (or equivalently at time t-k) is the same as the associated bit on the competing path, then there would be no bit error if the competing path were chosen. Thus, the reliability value at this bit position remains unchanged. However, if the bits differ on the survivor and competing paths at MEM=k, then there is a bit error. The reliability value at this bit error position must then be updated using the same updating procedure as described at the beginning of the chapter. As shown in Figure 4.10, reliability updates are required for MEM=2 and MEM=4.

The reliability updates are performed to improve the "soft" or L-values. It is shown in [Hag95] that the "soft" or L-value of a bit decision is

L(u'_{t-MEM}) = u'_{t-MEM} \sum_{k=0}^{MEM}{}^{[+]} \Delta_t^k    (4.57)

and can be approximated by (4.24) to become

L(u'_{t-MEM}) \approx u'_{t-MEM} \min_{k=0,\ldots,MEM} \{ \Delta_t^k \}    (4.58)

The soft output Viterbi algorithm (along with its reliability updating procedure) can be implemented as follows:

1. (a) Initialize time t = 0.
   (b) Initialize M_0^{(m)} = 0 only for the zero state in the trellis diagram and all other states to -\infty.
2. (a) Set time t = t + 1.
   (b) Compute the metric

   M_t^{(m)} = M_{t-1}^{(m)} + u_t^{(m)} L_c\, y_{t,1} + \sum_{j=2}^{N} x_{t,j}^{(m)} L_c\, y_{t,j} + u_t^{(m)} L(u_t)

   for each state in the trellis diagram, where
   m denotes an allowable binary trellis branch/transition to a state (m = 1, 2);
   M_t^{(m)} is the accumulated metric for time t on branch m;
   u_t^{(m)} is the systematic bit (1st bit of N bits) for time t on branch m;
   x_{t,j}^{(m)} is the j-th bit of N bits for time t on branch m (2 ≤ j ≤ N);
   y_{t,j} is the received value from the channel corresponding to x_{t,j}^{(m)};
   L_c = 4 E_b/N_o is the channel reliability value;
   L(u_t) is the a-priori reliability value for time t. This value is from the preceding decoder. If there is no preceding decoder, then this value is set to zero.
3. Find \max_m M_t^{(m)} for each state. For simplicity, let M_t^{(1)} denote the survivor path metric and M_t^{(2)} denote the competing path metric.
4. Store M_t^{(1)} and its associated survivor bit and state paths.
5. Compute \Delta_t^0 = \frac{1}{2} | M_t^{(1)} - M_t^{(2)} |.
6. Compare the survivor and competing paths at each state for time t and store the MEMs where the estimated binary decisions of the two paths differ.
7. Update \Delta_t^{MEM} \approx \min_{k=0,\ldots,MEM} \{ \Delta_t^k \} for all MEMs, from smallest to largest MEM.
8. Go back to Step 2 until the end of the received sequence.
9. Output the estimated bit sequence u' and its associated "soft" or L-value sequence L(u') = u' · Δ, where · denotes element-by-element multiplication and Δ is the final updated reliability sequence. L(u') is then processed (to be discussed later) and passed on as the a-priori sequence L(u) for the succeeding decoder.
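As an illustration of steps 1-9, the following sketch runs the algorithm on a toy 2-state, rate-1/2 systematic code in the +1/-1 alphabet (parity bit = u·s, next state = u). The code, frame, and channel values are invented for the demo, and full path lists replace the traceback registers to simplify the bookkeeping; this is not the thesis implementation.

```python
import math

def sova(y, lc=1.0, l_apriori=None):
    """Toy SOVA for a 2-state code with branch outputs (u, u*s_prev) and
    next state = u.  y is a list of (y_systematic, y_parity) pairs.
    Returns the hard decisions u' and L-values L(u') = u' * delta."""
    n = len(y)
    l_apriori = l_apriori or [0.0] * n
    states = (+1, -1)
    metric = {+1: 0.0, -1: -math.inf}    # start in the "zero" state (+1)
    path = {+1: [], -1: []}              # survivor bit paths
    rel = {+1: [], -1: []}               # per-bit reliabilities (deltas)
    for t in range(n):
        y1, y2 = y[t]
        new_m, new_p, new_r = {}, {}, {}
        for s_next in states:
            u = s_next                   # input bit determines next state
            # Step 2(b): systematic SOVA metric, cf. (4.48)
            cand = sorted(
                ((metric[s] + u * (lc * y1 + l_apriori[t])
                  + lc * y2 * (u * s), s) for s in states),
                reverse=True)
            (m1, s1), (m2, s2) = cand
            delta = 0.5 * abs(m1 - m2)   # step 5, eq. (4.49)
            surv_p = path[s1] + [u]
            surv_r = rel[s1] + [delta]
            comp_p = path[s2] + [u]
            # Steps 6-7: lower the reliability where the bits differ
            for k in range(len(surv_p) - 1):
                if surv_p[k] != comp_p[k]:
                    surv_r[k] = min(surv_r[k], delta)
            new_m[s_next], new_p[s_next], new_r[s_next] = m1, surv_p, surv_r
        metric, path, rel = new_m, new_p, new_r
    best = max(states, key=lambda s: metric[s])
    bits = path[best]
    return bits, [b * d for b, d in zip(bits, rel[best])]   # step 9

# Noiseless reception of the encoded bit sequence +1, -1, +1:
bits, l_out = sova([(1, 1), (-1, -1), (1, -1)])
print(bits)    # -> [1, -1, 1]
```

With noiseless inputs the decoder recovers the transmitted bits, and the L-values carry the sign of each decision scaled by the updated reliabilities.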

4.5 SOVA Implementation

The SOVA decoder can be implemented in various ways. The straightforward implementation of the SOVA decoder may become computationally intensive for large constraint length K codes and long frame sizes because of the need to update all of the survivor paths. Because the update procedure is meaningful only for the ML path, an implementation of the SOVA decoder that only performs the update procedure for the ML path is shown in Figure 4.11.


[Figure: the inputs L(u) and Lc·y feed a first SOVA unit without the updating procedure, which finds the ML state sequence; two shift registers buffer the inputs meanwhile; a second SOVA unit then produces u' and L(u').]

Figure 4.11: SOVA decoder implementation.

The SOVA decoder inputs L(u) and Lc·y, the a-priori values and the weighted received values respectively, and outputs u' and L(u'), the estimated bit decisions and their associated "soft" or L-values respectively. This implementation of the SOVA decoder is composed of two separate SOVA decoders. The first SOVA decoder computes the metrics for the ML path only and does not compute (suppresses) the reliability values. The shift registers are used to buffer the inputs while the first SOVA decoder is processing the ML path. The second SOVA decoder (with the knowledge of the ML path) recomputes the ML path and also calculates and updates the reliability values. As can be seen, this implementation method reduces the complexity of the updating process. Instead of keeping track of and updating 2^m survivor paths, only the ML path needs to be processed.

4.6 SOVA Iterative Turbo Code Decoder [Hag94], [Hag94a], [Hag96]

The iterative turbo code decoder is composed of two concatenated SOVA component decoders. Figure 4.12 shows the turbo code decoder structure.


[Figure: closed-loop decoder. The streams y1, y2, and y3, weighted by the channel reliability 4 Eb/No, are buffered in circular shift (CS) registers and two pairs of parallel shift registers; SOVA1 and SOVA2 exchange the extrinsic values Le1(u') and Le2(u') through the interleaver (I) and deinterleaver (I-1); the decoder outputs u' and L1(u'), I{L2(u')}. CS = circular shift, I = interleaver, I-1 = deinterleaver.]

Figure 4.12: SOVA iterative turbo code decoder.

The turbo code decoder processes the received channel bits on a frame basis. As shown in Figure 4.12, the received channel bits are demultiplexed into the systematic stream y1 and two parity check streams y2 and y3 from component encoders 1 and 2 respectively. These bits are weighted by the channel reliability value and loaded onto the CS registers. The registers shown in the figure are used as buffers to store sequences until they are needed. The switches are placed in the open position to prevent the bits from the next frame from being processed until the present frame has been processed.

The SOVA component decoder produces the "soft" or L-value L(ut') for the estimated bit ut' (for time t). The "soft" or L-value L(ut') can be decomposed into three distinct terms, as stated in [Hag94]:

L(ut') = L(ut) + Lc y1,t + Le(ut')    (4.59)

L(ut) is the a-priori value and is produced by the preceding SOVA component decoder. Lc y1,t is the weighted received systematic channel value. Le(ut') is the extrinsic value produced by the present SOVA component decoder. The information that is passed between SOVA component decoders is the extrinsic value

Le(ut') = L(ut') − L(ut) − Lc y1,t    (4.60)


The a-priori value L(ut) is subtracted out from the "soft" or L-value L(ut') to prevent passing information back to the decoder from which it was produced. Also, the weighted received systematic channel value Lc y1,t is subtracted out to remove "common" information in the SOVA component decoders.
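Equation (4.60) is an element-wise subtraction over the frame; a minimal sketch (sequence names are illustrative):

```python
def extrinsic(L_soft, L_apriori, Lc_y1):
    """Eq. (4.60): subtract the a-priori and weighted systematic terms
    element-wise, leaving only the extrinsic information to pass on."""
    return [l - a - s for l, a, s in zip(L_soft, L_apriori, Lc_y1)]

# A soft output of +6 built from a-priori +1 and channel +4 carries
# only +1 of new (extrinsic) information:
print(extrinsic([6.0, -3.0], [1.0, 0.0], [4.0, -4.0]))  # [1.0, 1.0]
```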

Figure 4.12 shows that the turbo code decoder is a closed-loop serial concatenation of SOVA component decoders. In this closed-loop decoding scheme, each of the SOVA component decoders estimates the information sequence using a different weighted parity check stream. The turbo code decoder further implements iterative decoding to provide more dependable reliability/a-priori estimations from the two different weighted parity check streams, hoping to achieve better decoding performance. The iterative turbo code decoding algorithm for the n-th iteration is as follows:

1. The SOVA1 decoder inputs the sequences 4(Eb/No)y1 (systematic), 4(Eb/No)y2 (parity check), and Le2(u'), and outputs the sequence L1(u'). For the first iteration, Le2(u') = 0 because there is no initial a-priori value (no extrinsic values from SOVA2).

2. The extrinsic information from SOVA1 is obtained by Le1(u') = L1(u') − Le2(u') − Lc y1, where Lc = 4 Eb/No.

3. The sequences 4(Eb/No)y1 and Le1(u') are interleaved and denoted as I{4(Eb/No)y1} and I{Le1(u')}.

4. The SOVA2 decoder inputs the sequences I{4(Eb/No)y1} (systematic), 4(Eb/No)y3 (parity check that was already interleaved by the turbo code encoder), and I{Le1(u')} (a-priori information), and outputs the sequences I{L2(u')} and I{u'}.

5. The extrinsic information from SOVA2 is obtained by I{Le2(u')} = I{L2(u')} − I{Le1(u')} − I{Lc y1}.

6. The sequences I{Le2(u')} and I{u'} are deinterleaved and denoted as Le2(u') and u'. Le2(u') is fed back to SOVA1 as a-priori information for the next iteration, and u' is the estimated bit output for the n-th iteration.
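The six steps can be sketched as a loop (a minimal illustration: `sova`, `I`, and `I_inv` are hypothetical stand-ins for the component decoder, interleaver, and deinterleaver, and the received sequences are assumed pre-weighted by 4 Eb/No):

```python
def turbo_decode(y1, y2, y3, I, I_inv, sova, n_iter):
    """Sketch of Steps 1-6.  `sova(systematic, parity, apriori)` is a
    hypothetical component decoder returning the soft outputs L(u')."""
    Le2 = [0.0] * len(y1)                    # Step 1: no a-priori at start
    for _ in range(n_iter):
        L1 = sova(y1, y2, Le2)               # Step 1
        Le1 = [l - a - s for l, a, s in zip(L1, Le2, y1)]          # Step 2
        y1_i, Le1_i = I(y1), I(Le1)          # Step 3
        L2_i = sova(y1_i, y3, Le1_i)         # Step 4 (y3 already interleaved)
        Le2_i = [l - a - s for l, a, s in zip(L2_i, Le1_i, y1_i)]  # Step 5
        Le2 = I_inv(Le2_i)                   # Step 6: feed back to SOVA1
        u_hat = [1 if l > 0 else -1 for l in I_inv(L2_i)]  # hard decisions
    return u_hat, Le2
```

Note how only extrinsic values circulate between the two decoders; the soft outputs themselves never feed back directly.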

The SOVA component decoder and the SOVA iterative turbo code decoder are both complicated. An example is shown below to aid in the understanding of these decoders. Figure 4.13 shows the turbo code encoder structure used in the example.


[Figure: the input u is loaded into shift register 1 (size L) and fed to recursive encoder 1 (output x2); the interleaver (size L) and shift register 2 (size L) feed recursive encoder 2 (output x3); the complete RSC 2 also supplies the systematic stream x1.]

Figure 4.13: Turbo code encoder structure for the example.

In Figure 4.13, the systematic bit stream (with its termination bits) is associated with recursive encoder 2. The input information bit stream u is loaded into shift register 1 to form a data frame. This data frame is passed to the interleaver, and its output is fed into shift register 2. The two component encoders then encode their respective inputs. The output encoded bit streams x1, x2, and x3 are multiplexed together to form a single transmission bit stream. Figure 4.14 shows the RSC component encoder used in the example.
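The multiplexing can be pictured as taking one bit from each stream per time step (the exact ordering of the three streams on the channel is an assumption here):

```python
# Hypothetical bit-by-bit multiplexing of the three encoded streams into
# a single transmission stream (ordering x1, x2, x3 is assumed).
def mux(x1, x2, x3):
    return [b for triple in zip(x1, x2, x3) for b in triple]
```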

[Figure: RSC encoder with two delay elements (D) and three modulo-2 adders; the input u enters through a switch with positions A (information bits) and B (termination bits); the outputs are the systematic bit x1 and the parity bit x2.]

Figure 4.14: RSC component encoder for the example.

In Figure 4.14, the switch is set to position A for encoding the input sequence and to position B for terminating the trellis. Figure 4.15 shows the state diagram of the RSC component encoder.


[Figure: state diagram with four states 00, 01, 10, 11 and transitions labeled u/x1x2 in the {0,1} domain: 00→00: 0/00, 00→10: 1/11, 01→00: 1/11, 01→10: 0/00, 10→01: 1/10, 10→11: 0/01, 11→01: 0/01, 11→11: 1/10.]

Figure 4.15: State diagram of the RSC component encoder for the example.

The encoded bits need to be mapped from the {0,1} domain to the {-1,+1} domain for transmission, and Figure 4.16 shows this modified state diagram.

[Figure: the same state diagram with transitions labeled u/x1x2 in the {-1,+1} domain: 00→00: -1/-1 -1, 00→10: 1/1 1, 01→00: 1/1 1, 01→10: -1/-1 -1, 10→01: 1/1 -1, 10→11: -1/-1 1, 11→01: -1/-1 1, 11→11: 1/1 -1.]

Figure 4.16: Transmission state diagram of the RSC component encoder for the example.

For the example, the input sequence is u={01101}. The interleaver (of size L=5) reverses the input sequence, so the interleaver output is I{u}={10110}. From Figure 4.13 and Figure 4.14, the encoded sequences are x1={1011010}, x2={0100010}, and x3={1100110}. Coincidentally, the encoded sequences have the same tail bits (10). The encoded sequences are mapped to the {-1,+1} domain for transmission as x1={1 -1 1 1 -1 1 -1}, x2={-1 1 -1 -1 -1 1 -1}, and


x3={1 1 -1 -1 1 1 -1}. The corresponding received sequences are y1={1 -1 1 1 -1 1 -1}, y2={0 1 -1 -1 -1 1 -1}, and y3={0 1 -1 -1 1 1 -1}, where the errors are the first symbols of y2 and y3. A "0" is received to represent an erasure (to indicate the reception of a signal whose corresponding symbol value is in doubt) [Wic95]. Assuming Eb/No=1, the weighted received sequences are Lcy1={4 -4 4 4 -4 4 -4}, Lcy2={0 4 -4 -4 -4 4 -4}, and Lcy3={0 4 -4 -4 4 4 -4}.
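As a cross-check of these sequences, a sketch of the encoder of Figure 4.14 (the taps, feedback 1+D+D^2 and feedforward 1+D^2, are inferred from the state diagram of Figure 4.15, not stated explicitly in the text):

```python
def rsc_encode(bits):
    """Sketch of the RSC encoder of Figure 4.14 with trellis termination
    (switch position B).  Taps are inferred: feedback 1+D+D^2,
    feedforward 1+D^2.  Returns (systematic, parity) including tails."""
    s1 = s2 = 0                          # contents of the two delay elements
    sys_out, par_out = [], []
    for i in range(len(bits) + 2):       # two extra steps flush the state
        u = bits[i] if i < len(bits) else s1 ^ s2  # position B input
        a = u ^ s1 ^ s2                  # feedback node (1 + D + D^2)
        sys_out.append(u)
        par_out.append(a ^ s2)           # feedforward taps 1 + D^2
        s1, s2 = a, s1
    return sys_out, par_out

u = [0, 1, 1, 0, 1]
_, x2 = rsc_encode(u)                    # encoder 1 parity: x2 = {0100010}
x1, x3 = rsc_encode(u[::-1])             # encoder 2 via reversal interleaver
bipolar = lambda seq: [2 * b - 1 for b in seq]   # {0,1} -> {-1,+1} mapping
```

With these inferred taps the sketch reproduces x1={1011010}, x2={0100010}, and x3={1100110} quoted above, including the common (10) tails.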

Turbo code decoding is shown below for the first (initial) decoding iteration. From the transmission state diagram shown in Figure 4.16, the trellis legend (state transition diagram) is obtained and is shown in Figure 4.17. The trellis legend is required for decoding the RSC component codes.

[Figure: one trellis section from time i to time i+1 for states 00, 01, 10, 11, with branch labels u/x1x2 in the {-1,+1} domain as in Figure 4.16.]

Figure 4.17: Trellis legend (state transition diagram) of the RSC component encoder for the example. The first coded bit is the systematic bit and the second coded bit is the parity check bit.


Figure 4.18 shows the SOVA iterative turbo code decoder for the example.

[Figure: the decoder of Figure 4.12 with the example's streams y1, y2, and y3 weighted by the channel reliability 4 Eb/No. CS = circular shift, I = interleaver, I-1 = deinterleaver.]

Figure 4.18: SOVA iterative turbo code decoder for the example.

The SOVA1 component decoder is used to decode the RSC1 code. The SOVA1 component decoder's input sequences are I-1{Lcy1}, Lcy2, and Le2(u'). The systematic sequence Lcy1 is deinterleaved to decode the RSC1 code. The input sequences are I-1{Lcy1}={-4 4 4 -4 4 4 -4}, Lcy2={0 4 -4 -4 -4 4 -4}, and Le2(u')={0 0 0 0 0 0 0} (no a-priori knowledge). The SOVA1 component decoder is implemented with two separate SOVA decoders (Figure 4.11). The first SOVA decoder (of the SOVA1 component decoder) computes the SOVA metric (4.48) for the ML path. (Note that the notation from the iterative turbo code decoder Le(u') is equivalent to the notation from the SOVA metric L(u).) Figure 4.19 shows the first SOVA decoder's (of the SOVA1 component decoder) ML path.


[Figure: trellis diagram over times 0 to 7 for states 00, 01, 10, 11, showing the partial path metrics of Table 4.4 (ML path metrics in bold) and the metric ties.]

Metric used: Mt(m) = Mt-1(m) + ut(m)·I-1{Lc y1,t} + xt,2(m)·Lc y2,t + ut(m)·Le2(ut')

Decoder inputs:
I-1{Lcy1} = {-4 4 4 -4 4 4 -4}
Lcy2      = { 0 4 -4 -4 -4 4 -4}
Le2(u')   = { 0 0  0  0  0 0  0}

Figure 4.19: The first SOVA decoder's (of the SOVA1 component decoder) ML path.

In Figure 4.19, the bold partial path metrics correspond to the ML path. Survivor paths are represented by bold solid lines and competing paths are represented by simple solid lines. For metric "ties", the first branch is always chosen. Table 4.4 shows the survivor (larger) and competing (smaller) partial path metrics for the trellis diagram in Figure 4.19.

Table 4.4: Survivor and Competing Partial Path Metrics for the Trellis Diagram in Figure 4.19

          Time 0  Time 1  Time 2  Time 3     Time 4     Time 5     Time 6    Time 7
State 00     0       4      -4    -4 / -4     4 / 12    12 / 4     4 / 44    52 / 12
State 01                     4    20 / -12   -4 / 4     36 / -4   12 / 20
State 10            -4      12    -4 / -4   -12 / 28    12 / 4
State 11                    -4     4 / 4     -4 / 4     20 / 12

With the knowledge of the ML path, the second SOVA decoder (of the SOVA1 component decoder) recomputes the SOVA metric and also calculates and updates the reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.19), the SOVA1 component decoder produces the estimated bit sequence


u'={-1 1 1 -1 1 1 -1} and state sequence s={00, 10, 01, 10, 01, 00, 00}. Table 4.5, obtained from Figure 4.19, shows the competing path (bit and state sequences) for reliability updates.

Table 4.5: SOVA1's Competing Path (Bit and State Sequences) for Reliability Updates

Time index:    1       2       3       4       5       6       7
Time 3       1/10   -1/11   -1/01
Time 4      -1/00   -1/00   -1/00    1/10
Time 5      -1/00    1/10   -1/11    1/11   -1/01
Time 6      -1/00    1/10    1/01    1/00   -1/00   -1/00
Time 7      -1/00    1/10    1/01   -1/10   -1/11   -1/01    1/00

Table 4.6, obtained from Figure 4.19, Table 4.4, and Table 4.5, shows the calculated and updated (in bold) reliability values.

Table 4.6: SOVA1's Calculated and Updated (In Bold) Reliability Values

Time index:   1    2    3    4    5    6    7
Time 3       16   16   16
Time 4       16   16   16   20
Time 5       16   16   16   20   20
Time 6       16   16   16   20   20   24
Time 7       16   16   16   20   20   24   32

From Table 4.6, the SOVA1 component decoder produces the final reliability sequence ∆={16, 16, 16, 20, 20, 24, 32}. The SOVA1 component decoder outputs the "soft" or L-value sequence L1(u')={-16, 16, 16, -20, 20, 24, -32}. The extrinsic value sequence, obtained by subtracting SOVA1's inputs from the "soft" or L-value sequence, is Le1(u')=L1(u')-I-1{Lcy1}-Le2(u')={-12, 12, 12, -16, 16, 20, -28}.

The SOVA2 component decoder is used to decode the RSC2 code. The SOVA2 component decoder's input sequences are Lcy1, Lcy3, and I{Le1(u')}. The extrinsic value sequence Le1(u') is interleaved to decode the RSC2 code. These input sequences are Lcy1={4 -4 4 4 -4 4 -4}, Lcy3={0 4 -4 -4 4 4 -4}, and I{Le1(u')}={16, -16, 12, 12, -12, 20, -28}. The SOVA2 component decoder is implemented with two separate SOVA decoders (Figure 4.11). The first SOVA decoder


(of the SOVA2 component decoder) computes the SOVA metric (4.48) for the ML path. (Note that the notation from the iterative turbo code decoder Le(u') is equivalent to the notation from the SOVA metric L(u).) Figure 4.20 shows the first SOVA decoder's (of the SOVA2 component decoder) ML path.

[Figure: trellis diagram over times 0 to 7 for states 00, 01, 10, 11, showing the partial path metrics of Table 4.7 (ML path metrics in bold).]

Metric used: Mt(m) = Mt-1(m) + ut(m)·Lc y1,t + xt,2(m)·Lc y3,t + ut(m)·I{Le1(ut')}

Decoder inputs:
Lcy1       = { 4  -4  4  4  -4  4  -4}
Lcy3       = { 0   4 -4 -4   4  4  -4}
I{Le1(u')} = {16 -16 12 12 -12 20 -28}

Figure 4.20: The first SOVA decoder's (of the SOVA2 component decoder) ML path.

In Figure 4.20, the bold partial path metrics correspond to the ML path. Survivor paths are represented by bold solid lines and competing paths are represented by simple solid lines. Table 4.7 shows the survivor (larger) and competing (smaller) partial path metrics for the trellis diagram in Figure 4.20.


Table 4.7: Survivor and Competing Partial Path Metrics for the Trellis Diagram in Figure 4.20

          Time 0  Time 1  Time 2   Time 3     Time 4     Time 5     Time 6     Time 7
State 00     0     -20      -4    -16 / 8     -4 / 36    48 / 32   20 / 132   168 / 16
State 01                    -4    -16 / 24    28 / 44     0 / 104  52 / 44
State 10            20     -36      8 / -16   20 / 12    24 / 32
State 11                    44    -56 / 64   -12 / 84    40 / 64

With the knowledge of the ML path, the second SOVA decoder (of the SOVA2 component decoder) recomputes the SOVA metric and also calculates and updates the reliability values. From the trellis legend (Figure 4.17) and the ML path (Figure 4.20), the SOVA2 component decoder produces the estimated bit sequence I{u'}={1 -1 1 1 -1 1 -1} and state sequence I{s}={10, 11, 11, 11, 01, 00, 00}. Table 4.8, obtained from Figure 4.20, shows the competing path (bit and state sequences) for reliability updates.

Table 4.8: SOVA2's Competing Path (Bit and State Sequences) for Reliability Updates

Time index:    1       2       3       4       5       6       7
Time 3      -1/00    1/10   -1/11
Time 4      -1/00   -1/00    1/10   -1/11
Time 5       1/10    1/01    1/00    1/10    1/01
Time 6       1/10   -1/11   -1/01    1/00   -1/00   -1/00
Time 7       1/10   -1/11    1/11   -1/01   -1/10    1/01    1/00

Table 4.9, obtained from Figure 4.20, Table 4.7, and Table 4.8, shows the calculated and updated (in bold) reliability values.

Table 4.9: SOVA2's Calculated and Updated (In Bold) Reliability Values

Time index:   1    2    3    4    5    6    7
Time 3       60   60   60
Time 4       48   60   60   48
Time 5       48   48   60   48   52
Time 6       48   48   48   48   52   56
Time 7       48   48   48   52   52   56   76


From Table 4.9, the SOVA2 component decoder produces the final reliability sequence I{∆}={48, 48, 48, 52, 52, 56, 76}. The SOVA2 component decoder outputs the "soft" or L-value sequence I{L2(u')}={48, -48, 48, 52, -52, 56, -76}. The extrinsic value sequence, obtained by subtracting SOVA2's inputs from the "soft" or L-value sequence, is I{Le2(u')}=I{L2(u')}-Lcy1-I{Le1(u')}={28, -28, 32, 36, -36, 32, -44}. The extrinsic value sequence is deinterleaved (I-1{I{Le2(u')}}=Le2(u')={-36, 36, 32, -28, 28, 32, -44}) and is used to decode the RSC1 code in the next decoding iteration. The estimated bit sequence I{u'}={1 -1 1 1 -1 1 -1} is also deinterleaved (I-1{I{u'}}=u'={-1 1 1 -1 1}, not including the tail bits) to produce the estimated information sequence. After mapping from the {-1,+1} domain to the {0,1} domain, the estimated information sequence is u'=u={01101}.
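The closing arithmetic of the example can be checked directly (a small verification sketch; sequence names follow the text, and the deinterleaver reverses the first L=5 positions while leaving the tail bits in place, as in the example):

```python
# Numerical check of the closing step: Eq. (4.60) subtraction, reversal
# deinterleaving over the first L = 5 positions, and hard bit decisions.
L = 5
deinterleave = lambda s: s[:L][::-1] + s[L:]   # tail bits stay in place

I_L2  = [48, -48, 48, 52, -52, 56, -76]   # I{L2(u')} from SOVA2
Lcy1  = [4, -4, 4, 4, -4, 4, -4]          # SOVA2's systematic input
I_Le1 = [16, -16, 12, 12, -12, 20, -28]   # SOVA2's a-priori input

I_Le2 = [l - s - a for l, s, a in zip(I_L2, Lcy1, I_Le1)]
Le2 = deinterleave(I_Le2)                 # fed back to SOVA1
u_hat = [1 if v > 0 else 0 for v in deinterleave(I_L2)[:L]]

print(I_Le2)  # [28, -28, 32, 36, -36, 32, -44]
print(Le2)    # [-36, 36, 32, -28, 28, 32, -44]
print(u_hat)  # [0, 1, 1, 0, 1]
```

The hard decisions recover u={01101}, matching the transmitted information sequence despite the two erasures.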