Error Control Coding3


Transcript of Error Control Coding3

  • Slide 1/24

    Convolutional encoding

    The figure shows how, with memory depth L = v - 1, k input bits are encoded into n output bits in an (n, k, L) code.

    This figure shows the general structure of a convolutional encoder.

  • Slide 2/24

    Example of using generator matrix

    The generator sequences are g1 = [1 0 1 1] and g2 = [1 1 1 1].

    [Figure: the generator matrix G assembled from shifted copies of g1 and g2, and the matrix product giving the encoded word; the output pairs shown on the slide are 11 10, 01, 11 00 01 11 01.]

    Verify that you can obtain the result shown! (A code sketch of generator-matrix encoding follows below.)
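
    A minimal sketch of generator-matrix encoding in Python: it builds G from shifted copies of the interleaved generator pair (g1, g2) = (1011, 1111) above and multiplies the message by G over GF(2). The message used here is taken from the example on slide 3, so the last line also reproduces that slide's encoded word.

        import numpy as np

        def generator_matrix(g1, g2, K):
            """Truncated generator matrix for a K-bit message and a rate-1/2 code.
            Row i holds the interleaved taps g1[0] g2[0] g1[1] g2[1] ..., shifted right by 2*i columns."""
            m = len(g1) - 1                                   # encoder memory
            row = np.ravel(np.column_stack((g1, g2)))         # interleave the two generators
            G = np.zeros((K, 2 * (K + m)), dtype=int)
            for i in range(K):
                G[i, 2 * i : 2 * i + len(row)] = row
            return G

        g1 = [1, 0, 1, 1]                     # generator sequences from this slide
        g2 = [1, 1, 1, 1]
        u = np.array([1, 1, 1, 0, 1])         # message taken from the example on slide 3
        G = generator_matrix(g1, g2, len(u))
        v = (u @ G) % 2                       # codeword = u G over GF(2)
        print(" ".join(f"{a}{b}" for a, b in v.reshape(-1, 2)))
        # -> 11 10 01 01 11 10 11 11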

  • Slide 3/24

    State diagram of a convolutional code

    Each new block of k bits causes a transition into a new state (see the previous two slides).

    Hence there are 2^k branches leaving each state.

    Assuming the encoder starts in the all-zero state, the encoded word for any input sequence can be obtained. For instance, for u = (1 1 1 0 1) the encoded word v = (11, 10, 01, 01, 11, 10, 11, 11) is produced (the shift-register sketch below reproduces it):

    [Figure: encoder state diagram for an (n, k, L) = (2, 1, 3) coder, with the input and output states labeled.]

    Verify that you have the same result!
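
    A minimal shift-register encoder sketch in Python. It assumes the generator sequences g1 = 1011 and g2 = 1111 from slide 2; with those taps (and L = 3 flushing zeros) it reproduces the encoded word above.

        def conv_encode(u, g1=(1, 0, 1, 1), g2=(1, 1, 1, 1)):
            """Rate-1/2 convolutional encoder with memory L = len(g1) - 1.
            The register is flushed with L zeros so the encoder ends in the all-zero state."""
            L = len(g1) - 1
            reg = [0] * L                            # shift register [u(t-1), ..., u(t-L)]
            out = []
            for bit in list(u) + [0] * L:            # message followed by flushing zeros
                window = [bit] + reg                 # current input plus register contents
                v1 = sum(a * b for a, b in zip(g1, window)) % 2
                v2 = sum(a * b for a, b in zip(g2, window)) % 2
                out.append(f"{v1}{v2}")
                reg = [bit] + reg[:-1]               # shift the new bit in
            return " ".join(out)

        print(conv_encode([1, 1, 1, 0, 1]))
        # -> 11 10 01 01 11 10 11 11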

  • Slide 4/24

    Extracting the generating function by

    splitting and labeling the state diagram

    The state diagram can be modified to yield information on the code distance properties. Rules:

    (1) Split S0 into an initial and a final state, and remove the self-loop.

    (2) Label each branch by the branch gain X^i, where i is the weight of the n encoded bits on that branch.

    (3) Each path connecting the initial state and the final state represents a nonzero code word that diverges from and re-emerges with S0 only once.

    The path gain is the product of the branch gains along a path, and the code weight is the power of X in the path gain.

    The code weight distribution is obtained by using a weighted gain formula to compute the code's generating function (input-output equation)

    $T(X) = \sum_i A_i X^i$

    where A_i is the number of encoded words of weight i.

  • Slide 5/24

    The path representing the state sequence S0 S1 S3 S7 S6 S5 S2 S4 S0 has the path gain $X^2 X^1 X^1 X^1 X^2 X^1 X^2 X^2 = X^{12}$, and the corresponding code word has weight 12. The generating function is

    $T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \ldots$

    Where do these terms come from? (A brute-force check follows below.)

    [Figure: example of splitting and labeling the state diagram; branches of weight 1 and weight 2 are indicated.]
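
    A brute-force sketch (Python) of where these coefficients come from. It walks the state diagram of the (2, 1, 3) encoder (again assuming the generators 1011 and 1111 from slide 2), enumerates every path that leaves the all-zero state and re-merges with it exactly once, and tallies the paths by total output weight; truncated at weight 10, it should reproduce the leading terms of T(X).

        from collections import Counter, deque

        G1, G2 = (1, 0, 1, 1), (1, 1, 1, 1)        # assumed generator sequences (slide 2)

        def step(state, bit):
            """One branch of the state diagram; state = (u(t-1), u(t-2), u(t-3))."""
            window = (bit,) + state
            w = (sum(a * b for a, b in zip(G1, window)) % 2
                 + sum(a * b for a, b in zip(G2, window)) % 2)   # weight of this branch's output
            return (bit,) + state[:-1], w

        def weight_spectrum(max_weight=10):
            """Count the paths that diverge from S0 and re-merge with S0 exactly once,
            grouped by total output weight up to max_weight."""
            spectrum = Counter()
            start, w0 = step((0, 0, 0), 1)          # the first branch must leave S0 with input 1
            queue = deque([(start, w0)])
            while queue:
                state, weight = queue.popleft()
                if weight > max_weight:
                    continue                        # prune: weight only grows along a path
                for bit in (0, 1):
                    nxt, bw = step(state, bit)
                    if nxt == (0, 0, 0):            # re-merged with S0: record the path weight
                        if weight + bw <= max_weight:
                            spectrum[weight + bw] += 1
                    else:
                        queue.append((nxt, weight + bw))
            return dict(sorted(spectrum.items()))

        print(weight_spectrum())
        # expected to reproduce the leading terms of T(X): {6: 1, 7: 3, 8: 5, 9: 11, 10: 25}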

  • Slide 6/24

    Distance properties of convolutional codes

    Code strength is measured by the minimum free distance

    $d_{free} = \min \{ w(\mathbf{X}) \}$

    where w(X) is the weight of the entire encoded sequence X generated by a (nonzero) message sequence.

    The minimum free distance denotes:

    the minimum weight of all the paths in the state diagram that diverge from and remerge with the all-zero state S0;

    the lowest power of the generating function T(X):

    $T(X) = \sum_i A_i X^i = X^6 + 3X^7 + 5X^8 + 11X^9 + 25X^{10} + \ldots \;\Rightarrow\; d_{free} = 6$

    Coding gain: $G_c = k\, d_{free} / (2n) > 1$ (a worked value follows below).
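
    Plugging in the numbers above (k = 1, n = 2, d_free = 6), a quick check of the coding-gain formula:

        $G_c = \dfrac{k\, d_{free}}{2n} = \dfrac{1 \cdot 6}{2 \cdot 2} = 1.5 \approx 1.76\ \text{dB}$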

  • Slide 7/24

    Decoding convolutional codes

    Maximum likelihood decoding of convolutional codes means finding the code branch in the code trellis that was most likely transmitted. Therefore maximum likelihood decoding is based on calculating the Hamming distances for each branch forming the encoded word.

    Assume that the information symbols applied to the AWGN channel are equally likely and independent.

    Let us denote by x the transmitted bits (no errors) and by y the received bits.

    The probability of receiving the sequence y, provided x was transmitted, is then the product of the per-bit transition probabilities (written out on slide 9).

    The most likely path through the trellis will maximize this metric.

    Also, the following log-likelihood metric is maximized (the logarithm is monotonic, so it turns the product of probabilities into a sum without changing which path is best).

  • Slide 8/24

    Example of exhaustive maximum likelihood detection

    Assume a three-bit message is to be transmitted. To clear the encoder, two zero bits are appended after the message. Thus 5 bits are inserted into the encoder and 10 bits are produced. Assume the channel error probability is p = 0.1. After the channel, 10 01 10 11 00 is received. What comes out of the decoder, i.e. what was most likely the transmitted sequence?

  • Slide 9/24

    $p(\mathbf{y}, \mathbf{x}_m) = \prod_{j} p(y_j \mid x_{mj})$

    $\ln p(\mathbf{y}, \mathbf{x}_m) = \sum_{j} \ln p(y_j \mid x_{mj})$

    (On the slide the terms of the sum are annotated: bits received correctly contribute the weight ln(1 - p), and bits received in error contribute the weight ln p.)

  • Slide 10/24

    Note also the Hamming distances!

    correct bits: 1 + 1 + 2 + 2 + 2 = 8;  8 × ln(0.9) ≈ 8 × (−0.11) = −0.88

    bits in error: 1 + 1 + 0 + 0 + 0 = 2;  2 × ln(0.1) ≈ 2 × (−2.30) = −4.6

    total path metric: −0.88 − 4.6 = −5.48

    This is the largest metric; verify that you get the same result! (A sketch of the computation follows below.)
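
    A small sketch of this metric computation in Python, for the received word 10 01 10 11 00 from slide 8 and p = 0.1. The candidate codeword used here is only a hypothetical word at Hamming distance 2 from the received one; any candidate at that distance gives the same metric.

        import math

        def path_metric(candidate, received, p):
            """Log-likelihood metric ln p(y|x) over a BSC with crossover probability p."""
            errors = sum(c != r for c, r in zip(candidate, received))
            correct = len(received) - errors
            return correct * math.log(1 - p) + errors * math.log(p)

        received  = "1001101100"    # 10 01 10 11 00 (slide 8)
        candidate = "1101111100"    # hypothetical codeword differing in 2 positions
        print(round(path_metric(candidate, received, 0.1), 2))
        # -> -5.45  (the slide rounds ln 0.9 to -0.11 and ln 0.1 to -2.30, giving -5.48)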

  • Slide 11/24

    Soft and hard decoding

    Regardless of whether the channel outputs hard or soft decisions, the decoding rule remains the same: maximize the probability (the log-likelihood metric below).

    However, in soft decoding the decision-region energies must be accounted for, and hence a Euclidean metric d_E, rather than the Hamming metric d_free, is used.

    [Figure: quantized decision regions; the transition for Pr[3|0] is indicated by the arrow.]

    $d_E = \sqrt{d_{free}\, E_b\, R_C}$

    $\ln p(\mathbf{y}, \mathbf{x}_m) = \sum_{j} \ln p(y_j \mid x_{mj})$

  • Slide 12/24

    Decision regions

    Coding can be realized by the soft-decoding or the hard-decoding principle.

    For soft decoding the reliability (measured by bit energy) of each decision region must be known.

    Example: decoding a BPSK signal. The matched filter output is a continuous number; in AWGN the matched filter output is Gaussian.

    For soft decoding several decision-region partitions are used (a small numerical sketch follows below).

    [Figure: transition probability Pr[3|0], i.e. the probability that a transmitted 0 falls into region no. 3.]
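
    A sketch (Python) of how such transition probabilities can be computed for an 8-level quantizer, assuming BPSK with symbol energy Es, noise variance N0/2, and evenly spaced thresholds; the region numbering and the threshold spacing are assumptions for illustration, not the slide's exact partition.

        import math

        def region_probabilities(n_regions=8, es=1.0, n0=1.0, step=0.5):
            """Pr[region | '0' sent] for BPSK where a 0 is transmitted as -sqrt(Es).
            Regions are separated by evenly spaced thresholds around zero (assumed layout)."""
            sigma = math.sqrt(n0 / 2)                       # AWGN standard deviation
            mean = -math.sqrt(es)                           # matched-filter mean when a 0 was sent
            inner = [step * (i - (n_regions - 2) / 2) for i in range(n_regions - 1)]
            edges = [-math.inf] + inner + [math.inf]        # region boundaries

            def cdf(x):                                     # Gaussian CDF centred on `mean`
                return 0.5 * (1 + math.erf((x - mean) / (sigma * math.sqrt(2))))

            return [cdf(edges[i + 1]) - cdf(edges[i]) for i in range(n_regions)]

        probs = region_probabilities()
        print(round(probs[3], 4))    # Pr[3|0]: probability that a transmitted 0 falls into region 3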

  • Slide 13/24

    The Viterbi algorithm

    The exhaustive maximum likelihood method must search all paths in the trellis of an (n, k, L) code, and their number grows exponentially with the message length.

    With the Viterbi algorithm the search is reduced to comparing surviving paths, where 2^L is the number of nodes and 2^k is the number of branches entering each node (see the next slide), i.e. at most 2^k · 2^L branch metrics per trellis stage.

    The problem of optimum decoding is to find the minimum distance path from the initial state back to the initial state (below, from S0 to S0). The minimum distance path maximizes the accumulated path metric

    $\ln p(\mathbf{y}, \mathbf{x}_m) = \sum_{j} \ln p(y_j \mid x_{mj})$

    where y is the channel output sequence at the RX and x_m is the TX encoder output sequence for the m:th path.

    The Viterbi algorithm gets its efficiency by concentrating on the survivor paths of the trellis.

  • Slide 14/24

    The survivor path

    Assume for simplicity a convolutional code with k = 1, so that up to 2^k = 2 branches can enter each node in the trellis diagram.

    Assume the optimal path passes through S. The metric comparison is done by adding the metric of S to those of S1 and S2. On the survivor path the accumulated metric is naturally smaller (otherwise it could not be the optimum path).

    For this reason the non-surviving path can be discarded, so not all path alternatives need to be considered.

    Note that in principle the whole transmitted sequence must be received before a decision is made. However, in practice storing the states for an input length of 5L is quite adequate.

    [Figure: trellis node with 2^k branches entering each of the 2^L nodes.]

  • Slide 15/24

    Example of using the Viterbi algorithm

    Assume the received sequence is

    y = 01 10 11 11 01 00 01

    and the (n, k, L) = (2, 1, 2) encoder shown below. Determine the Viterbi decoded output sequence!

    (Note that for this encoder the code rate is 1/2 and the memory depth is L = 2.)

    [Figure: the encoder and its trellis, with the states labeled.]

  • Slide 16/24

    The maximum likelihood path

    The decoded ML code sequence is 11 10 10 11 00 00 00, whose Hamming distance to the received sequence is 4, and the respective decoded message sequence is 1 1 0 0 0 0 0 (why?). Note that this is the minimum distance path; a decoder sketch that reproduces it follows below.

    (Black circles denote the deleted branches; dashed lines mean a '1' was applied.)

    [Figure: the decoded trellis. Annotations: branch Hamming distances are shown in parentheses; at each node the smaller accumulated metric is selected; the first depth with two entries to a node is marked; after the register length L + 1 = 3 the branch pattern begins to repeat.]
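
    A minimal hard-decision Viterbi decoder sketch in Python. The generator taps (101, 111), i.e. octal (5, 7), are an assumption (the encoder figure is not in the transcript), but they are consistent with the ML code sequence quoted above.

        def viterbi_decode(received_pairs, g1=(1, 0, 1), g2=(1, 1, 1)):
            """Hard-decision Viterbi decoding of a rate-1/2, memory-2 convolutional code.
            States are (u(t-1), u(t-2)); the path with the smallest Hamming distance survives."""
            def branch(state, bit):
                window = (bit,) + state
                v1 = sum(a * b for a, b in zip(g1, window)) % 2
                v2 = sum(a * b for a, b in zip(g2, window)) % 2
                return (bit,) + state[:-1], (v1, v2)

            INF = float("inf")
            metrics = {(0, 0): (0, [])}        # state -> (accumulated distance, input bits so far)
            for r in received_pairs:
                new = {}
                for state, (dist, bits) in metrics.items():
                    for bit in (0, 1):
                        nxt, out = branch(state, bit)
                        d = dist + (out[0] != r[0]) + (out[1] != r[1])
                        if d < new.get(nxt, (INF, None))[0]:
                            new[nxt] = (d, bits + [bit])
                metrics = new
            return metrics[(0, 0)]             # the encoder was flushed back to state 0

        y = [(0, 1), (1, 0), (1, 1), (1, 1), (0, 1), (0, 0), (0, 1)]   # received sequence (slide 15)
        print(viterbi_decode(y))
        # -> (4, [1, 1, 0, 0, 0, 0, 0]): distance 4, message 1 1 0 0 0 plus two flushing zeros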

  • Slide 17/24

    How to end the decoding?

    In the previous example it was assumed that the register was finally filled with zeros, thus finding the minimum distance path. In practice, with long code words, zeroing requires feeding a long sequence of zeros after the message bits: this wastes channel capacity and introduces delay.

    To avoid this, path memory truncation is applied:

    Trace all the surviving paths to the depth where they merge.

    The figure on the right shows a common point at a memory depth J.

    J is a random variable; the magnitude shown in the figure (about 5L) has been experimentally found to give a negligible error rate increase.

    Note that this also introduces a delay of 5L!

    [Figure: surviving paths merging after J > 5L stages of the trellis.]

  • Slide 18/24

    Error rate of convolutional codes:

    Weight spectrum and error-event probability

    The error rate depends on: the channel SNR; the input sequence length (the number of errors is scaled to the sequence length); and the code trellis topology.

    These determine which path in the trellis is followed while decoding.

    An error event happens when an erroneous path is followed by the decoder.

    All the paths producing errors have a distance at least as large as that of the path at distance d_free, so there exists an upper bound on the probability of following any erroneous path (the error-event probability):

    $p_e \le \sum_{d \ge d_{free}} a_d\, p_2(d)$

    where a_d is the number of paths (the weight spectrum) at the Hamming distance d and p_2(d) is the probability of the path at the Hamming distance d.

  • Slide 19/24

    Selected convolutional code gains

    The probability of selecting a path at the Hamming distance d depends on the decoding method. For antipodal (polar) signaling in an AWGN channel it is

    $p_2(d) = Q\!\left(\sqrt{\frac{2 E_b}{N_0} R_C\, d}\right), \qquad R_C = k/n$

    which can be simplified further for low error probability channels by remembering that the following bound then works well:

    $Q(x) \le \frac{1}{2}\exp(-x^2/2), \qquad x \ge 0$

    so each term of the error-event bound $p_e \le \sum_{d \ge d_{free}} a_d\, p_2(d)$ is at most $\frac{1}{2} a_d \exp(-E_b R_C d / N_0)$.

    Here is a table of selected convolutional codes and their associated coding gains $G_c = R_C d_f / 2$ (with $d_f = d_{free}$). [Table not reproduced in the transcript; a numerical sketch of the bound follows below.]
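
    A small numerical sketch (Python) of the error-event bound, combining $p_2(d) = Q(\sqrt{2 E_b R_C d / N_0})$ with the leading weight-spectrum terms quoted on slide 5 (a6..a10 = 1, 3, 5, 11, 25) for the rate-1/2 example code. The Eb/N0 value is an arbitrary illustration.

        import math

        def qfunc(x):
            """Gaussian tail function Q(x)."""
            return 0.5 * math.erfc(x / math.sqrt(2))

        def error_event_bound(ebno_db, rc, spectrum):
            """Truncated union bound p_e <= sum_d a_d * Q(sqrt(2 * Eb/N0 * Rc * d))."""
            ebno = 10 ** (ebno_db / 10)
            return sum(a_d * qfunc(math.sqrt(2 * ebno * rc * d)) for d, a_d in spectrum.items())

        spectrum = {6: 1, 7: 3, 8: 5, 9: 11, 10: 25}       # leading terms of T(X) from slide 5
        print(f"{error_event_bound(5.0, 0.5, spectrum):.2e}")   # bound at Eb/N0 = 5 dB (illustrative)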

  • Slide 20/24

    The error-weighted distance spectrum and the bit-error rate

    The BER is obtained by multiplying the error-event probability by the number of data-bit errors associated with each error event. Therefore the BER is upper bounded (for instance, for polar signaling) by

    $p_b \le \sum_{d \ge d_{free}} e_d\, p_2(d)$

    where e_d is the error-weighted distance spectrum: the number of paths a_d (the weight spectrum) at the Hamming distance d, weighted by the number of data-bit errors for the path at the Hamming distance d.

    Note: This bound is very loose for low-SNR channels. It has been found by simulations that partial bounds, e.g. taking 3 to 10 terms of the summation in the p_b expression above, yield a good estimate of the BER.

  • Slide 21/24

    Punctured Convolutional Codes

    Puncturing is the process of systematically deleting, or not sending, some output bits of a low-rate encoder. Since the trellis structure of the low-rate encoder remains the same, the number of information bits per sequence does not change; the output sequences belong to a higher-rate punctured convolutional (PC) code.

    A puncturing matrix P specifies the rules of deletion of output bits. P is a binary matrix with kn entries; its binary symbols p_ij indicate whether the corresponding output bit is transmitted (p_ij = 1) or not (p_ij = 0).

    A rate-k/np PC encoder based on a rate-1/n encoder has a puncturing matrix P that contains l zero entries, where np = kn − l, 0 ≤ l < kn.

  • Slide 22/24

    Punctured Convolutional Codes

    Ex: A rate-2/3, memory-2 convolutional code can be constructed by puncturing the output bits of the rate-1/2, memory-2 convolutional encoder according to the puncturing matrix

    $P = \begin{bmatrix} 1 & 1 \\ 1 & 0 \end{bmatrix}$
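
    A minimal puncturing sketch in Python: it applies the matrix P above, column by column, to the output pairs of a rate-1/2 encoder, keeping only the bits flagged with 1. The example bit stream is arbitrary.

        def puncture(pairs, P=((1, 1), (1, 0))):
            """Delete rate-1/2 encoder output bits according to puncturing matrix P.
            Column j of P (with period len(P[0])) tells which of the two bits of pair j are sent."""
            period = len(P[0])
            sent = []
            for j, (v0, v1) in enumerate(pairs):
                col = j % period
                if P[0][col]:
                    sent.append(v0)
                if P[1][col]:
                    sent.append(v1)
            return sent

        pairs = [(1, 1), (1, 0), (0, 1), (0, 1)]   # example rate-1/2 encoder output (arbitrary)
        print(puncture(pairs))                     # 8 coded bits in -> 6 bits sent: rate 2/3
        # -> [1, 1, 1, 0, 1, 0]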

  • Slide 23/24

    Punctured Convolutional Codes

    One of the goals of puncturing is that the same decoder can be used for a variety of high-rate codes. One way to achieve decoding of a PC code using the Viterbi decoder of the low-rate code is to insert "deleted" symbols in the positions that were not sent.

    The "deleted" symbols are marked by a special flag (i.e., a flag bit set to 1).

    If a position is flagged, then the corresponding received symbol is not taken into account in the branch metric computation (see the sketch below).

  • Slide 24/24

    Punctured Convolutional Codes

    Puncturing matrices are employed with the memory-6, rate-1/2 convolutional code with generators (g0, g1) = (171, 133) (octal): v_i(m) indicates the output, at time m, associated with generator g_i, i = 0, 1.