Maximum-likelihood (ML) decoding (eirik/INF244/Lectures/Lecture02.pdf)


Page 1

Maximum-likelihood (ML) decoding

• Decoder: Must determine v’ to minimize P(E|r)=P(v’≠v|r)

• The probability of error is P(E) = ∑r P(E|r)P(r)

• P(r) is independent of decoding ⇒ optimum decoding must

• minimize P(v’≠v|r) for all r

• maximize P(v’=v|r) for all r

• choose v’ as the codeword v that maximizes

P(v|r) = P(r|v)P(v) / P(r)

• i.e. (if P(v) is the same for all v) that maximizes P(r|v)

u → ECC encoder → v → Channel → r = v + n → ECC decoder → v', u'

Page 2

ML decoding (cont.)

Memoryless channel:

• ML decoder: Maximize P(r|v) = Πj P(rj|vj)

• Alternatively, choose v to maximize log P(r|v) = ∑j log P(rj|vj)

• The ML decoder is optimal if and only if all v are equally probable as input vectors. Otherwise, P(r|v) must be weighted by the codeword probabilities P(v)
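A minimal brute-force sketch of this rule (not from the lecture): pick the codeword with the largest sum of per-symbol log-likelihoods. The names ml_decode and log_lh, and the toy 4-codeword code in the example, are illustrative assumptions; enumerating all codewords is only feasible for small codes.

    import math

    def ml_decode(r, codewords, log_lh):
        # Brute-force ML decoding over a memoryless channel:
        # pick the codeword v maximizing log P(r|v) = sum_j log P(r_j|v_j).
        best_v, best_metric = None, -math.inf
        for v in codewords:
            metric = sum(log_lh(rj, vj) for rj, vj in zip(r, v))
            if metric > best_metric:
                best_v, best_metric = v, metric
        return best_v

    # Example channel: a BSC with crossover probability p = 0.1
    p = 0.1
    bsc_log_lh = lambda rj, vj: math.log(1 - p) if rj == vj else math.log(p)
    code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(ml_decode((1, 1, 1), code, bsc_log_lh))   # -> (0, 1, 1); ties with 101 and 110 broken by list order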

Page 3

ML decoding on the BSC

BSC:

• P(rj|vj) = 1-p if rj=vj and p otherwise

• log P(r|v) = ∑j log P(rj|vj)

• Hamming distance: Let r and v differ in d(r,v) positions

• ∑j log P(rj|vj) = d(r,v) log p + (n − d(r,v)) log(1−p) = d(r,v) log(p/(1−p)) + n log(1−p)

• log (p/(1-p)) < 0 for p < 0.5, so an ML decoder for a BSC must choose v to minimize d(r,v)
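A minimal sketch of this minimum-distance rule (illustrative function names, not from the slides; ties are broken by list order):

    def hamming_distance(r, v):
        # Number of positions in which r and v differ
        return sum(rj != vj for rj, vj in zip(r, v))

    def ml_decode_bsc(r, codewords):
        # On a BSC with p < 0.5, ML decoding = minimum Hamming distance decoding
        return min(codewords, key=lambda v: hamming_distance(r, v))

    code = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 0)]
    print(ml_decode_bsc((1, 1, 1), code))   # -> (0, 1, 1); 101 and 110 are equally close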

Page 4

Channel capacity

Shannon (1948):

• Every channel has a capacity C (determined by the noise and the input power and bandwidth constraints)

• Eb(R) and Ec(R) are positive functions of R for R<C

• There exists a block code of length n such that with ML decoding

P(E) ≤ 2^(−nEb(R))

• Similarly for convolutional codes: P(E) ≤ 2^(−(m+1)nEc(R))

• In fact, the average code performs like this; the proof is non-constructive, based on random coding

• But ML decoding for long random codes is infeasible!

Page 5

Performance measures

Trade-off of main parameters:

• Code rate

• Error probability

• Word error rate (WER, FER, BLER)

• Bit error rate (BER)

for a given channel and channel quality

• Decoding complexity

Performance is often displayed as a curve of an error rate as a function of channel quality
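As a rough illustration of how one point on such a curve can be obtained, here is a minimal Monte Carlo sketch for uncoded BPSK over an AWGN channel (the modulation and channel assumed on the later slides). The function name, parameters, and normalization are illustrative assumptions, not part of the lecture.

    import numpy as np

    def uncoded_bpsk_ber(ebn0_db, n_bits=1_000_000, seed=0):
        # Monte Carlo estimate of the BER of uncoded BPSK over an AWGN channel.
        # Uncoded: R = 1, so Eb = Es; with Es normalized to 1 the per-dimension
        # noise variance is N0/2 = 1 / (2 Eb/N0).
        rng = np.random.default_rng(seed)
        ebn0 = 10.0 ** (ebn0_db / 10.0)
        bits = rng.integers(0, 2, n_bits)
        s = 1.0 - 2.0 * bits                          # BPSK mapping: 0 -> +1, 1 -> -1
        r = s + rng.normal(0.0, np.sqrt(1.0 / (2.0 * ebn0)), n_bits)
        return np.mean((r < 0).astype(int) != bits)   # hard decisions, then count bit errors

    # One point of an error-rate curve, e.g. at Eb/N0 = 6 dB
    print(uncoded_bpsk_ber(6.0))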

Page 6

Error rate curves

[Figure: error-rate curves, plotting the log of the error rate against SNR Eb/N0 in dB (Eb = Es/R); the uncoded curve and the Shannon limit are shown, and the coding threshold and coding gain are marked.]

Page 7

Asymptotic coding gain

[Figure: log error rate versus SNR Eb/N0 (dB) for uncoded and coded transmission, indicating the coding gain and the asymptotic coding gain.]

Page 8

Asymptotic coding gain

For high SNR (BPSK modulation and AWGN channel) with soft-decision decoding:

p_uncoded ≈ (1/2) e^(−Eb/N0)

p_coded,SD ≈ K_code e^(−d_min R Eb/N0)

Setting these equal (same error probability at high SNR) gives

(Eb/N0)_uncoded / (Eb/N0)_coded = R d_min

Asymptotic coding gain: R d_min, or 10 log10(R d_min) dB. For HD decoding: 10 log10(R d_min / 2) dB. Thus, SD gives 10 log10 2 ≈ 3 dB better ACG than HD
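As a quick numeric check of these expressions (a sketch; the function name and the rate and minimum distance used below are example values only, not taken from the lecture):

    import math

    def acg_db(R, d_min, soft=True):
        # Asymptotic coding gain in dB: 10 log10(R d_min) with soft decisions,
        # 10 log10(R d_min / 2) with hard decisions (high-SNR BPSK/AWGN).
        return 10.0 * math.log10(R * d_min if soft else R * d_min / 2.0)

    # Example: a rate-1/2 code with d_min = 4
    print(acg_db(0.5, 4, soft=True))    # ~3.0 dB
    print(acg_db(0.5, 4, soft=False))   #  0.0 dB, i.e. 3 dB less than soft decisions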

Page 9

Performance close to the Shannon limit

[Figure: log error rate versus SNR Eb/N0 (dB), comparing the uncoded curve, a classical code, and a turbo or LDPC code that performs close to the Shannon limit.]

Page 10

Coded modulation

• Encoding + modulation:

• Need a distance-preserving modulation mapping, so that the distances between different codewords are preserved after modulation

• Thus, we can view codes also in the modulation domain

• Combined coding and modulation: Design codes specifically to increase distance

• Exploit that in a large signal constellation some points are further apart

• No bandwidth expansion with coding, but the constellation is expanded compared to uncoded modulation

Page 11

Coded modulation

• Some schemes work on a signal constellation by

1. Encode some input bits by an ECC. Let the output bits determine a subconstellation

2. Let the remaining input bits determine a point in the subconstellation (see the sketch below)

• TCM – coded modulation based on a convolutional ECC

• BCM – coded modulation based on block ECC

• Also, coded modulation with turbo codes and LDPC codes
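A minimal sketch of the two-step mapping described above, assuming an 8-PSK constellation partitioned into four antipodal-pair subconstellations. The bit widths, labeling, and function names are illustrative assumptions, not the specific TCM or BCM construction.

    import cmath, math

    def psk8_point(index):
        # The 8-PSK constellation point with the given index (0..7)
        return cmath.exp(2j * math.pi * index / 8)

    def coded_modulation_symbol(coded_bits, uncoded_bit):
        # Step 1: two ECC output bits select one of four subconstellations
        # {s, s+4}; each such pair is antipodal, so the points inside a
        # subconstellation are maximally far apart.
        subset = 2 * coded_bits[0] + coded_bits[1]
        # Step 2: the remaining (uncoded) input bit picks a point in the subset.
        return psk8_point(subset + 4 * uncoded_bit)

    print(coded_modulation_symbol((1, 0), 1))   # the point with index 2 + 4 = 6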

Page 12

Trellises of linear block codes (CH 9)

A representation that facilitates soft-decision decoding

Recall:

• A linear block code is the row space of a generator matrix

• Example:

The [3,2] code with codewords 000, 011, 101, 110 (the length-3 words of even weight)

[Figure: sender/receiver illustration with labels E and O]
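A minimal sketch of the "row space" view, assuming one particular generator matrix whose row space is the example code above; the matrix and the function name are illustrative, not taken from the slides.

    from itertools import product

    def codewords(G):
        # The code is the row space of G over GF(2): encode every
        # message u of length k as v = uG (mod 2).
        k, n = len(G), len(G[0])
        return [tuple(sum(u[i] * G[i][j] for i in range(k)) % 2 for j in range(n))
                for u in product([0, 1], repeat=k)]

    # One possible generator matrix for the example code above
    G = [[1, 0, 1],
         [0, 1, 1]]
    print(codewords(G))   # [(0,0,0), (0,1,1), (1,0,1), (1,1,0)]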

Page 13

Trellises

A trellis is a directed graph:

• A set Γ of depths or time instants, ordered (usually) from 0 to n

• At each time instant, a set of nodes (vertices) representing the (code) states at that time instant. Usually (in an ordinary block code) there is one initial state s0 at time 0 and one final state sf at time n

• Edges can go from a state at time i to a state at time i+1

• Each edge is labeled by one (or more) symbol(s) from the code alphabet (usually binary)

• A sequence of edge labels obtained by traversing the trellis from s0 to sf is a codeword
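A minimal sketch (not from the lecture) of how such a trellis can be stored as a directed graph and how traversing it from s0 to sf reproduces the codewords; the two-state example and the E/O state names are illustrative.

    def trellis_codewords(sections, s0, sf):
        # sections[i] lists the edges (state_i, state_i+1, label) of the trellis
        # section from depth i to depth i+1. A codeword is the sequence of edge
        # labels along a path from the initial state s0 to the final state sf.
        partial = {s0: [()]}                  # state at current depth -> label sequences
        for edges in sections:
            nxt = {}
            for s, t, label in edges:
                for word in partial.get(s, []):
                    nxt.setdefault(t, []).append(word + (label,))
            partial = nxt
        return partial.get(sf, [])

    # A two-state trellis (states E = even, O = odd running parity) whose paths
    # spell out the codewords 000, 011, 101, 110 of the example code
    sections = [
        [("E", "E", 0), ("E", "O", 1)],
        [("E", "E", 0), ("E", "O", 1), ("O", "O", 0), ("O", "E", 1)],
        [("E", "E", 0), ("O", "E", 1)],
    ]
    print(trellis_codewords(sections, "E", "E"))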

Page 14

Linear trellises

• Necessary (but not sufficient) conditions for the corresponding code to be linear:

• There exists an output function Oi = fi(si, Ii) where

• fi(si, Ii) ≠ fi(si, I'i) for Ii ≠ I'i

• Oi is the output block from time i to time i+1

• Ii is the input block from time i to time i+1

• si is the state at time i

• There exists a state transition function si+1= gi(si, Ii)

Page 15

More properties of linear trellises

• In the trellis of a linear code, the set of states Σi at time i is called the state space

• A trellis is time-invariant iff ∃

• A finite period (initial delay) ν ∈ Γ

• An output function f and a state transition function g

• A ”template” state space Σ

such that

• Σi ⊂ Σ for 0 ≤ i < ν, and Σi = Σ for i ≥ ν

• fi = f and gi = g for all i ∈Γ

Page 16

Bit-level trellises of linear block codes

• [n,k] linear block code C

• Bit-level trellis: n+1 time instants and n trellis sections

• One initial state s0 at time 0 and one final state sf at time n

• For each time i > 0, there is a fixed number Incoming(i) of incoming branches. For all i, Incoming(i) is 1 or 2. Two branches going to the same state have different labels

• For each time i < n, there is a fixed number Outgoing(i) of outgoing branches. For all i, Outgoing(i) is 1 or 2. Two branches coming from the same state have different labels

• Each codeword corresponds to a distinct path from s0 to sf

Page 17

Bit-level trellises of linear block codes

• The number |Σi| is (sometimes) called the state space complexity at time instant i. For a linear code, we will show that |Σi|, for all i ∈ Γ, is a power of 2

• Thus, we can define the state space dimension

ρi = log2|Σi|

• The sequence (ρ0 = 0, ρ1, ..., ρi, ..., ρn = 0) is called the state space dimension profile, and determines the complexity of an ML (soft-decision) decoder for the code

Page 18

Generator matrix: trellis-oriented (TO) form

G  = [ 1 1 1 1 1 1 1 1
       0 1 0 1 0 1 0 1
       0 0 1 1 0 0 1 1
       0 0 0 0 1 1 1 1 ]

G' = [ 1 1 1 1 0 0 0 0
       0 1 0 1 1 0 1 0
       0 0 1 1 1 1 0 0
       0 0 0 0 1 1 1 1 ]
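As a sketch of how the TO form connects to the state space dimension profile of the previous page, the following assumes the standard row-span rule (a row is "active" at the time boundaries strictly after its first nonzero position, up to and including its last); the function name is illustrative and the rule is stated here as an assumption rather than derived.

    def state_dimension_profile(G):
        # For a trellis-oriented generator matrix, take each row's span
        # (first nonzero position a, last nonzero position b); the row is
        # active at the time boundaries a+1, ..., b, and rho_i is the number
        # of rows active at boundary i.
        n = len(G[0])
        spans = []
        for row in G:
            support = [j for j, bit in enumerate(row) if bit]
            spans.append((support[0], support[-1]))
        return [sum(a < i <= b for a, b in spans) for i in range(n + 1)]

    G_to = [                                   # the TO-form matrix G' above
        [1, 1, 1, 1, 0, 0, 0, 0],
        [0, 1, 0, 1, 1, 0, 1, 0],
        [0, 0, 1, 1, 1, 1, 0, 0],
        [0, 0, 0, 0, 1, 1, 1, 1],
    ]
    print(state_dimension_profile(G_to))       # [0, 1, 2, 3, 2, 3, 2, 1, 0]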