Conv Codes
Transcript of Conv Codes
Channel Coding I
Dr.-Ing. Dirk Wübben
Institute for Telecommunications and High-Frequency Techniques
Department of Communications Engineering
Room: N2300, Phone: 0421/218-62385
www.ant.uni-bremen.de/courses/cc1/
Lecture
Tuesday, 08:30 - 10:00 in S1270
Exercise
Wednesday, 14:00 - 16:00 in N2420
Dates for exercises will be announced during lectures.
Tutor
Carsten Bockelmann
Room: N2370
Email: [email protected]
Outline Channel Coding I
1. Introduction
Declarations and definitions, general principle of channel coding
Structure of digital communication systems
2. Introduction to Information Theory
Probabilities, measure of information
Shannon's channel capacity for different channels
3. Linear Block Codes
Properties of block codes and general decoding principles
Bounds on error rate performance
Representation of block codes with generator and parity check matrices
Cyclic block codes (CRC-Code, Reed-Solomon and BCH codes)
4. Convolutional Codes
Structure, algebraic and graphical presentation
Distance properties and error rate performance
Optimal decoding with Viterbi algorithm
Convolutional Codes
Basics
Implementation of encoder and algebraic description
Graphical presentation in finite state diagram and trellis diagram
Classification of convolutional codes
Non-recursive and recursive convolutional encoders
Catastrophic convolutional codes
Truncated, terminated and tailbiting convolutional codes
Optimal decoding
MAP and ML criterion
Viterbi algorithm
Puncturing of convolutional codes
Distance properties of convolutional codes
Error rate performance
Basics of Convolutional Codes (Faltungscodes)
Shift register (Schieberegister) structure with Lc·k memory elements → the memory leads to statistical dependence of successive code words
In each cycle k bits are shifted in → each bit affects the output word Lc times
Lc is called constraint length (Einflusslänge), memory depth m = Lc − 1
Coded symbols are calculated by modulo-2 additions of memory contents → generators
Code word contains n bits → code rate Rc = k/n
Our further investigation is restricted to codes with rate Rc = 1/n, i.e. k = 1!

[Figure: shift register with Lc stages of k bits; the n outputs x1, …, xn are modulo-2 sums of register taps]
Structure and Encoding
Example: (n, k, Lc) = (2,1,3) convolutional code with generators g₁ = 7₈ and g₂ = 5₈
Encoder is non-systematic and non-recursive (NSC code)
Rc = 1/2, Lc = 3 → m = 2
Information word u = [1 0 0 1 1 | 0 0] → tail bits to finish in state 0
[Figure: encoder shift register with taps g1,0 = g1,1 = g1,2 = 1 (g₁ = 7₈) and g2,0 = 1, g2,1 = 0, g2,2 = 1 (g₂ = 5₈); register contents u(l), u(l−1), u(l−2) per clock cycle]

u(l)  state  next state  output
 1     00       10         11
 0     10       01         10
 0     01       00         11
 1     00       10         11
 1     10       11         01
 0     11       01         01
 0     01       00         11

Resulting code sequence: x = [11 10 11 11 01 01 11]
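The encoding table can be reproduced in a few lines. A minimal Python sketch (not part of the slides; `nsc_encode` is an illustrative name):

```python
# Sketch of the (2,1,3) NSC encoder with g1 = 7_8 = [1 1 1] and g2 = 5_8 = [1 0 1]
G = [[1, 1, 1],   # taps g_{1,0}, g_{1,1}, g_{1,2}
     [1, 0, 1]]   # taps g_{2,0}, g_{2,1}, g_{2,2}

def nsc_encode(u, G):
    """Encode the bit list u (tail bits already appended)."""
    m = len(G[0]) - 1
    reg = [0] * m                       # register holds u(l-1), ..., u(l-m)
    x = []
    for bit in u:
        window = [bit] + reg            # u(l), u(l-1), ..., u(l-m)
        x.append([sum(g[i] & window[i] for i in range(m + 1)) % 2 for g in G])
        reg = [bit] + reg[:-1]          # shift the register by one position
    return x

u = [1, 0, 0, 1, 1, 0, 0]               # information word incl. two tail bits
print(nsc_encode(u, G))                 # [[1,1],[1,0],[1,1],[1,1],[0,1],[0,1],[1,1]]
```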
Equivalence of block codes and convolutional codes
Convolutional codes:
k information bits are mapped onto a code word x comprising n bits
Code words x are interdependent due to the memory
Block codes:
Code words x are independent
Block codes are convolutional codes without memory
Only convolutionally encoded sequences of finite length are considered in practice
A finite convolutionally encoded sequence can be viewed as a single code word generated by a block code
Convolutional codes are a special case of block codes
Properties of Convolutional Codes
Only a small number of simple convolutional codes is of practical interest
Convolutional codes are not constructed by algebraic methods but by computer search (with the advantage of a simple mathematical description)
Convolutional decoders can easily process soft-decision input and compute soft-decision output (only hard-decision decoding has been considered for block codes)
Similar to block codes, systematic and non-systematic encoders are distinguished for convolutional codes (mostly non-systematic convolutional encoders are of practical interest)
Algebraic Description (1)
Description by generators (Generatoren) gj (octal)
Example code with Lc = 3 and Rc = 1/2:

$$\mathbf{g}_1 = [\,g_{1,0}\;g_{1,1}\;g_{1,2}\,] = [\,1\;1\;1\,] = 7_8, \qquad \mathbf{g}_2 = [\,g_{2,0}\;g_{2,1}\;g_{2,2}\,] = [\,1\;0\;1\,] = 5_8$$

Encoding by discrete convolution in GF(2), generally for j = 1, …, n:

$$x_j(\ell) = \sum_{i=0}^{m} g_{j,i}\,u(\ell - i) \bmod 2, \qquad\text{i.e.}\quad x_1 = u * g_1,\quad x_2 = u * g_2$$

Octal description: Left-MSB (not always in literature)
If Lc is not a multiple of 3, append zeros from the left
Example: g = [1 0 0 1 1] → 010 011 → 23₈
Problem: in the literature you will sometimes find Right-MSB; sometimes zeros are appended from the right

[Figure: encoder block diagram with register contents u(ℓ), u(ℓ−1), u(ℓ−2) and outputs x1(ℓ), x2(ℓ)]
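The Left-MSB convention can be checked programmatically; a small sketch (`taps_to_octal` is an illustrative helper, not from the slides):

```python
def taps_to_octal(g):
    """Left-MSB octal notation: prepend zeros until the length is a multiple of 3."""
    g = [0] * (-len(g) % 3) + g         # e.g. [1,0,0,1,1] -> [0,1,0,0,1,1]
    return ''.join(str(int(''.join(map(str, g[i:i + 3])), 2))
                   for i in range(0, len(g), 3))

print(taps_to_octal([1, 1, 1]))         # '7'  -> g1 = 7_8
print(taps_to_octal([1, 0, 1]))         # '5'  -> g2 = 5_8
print(taps_to_octal([1, 0, 0, 1, 1]))   # '23' -> 23_8, as in the example
```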
Algebraic Description (2)
z-transform (delay: z⁻¹) → D-transform (delay: D)

$$X(z) = \sum_{i \ge 0} x(i)\,z^{-i} \qquad\longrightarrow\qquad X(D) = \sum_{i \ge 0} x(i)\,D^{i}$$

Generator polynomials

$$G_j(D) = \sum_{i=0}^{m} g_{j,i}\,D^i, \qquad\text{here}\quad G_1(D) = g_{1,0} + g_{1,1}D + g_{1,2}D^2 = 1 + D + D^2, \quad G_2(D) = 1 + D^2$$

Encoding (polynomial multiplication)

$$X_j(D) = U(D)\,G_j(D) \qquad\text{for } j = 1, \dots, n$$

Encoded sequence

$$\mathbf{X}(D) = \left[\,X_1(D)\;X_2(D)\,\cdots\,X_n(D)\,\right] = U(D)\,\mathbf{G}(D)$$

with generator matrix

$$\mathbf{G}(D) = \left[\,G_1(D)\;G_2(D)\,\cdots\,G_n(D)\,\right]$$

Code space

$$\mathbb{X} = \left\{\, U(D)\,\mathbf{G}(D) \;\middle|\; u(i) \in \mathrm{GF}(2) \,\right\}$$
Algebraic Description (3)
Example: u = [1 0 0 1 1]
Generator polynomials

$$G_1(D) = 1 + D + D^2, \qquad G_2(D) = 1 + D^2, \qquad U(D) = 1 + D^3 + D^4$$

Encoding

$$X_1(D) = U(D)\,G_1(D) = (1 + D^3 + D^4)(1 + D + D^2) = 1 + D + D^2 + D^3 + D^6$$

$$X_2(D) = U(D)\,G_2(D) = (1 + D^3 + D^4)(1 + D^2) = 1 + D^2 + D^3 + D^4 + D^5 + D^6$$

$$\mathbf{x} = [\,11\;10\;11\;11\;01\;01\;11\,]$$
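The encoding above is an ordinary polynomial multiplication with coefficient arithmetic in GF(2). A short sketch to verify the example (illustrative name `gf2_polymul`):

```python
def gf2_polymul(a, b):
    """Multiply two GF(2) polynomials, given as coefficient lists in ascending powers of D."""
    prod = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            prod[i + j] ^= ai & bj      # addition in GF(2) is XOR
    return prod

U  = [1, 0, 0, 1, 1]                    # U(D)  = 1 + D^3 + D^4
G1 = [1, 1, 1]                          # G1(D) = 1 + D + D^2
G2 = [1, 0, 1]                          # G2(D) = 1 + D^2
print(gf2_polymul(U, G1))               # [1,1,1,1,0,0,1] -> 1 + D + D^2 + D^3 + D^6
print(gf2_polymul(U, G2))               # [1,0,1,1,1,1,1] -> 1 + D^2 + D^3 + D^4 + D^5 + D^6
```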
Interpretation as a Block Code
Encoding can (alternatively) be described by matrix multiplication: x = u·G
The input sequence is not necessarily finite, thus the generator matrix G of convolutional codes is semi-infinite (here for n = 2):

$$\mathbf{G} = \begin{bmatrix} g_{1,0}\,g_{2,0} & g_{1,1}\,g_{2,1} & \cdots & g_{1,m}\,g_{2,m} & & \\ & g_{1,0}\,g_{2,0} & g_{1,1}\,g_{2,1} & \cdots & g_{1,m}\,g_{2,m} & \\ & & \ddots & & & \ddots \end{bmatrix}$$

Example from previous slide:

$$\mathbf{x} = [\,1\;0\;0\;1\;1\,]\cdot\begin{bmatrix} 11 & 10 & 11 & & & & \\ & 11 & 10 & 11 & & & \\ & & 11 & 10 & 11 & & \\ & & & 11 & 10 & 11 & \\ & & & & 11 & 10 & 11 \end{bmatrix} = [\,11\;10\;11\;11\;01\;01\;11\,]$$

G is a convolution matrix → Toeplitz structure
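The block-code view can be verified numerically. A sketch assuming numpy, building the terminated K × (K+m)·n generator matrix of the example (illustrative helper name):

```python
import numpy as np

def conv_generator_matrix(G_taps, K):
    """Build the K x (K+m)*n generator matrix of a terminated convolutional code."""
    n, m = len(G_taps), len(G_taps[0]) - 1
    Gi = np.array(G_taps).T             # Gi[i] = [g_{1,i} ... g_{n,i}]
    Gmat = np.zeros((K, (K + m) * n), dtype=int)
    for row in range(K):                # place [G_0 G_1 ... G_m], shifted by n per row
        Gmat[row, row * n:(row + m + 1) * n] = Gi.flatten()
    return Gmat

G_taps = [[1, 1, 1], [1, 0, 1]]         # g1 = 7_8, g2 = 5_8
u = np.array([1, 0, 0, 1, 1])
print(u @ conv_generator_matrix(G_taps, len(u)) % 2)
# [1 1 1 0 1 1 1 1 0 1 0 1 1 1]  ->  11 10 11 11 01 01 11
```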
Graphical Presentation in the Finite State Diagram
Convolutional encoder can be interpreted as a Mealy state machine:
Output signal depends on current state and current input signal: x(ℓ) = fx(u(ℓ), S(ℓ))
Next state S(ℓ+1) depends on current state and current input: S(ℓ+1) = fS(u(ℓ), S(ℓ))
Description by state transition diagram (Zustandsdiagramm) with 2^m states
Example: (2,1,3) NSC code with generators g₁ = 7₈ and g₂ = 5₈
[Figure: state diagram with states 00, 10, 01, 11 and transitions labeled u/[x1 x2]: from 00: 0/00 → 00, 1/11 → 10; from 10: 0/10 → 01, 1/01 → 11; from 01: 0/11 → 00, 1/00 → 10; from 11: 0/01 → 01, 1/10 → 11]
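The Mealy machine can be tabulated directly; the same tables define the trellis branches of the next slide. A sketch (state packed as an integer with u(ℓ−1) as MSB; names illustrative):

```python
def mealy_tables(G_taps):
    """Next-state and output tables of an NSC encoder; state = (u(l-1), ..., u(l-m))."""
    m = len(G_taps[0]) - 1
    nxt, out = {}, {}
    for s in range(2 ** m):
        reg = [(s >> (m - 1 - i)) & 1 for i in range(m)]   # reg[0] = u(l-1), ...
        for u in (0, 1):
            window = [u] + reg
            out[s, u] = [sum(g[i] & window[i] for i in range(m + 1)) % 2 for g in G_taps]
            nxt[s, u] = (u << (m - 1)) | (s >> 1)          # shift the new bit in as MSB
    return nxt, out

nxt, out = mealy_tables([[1, 1, 1], [1, 0, 1]])
for (s, u), x in sorted(out.items()):
    print(f"state {s:02b}, input {u} -> output {x}, next state {nxt[s, u]:02b}")
```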
Graphical Presentation in the Trellis Diagram
The finite state diagram does not contain any information about time
Expanding the state diagram by a temporal component results in the trellis diagram
The trellis starts in state S0 and is fully developed after Lc time steps
Example: (2,1,3) NSC code with generators g₁ = 7₈ and g₂ = 5₈
[Figure: trellis diagram over ℓ = 0, …, 4 with states 00, 10, 01, 11; branch labels u/[x1 x2] as in the state diagram]
Classification of Convolutional Codes
Non-recursive, non-systematic Convolutional Encoders (NSC-Encoders)
Non-systematic encoders
No separation between information bits and parity bits within the code word
Higher performance than systematic encoders
Usually applied in practice
Systematic Convolutional Encoders
Code word explicitly contains the information bits
Not relevant in practice due to lower performance
Exception: recursive systematic convolutional encoders for Turbo Codes and Trellis Coded Modulation (TCM) → Channel Coding II
Recursive Systematic Convolutional Encoders (RSC-Encoders)
Recursive Convolutional Encoders (RSC-Encoders) (1)
Consecutive state depends on
current state
encoder input
and feedback structure of the encoder
Recursive encoders of practical interest are mostly systematic and can be derived from NSC codes
Beginning with an NSC encoder, the generator polynomials are converted to get a systematic but recursive encoder
Generator polynomials of the RSC encoder derived from the NSC generator polynomials:

$$\tilde G_1(D) = \frac{G_1(D)}{G_1(D)} = 1, \qquad \tilde G_2(D) = \frac{G_2(D)}{G_1(D)}$$
Recursive Convolutional Encoders (RSC-Encoders) (2)
Output of the systematic RSC encoder is given by

$$X_1(D) = U(D)\,\tilde G_1(D) = U(D)$$

$$X_2(D) = U(D)\,\tilde G_2(D) = U(D)\,\frac{G_2(D)}{G_1(D)} = A(D)\,G_2(D)$$

with

$$A(D) = \frac{U(D)}{G_1(D)} \quad\Longleftrightarrow\quad A(D)\sum_{i=0}^{m} g_{1,i}\,D^i = U(D)$$

Using the delay operator D and g1,0 = 1 it follows

$$a(\ell) = u(\ell) + \sum_{i=1}^{m} g_{1,i}\,a(\ell - i)$$

a(ℓ) can be regarded as the current content of the register; it depends on the current input u(ℓ) and the old register contents a(ℓ−i)
NSC and RSC generate the same code space: X(D) = U(D)·G(D) is a code word of the NSC, and the same X(D) is generated by the RSC for the information word U(D)·G1(D)
IIR: infinite impulse response → input weight wH(u) ≥ 2 is required for a finite output weight
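A sketch of this recursion for the encoder of the next slide (feedback G1(D) = 1 + D + D², output G2(D) = 1 + D²; illustrative, not from the slides):

```python
def rsc_encode(u):
    """Systematic RSC encoder: a(l) = u(l) + a(l-1) + a(l-2), x2 = a(l) + a(l-2)."""
    a1 = a2 = 0                         # register contents a(l-1), a(l-2)
    x = []
    for bit in u:
        a = bit ^ a1 ^ a2               # feedback: a(l) = u(l) + a(l-1) + a(l-2)
        x.append((bit, a ^ a2))         # x1 = u(l) (systematic), x2 = a(l) + a(l-2)
        a1, a2 = a, a1                  # shift the register
    return x

print(rsc_encode([1, 1, 1, 0, 0]))      # [(1,1), (1,0), (1,1), (0,0), (0,0)]
```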
Recursive Convolutional Encoders (RSC-Encoders) (3)
Example: (2,1,3) RSC encoder with generators g₁ = 7₈ (recursive) and g₂ = 5₈
Block diagram and state diagram
$$X_1(D) = U(D)$$

$$X_2(D) = A(D)\,(1 + D^2) \quad\text{with}\quad A(D) := \frac{U(D)}{1 + D + D^2} \;\Rightarrow\; a(\ell) = u(\ell) + a(\ell-1) + a(\ell-2)$$

[Figure: block diagram with register contents a(ℓ−1), a(ℓ−2) and state diagram over the states (a(ℓ−1), a(ℓ−2)); transitions u/[x1 x2]: from 00: 0/00 → 00, 1/11 → 10; from 10: 0/01 → 11, 1/10 → 01; from 01: 0/00 → 10, 1/11 → 00; from 11: 0/01 → 01, 1/10 → 11]
Recursive Convolutional Encoders (RSC-Encoders) (4)
Now the other polynomial is used for feedback
Example: (2,1,3) RSC encoder with generators g₁ = 7₈ and g₂ = 5₈ (recursive)
$$X_1(D) = U(D)$$

$$X_2(D) = A(D)\,(1 + D + D^2) \quad\text{with}\quad A(D) := \frac{U(D)}{1 + D^2} \;\Rightarrow\; a(\ell) = u(\ell) + a(\ell-2)$$

[Figure: block diagram and state diagram; transitions u/[x1 x2]: from 00: 0/00 → 00, 1/11 → 10; from 10: 0/01 → 01, 1/10 → 11; from 01: 0/00 → 10, 1/11 → 00; from 11: 0/01 → 11, 1/10 → 01]
Catastrophic Convolutional Encoder (1)
A catastrophic convolutional encoder can produce sequences of infinite length with finite weight that do not return to the all-zero path
A finite number of transmission errors can then lead to an infinite number of decoding errors
Example: (2,1,3) NSC encoder with generators g₁ = 5₈ and g₂ = 6₈
u = [1 1 1 1 1 …] → x = [11 10 00 00 00 …], i.e. wH(u) = ∞ but wH(x) = 3
For y = [00 00 00 00 00 …] the ML decoder decides for the all-zero sequence → infinite number of errors
[Figure: encoder with taps g1,0 = 1, g1,1 = 0, g1,2 = 1 (g₁ = 5₈) and g2,0 = 1, g2,1 = 1, g2,2 = 0 (g₂ = 6₈) and its state diagram; transitions u/[x1 x2]: from 00: 0/00 → 00, 1/11 → 10; from 10: 0/01 → 01, 1/10 → 11; from 01: 0/10 → 00, 1/01 → 10; from 11: 0/11 → 01, 1/00 → 11; the self-loop 1/00 at state 11 produces zero output weight for nonzero input]
Catastrophic Convolutional Encoder (2)
A convolutional encoder is catastrophic if there exists an information sequence U(D) such that wH(U(D)) = ∞ and wH(X(D)) < ∞
Encoder is noncatastrophic if the generator polynomials contain no common factor,

$$\mathrm{GCD}\big[g_1(D), g_2(D), \dots, g_n(D)\big] = 1,$$

or if the common factor is a simple delay (D⁰ = 1 corresponds to no delay),

$$\mathrm{GCD}\big[g_1(D), g_2(D), \dots, g_n(D)\big] = D^{l} \quad\text{with integer } l \ge 0$$

Properties:
The state diagram of a catastrophic encoder contains a circuit in which a nonzero input sequence corresponds to an all-zero output sequence
Example above: the state diagram contains a weight-zero loop at the all-ones state
Systematic encoders are always noncatastrophic
Encoder is noncatastrophic if at least one output stream is formed by the summation of an odd number of taps
Truncated Convolutional Codes
Only sequences of finite length are considered in practice
For an information sequence u with arbitrary tail (Ende), the trellis can end in any state
The last state is not known by the decoder → the last bits are decoded with lower reliability → worse performance
Interpretation as block code: Description by generator matrix
$$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & \ddots & \\ & & & \mathbf{G}_0 & \mathbf{G}_1 \\ & & & & \mathbf{G}_0 \end{bmatrix} \quad\text{with}\quad \mathbf{G}_i = \left[\,g_{1,i}\;g_{2,i}\,\cdots\,g_{n,i}\,\right]$$

G is a truncated (abgeschnitten) convolution matrix, dimension K × K·n
Terminated Convolutional Codes
Appending tail bits to the information sequence u → encoder stops in a predefined state (usually state 0) → reliable decoding of the last information bits
Number of tail bits equals the memory of the encoder; the tail bits depend on the encoder:
NSC: adding m zeros
RSC: adding m tail bits whose values depend on the last state reached by the information bits
Adding tail bits reduces the code rate Rc. For a sequence u with K information bits and a 1/n convolutional code:

$$R_c^{\text{Tail}} = \frac{K}{n\,(K+m)} = R_c\,\frac{K}{K+m} < R_c$$

Generator matrix:

$$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & & \ddots \\ & & & \mathbf{G}_0 & \mathbf{G}_1 & \cdots\; \mathbf{G}_m \end{bmatrix}$$

G is a convolution matrix with Toeplitz structure, dimension K × (K+m)·n
Tailbiting Convolutional Codes
For small sequence lengths N the addition of tail bits significantly reduces the code rate
Tailbiting convolutional codes: the last state corresponds to the first state
No tail bits are required
State machine does not start in state 0 → decoder more complex
NSC: initialize the encoder with the last m bits of u
Generator matrix:

$$\mathbf{G} = \begin{bmatrix} \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & & \\ & \mathbf{G}_0 & \mathbf{G}_1 & \cdots & \mathbf{G}_m & \\ & & \ddots & & & \ddots \\ \mathbf{G}_m & & & & \mathbf{G}_0 & \mathbf{G}_1 \cdots \\ \mathbf{G}_1 & \cdots & \mathbf{G}_m & & & \mathbf{G}_0 \end{bmatrix}$$

G is a circular convolution matrix, dimension K × K·n
Optimal Decoding (1)
Information sequence u contains K information bits and is encoded into the code sequence

$$\mathbf{x} = \big[\,x_1(0) \cdots x_n(0)\;\big|\;x_1(1) \cdots x_n(1)\;\big|\;\cdots\;\big|\;x_1(N-1) \cdots x_n(N-1)\,\big]$$

consisting of N symbols x(ℓ) (each symbol contains n bits)
The received sequence is y, x̂ is the estimated code sequence, and a denotes an arbitrary code sequence
MAP criterion (Maximum A-posteriori Probability)
Optimum decoder calculates the sequence which maximizes the a-posteriori probability (APP) Pr{x | y}:

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{a}} \Pr\{\mathbf{a} \mid \mathbf{y}\} = \arg\max_{\mathbf{a}} \frac{p(\mathbf{y} \mid \mathbf{a})\,\Pr\{\mathbf{a}\}}{p(\mathbf{y})}$$

Pr{a}: a-priori information about the source
Optimal Decoding (2)
Maximum Likelihood criterion (ML)
If all sequences are equally likely, i.e. Pr{a} = Pr{x} = 2^−K, or if the receiver does not know the statistics Pr{a} of the source, no a-priori information can be used
The decision criterion becomes

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{a}}\, p(\mathbf{y}\mid\mathbf{a})$$

For equally likely input sequences the MAP criterion and the ML criterion yield the identical (optimal) result
If the input sequences are not equally likely but the input statistics Pr{a} are not known by the receiver, the ML criterion is suboptimal
Optimal Decoding (3)
ML decoding assuming a memoryless channel (DMC): the joint probabilities can be factorized

$$p(\mathbf{y}\mid\mathbf{a}) = \prod_{\ell=0}^{N-1} p\big(\mathbf{y}(\ell)\mid\mathbf{a}(\ell)\big) = \prod_{\ell=0}^{N-1}\prod_{i=1}^{n} p\big(y_i(\ell)\mid a_i(\ell)\big)$$

As the logarithm is a strictly monotonically increasing function, we can write

$$\ln p(\mathbf{y}\mid\mathbf{a}) = \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \ln p\big(y_i(\ell)\mid a_i(\ell)\big) = \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \gamma\big(y_i(\ell)\mid a_i(\ell)\big)$$

Incremental metric γ(yi(ℓ)|ai(ℓ)) = ln p(yi(ℓ)|ai(ℓ)) describes the transition probabilities of the channel
AWGN channel (antipodal symbols ai(ℓ) = ±√(Es/Ts), noise variance N0/(2Ts)):

$$p\big(y_i(\ell)\mid a_i(\ell)\big) = \frac{1}{\sqrt{2\pi\,N_0/(2T_s)}}\,\exp\!\left(-\frac{\big(y_i(\ell)-a_i(\ell)\big)^2}{N_0/T_s}\right)$$

[Figure: conditional densities p(yi | xi = −√(Es/Ts)) and p(yi | xi = +√(Es/Ts)) as two Gaussian curves over y]
Optimal Decoding (4)
Squared Euclidean distance → minimize

$$\gamma\big(y_i(\ell)\mid a_i(\ell)\big) = \ln p\big(y_i(\ell)\mid a_i(\ell)\big) = -\frac{\big(y_i(\ell)-a_i(\ell)\big)^2}{N_0/T_s} + C$$

Expanding the square:

$$-\frac{\big(y_i(\ell)-a_i(\ell)\big)^2}{N_0/T_s} = -\frac{y_i^2(\ell)}{N_0/T_s} + \frac{2\,y_i(\ell)\,a_i(\ell)}{N_0/T_s} - \frac{a_i^2(\ell)}{N_0/T_s}$$

The first term does not depend on ai(ℓ), and ai²(ℓ) = Es/Ts is constant
Dropping both constant terms and scaling by 2 yields the correlation metric → maximize (scaling does not change the maximization):

$$\gamma'\big(y_i(\ell)\mid a_i(\ell)\big) = \frac{4}{N_0/T_s}\,y_i(\ell)\,a_i(\ell) \;\sim\; y_i(\ell)\,a_i(\ell)$$
Optimal Decoding (5)
Direct approach for ML decoding:
Sum up the incremental metrics γ(yi(ℓ)|ai(ℓ)) for all possible code sequences a
Determine the sequence a with the best cost function:

$$\hat{\mathbf{x}} = \arg\max_{\mathbf{a}}\, p(\mathbf{y}\mid\mathbf{a}) = \arg\max_{\mathbf{a}}\, \ln p(\mathbf{y}\mid\mathbf{a}) = \arg\max_{\mathbf{a}} \sum_{\ell=0}^{N-1}\sum_{i=1}^{n} \ln p\big(y_i(\ell)\mid a_i(\ell)\big)$$

Number of sequences a corresponds to 2^K → effort increases exponentially with K
Exploitation of the Markov property of convolutional codes:
The current state depends only on the previous state and the current input
Successive calculation of the path metric
→ Viterbi algorithm
ML-Decoding of Convolutional Codes (1)
Example: [5,7]₈ NSC encoder with termination, K = 3 bits → 2³ = 8 code words

u = [000 00] → x = [00 00 00 00 00]
u = [100 00] → x = [11 10 11 00 00]
u = [010 00] → x = [00 11 10 11 00]
u = [110 00] → x = [11 01 01 11 00]
u = [001 00] → x = [00 00 11 10 11]
u = [101 00] → x = [11 10 00 01 11]
u = [011 00] → x = [00 11 01 01 11]
u = [111 00] → x = [11 01 10 01 11]
[Figure: terminated trellis over ℓ = 0, …, 5 with states 00, 10, 01, 11; the 8 code sequences correspond to the 8 paths from state 00 back to state 00]
ML-Decoding of Convolutional Codes (2)
Receive word: y = [11 11 00 01 11]
ML decoding with Hamming distance instead of Euclidean distance!
Incremental path metric: γ(y(ℓ) | a(ℓ)) = dH(y(ℓ), a(ℓ))
Cumulative state metric: Mj(ℓ) = Mi(ℓ−1) + γ(y(ℓ) | a(ℓ)), initialized with M0(0) = 0

[Figure: trellis over ℓ = 0, …, 5 with branch metrics γ and cumulative state metrics Mj(ℓ)]

Estimated code word: x̂ = [11 10 00 01 11]
Estimated information word: û = [1 0 1]
Viterbi Algorithm
1) Start trellis in state 0
2) Calculate γ(y(ℓ)|a(ℓ)) for y(ℓ) and all possible code words a(ℓ)
3) Add the incremental path metric to the old cumulative state metric Mj(ℓ−1), j = 0, …, 2^m − 1
4) Select for each state the path with the lowest Euclidean distance (largest correlation metric) and discard the other paths → effort increases only linearly with the observation length, not exponentially
5) Return to step 2) unless all N received words y(ℓ) have been processed
6) End of the trellis:
Terminated code (trellis ends in state 0): select the path with the best metric M0(N)
Truncated code: select the path with the overall best metric Mj(N)
7) Trace back the path selected in 6) (survivor) and output the corresponding information bits
Decoding of Convolutional Codes with the Viterbi Algorithm
Example: (2,1,3) NSC encoder with generators g₁ = 7₈ and g₂ = 5₈
BPSK mapping: {0, 1} → {+1, −1}
Information sequence: u = [1 0 0 1 | 0 0] → x = [−1−1 −1+1 −1−1 −1−1 | −1+1 −1−1]
Receive sequence: y = [+1+1 −1+1 −1−1 −1−1 | −1+1 +1−1]

[Figure: trellis over ℓ = 0, …, 6 with correlation branch metrics (±2, 0) and cumulative state metrics; tracing back the survivor with the best final metric yields û = [1 0 0 1 0 0]]
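The Viterbi steps translate almost directly into code. A hard-decision sketch (illustrative names, not the slides' implementation): because the received values here are exactly ±1, minimizing the Hamming distance of the hard bits is equivalent to maximizing the correlation metric of the figure; BPSK demapping {+1 → 0, −1 → 1} is applied to y first.

```python
def viterbi_decode(y, G_taps):
    """Hard-decision Viterbi decoding of a terminated rate-1/n NSC code (sketch).
    y: list of received n-bit tuples; returns estimated info bits incl. tail."""
    m = len(G_taps[0]) - 1
    S, INF = 2 ** m, float('inf')
    metric = [0] + [INF] * (S - 1)                  # 1) start in state 0
    paths = [[] for _ in range(S)]
    for r in y:
        new_metric, new_paths = [INF] * S, [None] * S
        for s in range(S):
            if metric[s] == INF:
                continue
            reg = [(s >> (m - 1 - i)) & 1 for i in range(m)]
            for u in (0, 1):
                window = [u] + reg
                out = [sum(g[i] & window[i] for i in range(m + 1)) % 2 for g in G_taps]
                gamma = sum(a != b for a, b in zip(out, r))    # 2) Hamming branch metric
                cand = metric[s] + gamma                       # 3) add to state metric
                s_next = (u << (m - 1)) | (s >> 1)
                if cand < new_metric[s_next]:                  # 4) keep only the survivor
                    new_metric[s_next] = cand
                    new_paths[s_next] = paths[s] + [u]
        metric, paths = new_metric, new_paths                  # 5) next trellis segment
    return paths[0]                                            # 6)+7) terminated: state 0

y = [(0, 0), (1, 0), (1, 1), (1, 1), (1, 0), (0, 1)]           # demapped receive sequence
print(viterbi_decode(y, [[1, 1, 1], [1, 0, 1]]))               # [1, 0, 0, 1, 0, 0]
```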
Decoding with Viterbi Algorithm
Rule of Thumb
For continuous data transmission (or very long data blocks) the decoding delay
would become infinite or very high
It has been found experimentally that a decision delay of 5·Lc results in a negligible performance degradation
Reason: If the decision depth is large enough, the beginnings of different paths
merge and the decision for this part is reliable
Shortest Distance between Bremen and Stuttgart
Finding the path in the trellis (the Autobahn network) with the shortest path cost (distance in km) from the starting point (Bremen) to the destination (Stuttgart)

[Figure: road network with nodes Bremen, Hannover, Osnabrück, Dortmund, Kassel, Frankfurt, Würzburg, Stuttgart and edge distances in km; at every intermediate city only the shortest incoming route survives; the shortest total distance Bremen - Stuttgart is 634 km]
Puncturing of Convolutional Codes (1)
Variable adjustment of the code rate by puncturing (see block codes): single binary digits of the encoded sequence are not transmitted
Advantages of puncturing:
Flexible code rate without additional hardware effort
Possibly lower decoding complexity
Although the performance of the original code is decreased, the performance of the punctured code is in general as good as that of a non-punctured code of the same rate
Puncturing matrix P of period LP → repeated application of the columns pi
$$\mathbf{P} = \left[\,\mathbf{p}_0\;\mathbf{p}_1\,\cdots\,\mathbf{p}_{L_P-1}\,\right] = \begin{bmatrix} p_{1,0} & p_{1,1} & \cdots & p_{1,L_P-1} \\ p_{2,0} & p_{2,1} & \cdots & p_{2,L_P-1} \\ \vdots & & & \vdots \\ p_{n,0} & p_{n,1} & \cdots & p_{n,L_P-1} \end{bmatrix}$$

Each column pi of P contains the puncturing scheme of one code word and therefore consists of n elements pi,j ∈ GF(2):
pi,j = 0 → j-th bit is not transmitted
pi,j = 1 → j-th bit is transmitted
Puncturing of Convolutional Codes (2)
Instead of transmitting n·LP coded bits, only LP + λ bits are transmitted due to the puncturing scheme
Parameter λ with 1 ≤ λ ≤ (n−1)·LP adjusts code rates in the range

$$\frac{1}{n} = \frac{L_P}{n\,L_P} \;\le\; R_c^P = \frac{L_P}{L_P + \lambda} \;\le\; \frac{L_P}{L_P + 1}$$

Puncturing affects the distance properties → optimal puncturing depends on the specific convolutional code
Attention: puncturing can produce a catastrophic convolutional code!
Decoding of punctured codes: placeholders for the punctured bits have to be inserted into the received sequence prior to decoding (zeros for antipodal transmission). As the distance properties are degraded by puncturing, the decision depth should be extended.
Puncturing of Convolutional Codes (3)
Example: (2,1,3) NSC code of rate Rc = 1/2 is punctured to code rate Rc = 3/4 with puncturing period LP = 3 (λ = 1)
$$\mathbf{P} = \begin{bmatrix} 1 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$$

[Figure: (2,1,3) NSC encoder (g₁ = 7₈, g₂ = 5₈) followed by the puncturing device]

Encoded sequence: x1(0), x2(0), x1(1), x2(1), x1(2), x2(2), x1(3), x2(3), …
Transmit sequence: x1(0), x2(0), x1(1), x2(2), x1(3), x2(3), …
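Puncturing and depuncturing with this matrix can be sketched as follows (assuming numpy; punctured positions are refilled with 0, the neutral value for antipodal transmission):

```python
import numpy as np

P = np.array([[1, 1, 0],
              [1, 0, 1]])                    # puncturing matrix, period L_P = 3

def puncture(x, P):
    """x: (n, N) array of coded bits; transmit only the bits where P (tiled) is 1."""
    N = x.shape[1]
    mask = np.tile(P, (1, -(-N // P.shape[1])))[:, :N].astype(bool)
    return x.T[mask.T]                       # read out time-first: x1(0), x2(0), x1(1), ...

def depuncture(rx, P, N):
    """Re-insert placeholders (0) at the punctured positions before decoding."""
    mask = np.tile(P, (1, -(-N // P.shape[1])))[:, :N].astype(bool)
    y = np.zeros((P.shape[0], N), dtype=rx.dtype)
    y.T[mask.T] = rx
    return y

x = np.array([[1, 1, 1, 1],                  # x1(0) ... x1(3)
              [1, 0, 1, 1]])                 # x2(0) ... x2(3)
tx = puncture(x, P)                          # x1(0) x2(0) x1(1) x2(2) x1(3) x2(3)
print(tx)
print(depuncture(tx, P, 4))                  # zeros at the punctured positions
```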
Distance Properties of Convolutional Codes (1)
As for block codes, the distance spectrum affects the performance of convolutional codes
Free distance df describes the smallest Hamming distance between two sequences
The free distance df determines the asymptotic (Eb/N0 → ∞) performance; at moderate SNR, larger distances affect the performance as well
Distance spectrum:
Convolutional codes are linear → comparison with the all-zero sequence (instead of comparison of all possible sequence pairs)
Hamming weight wH of all sequences has to be calculated
Modified state diagram: the self-loop in state 0 is eliminated, state 0 becomes first state Sb and last state Se
Placeholders at the state transitions:
L = sequence length
W = weight of uncoded input sequence
D = weight of coded output sequence
Distance Properties of Convolutional Codes (2)
Example: distance spectrum for the (2,1,3) NSC code with g₁ = 7₈ and g₂ = 5₈
Modified state diagram (values of interest are given in the exponents of the placeholders):

[Figure: modified state diagram with branches Sb → S10: WD²L, S01 → S10: WL, S10 → S01: DL, S11 → S01: DL, S10 → S11: WDL, S11 → S11: WDL, S01 → Se: D²L]

Linear equation system:

$$S_{10} = WD^2L\,S_b + WL\,S_{01}$$

$$S_{01} = DL\,S_{10} + DL\,S_{11}$$

$$S_{11} = WDL\,S_{10} + WDL\,S_{11}$$

$$S_e = D^2L\,S_{01}$$

Solution (transfer function):

$$T(W,D,L) = \frac{S_e}{S_b} = \frac{W D^5 L^3}{1 - WDL\,(1+L)}$$

L = sequence length, W = weight of uncoded input sequence, D = weight of coded output sequence
Distance Properties of Convolutional Codes (3)
Series expansion of T(W, D, L) yields

$$T(W,D,L) = \frac{WD^5L^3}{1 - WDL\,(1+L)} = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5 + 2\,W^3D^7L^6 + W^3D^7L^7 + \cdots = \sum_w \sum_d \sum_l T_{w,d,l}\, W^w D^d L^l$$

Coefficient Tw,d,l: number of sequences with input weight w, output weight d, and length l
Interpretation:
1 sequence of length l = 3 with input weight w = 1 and output weight d = 5
1 sequence with input weight w = 2 and output weight d = 6 of length l = 4 and one of length l = 5
1 sequence with input weight w = 3 and output weight d = 7 of length l = 5, one of length l = 7, and 2 sequences of length l = 6
Distance Properties of Convolutional Codes (4)
Example: (2,1,3) NSC code with generators g₁ = 7₈ and g₂ = 5₈
Presentation of the code sequences up to a maximum weight of d ≤ 7 in the trellis diagram
Free distance df = 5

[Figure: trellis over ℓ = 0, …, 7 highlighting the paths with d ≤ 7: one with (w=1, d=5, l=3), one each with (w=2, d=6, l=4) and (w=2, d=6, l=5), one with (w=3, d=7, l=5), two with (w=3, d=7, l=6), and one with (w=3, d=7, l=7)]
Distance Properties of Convolutional Codes (5)
General calculation:
a: state transitions from state 0 into all other states, with placeholders W, D and L
S: state transitions between the states S01 up to S11 (without state 0)
b: state transitions of all states into state 0

$$T(W,D,L) = \sum_{p=0}^{\infty} \mathbf{a}\,\mathbf{S}^p\,\mathbf{b}$$

For the example (rows and columns ordered S01, S10, S11; entry = transition from row state to column state):

$$\mathbf{a} = \begin{bmatrix} 0 & WD^2L & 0 \end{bmatrix}, \qquad \mathbf{S} = \begin{bmatrix} 0 & WL & 0 \\ DL & 0 & WDL \\ DL & 0 & WDL \end{bmatrix}, \qquad \mathbf{b} = \begin{bmatrix} D^2L \\ 0 \\ 0 \end{bmatrix}$$

[Figure: modified state diagram with the branch labels listed on the previous slide]
Distance Properties of Convolutional Codes (6)
For a sequence of length l the exponent of S becomes p = l − 2

$$\mathbf{a}\,\mathbf{S}\,\mathbf{b} = WD^5L^3$$

$$\mathbf{a}\,\mathbf{S}^2\,\mathbf{b} = W^2D^6L^4$$

$$\mathbf{a}\,\mathbf{S}^3\,\mathbf{b} = W^2D^6L^5 + W^3D^7L^5$$

Summing up to sequences of length 5:

$$T_5(W,D,L) = WD^5L^3 + W^2D^6L^4 + W^2D^6L^5 + W^3D^7L^5$$
Distance Properties of Convolutional Codes (7)
Number of sequences with Hamming weight d:

$$a_d = \sum_w \sum_l T_{w,d,l}, \qquad T(W,D,L=1)\Big|_{W=1} = \sum_d a_d\,D^d$$

Total number of nonzero information bits (w) associated with code sequences of Hamming weight d → useful to determine the BER:

$$c_d = \sum_w w\,T_{w,d}, \qquad \frac{\partial\,T(W,D,L=1)}{\partial W}\bigg|_{W=1} = \sum_d \Big(\sum_w w\,T_{w,d}\Big)\,D^d = \sum_d c_d\,D^d$$

with Tw,d = Σl Tw,d,l
Example:

$$T(W,D,L=1) = WD^5 + 2\,W^2D^6 + 4\,W^3D^7 + \cdots$$

$$\frac{\partial\,T(W,D,L=1)}{\partial W}\bigg|_{W=1} = D^5 + 4\,D^6 + 12\,D^7 + \cdots \;\Rightarrow\; c_5 = 1,\; c_6 = 4,\; c_7 = 12$$
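The coefficients can be reproduced symbolically from a, S and b of the previous slides. A sketch assuming sympy (the matrix power series is truncated at p = 7, so the printed coefficients are exact only up to d = 8):

```python
import sympy as sp

W, D, L = sp.symbols('W D L')

a = sp.Matrix([[0, W*D**2*L, 0]])            # branches leaving S_b (order S01, S10, S11)
S = sp.Matrix([[0,   W*L, 0],
               [D*L, 0,   W*D*L],
               [D*L, 0,   W*D*L]])           # branches between S01, S10, S11
b = sp.Matrix([[D**2*L], [0], [0]])          # branches entering S_e

# Truncated transfer function T(W,D,L) = sum_p a S^p b
T = sp.expand(sum((a * S**p * b)[0, 0] for p in range(1, 8)))

T1 = sp.expand(T.subs(L, 1))                 # drop the length information
ad_poly = sp.expand(T1.subs(W, 1))           # sum_d a_d D^d
cd_poly = sp.expand(sp.diff(T1, W).subs(W, 1))   # sum_d c_d D^d
for d in range(5, 9):
    print(d, ad_poly.coeff(D, d), cd_poly.coeff(D, d))   # a_d = 2^(d-5), c_d = (d-4)*2^(d-5)
```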
Distance Spectra for NSC and RSC Encoders (1)
Non-recursive convolutional encoder (NSC)
Generator:

$$G_1(D) = 1 + D + D^2, \qquad G_2(D) = 1 + D^2$$

Each input sequence of weight w results in output sequences of weight d(w) (specific to this NSC code)
dmin = df = 5 is achieved for w = 1
Number of paths increases exponentially with the distance

[Figure: distance spectrum, 10·log10(ad) versus d]
Distance Spectra for NSC and RSC Encoders (2)
Recursive convolutional encoder (RSC)
Generator:

$$G_1(D) = 1, \qquad G_2(D) = \frac{1 + D^2}{1 + D + D^2}$$

Only for w ≥ 2 do output sequences of finite length exist
Output sequences of finite weight occur only for even input weights

[Figure: distance spectra, 10·log10(ad) versus d]
Error bounds
An error occurs if the conditional probability of the correct code sequence x is lower than that of another sequence a ≠ x
Probability of an error event:

$$P_w = \Pr\big\{ p(\mathbf{y}\mid\mathbf{x}) < p(\mathbf{y}\mid\mathbf{a}) \big\} = \Pr\Bigg\{ \sum_{\ell=0}^{N-1}\sum_{i=1}^{n}\Big[\gamma\big(y_i(\ell)\mid x_i(\ell)\big) - \gamma\big(y_i(\ell)\mid a_i(\ell)\big)\Big] < 0 \Bigg\}$$

with the metric differences

$$\gamma\big(y_i(\ell)\mid x_i(\ell)\big) - \gamma\big(y_i(\ell)\mid a_i(\ell)\big) = \begin{cases} \pm\,4\sqrt{E_s/T_s}\;\dfrac{y_i(\ell)}{N_0/T_s} & \text{for all } a_i(\ell)\ne x_i(\ell) \\ 0 & \text{else} \end{cases}$$

so only the positions in which a and x differ contribute to the error probability
Error bounds
Pairwise error probability Pd of sequences a and x with distance d = dH(a, x):

$$P_d = \Pr\Bigg\{ \sum_{\ell,\,i:\; x_i(\ell) \ne a_i(\ell)} y_i(\ell) < 0 \Bigg\}$$

The sum Y = Σ yi(ℓ) over the d received symbols in which the sequences differ is a Gaussian distributed random variable with
mean μY = d·√(Es/Ts) and variance σY² = d·N0/(2Ts)
The probability of mixing up two sequences with pairwise Hamming distance d becomes

$$P_d = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,\frac{E_s}{N_0}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$$
Estimation of the sequence error probability
Probability of an error event in a sequence (union bound over all distances):

$$P_w \le \sum_{d=d_f}^{\infty} a_d\,P_d = \sum_{d=d_f}^{\infty} a_d\,\frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$$

Estimation of the bit error probability:

$$P_b \le \sum_{d=d_f}^{\infty} c_d\,\frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$$
Example (1)
Example: half-rate code with generators g₁ = 7₈ and g₂ = 5₈
Estimation of the sequence error probability:

$$T(W,D,L) = \frac{WD^5L^3}{1 - WDL\,(1+L)} = \sum_{d=5}^{\infty} W^{d-4}\,D^{d}\,L^{d-2}\,(1+L)^{d-5}$$

Number of sequences with Hamming weight d:

$$T(W=1, D, L=1) = \sum_{d=5}^{\infty} 2^{d-5}\,D^d \quad\Rightarrow\quad a_d = 2^{d-5}$$

Number of information bits equal to one (w) for all sequences with Hamming weight d:

$$\frac{\partial\,T(W,D,L=1)}{\partial W}\bigg|_{W=1} = \sum_{d=5}^{\infty} (d-4)\,2^{d-5}\,D^d \quad\Rightarrow\quad c_d = (d-4)\,2^{d-5}$$

Estimation of the bit error probability:

$$P_b \le \sum_{d=5}^{\infty} (d-4)\,2^{d-5}\,\frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{d\,R_c\,\frac{E_b}{N_0}}\right)$$
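The bound can be evaluated directly. A sketch assuming numpy and scipy, truncating the sum at d = 20 (as for the tightest curve in the comparison figure on the next slide):

```python
import numpy as np
from scipy.special import erfc

def union_bound_ber(ebn0_db, Rc=0.5, d_f=5, d_max=20):
    """Union bound on the BER of the [7,5]_8 half-rate code: c_d = (d-4)*2^(d-5)."""
    ebn0 = 10 ** (ebn0_db / 10)
    d = np.arange(d_f, d_max + 1)
    c_d = (d - 4) * 2.0 ** (d - 5)
    return np.sum(0.5 * c_d * erfc(np.sqrt(d * Rc * ebn0)))

for snr_db in (2.0, 4.0, 6.0):
    print(snr_db, union_bound_ber(snr_db))
```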
Example (2)
Asymptotic bit error rate (BER) is determined by the free distance df
For estimating the BER at moderate SNR, the whole distance spectrum is required
For large error rates or small signal-to-noise ratios, the union bound is very loose and may diverge
[Figure: comparison of simulation and analytical estimation; BER versus Eb/N0 in dB (0 to 6 dB, BER from 10⁰ down to 10⁻⁵); union-bound curves truncated at d = 5, 6, 7, 20 alongside the simulation]
Performance of Convolutional Codes: Quantization
By quantizing the received sequence before decoding, information is lost.
Hard decision (q = 2): strong performance degradation
3-bit quantization (q = 8): only a small performance degradation compared to no quantization
[Figure: influence of quantization; BER versus Eb/N0 in dB (0 to 8 dB, BER from 10⁰ down to 10⁻⁵); curves for q = ∞ (no quantization), q = 8, and q = 2]
Numerical Results to Convolutional Codes
[Figure, left: influence of code rate at Lc = 9; BER versus Eb/N0 in dB for Rc = 1/4, 1/3, 1/2, 2/3, 3/4. Right: influence of constraint length at Rc = 0.5; BER versus Eb/N0 in dB for Lc = 3, 5, 7, 9]