Waveform Coding Techniques
Transcript of Waveform Coding Techniques
Advanced Digital Communication
Waveform Coding Techniques
M Theerthagiri
Waveform Coding Techniques
Pulse-code modulation
Channel noise and error probability
Quantization noise and signal-to-noise ratio
Robust quantization
Differential pulse code modulation
Delta modulation
Coding speech at low bit rates
Applications
Signal Encoding - 4 Types (data and signal may each be digital or analog)

Digital data, digital signal: two different voltage levels represent binary 0 and 1. More complex encoding schemes are used to improve performance by altering the spectrum of the signal and providing synchronization capability.

Digital data, analog signal: a modem converts digital data into an analog signal so that it can be transmitted over an analog line. Examples: ASK, FSK, PSK, QAM.

Analog data, digital signal: voice and video. Examples: PCM, DM.

Analog data, analog signal: AM, FM, PM.
Taxonomy of Speech Coders

Speech coders divide into waveform coders and source coders.
Waveform coders - time domain: PCM, ADPCM; frequency domain: e.g. sub-band coder, adaptive transform coder.
Source coders: linear predictive coder, vocoder.

Waveform coders attempt to preserve the signal waveform; they are not speech-specific (i.e., general A-to-D conversion). PCM 64 kbps, ADPCM 32 kbps, CVSDM 32 kbps.
Vocoders analyse speech, extract and transmit model parameters, and use those parameters to synthesize speech. LPC-10: 2.4 kbps.
Hybrids combine the best of both, e.g. CELP (used in GSM).
From analog signal to digital code (PCM)
Digital Representation of Analog Signals

Advantages:
ruggedness to transmission noise and interference
efficient regeneration of the coded signal along the transmission path
the potential for communication privacy and security through encryption
the possibility of a uniform format for different kinds of baseband signals

Disadvantages:
increased transmission bandwidth requirement
increased system complexity
PCM

PCM belongs to a class of signal coders known as waveform coders, in which an analog signal is approximated by mimicking its amplitude-versus-time waveform; hence the name.
What is meant by PCM?

Pulse code modulation (PCM) is a method of signal coding in which the message signal is sampled, and the amplitude of each sample is rounded off to the nearest one of a finite set of discrete levels and encoded, so that both time and amplitude are represented in discrete form. This allows the message to be transmitted by means of a digital waveform.
PCM system: basic elements (block diagram: A/D conversion at the transmitter, D/A conversion at the receiver)
Basic Signal Processing Operations in PCM

Sampling, quantizing, encoding, regeneration, decoding, reconstruction, multiplexing, synchronization.
Sampling

The incoming message wave is sampled with a train of narrow rectangular pulses so as to closely approximate the instantaneous sampling process.
To ensure perfect reconstruction of the message, the sampling rate must be greater than twice the highest frequency component W of the message wave.
In practice, a low-pass pre-alias filter is used at the front of the sampler to exclude frequencies greater than W before sampling.
Sampling permits the reduction of the continuously varying message wave to a limited number of discrete values per second.
Sampling an analogue signal

Prior to digitisation, signals must be sampled with a frequency fs = 2B = 1/T (at least twice the signal bandwidth B).
The ADC converts the height of each pulse into a binary representation.
Sampling involves the multiplication of the signal by a train of sampling pulses.
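The sampling rule above can be checked numerically; a minimal sketch, where the 1 kHz tone and 8 kHz rate are illustrative assumptions:

```python
import numpy as np

# Sample a 1 kHz tone at fs = 8 kHz, comfortably above the 2B minimum.
f = 1000.0                                 # tone frequency (Hz)
fs = 8000.0                                # sampling frequency (Hz), fs > 2*f
n = np.arange(16)                          # sample indices
samples = np.sin(2 * np.pi * f * n / fs)   # x(nT) with T = 1/fs

# With fs = 8*f there are exactly 8 samples per period,
# so the sampled sequence repeats every 8 samples.
print(samples[:8])
```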
Sampling as multiplication by a sampling waveform:

The sampling pulse is short enough that it can normally be considered to have zero duration.
The DAC, however, produces pulses of length T.
Multiplication is amplitude modulation, and amplitude modulation produces sidebands.
Quantizing

The conversion of an analog (continuous) sample of the signal into digital (discrete) form is called the quantizing process.
The human ear and eye can detect only finite intensity differences, so it is not necessary to transmit the exact amplitude of samples: the original analog signal may be approximated by a signal constructed of discrete amplitudes.
Quantizing

Quantizing reduces the number of distinct output values to a much smaller set. It is the main source of the "loss" in lossy compression.
Three different forms of quantization:
Uniform: midrise and midtread quantizers.
Nonuniform: companded quantizer.
Vector quantization.
Quantized signal

Each value is translated to its 7-bit binary equivalent; the 8th bit indicates the sign.
Quantized signal: first three sample values (figure)
Basic signal-processing operations in PCM

Quantization
Quantizing

The peak-to-peak range of input sample values is subdivided into a finite set of decision levels or decision thresholds that are aligned with the "risers" of the staircase.
The output is assigned a discrete value selected from a finite set of representation levels or reconstruction values that are aligned with the "treads" of the staircase.
Two types of quantization (figure): midtread and midrise staircase characteristics. Each shows the decision thresholds, the representation levels (midtread at 0, ±Δ, ±2Δ, ...; midrise at ±Δ/2, ±3Δ/2, ±5Δ/2, ...) and the overload level.
M = even: zero is not one of the output levels; zero is a decision boundary (midrise).
M = odd: zero is one of the output levels; zero is a reconstruction level (midtread).
Symmetric uniform quantization: midtread

The peak-to-peak range of input sample values is subdivided into a finite set of decision levels or decision thresholds, aligned with the risers of the staircase: the decision thresholds are located at ±Δ/2, ±3Δ/2, ...
The output is assigned a discrete value aligned with the treads of the staircase: the representation levels are at 0, ±Δ, ±2Δ, ...
Symmetric uniform quantization: midrise

The decision thresholds are located at 0, ±Δ, ±2Δ, ...
The representation levels are at ±Δ/2, ±3Δ/2, ±5Δ/2, ...
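The two symmetric uniform quantizers can be sketched in a few lines; the step size and the test inputs are assumed for illustration:

```python
import numpy as np

def midrise(x, delta):
    # thresholds at 0, ±delta, ±2*delta, ...; levels at ±delta/2, ±3*delta/2, ...
    return delta * (np.floor(x / delta) + 0.5)

def midtread(x, delta):
    # thresholds at ±delta/2, ±3*delta/2, ...; levels at 0, ±delta, ±2*delta, ...
    return delta * np.round(x / delta)

x = np.array([-0.6, -0.2, 0.0, 0.2, 0.6])
print(midrise(x, 0.5))    # never outputs zero
print(midtread(x, 0.5))   # zero is a representation level
```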
Symmetric uniform quantization

Overload level: its absolute value is 0.5 times the peak-to-peak range of input sample values.
Quantization error: the difference between the output and input values of the quantizer.
Its maximum instantaneous value is 0.5 times the step size, and its total range of variation is from -(0.5 step) to +(0.5 step).
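The ±(0.5 step) error range leads to the error variance Δ²/12 quoted later in these slides; a quick numeric check, assuming a non-overloading, uniformly distributed input:

```python
import numpy as np

rng = np.random.default_rng(0)
step = 0.1
x = rng.uniform(-1.0, 1.0, 200_000)    # input stays inside the quantizer range
xq = step * np.round(x / step)         # midtread uniform quantizer
err = xq - x                           # quantization error, within ±step/2

measured = np.var(err)
expected = step**2 / 12
print(measured, expected)              # the two agree closely
```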
Encoding

Encoding translates the discrete set of sample values into a form of signal better suited for transmission over a line, radio path or optical fibre.
One of the discrete events in a code is called a code element or symbol.
A particular arrangement of symbols used in a code to represent a single value of a discrete set is called a code word or character.
In a binary code, each symbol may be either of two distinct values or kinds, such as the presence or absence of a pulse.
Regenerative Repeater

Regeneration controls the effect of noise and distortion picked up while the signal passes through the channel.
The regenerative repeater performs three functions: equalizing, timing and decision making.
Regeneration

Equalizer: shapes the received pulses to compensate for the amplitude and phase impairments introduced by the channel.
Timing circuit: provides periodic clock pulses for sampling the received and equalized pulses.
Decision-making device: at each bit interval, decides whether a pulse is present (exceeds a predetermined voltage level) or not, and accordingly transmits a new pulse (1 or 0).
Regeneration: departure of the regenerated signal

The presence of channel noise and interference causes the repeater to make wrong decisions occasionally; a wrong decision is a bit error.
The spacing between pulses can also deviate from its assigned value, causing jitter in the regenerated pulse position and thereby distortion.
Decoding

The receiver reshapes and cleans up the received pulses. These clean pulses are regrouped into code words and decoded (mapped back) into a PAM signal.
Reconstruction

The decoder output is passed through a low-pass reconstruction filter whose cut-off frequency equals the message bandwidth.
Multiplexing

Different message sources are multiplexed by time division.
Synchronization

Timing operations at the receiver must closely follow the corresponding operations at the transmitter.
A local clock at the receiver keeps the same time as the distant transmitter clock.
A synchronization pulse or frame is transmitted along with the code elements.
Channel Noise and Error Probability

The performance of a PCM system is influenced by:
channel noise, which may be introduced anywhere along the channel path;
quantizing noise, which is introduced in the transmitter and is carried along to the receiver output.
Channel Noise

The effect of transmission noise is to introduce transmission errors: symbol 0 is occasionally mistaken for 1, and vice versa.
The fidelity (reliability) of information transmission by PCM in the presence of channel noise is measured in terms of the error rate, or probability of error.
Additive White Gaussian Noise

A basic and generally accepted model for thermal noise in communication channels rests on the following assumptions:
the noise is additive, i.e., the received signal equals the transmitted signal plus noise, where the noise is statistically independent of the signal;
the noise is white, i.e., its power spectral density is flat, so the autocorrelation of the noise in the time domain is zero for any non-zero time offset;
the noise samples have a Gaussian distribution.
Usually it is also assumed that the channel is linear and time invariant; the most basic results further assume it is frequency non-selective.
The Basic SNR Parameter for Digital Communication Systems

In digital communications, we more often use Eb/N0, a normalized version of SNR, as a figure of merit:

Eb/N0 = S Tb / N0 = (S/Rb) / N0 = (S/N)(W/R)

Eb = bit energy, S = signal power
Tb = bit time, Rb = R = bit rate
N0 = noise power spectral density
N = noise power, W = bandwidth
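A worked check of the identity Eb/N0 = (S/N)(W/R); all numbers are illustrative assumptions:

```python
import math

S = 1e-6      # received signal power (W)
N0 = 1e-12    # noise power spectral density (W/Hz)
W = 1e6       # bandwidth (Hz)
R = 2e6       # bit rate (bit/s)

N = N0 * W    # noise power in bandwidth W
Tb = 1.0 / R  # bit duration
Eb = S * Tb   # energy per bit

lhs = Eb / N0
rhs = (S / N) * (W / R)
print(lhs, rhs, 10 * math.log10(lhs))  # Eb/N0 both ways, and in dB
```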
Nonuniform quantization: robust quantization

In speech transmission, the same quantizer has to accommodate input signals with widely varying power levels.
A nonuniform quantizer for which the SNR remains constant over a wide range of input power levels is called robust.
What is meant by non-uniform quantization?

The step size is not uniform: a non-uniform quantizer is characterized by a step size that increases with the distance from the origin of the transfer characteristic. Non-uniform quantization is otherwise called robust quantization.
Nonuniform quantization

With uniform quantization levels, the quantization noise power depends only on the spacing between the levels and is independent of the actual signal level at any instant.
The SNR therefore decreases as the input power level falls relative to the maximum range of the quantizer, which is undesirable in many applications.
For example, in a speech system a fixed quantization noise power will be more objectionable when a quiet speaker is speaking than when a loud one is.
Nonuniform quantization

A remedy is to use nonuniform quantization levels, achieved by using a nonuniform quantizer.
(Figure: quantization levels 0 through 7, spaced more closely near zero.)
Nonuniform quantization: probability density function

A uniform quantizer makes sense when the probability distribution of the signal over the range -Vmax to Vmax is uniform. If we have reason to believe that the distribution is nonuniform, and we know what the actual distribution is, then we can place nonuniform quantization levels in an optimal manner.
Nonuniform quantization: probability density function

Recall from information theory that entropy is maximized when the probability of occurrence of each level is equal.
Therefore, choose the quantization levels such that the probabilities of occurrence in each level are equal.
(Figure: a pdf p(x) on [0, 1] partitioned at points a, b, c, d into equal-probability intervals.)
Nonuniform quantization: companding

More often, nonuniform quantization is achieved by first distorting the original signal with a nonlinear compressor characteristic, and then using a uniform quantizer on the result.
(Figure: compressor input-output curve mapping nonuniform levels on the input axis to uniform levels ±a, ±2a, ±3a, ±4a on the output axis.)
Nonuniform quantization: companding

A given signal change at small magnitudes will then carry the uniform quantizer through more steps than the same change at large magnitudes. At the receiver, an inverse compression characteristic (expansion) is applied, so that the overall transmission is not distorted. The processing pair (compression and expansion) is usually referred to as companding.
Nonuniform quantization: µ-law compander

The µ-law compander is characterized by

Vout = log(1 + µ Vin) / log(1 + µ),   0 ≤ Vin ≤ 1

µ-law companding is used for PCM telephone systems in the USA, Canada and Japan, with the standard value µ = 255.
(Figure: Vout vs. Vin for µ = 1, 10, 100, 255 and 1000.)
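A small sketch of µ-law compression and its inverse for normalized input |Vin| ≤ 1; the expansion formula is the standard inverse, stated here as an assumption since the slide gives only the compressor:

```python
import numpy as np

MU = 255.0

def mu_compress(x):
    return np.sign(x) * np.log1p(MU * np.abs(x)) / np.log1p(MU)

def mu_expand(y):
    return np.sign(y) * ((1.0 + MU) ** np.abs(y) - 1.0) / MU

x = np.linspace(-1.0, 1.0, 101)
roundtrip = mu_expand(mu_compress(x))   # expansion undoes compression
print(mu_compress(0.01))                # small inputs are strongly boosted
```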
Nonuniform quantization: A-law compander

The A-law compander is characterized by

Vout = A Vin / (1 + ln A)              for 0 ≤ Vin < 1/A
Vout = (1 + ln(A Vin)) / (1 + ln A)    for 1/A ≤ Vin ≤ 1

A-law companding is used for PCM telephone systems in Europe, with A = 87.56.
(Figure: Vout vs. Vin for A = 1, 10, 87.6, 100 and 1000.)
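The A-law characteristic can be sketched the same way, again on normalized input; the expander is the standard inverse, included as an assumption for a round-trip check:

```python
import numpy as np

A = 87.56

def a_compress(x):
    ax = np.abs(x)
    with np.errstate(divide="ignore"):     # silences log(0) in the unused branch
        y = np.where(ax < 1.0 / A,
                     A * ax / (1.0 + np.log(A)),
                     (1.0 + np.log(A * ax)) / (1.0 + np.log(A)))
    return np.sign(x) * y

def a_expand(y):
    ay = np.abs(y)
    x = np.where(ay < 1.0 / (1.0 + np.log(A)),
                 ay * (1.0 + np.log(A)) / A,
                 np.exp(ay * (1.0 + np.log(A)) - 1.0) / A)
    return np.sign(y) * x

x = np.linspace(-1.0, 1.0, 101)
print(np.max(np.abs(a_expand(a_compress(x)) - x)))  # essentially zero
```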
Non-uniform quantization

For a non-uniform quantizer, the quantization error power is related to the quantizer's input distribution, since the quantizer has a smaller step for small inputs and a larger step for large inputs.
In most cases the quantizer input has a distribution similar to the normal distribution, so using a non-uniform quantizer leads to smaller quantization error power.
UNIFORM QUANTIZER

Variance: σQ² = Δ²/12

Features: this variance expression is valid only if the input signal does not overload the quantizer.
The SNR decreases with a decrease in the input power level.
ROBUST QUANTIZER

A quantizer whose SNR remains essentially constant over a wide range of input power levels: the non-uniform quantizer.
Non Uniform Quantizer

Variable step size: smaller amplitude, smaller step size; larger amplitude, larger step size.
Non-Uniform Quantizer MODEL

Input → Compressor → Uniform Quantizer → Expander → Output

Compander = Compressor + Expander
Compressor (figure: compressor output vs. compressor input)
Expander (figure: expander output vs. expander input)
Quantization Error-1

Transfer characteristics: compressor C(x), expander C-1(x).
The expander inverts the compressor: C-1(C(x)) = x.
Quantization Error-2

Compressor characteristic (for large L):

dc(x)/dx = 2 xmax / (L Δk)   for k = 0, 1, ..., L-1

where Δk is the width of the interval Ik.
Quantization Error-3

Let fX(x) be the PDF of X.
Assumptions: fX(x) is symmetric, and fX(x) is approximately constant within each interval, i.e. fX(x) = fX(yk) for x in Ik.
Quantization Error-4

fX(x) = fX(yk) for x in Ik
Δk = xk+1 - xk   for k = 0, 1, ..., L-1
pk = probability that X lies in interval Ik:
pk = P(xk < X ≤ xk+1) = fX(yk) Δk

Σ_{k=0}^{L-1} pk = 1
Quantization Error-5

Q = yk - X   for xk < X ≤ xk+1

Variance: σQ² = E(Q²) = E[(X - yk)²]

σQ² = ∫_{-xmax}^{+xmax} (x - yk)² fX(x) dx   (evaluated interval by interval)
σQ² = Σ_{k=0}^{L-1} (pk/Δk) ∫_{xk}^{xk+1} (x - yk)² dx

Carrying out the integration with respect to x:

σQ² = (1/12) Σ_{k=0}^{L-1} pk Δk²

(Δk²/12 is the variance of the error in the interval Ik.)
For a uniform quantizer, Δk = Δ for all k, and the sum reduces to σQ² = Δ²/12.
Types of Companding

1. µ-law (US, Canada & Japan)
2. A-law (Europe)
µ-law

µ = 255 reduces noise power in speech by about 20 dB.
µ-law companding

c(x) = xmax ln(1 + µ|x|/xmax) / ln(1 + µ),   0 ≤ |x|/xmax ≤ 1

µ = 255 is the practical value.
A-law (figure: normalized output vs. normalized input for A = 1, 2, 100)
A-law

c(x) = A|x| / (1 + ln A)                          for 0 ≤ |x|/xmax ≤ 1/A
c(x) = xmax (1 + ln(A|x|/xmax)) / (1 + ln A)      for 1/A ≤ |x|/xmax ≤ 1

The practical value is A = 87.5.
Companding Gain - Gc

The companding gain is the slope of the compressor characteristic at the origin:

Gc = dc(x)/dx   as x → 0

For µ-law:

Gc = µ / ln(1 + µ)
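Evaluating the µ-law companding gain for the standard µ = 255:

```python
import math

mu = 255.0
Gc = mu / math.log1p(mu)      # slope of c(x) at the origin
print(Gc)                     # about 46
print(20 * math.log10(Gc))    # the corresponding small-signal gain in dB
```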
Advantages of Non Uniform Quantizer

Reduced quantization noise; high average SNR.
DPCM - Transmitter (figure)

DPCM - Receiver (figure)
Voice Compression Technologies (figure: speech quality vs. bit rate, 0-64 kbps)

Quality rises from unacceptable through business quality to toll quality as the bit rate increases: LPC at 4.8 kbps (cellular), CS-ACELP (G.729) at 8 kbps, LDCELP (G.728) at 16 kbps, ADPCM (G.726) at 16/24/32 kbps, and PCM (G.711) at 64 kbps.
Bandwidth Requirements: Voice-Band Traffic

Encoding/Compression          Bit Rate
G.711 PCM (A-law/µ-law)       64 kbps (DS0)
G.726 ADPCM                   16, 24, 32, 40 kbps
G.729 CS-ACELP                8 kbps
G.728 LD-CELP                 16 kbps
G.723.1 CELP                  6.3/5.3 kbps (variable)
Voice Compression - ADPCM

Adaptive Differential Pulse Code Modulation: a waveform coding scheme.
Adaptive: automatic companding. Differential: encodes only the changes between samples.
Rates and bits per sample (at the 8 kHz sampling rate):
32 kbps = 8 kHz x 4 bits/sample
24 kbps = 8 kHz x 3 bits/sample
16 kbps = 8 kHz x 2 bits/sample
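The rate arithmetic above is just sampling rate times bits per sample:

```python
# 8 kHz telephony sampling rate; 4, 3 or 2 bits per ADPCM sample.
fs = 8000
rates_kbps = {bits: fs * bits // 1000 for bits in (4, 3, 2)}
print(rates_kbps)   # {4: 32, 3: 24, 2: 16}
```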
Speech Coding Schemes [1],[2] (figure)
Main Attributes of Speech Coders

Bit rate - the number of bits per second (bps) required to encode the speech into a data stream.
Subjective quality - the perceived quality of the reconstructed speech at the receiver. It may not necessarily correlate with objective measures such as the signal-to-noise ratio. Subjective quality may be further subdivided into intelligibility and naturalness: the former refers to the ability of the spoken word to be understood, the latter to the "human-like" rather than "robotic" or "metallic" character of many current low-rate coders.
Complexity - computational complexity is still an issue despite the availability of ever-increasing processing power. Invariably, coders which are able to reduce the bit rate require greater algorithmic complexity, often by several orders of magnitude.
Memory - the memory storage requirements are also related to the algorithmic complexity. Template-based coders require large amounts of fast memory to store algorithm coefficients and waveform prototypes.
Main Attributes of Speech Coders

Delay - some processing delay is inevitable in a speech coder, due not only to the algorithmic complexity (and hence computation time) but also to the buffering requirements of the algorithm. For real-time speech coders, the coding delay must be minimized in order to achieve acceptable levels of performance.
Error sensitivity - high-complexity coders, which leverage more complex algorithms to achieve lower bit rates, often produce bit streams that are more susceptible to channel or storage errors. This may manifest itself as noise bursts or other artifacts.
Bandwidth - the frequency range which the coder is able to faithfully reproduce. Telephony applications are usually able to accept a lower bandwidth, with the possibility of compromising speech intelligibility.
Differential PCM

A PCM technique that codes the difference between sample points to compress the digital data.
It is more efficient because audio waveforms evolve in predictable patterns: DPCM predicts the next sample and codes the difference between the prediction and the actual point.
Since the differences between samples are expected to be smaller than the actual sampled amplitudes, fewer bits are required to represent them.
DPCM

For example, if X(k) extends over the interval VH-VL and, using PCM, X(k) is encoded with 2^8 = 256 levels, then the step size S = (VH-VL)/256, that is, VH-VL = 256 S.
If, however, the difference signal X(k) - X(k-1) extends only over ±2S, then the quantized levels needed are at ±0.5S and ±1.5S: only 4 levels, so two bits are adequate.
Differential PCM

DPCM takes advantage of the high correlation between samples by encoding the difference between samples rather than the absolute sample value.
It can reduce the bit rate (by about 25%) by using prediction based on previous samples, sending only the difference between the predicted and actual values, e.g. 4 bits per sample.
Over time, the error between the decoded signal and the differentially encoded signal increases, so periodically a full sample is sent rather than the difference.
Differential PCM

An extension of pulse code modulation which differentially encodes the data to increase transmission efficiency.
Differential PCM (DPCM) is used in many image and video compression algorithms, including JPEG.
The principle behind differential pulse code modulation is that the source data is likely to be an analogue signal whose amplitude changes quite gradually; large jumps in amplitude over a short time are unlikely. The signal can therefore be efficiently represented by an initial value and incremental deltas against it thereafter. Since these differences are likely to be small, fewer bits are needed to encode the signal, and throughput may be increased.
Differential PCM

For the given input signal the sampled values are 1, 2, 4, 5, 6, 9, 7, 4, 3, 0, 2, 3, 5, 6. Encoded using standard pulse code modulation, this data set would require ceil(log2(10)) = 4 bits per sample.
Notice, however, that the delta between two samples is never less than -3 or greater than +3. This gives a range of 7 values, which can be encoded in ceil(log2(7)) = 3 bits per sample with differential pulse code modulation.
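The worked example above can be verified directly:

```python
import math

samples = [1, 2, 4, 5, 6, 9, 7, 4, 3, 0, 2, 3, 5, 6]

# Plain PCM: 10 distinct values (0..9) need ceil(log2(10)) = 4 bits.
pcm_bits = math.ceil(math.log2(max(samples) - min(samples) + 1))

# DPCM: the deltas span -3..+3, i.e. 7 values, needing only 3 bits.
deltas = [b - a for a, b in zip(samples, samples[1:])]
dpcm_bits = math.ceil(math.log2(max(deltas) - min(deltas) + 1))

print(pcm_bits, dpcm_bits, deltas)
```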
Differential PCM

The differences between successive input samples are small. Differential PCM (DPCM) calculates this difference and transmits the small difference signal instead of the entire input sample. Since the difference is smaller than an entire sample, fewer bits are required for transmission, reducing the throughput needed to carry voice signals. Using DPCM can reduce the bit rate of voice transmission down to 48 kbps.
Differential PCM: process

1. The input signal is sampled at a constant sampling frequency (at least twice the input frequency).
2. The samples are modulated using the PAM process; at this point, the DPCM process takes over.
3. The sampled input signal is stored in what is called a predictor.
4. The predictor passes the stored sample through a differentiator.
Differential PCM: process

5. The differentiator compares the previous sample with the current sample and sends the difference to the quantizing and coding phase of PCM.
6. After quantizing and coding, the difference signal is transmitted to its final destination.
7. At the receiving end of the network, everything is reversed.
Differential PCM: process

8. First the difference signal is de-quantized.
9. Then this difference signal is added to a sample signal stored in a predictor.
10. The resulting signal is sent to a low-pass filter that reconstructs the original input signal.
Differential PCM System (block diagram)

Transmitter: the sampled input xi(nTs) minus the predicted value xp(nTs) gives the prediction error e(nTs) = xi(nTs) - xp(nTs); the quantizer Q(.) produces v(nTs), which is encoded into b(nTs) for transmission, and the predictor input is u(nTs) = xi(nTs) + q(nTs).
Receiver: b(nTs) is decoded and the reconstruction is formed by adding the prediction based on the previous sample.
Differential PCM System

The baseband signal x(t) is sampled at fs = 1/Ts to produce a sequence of correlated samples Ts seconds apart, denoted {x(nTs)}.
Quantizer input: e(nTs) = xi(nTs) - xp(nTs), where xi(nTs) is the unquantized sample and xp(nTs) is its predicted value produced by a predictor.
e(nTs) is called the prediction error: the amount by which the predictor fails to predict the input correctly.
Differential PCM

Let the quantizer input-output characteristic be defined by the nonlinear function Q(.).
Quantizer output: v(nTs) = Q{e(nTs)} = e(nTs) + q(nTs), where q(nTs) is the quantization error.
The quantizer output v(nTs) is added to the predicted value xp(nTs) to produce the predictor input

u(nTs) = xp(nTs) + v(nTs)
Differential PCM

u(nTs) = xp(nTs) + e(nTs) + q(nTs) = xi(nTs) + q(nTs)

Irrespective of the properties of the predictor, the quantized signal u(nTs) differs from the original input signal only by the quantization error.
The output at the receiver likewise differs from the original input only by the quantization error incurred in quantizing the prediction error.
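The equations above translate into a short encode/decode loop; in this sketch the previous reconstructed sample serves as the predictor, and the uniform quantizer step size is an assumed parameter:

```python
import numpy as np

def dpcm(x, step):
    u_prev = 0.0                         # predictor state, u((n-1)Ts)
    u = np.empty_like(x)
    for n in range(len(x)):
        xp = u_prev                      # predicted value xp(nTs)
        e = x[n] - xp                    # prediction error e(nTs)
        v = step * np.round(e / step)    # quantized error v(nTs) = e + q
        u[n] = xp + v                    # u(nTs) = xi(nTs) + q(nTs)
        u_prev = u[n]
    return u

t = np.arange(200)
x = np.sin(2 * np.pi * t / 50)
u = dpcm(x, step=0.05)
# The reconstruction differs from the input only by the quantization
# error, which stays within half a step here.
print(np.max(np.abs(u - x)))
```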
Differential PCM

The output signal-to-quantization-noise ratio is defined as

(SNR)O = σX² / σQ²

where σX² is the variance of the original input signal and σQ² is the variance of the quantization error. This can be rewritten as

(SNR)O = (σX² / σE²) (σE² / σQ²) = GP (SNR)P

where σE² is the variance of the prediction error and GP is the prediction gain produced by the differential quantization scheme.
Delta modulation is the one-bit (or two-level) version of differential pulse code modulation (DPCM).
Delta Modulation

The analog signal is approximated with a series of segments.
Each segment of the approximated signal is compared to the original analog wave to determine the increase or decrease in relative amplitude.
The decision process for establishing the state of successive bits is determined by this comparison.
Delta Modulation

Only the change of information is sent, i.e., only an increase or decrease of the signal amplitude from the previous sample; a no-change condition causes the modulated signal to remain at the same 0 or 1 state as the previous sample.
Unique features: the one-bit codeword for the output eliminates the need for word framing, and the transmitter and receiver are simple to design.
Delta Modulation (figure: a staircase approximation u(t) of the input signal xi(t), with step size δ and the sampling period marked)
Delta Modulation

The difference between the input and the approximation is quantized into only two levels, +δ or -δ.
If the approximation falls below (above) the signal at the beginning of a sampling period, it is increased (decreased) by δ.
If the signal does not vary too rapidly between successive samples, the staircase approximation stays within ±δ of the signal.
Delta Modulation

The step size Δ of the quantizer is given by Δ = 2δ.
Prediction error: e(nTs) = xi(nTs) - xp(nTs) = xi(nTs) - u(nTs - Ts).
The binary quantity b(nTs) = δ sgn[e(nTs)] is the algebraic sign of the error, except for the scaling factor δ; b(nTs) is the one-bit word transmitted by the DM system.
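A minimal linear DM loop following these equations; the test signal and δ are assumptions chosen so the input slope never exceeds δ per sample (no slope overload):

```python
import numpy as np

def delta_modulate(x, delta):
    u = 0.0                        # staircase value u((n-1)Ts)
    bits, stair = [], []
    for sample in x:
        e = sample - u             # prediction error
        b = 1 if e >= 0 else 0     # transmitted one-bit word
        u += delta if b else -delta
        bits.append(b)
        stair.append(u)
    return bits, np.array(stair)

t = np.arange(400)
x = 0.5 * np.sin(2 * np.pi * t / 200)     # slope < delta per sample
bits, stair = delta_modulate(x, delta=0.02)
print(np.max(np.abs(stair - x)))          # staircase tracks within ~2*delta
```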
Delta Modulation (block diagrams)

DM transmitter: xi(nTs) minus the delayed accumulator output forms e(nTs), which is hard-limited to the one-bit word b(nTs); a delay-Ts feedback loop accumulates u(nTs).
DM receiver: an identical accumulator rebuilds the staircase, followed by a low-pass filter that removes out-of-band quantization noise.
Delta Modulation: quantization noise

DM systems are subject to two types of quantization error: slope overload distortion and granular noise.
Slope overload distortion: caused by a step size δ that is too small to follow portions of the waveform that have a steep slope; it can be reduced by increasing the step size.
Granular noise: results from a step size that is too large in parts of the waveform having a small slope; it can be reduced by decreasing the step size.
Delta Modulation - example
Define adaptive delta modulation

The performance of a delta modulator can be improved significantly by making the step size of the modulator assume a time-varying form. In particular, during a steep segment of the input signal the step size is increased; conversely, when the input signal is varying slowly, the step size is reduced. In this way, the step size adapts to the level of the input signal. The resulting method is called adaptive delta modulation (ADM).
Adaptive Delta Modulation

ADM improves performance over DM by varying the step size of the modulator, adapting it to the input signal level:
during a steep segment of the input signal the step size is increased;
when the input signal is varying slowly, the step size is reduced.
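A sketch of this idea; the doubling/halving rule on repeated bits is one common adaptation choice, assumed here rather than taken from the slides:

```python
import numpy as np

def adaptive_dm(x, step=0.01, step_min=0.005, step_max=0.2):
    u, prev_b = 0.0, 0
    stair = []
    for n, sample in enumerate(x):
        b = 1 if sample >= u else 0
        if n > 0:
            # same bit twice -> steep segment: grow the step;
            # alternating bits -> slowly varying input: shrink it.
            step = min(step * 2, step_max) if b == prev_b else max(step / 2, step_min)
        u += step if b else -step
        prev_b = b
        stair.append(u)
    return np.array(stair)

t = np.arange(400)
x = np.sin(2 * np.pi * t / 100)   # steeper than a fixed 0.01 step could follow
stair = adaptive_dm(x)
print(np.max(np.abs(stair - x)))  # the staircase keeps up with the steep input
```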
Adaptive Delta Modulation (block diagram: a DM loop on x(nTs) with a delay-Ts element and logic for step-size control)
Coding speech at low bit rates

Standard PCM operates at 64 kbps.
Conservation of bandwidth and low bit rates are needed to facilitate secure transmission over low-capacity radio channels.
Speech can be coded at low bit rates, possibly as low as 2 kbps, without compromising acceptable fidelity.
However, increased processing complexity and processing delays come with this.
Coding speech at low bit rates

The design philosophy for a waveform coder for speech at low bit rates:
remove redundancies from the speech signal as far as possible;
assign the available bits to code the non-redundant parts in an efficient way.
Algorithms for redundancy removal and bit assignment become increasingly complex as the bit rate is reduced.
Coding speech at low bit rates

Rule of thumb: computational complexity (measured in multiply-add operations) increases by an order of magnitude for every halving of the bit rate in the 64 to 8 kbps range.
Define ADPCM.

ADPCM means adaptive differential pulse code modulation, a combination of adaptive quantization and adaptive prediction. Adaptive quantization refers to a quantizer that operates with a time-varying step size. The autocorrelation function and power spectral density of speech signals are time-varying functions of their respective variables, so predictors for such input should also be time varying; hence adaptive predictors are used.
Coding speech at low bit rates by ADPCM

Adaptive Differential PCM (which achieves 32 kbps) is a widely used variation of PCM.
It codes the difference between sample points like differential PCM (DPCM), but also dynamically switches the coding scale to compensate for variations in amplitude and frequency, using an adaptive predictor for the differences between pulses.
How does ADPCM adapt these quantization levels?
if the difference signal is small, ADPCM decreases the size of the quantization steps
if the difference signal is large, ADPCM increases the size of the quantization steps
ADPCM thus adapts the quantization step size to the size of the input difference signal
this produces an SNR that is uniform throughout the dynamic range of the difference signal
Coding speech at low bit rates by ADPCM
ADPCM is a digital coding scheme that uses both adaptive quantization and adaptive prediction
adaptive quantization: continuously estimating the variance of the input signal
adaptive prediction: estimating the input signal from the quantized difference signal
Coding speech at low bit rates by ADPCM
Coding speech at low bit rates: Adaptive quantization
the quantizer operates with a time-varying step size Δ(nTs), where Ts is the sampling period
the step size Δ(nTs) is varied to match the variance σx²(nTs) of the input signal x(nTs)
σx(nTs) is the standard deviation, which varies with time
σxe(nTs) is an estimate of the standard deviation
adaptive quantization estimates σx(nTs) continuously
Two methods:
derive forward estimates of σx(nTs) using the unquantized samples of x(nTs): AQF
derive backward estimates of σx(nTs) using the quantized samples of x(nTs): AQB
Coding speech at low bit rates: Adaptive quantization
AQF
the unquantized samples of the speech signal are buffered
the samples are released after the estimate σxe(nTs) has been obtained
since the estimate is made on unquantized samples, the step size Δ(nTs) is independent of quantizing noise
more reliable than estimation from quantized samples
Coding speech at low bit rates: Adaptive quantization
AQF: this method requires transmission of level information (typically 5 to 6 bits per step-size sample) to the remote decoder at the receiver, adding overhead and processing delay
AQB avoids the problems of level transmission, buffering and delay, and is therefore more popular in practice than AQF
Coding speech at low bit rates: Adaptive quantization
Coding speech at low bit rates: AQB
AQB uses the recent history of the quantizer output to extract the information needed to compute Δ(nTs)
What is meant by forward and backward estimation?
AQF: adaptive quantization with forward estimation. Unquantized samples of the input signal are used to derive the forward estimates.
AQB: adaptive quantization with backward estimation. Samples of the quantizer output are used to derive the backward estimates.
APF: adaptive prediction with forward estimation, in which unquantized samples of the input signal are used to derive the forward estimates of the predictor coefficients.
APB: adaptive prediction with backward estimation, in which samples of the quantizer output and the prediction error are used to derive estimates of the predictor coefficients.
Coding speech at low bit rates: Adaptive prediction
Two methods:
derive forward estimates of the predictor coefficients using the unquantized samples of x(nTs): APF
derive backward estimates of the predictor coefficients using the quantized samples of x(nTs): APB
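A minimal sketch of the APB idea, assuming an LMS-style update rule (a common choice, though the text does not prescribe any particular algorithm): the two predictor coefficients are adapted using only the quantized prediction error and the reconstructed samples, both of which the decoder can regenerate, so no coefficients need to be transmitted. The step size mu, tap count and quantizer parameters are illustrative assumptions.

```python
# A sketch of backward-adaptive prediction (APB-style) with a two-tap
# predictor and an LMS-style update. All parameter values are assumptions
# for illustration only.

import math

def apb_predictor(samples, mu=0.05, levels=64, step=0.05):
    a = [0.0, 0.0]          # predictor coefficients, adapted as we go
    recon = [0.0, 0.0]      # reconstructed-sample history
    errors = []
    for x in samples:
        pred = a[0] * recon[-1] + a[1] * recon[-2]
        e = x - pred
        eq = max(-levels, min(levels, round(e / step))) * step  # quantized error
        xr = pred + eq                                          # reconstruction
        a[0] += mu * eq * recon[-1]   # LMS update from decoder-side quantities
        a[1] += mu * eq * recon[-2]
        recon.append(xr)
        errors.append(abs(e))
    return a, errors

# On a strongly correlated input the prediction error shrinks as the taps adapt.
sig = [math.sin(0.2 * n) for n in range(200)]
a, errors = apb_predictor(sig)
```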
Coding speech at low bit rates: APF
[Block diagram: the input samples feed a buffer and predictor-coefficient calculator; the resulting coefficients are also transmitted over the channel]
Coding speech at low bit rates: APB
[Block diagram: the predictor coefficients are derived from the quantizer output; the signals shown are x(nTs), xe(nTs), u(nTs) and y(nTs)]
Subband Coding
In sub-band coding (SBC), the speech signal is filtered into a number of subbands and each subband is adaptively encoded. The number of bits used in the encoding process differs for each subband signal, with bits assigned to the quantizers according to a perceptual criterion.
By encoding each subband individually, the quantization noise is confined within that subband. The output bit streams from the encoders are multiplexed and transmitted.
At the receiver, demultiplexing is performed, followed by decoding of each subband data signal. The sampled subband signals are then combined to yield the recovered speech.
Subband Coding
Note that downsampling of the subband signals must occur at the output of the subband filters to avoid oversampling. The downsampling ratio is given by the ratio of the original speech bandwidth to the subband bandwidth.
Conventional filters cannot be used for the production of subband signals because of the finite width of the band-pass transition bands. If the band-pass filters overlap in the frequency domain, subsampling causes aliasing, which destroys the harmonic structure of voiced sounds and produces unpleasant perceptual effects. If the band-pass filters do not overlap, the speech signal cannot be perfectly reconstructed, because the gaps between the channels introduce an audible echo. Quadrature mirror filter (QMF) banks [32] overcome this problem and enable perfect reconstruction of the speech signal.
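A toy two-band analysis/synthesis pair using the 2-tap Haar filters, the simplest QMF pair that achieves the perfect reconstruction mentioned above. Practical sub-band coders use longer QMF designs; this sketch only illustrates the split / downsample-by-2 / recombine structure.

```python
# Two-band Haar filter-bank sketch: perfect reconstruction with the
# simplest possible QMF pair. Filter choice is illustrative.

import math

def haar_analysis(x):
    """Split x into low- and high-band signals, each downsampled by 2."""
    s = 1 / math.sqrt(2)
    low = [s * (x[i] + x[i + 1]) for i in range(0, len(x) - 1, 2)]
    high = [s * (x[i] - x[i + 1]) for i in range(0, len(x) - 1, 2)]
    return low, high

def haar_synthesis(low, high):
    """Recombine the two sub-band signals into the full-band signal."""
    s = 1 / math.sqrt(2)
    x = []
    for l, h in zip(low, high):
        x.append(s * (l + h))   # even sample
        x.append(s * (l - h))   # odd sample
    return x

x = [0.3, -1.2, 0.8, 0.5, -0.7, 0.1, 0.9, -0.4]
low, high = haar_analysis(x)
y = haar_synthesis(low, high)   # reconstructs x up to floating-point error
```

In an actual sub-band coder, quantization would be applied to `low` and `high` (with different bit assignments) between analysis and synthesis.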
Adaptive Subband Coding
It is a frequency-domain coder, in which the speech signal is divided into a number of subbands and each one is coded separately. It exploits the noise-masking phenomenon of human perception for better speech quality. Noise shaping is achieved by the adaptive bit assignment.
Coding speech at low bit rates: Adaptive Sub-band Coding (ASBC)
PCM and ADPCM operate in the time domain
ASBC is a frequency-domain coder
the speech signal is divided into a number of sub-bands
each sub-band is encoded separately
capable of achieving 16 Kbps with quality comparable to 64 Kbps PCM
ASBC uses the following characteristics of speech and of the hearing mechanism to advantage: the quasi-periodic nature of voiced speech, and the noise-masking property of the hearing mechanism
Quasi-periodic nature: people speak with a characteristic pitch frequency; this permits reliable prediction of the pitch, a reduction in the prediction error, and a reduction in the number of bits per sample to be transmitted
Coding speech at low bit rates: (ASBC)
Noise-masking phenomenon: the human ear does not perceive noise in a frequency band if the noise is about 15 dB below the signal level in that band
A relatively large coding error can therefore be tolerated near formants, and the coding rate can be reduced
A formant is a peak in an acoustic frequency spectrum resulting from the resonant frequencies of an acoustical system. The term is most commonly used in phonetics and acoustics for the resonant frequencies of the vocal tract
Coding speech at low bit rates: (ASBC)
The number of bits used to encode each sub-band is varied dynamically; this is called adaptive bit assignment
The bits are shared among the bands, as necessary, depending on the encoding accuracy to be achieved for each sub-band
Coding speech at low bit rates: (ASBC)
Examples:
a signal dominated by low frequencies may use the bit assignment 5, 2, 1, 0
a signal dominated by high frequencies may use a bit assignment such as 1, 1, 3, ...
sub-bands with little or no energy content may not need to be encoded at all
quantizing noise within any sub-band is limited to that sub-band, so low-level speech in one sub-band cannot be hidden by the quantizing noise of another sub-band
Coding speech at low bit rates: (ASBC)
Steps:
1. The speech band is divided into a number of contiguous bands using a filter bank of band-pass filters (BPFs), typically 4 to 8
2. The output of each BPF is translated in frequency to a low-pass form by a modulation process
3. The sub-band signals are sampled at a rate slightly higher than the relevant Nyquist rate
Coding speech at low bit rates: (ASBC)
Steps:
4. The samples are digitally encoded using ADPCM; each sub-band is encoded based on the spectral content of that sub-band
5. The encoded samples are multiplexed and transmitted
6. Bit-assignment information is also transmitted, to enable the receiver to decode the sub-bands individually
Coding speech at low bit rates: (ASBC)
Steps:
7. At the receiver, the decoded sub-bands are translated back to their original locations in the frequency band
8. The frequency-retranslated sub-bands are summed to produce a close replica of the original signal
Coding speech at low bit rates: (ASBC)
fs = sampling rate of the original (full-band) signal
N = average number of bits used to encode a sample of the signal
M = number of sub-bands
Bit rate = N × fs bits per second
N × fs = (M × N) × (fs / M)
Bit rate = (total number of bits per sample) × (sampling rate per sub-band)
Coding speech at low bit rates: (ASBC)
Example:
number of sub-bands M = 4
sampling rate of the original signal fs = 8 KHz
average number of bits per sample N = 2
sampling rate for each sub-band = 2 KHz
total number of bits per sample = 8
Coding speech at low bit rates: (ASBC)
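The bit-rate identity and the example figures above can be checked directly:

```python
# Checking the ASBC bit-rate identity with the example figures from the text.

M = 4         # number of sub-bands
fs = 8000     # sampling rate of the original full-band signal (Hz)
N = 2         # average number of bits per (full-band) sample

bit_rate = N * fs                # N x fs
per_subband_rate = fs // M       # sampling rate per sub-band: 2 kHz
bits_per_instant = M * N         # total bits per sub-band sampling instant: 8

# The two factorizations agree: N*fs == (M*N) * (fs/M) == 16 kbps
assert bit_rate == bits_per_instant * per_subband_rate == 16000
```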
Subjective quality: Mean Opinion Score (MOS)
In multimedia (audio, voice telephony, or video), especially when compression techniques are used, the MOS (more realistic than SNR) provides a numerical indication of the perceived quality of the received media after compression and/or transmission
An MOS is obtained by conducting formal tests on human subjects
Coding speech at low bit rates
Subjective quality: Mean Opinion Score (MOS)
the MOS is generated by averaging the results of a set of standard subjective tests
a number of listeners rate the audio quality of test sentences read aloud by both male and female speakers over the communications medium being tested
the MOS is the arithmetic mean of all the individual scores
it can range from 1 (worst) to 5 (best)
Coding speech at low bit rates
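Since the MOS is just the arithmetic mean of the individual scores on the 1-to-5 scale, its computation is a one-liner; the ratings below are made-up illustrative values, not measured data:

```python
# MOS: arithmetic mean of individual listener scores on the 1 (bad) to
# 5 (excellent) scale. The example ratings are invented for illustration.

def mean_opinion_score(ratings):
    if not ratings:
        raise ValueError("need at least one rating")
    if any(not 1 <= r <= 5 for r in ratings):
        raise ValueError("ratings must lie on the 1..5 scale")
    return sum(ratings) / len(ratings)

print(mean_opinion_score([4, 5, 3, 4, 4]))   # 4.0
```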
Coding speech at low bit rates
Subjective quality: Mean Opinion Score (MOS)

MOS  Quality               Impairment
5    Excellent / Perfect   Imperceptible
4    Good / High           Perceptible, but not annoying
3    Fair / Communication  Slightly annoying
2    Poor                  Annoying
1    Bad                   Very annoying
Subjective quality: Mean Opinion Score (MOS), practical issues
Using MOS ratings: the 16 Kbps ASBC method approaches a rating of 4, very close to the 64 Kbps PCM and 32 Kbps ADPCM methods
Using SNR ratings: 16 Kbps ASBC compares poorly with the higher-bit-rate PCM methods
ASBC falls short of 64 Kbps PCM and 32 Kbps ADPCM, and its quality drops sharply with tandem codings
however, this is not significant in an all-digital link
Coding speech at low bit rates
Measuring Performance of Speech Coders
The quality of speech output of a speech coder is a function of bit-rate, complexity, delay and bandwidth.
Waveform Coding Techniques: Applications
Digital multiplexers: a hierarchy of digital multiplexers, whereby digitized voice, data and video signals are combined into one final data stream
Light-wave transmission link: well suited for use in long-haul telecommunication networks
Waveform Coding Techniques
Applications: Digital Multiplexers
Inputs at different rates (computer outputs, digitized voice, digitized fax, TV signals) are combined; the multiplexer operates at a higher rate than any of its inputs
Multiplexing combines several digital signals at different rates into a single data stream at a considerably higher bit rate than any of the inputs
[Conceptual diagram of multiplexing-demultiplexing]
Multiplexing of digital signals is accomplished by a bit-by-bit interleaving procedure: a selector switch sequentially selects a bit from each incoming line and applies it to the high-speed common line
at the receiver, the output from the common line is separated into the low-speed individual components and delivered to their respective destinations
Waveform coding techniques, Applications: Digital Multiplexers
Two major groups of digital multiplexers are used in practice
Low-speed operation: designed to combine relatively low-speed digital signals, up to a maximum of 4800 bps, into a higher-speed multiplexed signal with a rate of up to 9600 bps
used primarily to transmit data over voice-grade channels
uses modems for converting the digital format to an analog format
High-speed operation: designed to operate at much higher bit rates; forms part of the data transmission service generally provided by communication carrier companies
Example: the T1 carrier system, developed by the Bell System in the United States in the early 1960s for digital voice communication over short-haul distances of 10-50 miles
Waveform coding techniques, Applications: Digital Multiplexers
Transmission Rates
[Multiplexing hierarchy diagram, built up from the basic 64 kbit/s channel:
North American standard: 1544, 6312, 44736, 274176 kbit/s
European standard: 2048, 8448, 34368, 139264, 564992 kbit/s
Japanese standard: 1544, 6312, 32064, 97728 kbit/s
with multiplication factors such as ×24 / ×30 at the first level and ×3 to ×7 at the higher levels]
Digital Hierarchy (rates in Mbit/s)

Multiplexing   # of voice   North
level (DS)     channels     America   Europe   Japan
0              1            0.064     0.064    0.064
1              24           1.544              1.544
               30                     2.048
               48           3.152              3.152
2 (4×DS1)      96           6.312              6.312
               120                    8.448
Multiplexing   # of voice   North
level          channels     America   Europe    Japan
3 (7×DS2)      480                    34.368    32.064
               672          44.736
               1344         91.053
               1440                             97.728
4 (6×DS3)      1920                   139.264
               4032         274.176
               5760                             397.200
               7680                   565.148
Waveform coding techniques, Applications: Digital Multiplexers
Digital hierarchy, Bell System
T1 @ 1.544 Mbps, T2 @ 6.312 Mbps, T3 @ 44.736 Mbps, T4 @ 274.176 Mbps
[Diagram: 24 voice signals enter the first-level channel bank (PCM) to form DS1/T1; four T1 streams plus digital data are multiplexed at the second level into DS2/T2; seven T2 streams plus DPCM Picturephone are multiplexed at the third level into DS3/T3; six T3 streams plus PCM television are multiplexed at the fourth level into DS4/T4]
North American Hierarchy: Digital Trunk
[Diagram: 24 DS0s enter a T1 mux (channel bank) to form DS1; 48 DS0s enter a 1C mux to form DS1C; four DS1s enter a T2 mux (M1-2) to form DS2; seven DS2s enter a T3 mux (M2-3) to form DS3, or 28 DS1s enter a T3 mux (M1-3) directly; six DS3s enter a T4 mux (M3-4) to form DS4]

Level   # Voice   bps
DS0     1         64k
DS1     24        1.544M
DS1c    48        3.152M
DS2     96        6.312M
DS3     672       44.736M
DS4     4032      274.176M
Waveform Coding Techniques: Applications
Digital hierarchy, T1 carrier, Bell System

Level    Type          Input                         Output
First    Channel bank  24 × voice signals            T1 (1.544 Mbps)
Second   Multiplexer   4 × T1, digital data          T2 (6.312 Mbps)
Third    Multiplexer   7 × T2, DPCM (Picturephone)   T3 (44.736 Mbps)
Fourth   Multiplexer   6 × T3, PCM (Television)      T4 (274.176 Mbps)
T1 Carrier System Hierarchy of digital transmission formats that are used in
North America. The T stands for "Trunk". The basic unit of the T-carrier system is the DS-0, which is multiplexed to form transmission formats with higher speeds.
There exist four of them: T1, T2, T3 and T4. T1 is composed of 24 DS-0s. T2 = 4*T1. T3 = 7*T2. T4 = 6*T3.
Each of the T* units can also be referred to as a DS* unit, that is, T1=DS1, T2=DS2 etc.
The T-carrier system is quite similar to, and compatible with, the E-carrier system used in Europe, but it has lower capacity since it uses in-band signaling, or bit-robbing.
T1 Carrier System
The T1 carrier system was developed in the United States in the early 1960s for digital voice communication over short-haul distances of 10-50 miles. Each channel (user) is first sampled at a rate of 8000 samples per second and quantised using 8-bit companding.
24 voice channels are then combined into a composite signal denoted DS1. We thus have a total of 192 bits. One bit is added to this total for synchronisation purposes.
A 1010... sequence, in odd-numbered frames, is used for this purpose. There is a total of 193 bits in a frame of duration 1/8000 s = 125 µs.
The trunk rate is 193 bits / 125 µs = 193 × 8000 = 1.544 Mbit/s.
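The T1 frame arithmetic above can be verified directly:

```python
# Verifying the T1 frame arithmetic: 24 channels x 8 bits + 1 framing bit,
# at 8000 frames per second.

channels = 24
bits_per_sample = 8
framing_bits = 1
frame_rate = 8000   # frames per second: one 8-bit sample per channel per frame

bits_per_frame = channels * bits_per_sample + framing_bits
frame_duration_us = 1e6 / frame_rate
trunk_rate = bits_per_frame * frame_rate

assert bits_per_frame == 193
assert frame_duration_us == 125.0
assert trunk_rate == 1_544_000   # 1.544 Mbit/s
```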
Waveform coding techniques, Applications: Digital Multiplexers
Basic problems involved in the design of digital multiplexers, irrespective of the grouping:
Synchronization: demultiplexing requires that the bit rates of the signals be locked to a common clock; synchronization of the incoming signals is necessary
Framing: the multiplexed signal needs to be encapsulated in frames to enable identification of the individual components at the receiver
Handling of small variations / drift in the input bit rates
Bit stuffing: used to satisfy the requirements of synchronization and rate adjustment, accommodating small variations in the input data rates
the outgoing bit rate of the mux is kept slightly higher than the sum of the maximum expected bit rates of the input channels
this is done by stuffing bits, which are additional non-information-carrying bits
each incoming signal is stuffed with as many bits as necessary to raise its bit rate to that of a locally generated clock
at the demultiplexer, corresponding destuffing is carried out by removing the identified stuffed bits
Waveform coding techniques, Applications: Digital Multiplexers
Bit Stuffing
It was noted earlier that provision must be made to handle small transmission-rate variations from users. To handle small rate variations, we can employ a bit-stuffing technique.
Consider the arrangement shown in Figure 18.9 (elastic buffer for bit stuffing). The data sequence from each user is fed into an elastic buffer at a rate of R1 bits per second. The contents of this buffer are then fed to the input of the multiplexer at a higher rate, and the multiplexer also monitors the buffer contents.
If the input rate R1 begins to drop relative to the clock rate R'1, the buffer contents decrease. When the number of bits in the buffer drops below a predefined threshold level, the multiplexer disables readout of this buffer by the stuff signal, as shown in Figure 18.9. A bit is then stuffed. When the buffer contents rise above the threshold level, sampling of the buffer contents is resumed.
An example of the bit-stuffing process is shown in Figure 18.10. Bits are stuffed into the multiplexed data stream at time t = 3, when the buffer of user 1 drops below the threshold level, and at time t = 6, when the buffer of user 2 drops below the threshold level.
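A toy simulation of the elastic-buffer idea described above: the multiplexer's read clock runs slightly faster than the input rate, and whenever the buffer runs low a non-information stuff bit is emitted instead of a data bit. The 25% rate ratio and 2-bit threshold are illustrative assumptions, and real systems mark stuffed positions with control bits (such as the C bits of the M12 format, below) rather than with tagged tuples.

```python
# Toy elastic-buffer bit-stuffing simulation. Rate ratio and threshold
# are illustrative assumptions.

from collections import deque

def multiplex_with_stuffing(input_bits, reads_per_input=1.25, threshold=2):
    """Return the output stream as (kind, bit) pairs, kind in {'data', 'stuff'}."""
    buffer, output, credit = deque(), [], 0.0
    for b in input_bits:
        buffer.append(b)
        credit += reads_per_input          # faster output clock accrues reads
        while credit >= 1.0:
            credit -= 1.0
            if len(buffer) > threshold:
                output.append(('data', buffer.popleft()))
            else:
                output.append(('stuff', 0))   # buffer low: insert a stuff bit
    while buffer:                             # drain the remainder
        output.append(('data', buffer.popleft()))
    return output

out = multiplex_with_stuffing([1, 0, 1, 1, 0, 0, 1, 0])
destuffed = [b for kind, b in out if kind == 'data']   # recovers the input
```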
T1 Carrier System
designed to accommodate 24 voice channels, primarily for short distances
the human voice signal occupies 300 Hz to 3400 Hz and is passed through a LPF with a cut-off frequency of 3.4 KHz before sampling
W = 3.4 KHz, so the Nyquist rate is 6.8 KHz; the standard sampling rate in telephone systems is 8 KHz
each frame therefore occupies 125 µs
each frame comprises 24 × 8-bit words plus a synchronizing bit added at the end of the frame; total = 193 bits
T1 carrier (1.544 Mb/s)
The digital part of the phone system is based on the T1 carrier:
193-bit frame (125 µs; 8000 samples/s; 8 bits/sample/channel)
[Frame diagram: bit 1 is a framing code, followed by channels 1 through 24, with 8 data bits per channel]
Each channel has a data rate of 8000 samples/s × 8 bits/channel = 64 Kb/s
Waveform coding techniques, Applications: Digital Multiplexers
Digital multiplexers: T1 carrier system
frame size = 193 bits; frame duration = 125 µs; duration of each bit = 0.647 µs; bit rate = 1.544 Mbps
Special supervisory or signalling information is needed to transmit information related to: telephone off-hook, dialled number, telephone on-hook
in every sixth frame, the LSB of each voice channel is deleted
the signalling bit is inserted in place of the LSB
Super Frame
For two reasons (the assignment of the 8th digit in every 6th frame to signaling, and the need for two signaling paths in some switching systems) it is necessary to identify a superframe of 12 frames, in which the 6th and 12th frames carry the two signaling paths. To achieve this, and still allow rapid synchronization of the receiver framing circuitry, the frames are divided into odd and even frames.
T1 System Framing Structure
Applications
Digital Multiplexers: Bell System M12 multiplexer signal format
each frame is sub-divided into four sub-frames
the four sub-frames I, II, III, IV are transmitted in that order
[Signal-format diagram of the Bell System M12 multiplexer]
Three types of control bits
Needed to provide synchronization and frame indication, and to identify which of the four input signals has been stuffed. These control bits are labelled F, M and C.
F control bits: two per sub-frame; they constitute the main framing pulse. The main framing sequence is F0F1F0F1F0F1F0F1, i.e. 01 01 01 01
M control bits: one per sub-frame; they form the secondary framing pulse. The sequence is 0111
C control bits: three per sub-frame; they are stuffing indicators. CI refers to input channel I, CII to input channel II, CIII to input channel III and CIV to input channel IV
"000" for the three Cs indicates no stuffing, and "111" indicates stuffing
Digital Hierarchy
The output of the M12 multiplexer operates 136 kb/s faster than the aggregate rate of the four DS1 inputs: 6.312 Mb/s vs 4 × 1.544 = 6.176 Mb/s
The M12 frame has 1176 bits, i.e. four 294-bit subframes; each subframe is made up of 49-bit blocks; each block starts with a control bit followed by 4 × 12 information bits from the four DS1 channels
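The DS2/M12 frame numbers quoted above are mutually consistent, which a few lines of arithmetic confirm:

```python
# Checking the M12 / DS2 frame numbers quoted in the text.

frame_bits = 1176
subframes = 4
blocks_per_subframe = 6
block_bits = 49                 # 1 control bit + 48 information bits
ds1_inputs = 4
bits_per_input_per_block = 12

assert frame_bits // subframes == 294                     # 294-bit subframes
assert blocks_per_subframe * block_bits == 294            # 6 x 49 = 294
assert 1 + ds1_inputs * bits_per_input_per_block == block_bits

# Rate headroom: the DS2 output runs faster than four DS1 inputs combined.
assert 6_312_000 - 4 * 1_544_000 == 136_000               # 136 kb/s
```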
Makeup of a DS2 Frame
Each sub-frame consists of six 49-bit blocks; each block is a control bit followed by 48 information bits, 12 from each of the four DS1 inputs:
M 01 02 03 04 | C 01 02 03 04 | F0 01 02 03 04 | C 01 02 03 04 | C 01 02 03 04 | F1 01 02 03 04
4 M bits: 011X (X = 0 indicates an alarm)
C = 000 / 111: bit stuffing absent / present
nominal stuffing rate 1796 bps, maximum 5367 bps
M12 Multiplexer
12 bits from each of the four T1 inputs are interleaved to accumulate a total of 48 bits
control bits are inserted by the multiplexer: one bit between successive 48-bit data sequences
each frame contains 24 control bits
the control bits are of three types: F, M and C
M12 Multiplexer

Type  Bits per sub-frame  Description
F     2                   main framing pulses
M     1                   secondary framing pulses, identifying the four sub-frames
C     3                   stuffing indicators (CI refers to input channel I, CII to channel II, ...)
M12 Multiplexer
all three C control bits set to 1 indicate that a stuffed bit has been inserted into that T1 signal; 000 indicates no stuffing
the stuffed bit is inserted in the position of the first information bit of the T1 signal that follows the F1 control bit in the same sub-frame
a single error in any of the three C control bits can be detected and corrected at the receiver by using majority logic
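The three C bits form a 3-bit repetition code, so the receiver's majority logic corrects any single-bit error:

```python
# Majority-logic decoding of the three C control bits: a 3-bit
# repetition code that corrects any single-bit error.

def stuffing_indicated(c_bits):
    """Majority decision over the three C control bits (1 = stuffing)."""
    assert len(c_bits) == 3
    return sum(c_bits) >= 2

assert stuffing_indicated([1, 1, 1]) is True
assert stuffing_indicated([1, 0, 1]) is True     # single error corrected
assert stuffing_indicated([0, 0, 1]) is False    # single error corrected
```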
M12 Multiplexer
Demultiplexing: the receiver searches for the main framing sequence F0F1F0F1F0F1F0F1
this establishes the identity of the four input T1 signals and of the M and C control bits
correct framing of the C control bits is verified from the M0M1M1M1 sequence
finally, the four T1 signals are properly demultiplexed and de-stuffed
Waveform coding techniques, Applications: Light-Wave Transmission
Optical fibre cable links
Advantages: low transmission loss, high bandwidth, small size, light weight, immunity to EMI
Applications: long-haul, high-speed communications
Waveform coding techniques, Applications: Light-Wave Transmission
An optical fibre cable link consists of a transmitter (driver + light source), the optical fibre waveguide, and a receiver
Transmitter: the input is binary data fed from the output of a device such as a digital multiplexer
the driver for the light source is a low-voltage, high-current device
the driver turns the light source on or off
Waveform coding techniques, Applications: Light-Wave Transmission
the light source is a laser injection device or a semiconductor LED
the on-off light pulses are launched into the optical fibre cable
Optical fibre waveguide impairments: source-to-fibre coupling loss, fibre loss or attenuation, dispersion
Waveform coding techniques, Applications: Light-Wave Transmission
Receiver: regeneration of the original data
detection: the light pulses are converted back to electrical current pulses, using a photodiode to convert from optical power to current
pulse shaping and timing: amplification / filtering / equalization of the electrical pulses and extraction of timing information
decision making: deciding whether the received pulse is on or off
Optical Link Loss Budget Analysis
Taxonomy of Speech Coders
Speech coders fall into waveform coders and source coders
Waveform coders: time domain (PCM, ADPCM); frequency domain (e.g. sub-band coder, adaptive transform coder)
Source coders: linear predictive coder, vocoder
Waveform coders attempt to preserve the signal waveform; they are not speech-specific (i.e. general A-to-D conversion)
PCM 64 kbps, ADPCM 32 kbps, CVSDM 32 kbps
Vocoders: analyse speech, extract and transmit model parameters; use the model parameters to synthesize speech; LPC-10: 2.4 kbps
Hybrids combine the best of both, e.g. CELP (used in GSM)
Speech Quality of Various Coders
How does DPCM calculate the difference between the current sample and a previous sample?
The first part of DPCM works exactly like PCM (that is why it is called differential PCM). The input signal is sampled at a constant sampling frequency (at least twice the highest input frequency). These samples are then modulated using the PAM process. At this point, the DPCM process takes over.
The sampled input signal is stored in what is called a predictor. The predictor passes the stored sample through a differentiator, which compares the previous sample with the current sample and sends the difference to the quantizing and coding phase of PCM (this phase can use uniform quantizing, or companding with A-law or µ-law).
After quantizing and coding, the difference signal is transmitted to its final destination. At the receiving end of the network, everything is reversed. First the difference signal is dequantized. This difference is then added to a sample stored in a predictor, and the result is sent to a low-pass filter that reconstructs the original input signal.
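The loop described above can be sketched in a few lines. The predictor here is simply the previous reconstructed sample, and the uniform quantizer step size is an illustrative assumption:

```python
# Bare-bones DPCM: the encoder quantizes the difference between the input
# and a prediction (the previous reconstructed sample); the decoder
# accumulates the dequantized differences. Step size is an assumption.

def dpcm_encode(samples, step=0.1):
    pred, codes = 0.0, []
    for x in samples:
        code = round((x - pred) / step)   # quantized difference
        codes.append(code)
        pred += code * step               # track the decoder's reconstruction
    return codes

def dpcm_decode(codes, step=0.1):
    pred, out = 0.0, []
    for code in codes:
        pred += code * step               # accumulate dequantized differences
        out.append(pred)
    return out

x = [0.0, 0.12, 0.31, 0.55, 0.52, 0.30]
y = dpcm_decode(dpcm_encode(x))
# Each reconstructed sample lies within half a step of the original.
```

Because the encoder predicts from its own reconstruction rather than from the raw input, the quantization error does not accumulate from sample to sample.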
Linear Predictive Coding (LPC)
In DPCM, the value of the current sample is predicted from the previous sample. Can a better prediction be made?
The answer is yes. For example, we can use the previous two samples to predict the current one.
LPC is more general than DPCM: it exploits the correlation between multiple consecutive samples.
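The two-previous-samples idea can be made concrete: for a sampled sinusoid the exact two-tap predictor is x[n] = 2 cos(w) x[n-1] - x[n-2], so its prediction error vanishes, while a one-sample (DPCM-style) predictor leaves a large residual. The test tone and its frequency are illustrative choices:

```python
# Two-sample linear prediction vs one-sample prediction on a sinusoid.
# The identity sin(wn) = 2*cos(w)*sin(w(n-1)) - sin(w(n-2)) makes the
# two-tap predictor essentially exact for this signal.

import math

w = 0.3                               # normalized frequency of the test tone
x = [math.sin(w * n) for n in range(100)]

a1, a2 = 2 * math.cos(w), -1.0        # two-tap predictor coefficients

one_tap_err = [abs(x[n] - x[n - 1]) for n in range(2, len(x))]
two_tap_err = [abs(x[n] - (a1 * x[n - 1] + a2 * x[n - 2])) for n in range(2, len(x))]

assert max(two_tap_err) < 1e-9        # two-tap prediction is essentially exact
assert max(two_tap_err) < max(one_tap_err)
```

A smaller prediction error means fewer bits are needed to code the residual, which is exactly the leverage LPC exploits.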