Digital Signal Processing
Module 7: Stochastic Signal Processing
Module Overview:
◮ Module 7.1: Stochastic signals
◮ Module 7.2: Quantization
◮ Module 7.3: A/D and D/A conversion
Module 7.1: Stochastic Signals
Overview:
◮ A simple random signal
◮ Power spectral density
◮ Filtering a stochastic signal
◮ Noise
Deterministic vs. stochastic
◮ deterministic signals are known in advance: x[n] = sin(0.2n)
◮ interesting signals are not known in advance: s[n] = what I’m going to say next
◮ we usually know something, though: s[n] is a speech signal
◮ stochastic signals can be described probabilistically
◮ can we do signal processing with random signals? Yes!
◮ we will not develop stochastic signal processing rigorously, but give enough intuition to work with things such as “noise”
A simple discrete-time random signal generator
For each new sample, toss a fair coin:

$$x[n] = \begin{cases} +1 & \text{if the outcome of the } n\text{-th toss is heads} \\ -1 & \text{if the outcome of the } n\text{-th toss is tails} \end{cases}$$

◮ each sample is independent of all others
◮ each sample value has a 50% probability
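A minimal sketch of this generator in Python (numpy assumed; the function name is illustrative):

```python
import numpy as np

def coin_toss_signal(N, rng=None):
    """N samples of the fair coin-toss signal: i.i.d. values in {-1, +1}."""
    rng = np.random.default_rng() if rng is None else rng
    # each sample is +1 (heads) or -1 (tails) with probability 1/2
    return rng.choice([-1, 1], size=N)

x = coin_toss_signal(32)   # one realization of the process
```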
A simple discrete-time random signal generator
◮ every time we turn on the generator we obtain a different realization of the process
◮ we know the “mechanism” behind each instance
◮ but how can we analyze a random signal?

[Figure: one realization of the coin-toss signal for n = 0, …, 30, with values in {−1, +1}]
Spectral properties?
◮ let’s try with the DFT of a finite set of random samples
◮ every time it’s different; maybe with more data?
◮ no clear pattern... we need a new strategy

[Figure: |X[k]|² of single realizations for N = 32, 64, and 128 samples; every realization looks different and no clear pattern emerges]
Averaging
◮ when faced with random data, an intuitive response is to take “averages”
◮ in probability theory the average is taken across realizations and is called the expectation
◮ for the coin-toss signal:

$$\mathrm{E}[x[n]] = -1 \cdot P[n\text{-th toss is tails}] + 1 \cdot P[n\text{-th toss is heads}]$$

◮ so the average value of each sample is zero...
Averaging the DFT
◮ ... as a consequence, averaging the DFT will not work
◮ by linearity of the expectation, E[X[k]] = 0
◮ however the signal “moves”, so its energy or power must be nonzero
Energy and power
◮ the coin-toss signal has infinite energy (see Module 2.1):

$$E_x = \lim_{N\to\infty} \sum_{n=-N}^{N} |x[n]|^2 = \lim_{N\to\infty}\,(2N+1) = \infty$$

◮ however it has finite power over any interval:

$$P_x = \lim_{N\to\infty} \frac{1}{2N+1} \sum_{n=-N}^{N} |x[n]|^2 = 1$$
Averaging
Let’s try to average the DFT’s squared magnitude, normalized (see the sketch after this list):
◮ pick an interval length N
◮ pick a number of iterations M
◮ run the signal generator M times and obtain M N-point realizations
◮ compute the DFT of each realization
◮ average their squared magnitudes divided by N
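A sketch of this averaging experiment (numpy assumed; the function name is illustrative):

```python
import numpy as np

def averaged_psd(N, M, rng=None):
    """Average |DFT|^2 / N over M independent N-point realizations."""
    rng = np.random.default_rng() if rng is None else rng
    acc = np.zeros(N)
    for _ in range(M):
        x = rng.choice([-1, 1], size=N)       # one coin-toss realization
        acc += np.abs(np.fft.fft(x))**2 / N   # normalized squared magnitude
    return acc / M

# as M grows, the average flattens out around 1 for every bin k
print(averaged_psd(N=32, M=5000).round(2))
```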
Averaged DFT square magnitude

[Figure: averaged |X[k]|²/N for M = 1, 10, 1000, and 5000 iterations (N = 32); as M grows, the average flattens toward 1 for all k]
Power spectral density

$$P[k] = \mathrm{E}\bigl[\,|X_N[k]|^2\,\bigr] / N$$

◮ it looks very much as if P[k] = 1
◮ if |X_N[k]|² tends to the energy distribution in frequency...
◮ ...then |X_N[k]|²/N tends to the power distribution (a.k.a. density) in frequency
◮ the frequency-domain representation for stochastic processes is the power spectral density
Power spectral density: intuition
◮ P[k] = 1 means that the power is equally distributed over all frequencies
◮ i.e., we cannot predict if the signal moves “slowly” or “super-fast”
◮ this is because the samples are independent of each other: we could have a realization of all ones, or a realization in which the sign changes every other sample, or anything in between
Filtering a random process
◮ let’s filter the random process with a 2-point moving average filter:

$$y[n] = \frac{x[n] + x[n-1]}{2}$$

◮ what is the power spectral density of the output?
Averaged DFT magnitude of filtered process

[Figure: averaged |Y[k]|²/N for M = 1, 10, and 5000 iterations (N = 32); for large M the average converges to |(1 + e^{j(2π/N)k})/2|²]
Filtering a random process
◮ it looks like P_y[k] = P_x[k] |H[k]|², where H[k] = DFT{h[n]}
◮ can we generalize these results beyond a finite set of samples?
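A minimal numerical check of this relation (a sketch; a circular 2-point moving average is used so the DFT identity holds exactly):

```python
import numpy as np

N, M = 32, 5000
rng = np.random.default_rng(0)
acc = np.zeros(N)
for _ in range(M):
    x = rng.choice([-1.0, 1.0], size=N)
    y = 0.5 * (x + np.roll(x, 1))        # circular 2-point moving average
    acc += np.abs(np.fft.fft(y))**2 / N  # averaged PSD estimate of y
H = 0.5 * (1 + np.exp(-2j * np.pi * np.arange(N) / N))  # DFT of h[n]
print(np.max(np.abs(acc / M - np.abs(H)**2)))           # small for large M
```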
Stochastic signal processing
◮ a stochastic process is characterized by its power spectral density (PSD)
◮ it can be shown (see the textbook) that the PSD is

$$P_x(e^{j\omega}) = \mathrm{DTFT}\{r_x[n]\}$$

where $r_x[n] = \mathrm{E}[x[k]\,x[n+k]]$ is the autocorrelation of the process
◮ for a filtered stochastic process $y[n] = \mathcal{H}\{x[n]\}$, it is:

$$P_y(e^{j\omega}) = |H(e^{j\omega})|^2\, P_x(e^{j\omega})$$
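The autocorrelation route can be checked numerically as well. For the moving-average output, r_y[0] = 1/2 and r_y[±1] = 1/4, whose DTFT is (1 + cos ω)/2 = |H(e^{jω})|² P_x(e^{jω}). A sketch of the estimate:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.choice([-1.0, 1.0], size=200_000)
y = 0.5 * (x[1:] + x[:-1])                   # 2-point moving average
for lag in range(3):
    r = np.mean(y[:len(y) - lag] * y[lag:])  # autocorrelation estimate
    print(lag, round(r, 3))                  # ~0.5, ~0.25, ~0.0
```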
Stochastic signal processing
Key points:
◮ filters designed for deterministic signals still work (in magnitude) in the stochastic case
◮ we lose the concept of phase, since we don’t know the shape of a realization in advance
Noise
◮ noise is everywhere:
• thermal noise
• sum of extraneous interferences
• quantization and numerical errors
• ...
◮ we can model noise as a stochastic signal
◮ the most important noise is white noise
White noise
◮ “white” indicates uncorrelated samples
◮ $r_w[n] = \sigma^2 \delta[n]$
◮ $P_w(e^{j\omega}) = \sigma^2$
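A quick empirical check (a sketch): the coin-toss signal is white with σ² = 1, so its estimated autocorrelation should vanish at all nonzero lags:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.choice([-1.0, 1.0], size=100_000)    # white noise, sigma^2 = 1
for lag in range(4):
    r = np.mean(w[:len(w) - lag] * w[lag:])  # estimate of r_w[lag]
    print(lag, round(r, 3))                  # ~1.0 at lag 0, ~0.0 otherwise
```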
White noise

[Figure: $P_w(e^{j\omega}) = \sigma^2$, flat over $-\pi \le \omega < \pi$]
White noise
◮ the PSD is independent of the probability distribution of the individual samples (it depends only on the variance)
◮ the distribution is important to estimate bounds for the signal
◮ very often a Gaussian distribution models the experimental data the best
◮ AWGN: additive white Gaussian noise
END OF MODULE 7.1
Module 7.2: Quantization
Overview:
◮ Quantization
◮ Uniform quantization and error analysis
◮ Clipping, saturation, companding
Quantization
◮ digital devices can only deal with integers (b bits per sample)
◮ we need to map the range of a signal onto a finite set of values
◮ irreversible loss of information → quantization noise
Quantization schemes

$$x[n] \;\longrightarrow\; Q\{\cdot\} \;\longrightarrow\; \hat{x}[n]$$

Several factors at play:
◮ storage budget (bits per sample)
◮ storage scheme (fixed point, floating point)
◮ properties of the input
• range
• probability distribution
Scalar quantization

$$x[n] \;\longrightarrow\; Q\{\cdot\} \;\longrightarrow\; \hat{x}[n] \qquad (R \text{ bits per sample})$$

The simplest quantizer:
◮ each sample is encoded individually (hence scalar)
◮ each sample is quantized independently (memoryless quantization)
◮ each sample is encoded using R bits
Scalar quantization
Assume the input signal is bounded: A ≤ x[n] ≤ B for all n:
◮ each sample is quantized over 2^R possible values ⇒ 2^R intervals
◮ each interval is associated with a quantization value

[Figure: the range [A, B] partitioned into intervals with quantization values x̂_0, x̂_1, x̂_2, x̂_3]
Scalar quantization
Example for R = 2:

[Figure: the range [A, B] split at boundaries i_0, …, i_4 into intervals I_0, …, I_3, each with quantization value x̂_k and binary code k = 00, 01, 10, 11]

◮ what are the optimal interval boundaries i_k?
◮ what are the optimal quantization values x̂_k?
Quantization Error

$$e[n] = Q\{x[n]\} - x[n] = \hat{x}[n] - x[n]$$

◮ model x[n] as a stochastic process
◮ model the error as a white noise sequence:
• error samples are uncorrelated
• all error samples have the same distribution
◮ we need statistics of the input to study the error
Uniform quantization
◮ simple but very general case
◮ the range is split into 2^R equal intervals of width ∆ = (B − A)·2^{−R}

[Figure: the range [A, B] divided into equal intervals of width ∆]
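A minimal quantizer sketch under these assumptions (midpoint reconstruction, which the derivation below shows is optimal for a uniform input; the function name is illustrative):

```python
import numpy as np

def uniform_quantize(x, A=-1.0, B=1.0, R=2):
    """Midpoint uniform quantizer on [A, B] with R bits per sample."""
    delta = (B - A) * 2.0**(-R)                # interval width
    k = np.floor((x - A) / delta).astype(int)  # interval index
    k = np.clip(k, 0, 2**R - 1)                # keep boundary samples in range
    return A + k * delta + delta / 2           # interval midpoint

x = np.array([-0.9, -0.2, 0.1, 0.97])
print(uniform_quantize(x))                     # [-0.75 -0.25  0.25  0.75]
```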
Uniform quantization
The mean square error is the variance of the error signal:

$$\sigma_e^2 = \mathrm{E}\bigl[\,|Q\{x[n]\} - x[n]|^2\,\bigr] = \int_A^B f_x(\tau)\,(Q\{\tau\} - \tau)^2\, d\tau = \sum_{k=0}^{2^R-1} \int_{I_k} f_x(\tau)\,(\hat{x}_k - \tau)^2\, d\tau$$

the error depends on the probability distribution of the input
Uniform quantization of uniform input
Uniform-input hypothesis:

$$f_x(\tau) = \frac{1}{B - A}$$

$$\sigma_e^2 = \sum_{k=0}^{2^R-1} \int_{I_k} \frac{(\hat{x}_k - \tau)^2}{B - A}\, d\tau$$
Uniform quantization of uniform input
Let’s find the optimal quantization points by minimizing the error:

$$\frac{\partial \sigma_e^2}{\partial \hat{x}_m} = \frac{\partial}{\partial \hat{x}_m} \sum_{k=0}^{2^R-1} \int_{I_k} \frac{(\hat{x}_k - \tau)^2}{B - A}\, d\tau = \int_{I_m} \frac{2(\hat{x}_m - \tau)}{B - A}\, d\tau = \left[ -\frac{(\hat{x}_m - \tau)^2}{B - A} \right]_{\tau = A + m\Delta}^{\tau = A + m\Delta + \Delta}$$
Uniform quantization of uniform input
Minimizing the error:

$$\frac{\partial \sigma_e^2}{\partial \hat{x}_m} = 0 \quad \text{for} \quad \hat{x}_m = A + m\Delta + \frac{\Delta}{2}$$

the optimal quantization point is the interval’s midpoint, for all intervals
Uniform quantization of uniform input
The quantizer’s mean square error:

$$\sigma_e^2 = \sum_{k=0}^{2^R-1} \int_{A+k\Delta}^{A+k\Delta+\Delta} \frac{(A + k\Delta + \Delta/2 - \tau)^2}{B - A}\, d\tau = 2^R \int_0^{\Delta} \frac{(\Delta/2 - \tau)^2}{B - A}\, d\tau = \frac{\Delta^2}{12}$$
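A quick Monte Carlo check of the ∆²/12 result (a sketch, assuming a uniform input on [A, B]):

```python
import numpy as np

A, B, R = -1.0, 1.0, 4
delta = (B - A) * 2.0**(-R)
rng = np.random.default_rng(0)
x = rng.uniform(A, B, size=1_000_000)
xq = A + (np.floor((x - A) / delta) + 0.5) * delta  # midpoint quantizer
print(np.mean((xq - x)**2), delta**2 / 12)          # both ~1.3e-3
```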
Error analysis
◮ error energy: σ_e² = ∆²/12, with ∆ = (B − A)/2^R
◮ signal energy: σ_x² = (B − A)²/12
◮ signal-to-noise ratio: SNR = σ_x²/σ_e² = 2^{2R}
◮ in dB:

$$\mathrm{SNR_{dB}} = 10 \log_{10} 2^{2R} \approx 6R \text{ dB}$$
The “6 dB/bit” rule of thumb
◮ a compact disc has 16 bits/sample: max SNR = 96 dB
◮ a DVD has 24 bits/sample: max SNR = 144 dB
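The same numbers fall out of the exact formula (a one-line check):

```python
import math

# SNR_dB = 10*log10(2^(2R)) ≈ 6.02*R dB
for R in (16, 24):
    print(R, round(10 * math.log10(2.0**(2 * R)), 1))  # 96.3, 144.5
```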
Other quantization errors
If the input is not bounded to [A, B]:
◮ clip samples to [A, B]: nonlinear distortion (can be put to good use in guitar distortion effects)
◮ smoothly saturate the input: this simulates the saturation curves of analog equipment
Clipping vs saturation

[Figure: hard clipping (left) vs. smooth saturation (right); input range [−2, 2], output range [−1, 1]]
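A sketch of the two curves (tanh is assumed here as one common smooth saturator; the source does not specify the exact curve):

```python
import numpy as np

def clip(x):
    return np.clip(x, -1.0, 1.0)   # hard clipping to [-1, 1]

def saturate(x):
    return np.tanh(x)              # smooth saturation (illustrative choice)

x = np.linspace(-2, 2, 9)
print(clip(x))
print(saturate(x).round(2))
```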
Other quantization errors
If the input is not uniform:
◮ use a uniform quantizer and accept the increased error; for instance, if the input is Gaussian:

$$\sigma_e^2 = \frac{\sqrt{3\pi}}{2}\,\sigma^2 \Delta^2$$

◮ design an optimal quantizer for the input distribution, if known (Lloyd-Max algorithm)
◮ use “companders”
µ-law compander

$$C\{x[n]\} = \mathrm{sgn}(x[n])\, \frac{\ln(1 + \mu\,|x[n]|)}{\ln(1 + \mu)}$$
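A sketch of the compander pair (µ = 255 is the value used in North American telephony; the helper names are illustrative):

```python
import numpy as np

def mu_law_compress(x, mu=255.0):
    """Compress x in [-1, 1]: more resolution for small amplitudes."""
    return np.sign(x) * np.log1p(mu * np.abs(x)) / np.log1p(mu)

def mu_law_expand(y, mu=255.0):
    """Inverse of mu_law_compress."""
    return np.sign(y) * ((1.0 + mu)**np.abs(y) - 1.0) / mu

x = np.array([-0.5, -0.01, 0.01, 0.5])
print(mu_law_expand(mu_law_compress(x)))  # recovers x
```

In use, one compresses before the uniform quantizer and expands after reconstruction.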
Module 7.3: A/D and D/A Conversion
Overview:
◮ Analog-to-digital (A/D) conversion
◮ Digital-to-analog (D/A) conversion
From analog to digital
◮ sampling discretizes time
◮ quantization discretizes amplitude
◮ how is it done in practice?
A tiny bit of electronics: the op-amp

[Figure: op-amp symbol with inputs v_p (+) and v_n (−) and output v_o]

$$v_o = G\,(v_p - v_n)$$
The two key properties
◮ infinite input gain (G ≈ ∞)
◮ zero input current
Inside the box

[Figure: internal schematic of the op-amp between the supply rails +V_cc and −V_cc, with inputs v_p, v_n and output v_o]
The op-amp in open loop: comparator

[Figure: op-amp with input x on the + terminal and threshold V_T on the − terminal, output y]

$$y = \begin{cases} +V_{cc} & \text{if } x > V_T \\ -V_{cc} & \text{if } x < V_T \end{cases}$$
The op-amp in closed loop: buffer

[Figure: op-amp with input x on the + terminal and the output y fed back to the − terminal]

$$y = x$$
The op-amp in closed loop: inverting amplifier

[Figure: input x through R_1 into the − terminal, feedback resistor R_2 from the output y; the + terminal is grounded]

$$y = -(R_2/R_1)\,x$$
A/D Converter: Sample & Hold

[Figure: input x(t) buffered into a switch T1 driven by a clock k(t) at rate F_s, charging a hold capacitor C1, followed by an output buffer]
A/D Converter: 2-Bit Quantizer

[Figure: a ladder of equal resistors R between +V_0 and −V_0 sets the thresholds +0.5V_0, 0, −0.5V_0; comparators against x[n] produce the 2-bit output codes (MSB, LSB) 11, 10, 01, 00]
D/A Converter

[Figure: an R-2R resistor ladder driven by the bits (MSB … LSB) with reference V_0, feeding an op-amp that produces the analog output x(t)]
END OF MODULE 7.3
END OF MODULE 7