6.02 Fall 2012, Lecture #8 (MIT OpenCourseWare)
• Noise: bad things happen to good signals
• Signal-to-noise ratio and decibel (dB) scale
• PDFs, means, variances, Gaussian noise
• Bit error rate for bipolar signaling
6.02 Fall 2012 Lecture 8, Slide 1
Single Link Communication Model
6.02 Fall 2012 Lecture 8, Slide 2
[Block diagram: single-link communication model, spanning the end-host computers and the physical link]
Transmit chain (end-host computer): Original source → Digitize (if needed) → Source coding → source binary digits ("message bits") → bit stream → Channel Coding (bit error correction) → Mapper → transmit samples (voltages) over the physical link
Receive chain: received samples → Demapper → Channel Decoding (reducing or removing bit errors) → bit stream → Source decoding → Render/display → Receiving app/user
6.02 Fall 2012 Lecture 8, Slide 3
From Baseband to Modulated Signal and Back
codeword bits in: 1001110101 → generate digitized symbols → x[n] → DAC → modulate → NOISY & DISTORTING ANALOG CHANNEL → demodulate & filter → ADC → y[n] → sample & threshold → codeword bits out: 1001110101
(Image by MIT OpenCourseWare.)
Mapping Bits to Samples at Transmitter
codeword bits in: 1001110101 → generate digitized symbols → x[n], at 16 samples per bit
1 0 0 1 1 1 0 1 0 1
6.02 Fall 2012 Lecture 8, Slide 4
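The mapper stage above can be sketched in a few lines. This is a minimal illustration, not the course's reference code; the function name and the ±1 V levels (bipolar signaling, introduced later in the lecture) are assumptions.

```python
def bits_to_samples(bits, samples_per_bit=16, v1=1.0, v0=-1.0):
    """Map each message bit to a run of identical transmit samples x[n].
    Bipolar signaling assumed: '1' -> +Vp, '0' -> -Vp."""
    x = []
    for b in bits:
        level = v1 if b == 1 else v0
        x.extend([level] * samples_per_bit)
    return x

# 4 bits at 16 samples per bit gives 64 transmit samples
x = bits_to_samples([1, 0, 0, 1], samples_per_bit=16)
```

With 16 samples per bit, as in the slide, each bit occupies one 16-sample interval of x[n].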
Samples after Processing at Receiver
codeword bits in: 1001110101 → generate digitized symbols → x[n] → DAC/modulate → NOISY & DISTORTING ANALOG CHANNEL → ADC → demodulate & filter → y[n]
(Assuming no noise, only end-to-end distortion.)
6.02 Fall 2012 Lecture 8, Slide 5
Mapping Samples to Bits at Receiver
codeword bits in: 1001110101 → generate digitized symbols → x[n] → DAC/modulate → NOISY & DISTORTING ANALOG CHANNEL → ADC → demodulate & filter → y[n] → sample & threshold → b[j] → codeword bits out: 1001110101
(n = sample index, j = bit index; 16 samples per bit; each bit decision compares one sample against a threshold.)
1 0 0 1 1 1 0 1 0 1
6.02 Fall 2012 Lecture 8, Slide 6
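The sample-and-threshold demapper can be sketched as follows, under the same assumptions as before (function names are illustrative; the midpoint-of-interval sampling instant is one reasonable choice, not the only one).

```python
def samples_to_bits(y, samples_per_bit=16, threshold=0.0):
    """Demap received samples y[n] to bits b[j]: take one sample near the
    middle of each bit interval and compare it against the threshold."""
    bits = []
    for j in range(len(y) // samples_per_bit):
        n_j = j * samples_per_bit + samples_per_bit // 2  # sample index for bit j
        bits.append(1 if y[n_j] > threshold else 0)
    return bits

# Idealized received samples for bits 1, 0, 1 (bipolar levels, no noise)
y = [1.0] * 16 + [-1.0] * 16 + [1.0] * 16
b = samples_to_bits(y)
```

For bipolar signaling at ±Vp the natural threshold is 0, midway between the two levels.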
For now, assume no distortion, only Additive Zero-Mean Noise
• Received signal: y[n] = x[n] + w[n], i.e., the received samples y[n] are the transmitted samples x[n] plus zero-mean noise w[n] on each sample, assumed iid (independent and identically distributed at each n).
• Signal-to-Noise Ratio (SNR): usually denotes the ratio of the (time-averaged or peak) signal power, i.e., squared amplitude of x[n], to the noise variance, i.e., expected squared amplitude of w[n].
6.02 Fall 2012 Lecture 8, Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values:
• Lower A → lower SNR
• Signal quality degrades with lower SNR
6.02 Fall 2012 Lecture 8, Slide 8
Signal-to-Noise Ratio (SNR)
The signal-to-noise ratio (SNR) is useful in judging the impact of noise on system performance:
SNR = P_signal / P_noise
SNR for power is often measured in decibels (dB):
SNR (dB) = 10 log10 (P_signal / P_noise)

10 log10(X)   X
   100        10^10
    90        10^9
    80        10^8
    70        10^7
    60        10^6
    50        10^5
    40        10^4
    30        10^3
    20        100
    10        10
     0        1
   -10        0.1
   -20        0.01
   -30        0.001
   -40        10^-4
   -50        10^-5
   -60        10^-6
   -70        10^-7
   -80        10^-8
   -90        10^-9
  -100        10^-10

Caution: for measuring ratios of amplitudes, rather than powers, take 20 log10 (ratio).
Note: 3.01 dB is a factor of 2 in power ratio.
6.02 Fall 2012 Lecture 8, Slide 9
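The two conversions in the slide (10 log10 for power ratios, 20 log10 for amplitude ratios) are easy to check numerically; this small sketch uses only the standard library, with illustrative function names.

```python
import math

def power_ratio_to_db(ratio):
    """Convert a power ratio P_signal / P_noise to decibels."""
    return 10 * math.log10(ratio)

def amplitude_ratio_to_db(ratio):
    """Amplitude ratios use 20 log10, since power goes as amplitude squared."""
    return 20 * math.log10(ratio)

# A factor of 2 in power is about 3.01 dB; a factor of 10 in amplitude is 20 dB.
```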
Noise Characterization: From Histogram to PDF
Experiment: create histograms of sample values from independent trials of increasing lengths.
The histogram typically converges to a shape that is known, after normalization to unit area, as a probability density function (PDF).
6.02 Fall 2012 Lecture 8, Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF f_X(x) if X takes on a numerical value in the range x1 to x2 with a probability calculated from the PDF of X as
P(x1 < X < x2) = ∫ from x1 to x2 of f_X(x) dx
A PDF is not a probability; its associated integrals are. Note that probability values are always in the range 0 to 1.
6.02 Fall 2012 Lecture 8, Slide 11
The Ubiquity of Gaussian Noise
The net noise observed at the receiver is often the sum of many small, independent random contributions from many factors. If these independent random variables have finite mean and variance, the Central Limit Theorem says their sum will be (approximately) Gaussian.
The figure below shows the histograms of the results of 10,000 trials of summing 100 random samples drawn from [-1, 1] using two different distributions.
[Figure: two histograms of the trial sums, one for samples drawn from a triangular PDF on [-1, 1] (peak height 1 at 0) and one for a uniform PDF on [-1, 1] (height 0.5); both histograms look Gaussian.]
6.02 Fall 2012 Lecture 8, Slide 12
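The uniform half of that experiment can be reproduced in a few lines. A sketch with an arbitrary seed: each uniform[-1, 1] sample has mean 0 and variance 1/3, so the sum of 100 of them should have mean near 0 and variance near 100/3 ≈ 33.3, with a Gaussian-looking histogram.

```python
import random

random.seed(2)
trials = []
for _ in range(10000):           # 10,000 trials, as in the slide
    s = sum(random.uniform(-1, 1) for _ in range(100))  # sum of 100 samples
    trials.append(s)

mean = sum(trials) / len(trials)
var = sum((t - mean) ** 2 for t in trials) / len(trials)
# Expect mean close to 0 and variance close to 100 * (1/3)
```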
Mean and Variance of a Random Variable X
The mean or expected value µ_X is defined and computed as
µ_X = ∫ from -∞ to ∞ of x f_X(x) dx
The variance σ_X² is the expected squared variation, or deviation, of the random variable around the mean, and is thus computed as
σ_X² = ∫ from -∞ to ∞ of (x - µ_X)² f_X(x) dx
The square root of the variance is the standard deviation, σ_X.
6.02 Fall 2012 Lecture 8, Slide 13
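These two integrals can be approximated numerically for any PDF with bounded support. A sketch using the midpoint rule (function names are illustrative), checked against the uniform PDF on [-1, 1], whose mean is 0 and variance is 1/3.

```python
def mean_and_variance(pdf, lo, hi, steps=100000):
    """Midpoint-rule approximations of mu = integral of x f(x) dx and
    sigma^2 = integral of (x - mu)^2 f(x) dx over [lo, hi]."""
    dx = (hi - lo) / steps
    xs = [lo + (i + 0.5) * dx for i in range(steps)]
    mu = sum(x * pdf(x) for x in xs) * dx
    var = sum((x - mu) ** 2 * pdf(x) for x in xs) * dx
    return mu, var

uniform = lambda x: 0.5          # uniform PDF on [-1, 1], area = 1
mu, var = mean_and_variance(uniform, -1.0, 1.0)
# Expect mu near 0 and var near 1/3
```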
Visualizing Mean and Variance
Changes in mean shift the center of mass of the PDF. Changes in variance narrow or broaden the PDF (but the area is always equal to 1).
6.02 Fall 2012 Lecture 8, Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean µ and variance σ² has a PDF described by
f_W(w) = (1 / √(2πσ²)) e^(-(w - µ)² / (2σ²))
6.02 Fall 2012 Lecture 8, Slide 15
Noise Model for iid Process w[n]
• Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide, but with mean 0, and independently of w[·] at all other times.
6.02 Fall 2012 Lecture 8, Slide 16
Estimating noise parameters
• Transmit a sequence of "0" bits, i.e., hold the voltage V0 at the transmitter.
• Observe received samples y[n], n = 0, 1, …, K-1.
  – Process these samples to obtain the statistics of the additive noise process, under the assumption of iid (independent, identically distributed) noise samples (or, more generally, an "ergodic" process, beyond our scope).
• Noise samples: w[n] = y[n] - V0.
• For large K, we can use the sample mean m to estimate µ, and the sample standard deviation s to estimate σ.
6.02 Fall 2012 Lecture 8, Slide 17
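The procedure on this slide can be simulated end to end. A sketch under stated assumptions: the "0" voltage, noise standard deviation (0.25), seed, and K are all made up for illustration; the point is that subtracting V0 recovers noise samples whose sample mean and standard deviation estimate µ ≈ 0 and σ.

```python
import random
import statistics

random.seed(1)
V0 = -1.0                         # transmitter holds the "0" voltage (assumed level)
SIGMA = 0.25                      # true noise std dev, known only to the simulation
K = 10000

# Received samples: held voltage plus iid zero-mean Gaussian noise
y = [V0 + random.gauss(0.0, SIGMA) for _ in range(K)]

w = [yn - V0 for yn in y]         # recovered noise samples w[n] = y[n] - V0
m = statistics.fmean(w)           # sample mean, estimates mu (should be near 0)
s = statistics.stdev(w)           # sample std dev, estimates sigma (near 0.25)
```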
Back to distinguishing "1" from "0"
• Assume bipolar signaling:
  Transmit L samples x[·] at +Vp (= V1) to signal a "1".
  Transmit L samples x[·] at -Vp (= V0) to signal a "0".
• Simple-minded receiver: take a single sample value y[nj] at an appropriately chosen instant nj in the j-th bit interval. Decide between the following two hypotheses:
  y[nj] = +Vp + w[nj]  (==> "1")
or
  y[nj] = -Vp + w[nj]  (==> "0")
where w[nj] is Gaussian, zero-mean, with variance σ².
6.02 Fall 2012 Lecture 8, Slide 18
Connecting the SNR and BER
S/N = E_s/N_0 = Vp² / (2σ²)
[Figure: the two conditional PDFs of the received sample, Gaussians with standard deviation σ (the noise), one centered at µ = -Vp for a "0" and one at µ = +Vp for a "1", with P("0") = P("1") = 0.5 and the decision threshold at 0.]
BER = P(error) = (1/2) erfc(Vp / (σ√2)) = (1/2) erfc(√(E_s/N_0))
6.02 Fall 2012 Lecture 8, Slide 19
Bit Error Rate for Bipolar Signaling Scheme with Single-Sample Decision
[Figure: plot of BER = 0.5 erfc(√(E_s/N_0)) against E_s/N_0 in dB.]
Source: dsplog.com, "Bit error probability for BPSK modulation". Courtesy of Krishna Sankar Madhavan Pillai. Used with permission.
6.02 Fall 2012 Lecture 8, Slide 20
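The BER curve on this slide comes straight from the erfc expression on the previous one; a minimal sketch (function name is illustrative) using the standard library's erfc:

```python
import math

def ber_single_sample(vp, sigma):
    """BER for bipolar signaling with a single-sample decision:
    P(error) = 1/2 erfc(Vp / (sigma * sqrt(2))) = 1/2 erfc(sqrt(Es/N0))."""
    return 0.5 * math.erfc(vp / (sigma * math.sqrt(2)))

# With no signal (Vp = 0) the receiver is guessing: BER = 0.5.
# Doubling Vp at fixed sigma raises Es/N0 and drives the BER down sharply.
```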
But we can do better
• Why just take a single sample from a bit interval?
• Instead, average M (≤ L) samples:
  y[n] = +Vp + w[n], so avg y[n] = +Vp + avg w[n]
• avg w[n] is still Gaussian and still has mean 0, but its variance is now σ²/M instead of σ²: the SNR is increased by a factor of M.
• Same analysis as before, but now with bit energy E_b = M·E_s instead of sample energy E_s:
  BER = P(error) = (1/2) erfc(√(E_b/N_0)) = (1/2) erfc(Vp √M / (σ√2))
6.02 Fall 2012 Lecture 8, Slide 21
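Extending the single-sample formula by the factor √M gives the averaged-receiver BER; a sketch with an illustrative function name:

```python
import math

def ber_avg(vp, sigma, m):
    """BER when averaging m iid samples per bit: the noise variance drops to
    sigma^2/m, so BER = 1/2 erfc(Vp * sqrt(m) / (sigma * sqrt(2)))."""
    return 0.5 * math.erfc(vp * math.sqrt(m) / (sigma * math.sqrt(2)))

# m = 1 recovers the single-sample case; averaging 16 samples cuts the BER.
```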
Implications for Signaling Rate
• As the noise intensity increases, we need to slow down the signaling rate, i.e., increase the number of samples per bit (K), to get higher energy into the M (≤ K) samples extracted from a bit interval, if we wish to maintain the same error performance.
  – E.g., Voyager 2 was transmitting at 115 kbits/s when it was near Jupiter in 1979. Last month it was over 9 billion miles away, 13 light-hours from the sun, twice as far from the sun as Pluto, and now transmitting at only 160 bits/s. The received power at the Deep Space Network antennas on Earth when Voyager was near Neptune was on the order of 10^-16 watts, 20 billion times smaller than what an ordinary digital watch consumes. The power now is estimated at 10^-19 watts.
6.02 Fall 2012 Lecture 8, Slide 22
Flipped bits can have serious consequences
• "On November 30, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer, in a random error, as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. It has not been possible to fully diagnose and correct for the damage caused to Voyager 2's magnetometer, although efforts to do so are proceeding."
• "On April 22, 2010, Voyager 2 encountered scientific data format problems, as reported by the Associated Press on May 6, 2010. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the issue, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit."
http://en.wikipedia.org/wiki/Voyager_2
6.02 Fall 2012 Lecture 8, Slide 23
What if the received signal is distorted?
• Suppose y[n] = ±x0[n] + w[n] in a given bit slot (L samples), where x0[n] is known and w[n] is still iid Gaussian, zero mean, variance σ².
• Compute a weighted linear combination of the y[·] in that bit slot:
  Σ a_n y[n] = ± Σ a_n x0[n] + Σ a_n w[n]
  This is still Gaussian, with mean ± Σ a_n x0[n], but now the variance is σ² Σ (a_n)².
• So what choice of the a_n will maximize the SNR? Simple answer: a_n = x0[n], i.e., "matched filtering".
• The resulting SNR for receiver detection and error performance is Σ (x0[n])² / σ², i.e., again the ratio of bit energy E_b to noise power.
6.02 Fall 2012 Lecture 8, Slide 24
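A matched-filter decision, per the slide, is just the weighted sum with a_n = x0[n] followed by a sign check. The pulse shape, noise level, and function name below are made-up illustrations; any known x0[n] works the same way.

```python
import math
import random

# Hypothetical known (distorted) pulse shape for one bit slot of L = 16 samples
x0 = [math.sin(2 * math.pi * n / 8) for n in range(16)]

def matched_detect(y, x0):
    """Matched filter: form sum(a_n * y[n]) with a_n = x0[n], then decide
    "1" if the statistic is positive (y = +x0 + w) and "0" otherwise."""
    stat = sum(a * yn for a, yn in zip(x0, y))
    return 1 if stat > 0 else 0

# Transmit a "1" (+x0) through additive Gaussian noise, then detect
random.seed(3)
y = [v + random.gauss(0.0, 0.5) for v in x0]
bit = matched_detect(y, x0)
```

In the noiseless limit the statistic is ± Σ x0[n]², which is ± the bit energy E_b, matching the SNR expression above.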
The moral of the story is …
… if you're doing appropriate/optimal processing at the receiver, your signal-to-noise ratio (and therefore your error performance) in the case of iid Gaussian noise is determined by the ratio of bit energy (not sample energy) to noise variance.
6.02 Fall 2012 Lecture 8, Slide 25
MIT OpenCourseWare
http://ocw.mit.edu
Introduction to Electrical Engineering and Computer Science, Fall 2012
For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.
tt
t
Single Link Communication Model
602 Fall 2012 Lecture 8 Slide 2602 Fall 2012 LeLeLLLeLecture 8 Slide 2
Digitize (if needed)
Original source
Source coding
Source binary digits (ldquomessage bitsrdquo)
Bit stream
Renderdisplay
Receiving appuser
Source decoding
Bit stream
Channel Coding
(bit error correction)
Recv samples
+ Demapper
Mapper +
Xmit samples
Bits
(Voltages) over
physical link
etc
SignalsSignals (Vo ltages)
Channel Decoding
(reducing or removing bit errors)
Renderdisplay etc
Receiving appuserg a
Source decoding
etc
de
BB iitt ssttreamB
Digitize (if needed)
Original source
iti
Source coding
SSoouurrccee bbiinnaarryy ddiiggiittssSS (ldquomessage bitsrdquo)
BBiitt streammBB
Endshyhost computers
(Bits
602 Fall 2012 Lecture 8 Slide 3
From Baseband to Modulated Signal and Back
codeword bits in
1001110101 DAC
ADC
NOISY amp DISTORTING ANALOG CHANNEL
modulate
1001110101
codeword bits out
demodulate amp filter
generate digitized symbols
sample amp threshold
x[n]
y[n]PDJHE072SHQampRXUVHDUH
Mapping Bits to Samples at Transmitter codeword bits in
1001110101 generate digitized symbols
x[n]
16 samples per bit
1 0 0 1 1 1 0 1 0 1 602 Fall 2012 Lecture 8 Slide 4
rre
Samples after Processing at Receiver bits in
1001110101 DACmodulate generate digitized symbols
codeword
x[n] NOISY amp DISTORTING ANALOG CHANNEL
ADC demodulate
y[n] Assuming no noise only endshytoshyend distortion
filter
602 Fall 2012 Lecture 8 Slide 502 Fall 2012 Lecturururururururururrrrrurrrrrruuurrrrrrurururuurr eeeeeeeeeeeeeeeeeeeeeeeeeeeeeee eeeee 8 Slide 5
y[n]
e
Mapping Samples to Bits at Receiver codeword bits in
1001110101 DACmodulate generate digitized symbols n = sample index
x[n] j = bit index NOISY amp DISTORTING ANALOG CHANNEL
y[n] b[j] demodulate 1001110101
amp filter sample amp threshold
codeword bits out 16 samples per bit
ADC
602 Fall 2012 Lecture 8 Slide 602 Fall 2012 LeLeLeLeLectctctctcturururururuuuuuurrrruuuu ruuruuu ruuuuu eeeeeeeeee eeeeeeeeeeeeeeeee eeeeee 88888 SSSSSlililiiiliiliii de 6
1 0 0 1 1 1 0 1 0 1
b[j]
Threshold
For now assume no distortion only
Additive ZeroshyMean Noise
bull Received signal y[n] = x[n] + w[n]
ie received samples y[n] are the transmitted samples x[n] + zeroshymean noise w[n] on each sample assumed iid (independent and identically distributed at each n)
bull SignalshytoshyNoise Ratio (SNR) ndash usually denotes the ratio of
(timeshyaveraged or peak) signal power ie squared amplitude of x[n]
to
noise variance ie expected squared amplitude of w[n]
602 Fall 2012 Lecture 8 Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values
bull Lower A lower SNR bull Signal quality degrades with
lower SNR
602 Fall 2012 Lecture 8 Slide 8
SignalshytoshyNoise Ratio (SNR) 10logX X
100 10000000000
90 1000000000
80 100000000
70 10000000
60 1000000
50 100000
40 10000
30 1000
20 100
10 10
0 1
shy10 01
shy20 001
shy30 0001
shy40 00001
shy50 0000001
shy60 00000001
shy70 000000001
The SignalshytoshyNoise ratio (SNR) is useful in judging the impact of noise on system performance
PPsignalSNR = PPnoise SNR for power is often measured in decibels (dB)
SNR (db) =10 log10
⎛ ⎜⎜⎝
PPsignal PPnoise
⎞ ⎟⎟⎠
Caution For measuring ratios of amplitudes rather than powers take 20 log10 (ratio) shy80 0000000001
shy90 00000000001
shy100 000000000001301db is a factor of 2 in power ratio602 Fall 2012 Lecture 8 Slide 9
Noise Characterization From Histogram to PDF
Experiment create histograms of sample values from independent trials of increasing lengths
Histogram typically converges to a shape that is known ndash after normalization to unit area ndash as a probability density function (PDF)
602 Fall 2012 Lecture 8 Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range of x1 to x2 with a probability calculated from the PDF of Xas
int x2p(x1 lt X lt x2 ) = fX (x)dx x1
A PDF is not a probability ndash its associated integrals are Note that probability values are always in the range of 0 to 1
602 Fall 2012 Lecture 8 Slide 11
The Ubiquity of Gaussian Noise The net noise observed at the receiver is often the sum of many small independent random contributions from many factors If these independent random variables have finite mean and variance the Central Limit Theorem says their sum will be a Gaussian
The figure below shows the histograms of the results of 10000 trials of summing 100 random samples drawn from [shy11] using two different distributions
1shy1
1 Triangular Uniform
1shy1
05 PDF PDF
602 Fall 2012 Lecture 8 Slide 12
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
602 Fall 2012 Lecture 8 Slide 3
From Baseband to Modulated Signal and Back
codeword bits in
1001110101 DAC
ADC
NOISY amp DISTORTING ANALOG CHANNEL
modulate
1001110101
codeword bits out
demodulate amp filter
generate digitized symbols
sample amp threshold
x[n]
y[n]PDJHE072SHQampRXUVHDUH
Mapping Bits to Samples at Transmitter codeword bits in
1001110101 generate digitized symbols
x[n]
16 samples per bit
1 0 0 1 1 1 0 1 0 1 602 Fall 2012 Lecture 8 Slide 4
rre
Samples after Processing at Receiver bits in
1001110101 DACmodulate generate digitized symbols
codeword
x[n] NOISY amp DISTORTING ANALOG CHANNEL
ADC demodulate
y[n] Assuming no noise only endshytoshyend distortion
filter
602 Fall 2012 Lecture 8 Slide 502 Fall 2012 Lecturururururururururrrrrurrrrrruuurrrrrrurururuurr eeeeeeeeeeeeeeeeeeeeeeeeeeeeeee eeeee 8 Slide 5
y[n]
e
Mapping Samples to Bits at Receiver codeword bits in
1001110101 DACmodulate generate digitized symbols n = sample index
x[n] j = bit index NOISY amp DISTORTING ANALOG CHANNEL
y[n] b[j] demodulate 1001110101
amp filter sample amp threshold
codeword bits out 16 samples per bit
ADC
602 Fall 2012 Lecture 8 Slide 602 Fall 2012 LeLeLeLeLectctctctcturururururuuuuuurrrruuuu ruuruuu ruuuuu eeeeeeeeee eeeeeeeeeeeeeeeee eeeeee 88888 SSSSSlililiiiliiliii de 6
1 0 0 1 1 1 0 1 0 1
b[j]
Threshold
For now assume no distortion only
Additive ZeroshyMean Noise
bull Received signal y[n] = x[n] + w[n]
ie received samples y[n] are the transmitted samples x[n] + zeroshymean noise w[n] on each sample assumed iid (independent and identically distributed at each n)
bull SignalshytoshyNoise Ratio (SNR) ndash usually denotes the ratio of
(timeshyaveraged or peak) signal power ie squared amplitude of x[n]
to
noise variance ie expected squared amplitude of w[n]
602 Fall 2012 Lecture 8 Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values
bull Lower A lower SNR bull Signal quality degrades with
lower SNR
602 Fall 2012 Lecture 8 Slide 8
SignalshytoshyNoise Ratio (SNR) 10logX X
100 10000000000
90 1000000000
80 100000000
70 10000000
60 1000000
50 100000
40 10000
30 1000
20 100
10 10
0 1
shy10 01
shy20 001
shy30 0001
shy40 00001
shy50 0000001
shy60 00000001
shy70 000000001
The SignalshytoshyNoise ratio (SNR) is useful in judging the impact of noise on system performance
PPsignalSNR = PPnoise SNR for power is often measured in decibels (dB)
SNR (db) =10 log10
⎛ ⎜⎜⎝
PPsignal PPnoise
⎞ ⎟⎟⎠
Caution For measuring ratios of amplitudes rather than powers take 20 log10 (ratio) shy80 0000000001
shy90 00000000001
shy100 000000000001301db is a factor of 2 in power ratio602 Fall 2012 Lecture 8 Slide 9
Noise Characterization From Histogram to PDF
Experiment create histograms of sample values from independent trials of increasing lengths
Histogram typically converges to a shape that is known ndash after normalization to unit area ndash as a probability density function (PDF)
602 Fall 2012 Lecture 8 Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range of x1 to x2 with a probability calculated from the PDF of Xas
int x2p(x1 lt X lt x2 ) = fX (x)dx x1
A PDF is not a probability ndash its associated integrals are Note that probability values are always in the range of 0 to 1
602 Fall 2012 Lecture 8 Slide 11
The Ubiquity of Gaussian Noise The net noise observed at the receiver is often the sum of many small independent random contributions from many factors If these independent random variables have finite mean and variance the Central Limit Theorem says their sum will be a Gaussian
The figure below shows the histograms of the results of 10000 trials of summing 100 random samples drawn from [shy11] using two different distributions
1shy1
1 Triangular Uniform
1shy1
05 PDF PDF
602 Fall 2012 Lecture 8 Slide 12
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
Mapping Bits to Samples at Transmitter

[Figure: codeword bits in (1 0 0 1 1 1 0 1 0 1) feed "generate digitized symbols," producing the sample stream x[n] at 16 samples per bit.]

6.02 Fall 2012 Lecture 8, Slide 4
Samples after Processing at Receiver

[Figure: codeword bits in (1001110101) → generate digitized symbols → x[n] → DAC → modulate → NOISY & DISTORTING ANALOG CHANNEL → demodulate & filter → ADC → y[n]. Assuming no noise, only end-to-end distortion.]

6.02 Fall 2012 Lecture 8, Slide 5
Mapping Samples to Bits at Receiver

[Figure: codeword bits in (1001110101) → generate digitized symbols → x[n] → DAC → modulate → NOISY & DISTORTING ANALOG CHANNEL → demodulate & filter → ADC → y[n] → sample & threshold → b[j] → codeword bits out (1001110101). Here n = sample index, j = bit index, with 16 samples per bit; a threshold applied to y[n] recovers each bit b[j].]

6.02 Fall 2012 Lecture 8, Slide 6
For now, assume no distortion, only

Additive Zero-Mean Noise

• Received signal: y[n] = x[n] + w[n]
i.e., the received samples y[n] are the transmitted samples x[n] plus zero-mean noise w[n] on each sample, assumed iid (independent and identically distributed at each n)

• Signal-to-Noise Ratio (SNR) usually denotes the ratio of
(time-averaged or peak) signal power, i.e., squared amplitude of x[n],
to
noise variance, i.e., expected squared amplitude of w[n]

6.02 Fall 2012 Lecture 8, Slide 7
SNR Example

Changing the amplification factor (gain) A leads to different SNR values:
• Lower A, lower SNR
• Signal quality degrades with lower SNR

6.02 Fall 2012 Lecture 8, Slide 8
Signal-to-Noise Ratio (SNR)

The Signal-to-Noise Ratio (SNR) is useful in judging the impact of noise on system performance:

SNR = P_signal / P_noise

SNR for power is often measured in decibels (dB):

SNR (dB) = 10 log10( P_signal / P_noise )

Caution: for measuring ratios of amplitudes rather than powers, take 20 log10(ratio). 3.01 dB corresponds to a factor of 2 in power ratio.

10 log10(X)       X
   100            10,000,000,000
    90             1,000,000,000
    80               100,000,000
    70                10,000,000
    60                 1,000,000
    50                   100,000
    40                    10,000
    30                     1,000
    20                       100
    10                        10
     0                         1
   -10                       0.1
   -20                      0.01
   -30                     0.001
   -40                    0.0001
   -50                   0.00001
   -60                  0.000001
   -70                 0.0000001
   -80                0.00000001
   -90               0.000000001
  -100              0.0000000001

6.02 Fall 2012 Lecture 8, Slide 9
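The power-ratio and dB arithmetic above can be checked with a short sketch (the sample sequences below are made-up illustrations of the time-averaged-power definition, not course code):

```python
import math

def snr_db(signal, noise):
    # Ratio of time-averaged signal power to noise variance, in dB.
    p_signal = sum(s * s for s in signal) / len(signal)  # mean squared amplitude of x[n]
    p_noise = sum(w * w for w in noise) / len(noise)     # mean squared amplitude of zero-mean w[n]
    return 10 * math.log10(p_signal / p_noise)

# A power ratio of 100 is exactly 20 dB; a ratio of 2 is about 3.01 dB.
print(snr_db([10, -10, 10, -10], [1, -1, 1, -1]))  # 20.0
print(round(10 * math.log10(2), 2))                # 3.01
```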
Noise Characterization: From Histogram to PDF

Experiment: create histograms of sample values from independent trials of increasing lengths.

The histogram typically converges to a shape that is known, after normalization to unit area, as a probability density function (PDF).

6.02 Fall 2012 Lecture 8, Slide 10
Using the PDF in Probability Calculations

We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range x1 to x2 with a probability calculated from the PDF of X as

p(x1 < X < x2) = ∫ from x1 to x2 of fX(x) dx

A PDF is not a probability; its associated integrals are. Note that probability values are always in the range 0 to 1.

6.02 Fall 2012 Lecture 8, Slide 11
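A quick numeric sketch of this idea, using the standard Gaussian as the example density (the helper names here are our own, hypothetical ones):

```python
import math

def gaussian_pdf(x, mu=0.0, sigma=1.0):
    # PDF of a Gaussian random variable with mean mu and variance sigma^2.
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def prob(x1, x2, pdf, steps=100_000):
    # p(x1 < X < x2): midpoint-rule approximation of the integral of the PDF.
    dx = (x2 - x1) / steps
    return sum(pdf(x1 + (i + 0.5) * dx) * dx for i in range(steps))

# For a standard Gaussian, p(-1 < X < 1) is about 0.6827;
# the exact value via the error function is erf(1/sqrt(2)).
print(round(prob(-1.0, 1.0, gaussian_pdf), 4))  # 0.6827
print(round(math.erf(1 / math.sqrt(2)), 4))     # 0.6827
```

Integrating over a very wide range returns essentially 1, as any PDF must.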
The Ubiquity of Gaussian Noise

The net noise observed at the receiver is often the sum of many small, independent, random contributions from many factors. If these independent random variables have finite mean and variance, the Central Limit Theorem says their sum will be approximately Gaussian.

[Figure: histograms of the results of 10,000 trials of summing 100 random samples drawn from [-1, 1], using two different distributions: a uniform PDF (height 0.5) and a triangular PDF (peak height 1). Both histograms take on the Gaussian bell shape.]

6.02 Fall 2012 Lecture 8, Slide 12
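The experiment described above is easy to reproduce (a sketch; the seed and counts are arbitrary). Each uniform sample on [-1, 1] has mean 0 and variance 1/3, so the sum of 100 of them should look Gaussian with mean 0 and variance about 33.3:

```python
import random
import statistics

random.seed(2012)  # arbitrary seed, for reproducibility

# 10,000 trials, each summing 100 iid uniform[-1, 1] samples.
trials = [sum(random.uniform(-1, 1) for _ in range(100)) for _ in range(10_000)]

# By the Central Limit Theorem the sums are approximately Gaussian
# with mean 0 and variance 100 * (1/3) ≈ 33.3.
print(round(statistics.mean(trials), 2))      # near 0
print(round(statistics.variance(trials), 1))  # near 33.3
```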
Mean and Variance of a Random Variable X

The mean, or expected value, μX is defined and computed as

μX = ∫ from -∞ to ∞ of x fX(x) dx

The variance σX² is the expected squared variation, or deviation, of the random variable around the mean, and is thus computed as

σX² = ∫ from -∞ to ∞ of (x - μX)² fX(x) dx

The square root of the variance is the standard deviation, σX.

6.02 Fall 2012 Lecture 8, Slide 13
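These two defining integrals can be evaluated numerically for a concrete density, e.g. the uniform PDF on [-1, 1] from the previous slide, where μ = 0 and σ² = 1/3 (a sketch; the helper below is hypothetical):

```python
def mean_and_variance(pdf, lo, hi, steps=100_000):
    # Midpoint-rule evaluation of mu = ∫ x f(x) dx and var = ∫ (x - mu)^2 f(x) dx.
    dx = (hi - lo) / steps
    xs = [lo + (i + 0.5) * dx for i in range(steps)]
    mu = sum(x * pdf(x) * dx for x in xs)
    var = sum((x - mu) ** 2 * pdf(x) * dx for x in xs)
    return mu, var

# Uniform PDF on [-1, 1]: f(x) = 0.5, so mu = 0 and sigma^2 = 1/3.
mu, var = mean_and_variance(lambda x: 0.5, -1.0, 1.0)
print(round(mu, 6), round(var, 6))  # mu ≈ 0, var ≈ 0.333333
```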
Visualizing Mean and Variance

Changes in mean shift the center of mass of the PDF; changes in variance narrow or broaden the PDF (but the area is always equal to 1).

6.02 Fall 2012 Lecture 8, Slide 14
The Gaussian Distribution

A Gaussian random variable W with mean μ and variance σ² has a PDF described by

fW(w) = (1 / √(2πσ²)) e^(-(w - μ)² / (2σ²))

6.02 Fall 2012 Lecture 8, Slide 15
Noise Model for iid Process w[n]

• Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide, but with mean 0, and independently of w[.] at all other times.

6.02 Fall 2012 Lecture 8, Slide 16
Estimating noise parameters

• Transmit a sequence of "0" bits, i.e., hold the voltage V0 at the transmitter
• Observe received samples y[n], n = 0, 1, ..., K-1
  - Process these samples to obtain the statistics of the additive noise process, under the assumption of iid (independent, identically distributed) noise samples (or, more generally, an "ergodic" process, beyond our scope)
• Noise samples: w[n] = y[n] - V0
• For large K, we can use the sample mean m to estimate μ, and the sample standard deviation s to estimate σ

6.02 Fall 2012 Lecture 8, Slide 17
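A sketch of this estimation procedure on synthetic data (V0, the true σ, and K below are made-up values for illustration):

```python
import math
import random

random.seed(6)  # arbitrary seed, for reproducibility

V0 = -1.0          # transmitter holds the "0" voltage
sigma_true = 0.25  # hypothetical true noise standard deviation
K = 50_000

# Received samples y[n] = V0 + w[n] for n = 0, 1, ..., K-1.
y = [V0 + random.gauss(0.0, sigma_true) for _ in range(K)]

# Recover noise samples w[n] = y[n] - V0, then estimate mu with the
# sample mean m and sigma with the sample standard deviation s.
w = [yn - V0 for yn in y]
m = sum(w) / K
s = math.sqrt(sum((wn - m) ** 2 for wn in w) / (K - 1))

print(round(m, 3), round(s, 3))  # m near 0, s near 0.25
```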
Back to distinguishing "1" from "0"

• Assume bipolar signaling:
  Transmit L samples x[.] at +Vp (= V1) to signal a "1"
  Transmit L samples x[.] at -Vp (= V0) to signal a "0"
• Simple-minded receiver: take a single sample value y[nj] at an appropriately chosen instant nj in the j-th bit interval. Decide between the following two hypotheses:

  y[nj] = +Vp + w[nj]  (==> "1")
  or
  y[nj] = -Vp + w[nj]  (==> "0")

  where w[nj] is Gaussian, zero-mean, variance σ²

6.02 Fall 2012 Lecture 8, Slide 18
Connecting the SNR and BER

[Figure: the two conditional PDFs of the received sample: a Gaussian centered at μ0 = -Vp for a transmitted "0" (P("0") = 0.5) and a Gaussian centered at μ1 = +Vp for a transmitted "1" (P("1") = 0.5), with the decision threshold at 0.]

S/N = Vp²/σ² = Es/(N0/2), where Es = Vp² is the sample energy and σ² = N0/2 is the noise variance.

BER = P(error) = (1/2) erfc( Vp / (σ√2) ) = (1/2) erfc( √(Es/N0) )

6.02 Fall 2012 Lecture 8, Slide 19
Bit Error Rate for Bipolar Signaling Scheme with Single-Sample Decision

[Figure: BER = 0.5 erfc(√(Es/N0)) plotted against Es/N0 in dB.]

Source: http://www.dsplog.com, "Bit error probability for BPSK modulation." Courtesy of Krishna Sankar Madhavan Pillai. Used with permission.

6.02 Fall 2012 Lecture 8, Slide 20
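The plotted curve is just an evaluation of the formula on the previous slide; a minimal sketch:

```python
import math

def ber_bipolar_single_sample(es_over_n0_db):
    # BER = 0.5 * erfc(sqrt(Es/N0)) for single-sample bipolar detection.
    ratio = 10 ** (es_over_n0_db / 10)  # convert dB back to a plain power ratio
    return 0.5 * math.erfc(math.sqrt(ratio))

# BER falls off steeply as Es/N0 grows; at 0 dB it is 0.5*erfc(1) ≈ 0.079.
for db in (0, 4, 8):
    print(db, ber_bipolar_single_sample(db))
```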
But we can do better

• Why take just a single sample from a bit interval?
• Instead, average M (≤ L) samples:
  y[n] = +Vp + w[n], so avg(y[n]) = +Vp + avg(w[n])
• avg(w[n]) is still Gaussian and still has mean 0, but its variance is now σ²/M instead of σ², so the SNR is increased by a factor of M
• Same analysis as before, but now with bit energy Eb = M·Es instead of sample energy Es:

  BER = P(error) = (1/2) erfc( √(Eb/N0) ) = (1/2) erfc( Vp √M / (σ√2) )

6.02 Fall 2012 Lecture 8, Slide 21
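A quick Monte Carlo sketch of the benefit of averaging (all parameters below are made-up; with M = 16 the averaged receiver's noise standard deviation drops by a factor of 4):

```python
import random

random.seed(8)  # arbitrary seed, for reproducibility

Vp, sigma, M, nbits = 1.0, 1.0, 16, 2000  # hypothetical link parameters

single_errors = 0
avg_errors = 0
for _ in range(nbits):
    bit = random.choice((0, 1))
    level = Vp if bit else -Vp
    samples = [level + random.gauss(0.0, sigma) for _ in range(M)]
    # Single-sample receiver: threshold one sample at 0.
    if (samples[0] > 0) != (bit == 1):
        single_errors += 1
    # Averaging receiver: the mean of M samples has variance sigma^2/M.
    if (sum(samples) / M > 0) != (bit == 1):
        avg_errors += 1

# Single-sample BER should be near 0.16 here; the averaged BER near 3e-5.
print(single_errors / nbits, avg_errors / nbits)
```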
Implications for Signaling Rate

• As the noise intensity increases, we need to slow down the signaling rate, i.e., increase the number of samples per bit (K) to get higher energy in the M (≤ K) samples extracted from a bit interval, if we wish to maintain the same error performance.
  - e.g., Voyager 2 was transmitting at 115 kbits/s when it was near Jupiter in 1979. Last month it was over 9 billion miles away, 13 light-hours from the sun, twice as far from the sun as Pluto, and now transmitting at only 160 bits/s. The received power at the Deep Space Network antennas on Earth when Voyager was near Neptune was on the order of 10^(-16) watts, 20 billion times smaller than what an ordinary digital watch consumes. The power now is estimated at 10^(-19) watts.

6.02 Fall 2012 Lecture 8, Slide 22
Flipped bits can have serious consequences

• "On November 30, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer, in a random error, as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. It has not been possible to fully diagnose and correct for the damage caused to the Voyager 2's magnetometer, although efforts to do so are proceeding."
• "On April 22, 2010, Voyager 2 encountered scientific data format problems, as reported by the Associated Press on May 6, 2010. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the issue, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit."

http://en.wikipedia.org/wiki/Voyager_2

6.02 Fall 2012 Lecture 8, Slide 23
What if the received signal is distorted?

• Suppose y[n] = ±x0[n] + w[n] in a given bit slot (L samples), where x0[n] is known and w[n] is still iid Gaussian, zero mean, variance σ²
• Compute a weighted linear combination of the y[.] in that bit slot:

  Σ an y[n] = ± Σ an x0[n] + Σ an w[n]

  This is still Gaussian, with mean ± Σ an x0[n], but now the variance is σ² Σ (an)²
• So what choice of the an will maximize the SNR? Simple answer: an = x0[n], i.e., "matched filtering"
• The resulting SNR for receiver detection and error performance is Σ (x0[n])² / σ², i.e., again the ratio of bit energy Eb to noise power

6.02 Fall 2012 Lecture 8, Slide 24
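The SNR expression above can be compared for different weight choices; a small sketch (the pulse shape x0 below is a hypothetical example):

```python
def detection_snr(a, x0, sigma):
    # SNR of the statistic sum(a[n]*y[n]): (sum a[n]*x0[n])^2 / (sigma^2 * sum a[n]^2).
    num = sum(ai * xi for ai, xi in zip(a, x0)) ** 2
    den = sigma ** 2 * sum(ai * ai for ai in a)
    return num / den

x0 = [0.2, 0.8, 1.0, 0.8, 0.2]  # hypothetical known pulse over one bit slot
sigma = 1.0

snr_matched = detection_snr(x0, x0, sigma)          # matched weights a[n] = x0[n]
snr_flat = detection_snr([1] * len(x0), x0, sigma)  # plain averaging weights

# The matched choice attains sum(x0[n]^2)/sigma^2 (about 2.36 here, vs 1.8
# for flat weights); by Cauchy-Schwarz no other weights can do better.
print(snr_matched, snr_flat)
```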
The moral of the story is ...

... if you're doing appropriate (optimal) processing at the receiver, your signal-to-noise ratio (and therefore your error performance) in the case of iid Gaussian noise is governed by the ratio of bit energy (not sample energy) to noise variance.

6.02 Fall 2012 Lecture 8, Slide 25
MIT OpenCourseWare
http://ocw.mit.edu

6.02 Introduction to Electrical Engineering and Computer Science, Fall 2012

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
rre
Samples after Processing at Receiver bits in
1001110101 DACmodulate generate digitized symbols
codeword
x[n] NOISY amp DISTORTING ANALOG CHANNEL
ADC demodulate
y[n] Assuming no noise only endshytoshyend distortion
filter
602 Fall 2012 Lecture 8 Slide 502 Fall 2012 Lecturururururururururrrrrurrrrrruuurrrrrrurururuurr eeeeeeeeeeeeeeeeeeeeeeeeeeeeeee eeeee 8 Slide 5
y[n]
e
Mapping Samples to Bits at Receiver codeword bits in
1001110101 DACmodulate generate digitized symbols n = sample index
x[n] j = bit index NOISY amp DISTORTING ANALOG CHANNEL
y[n] b[j] demodulate 1001110101
amp filter sample amp threshold
codeword bits out 16 samples per bit
ADC
602 Fall 2012 Lecture 8 Slide 602 Fall 2012 LeLeLeLeLectctctctcturururururuuuuuurrrruuuu ruuruuu ruuuuu eeeeeeeeee eeeeeeeeeeeeeeeee eeeeee 88888 SSSSSlililiiiliiliii de 6
1 0 0 1 1 1 0 1 0 1
b[j]
Threshold
For now assume no distortion only
Additive ZeroshyMean Noise
bull Received signal y[n] = x[n] + w[n]
ie received samples y[n] are the transmitted samples x[n] + zeroshymean noise w[n] on each sample assumed iid (independent and identically distributed at each n)
bull SignalshytoshyNoise Ratio (SNR) ndash usually denotes the ratio of
(timeshyaveraged or peak) signal power ie squared amplitude of x[n]
to
noise variance ie expected squared amplitude of w[n]
602 Fall 2012 Lecture 8 Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values
bull Lower A lower SNR bull Signal quality degrades with
lower SNR
602 Fall 2012 Lecture 8 Slide 8
SignalshytoshyNoise Ratio (SNR) 10logX X
100 10000000000
90 1000000000
80 100000000
70 10000000
60 1000000
50 100000
40 10000
30 1000
20 100
10 10
0 1
shy10 01
shy20 001
shy30 0001
shy40 00001
shy50 0000001
shy60 00000001
shy70 000000001
The SignalshytoshyNoise ratio (SNR) is useful in judging the impact of noise on system performance
PPsignalSNR = PPnoise SNR for power is often measured in decibels (dB)
SNR (db) =10 log10
⎛ ⎜⎜⎝
PPsignal PPnoise
⎞ ⎟⎟⎠
Caution For measuring ratios of amplitudes rather than powers take 20 log10 (ratio) shy80 0000000001
shy90 00000000001
shy100 000000000001301db is a factor of 2 in power ratio602 Fall 2012 Lecture 8 Slide 9
Noise Characterization From Histogram to PDF
Experiment create histograms of sample values from independent trials of increasing lengths
Histogram typically converges to a shape that is known ndash after normalization to unit area ndash as a probability density function (PDF)
602 Fall 2012 Lecture 8 Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range of x1 to x2 with a probability calculated from the PDF of Xas
int x2p(x1 lt X lt x2 ) = fX (x)dx x1
A PDF is not a probability ndash its associated integrals are Note that probability values are always in the range of 0 to 1
602 Fall 2012 Lecture 8 Slide 11
The Ubiquity of Gaussian Noise The net noise observed at the receiver is often the sum of many small independent random contributions from many factors If these independent random variables have finite mean and variance the Central Limit Theorem says their sum will be a Gaussian
The figure below shows the histograms of the results of 10000 trials of summing 100 random samples drawn from [shy11] using two different distributions
1shy1
1 Triangular Uniform
1shy1
05 PDF PDF
602 Fall 2012 Lecture 8 Slide 12
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
e
Mapping Samples to Bits at Receiver codeword bits in
1001110101 DACmodulate generate digitized symbols n = sample index
x[n] j = bit index NOISY amp DISTORTING ANALOG CHANNEL
y[n] b[j] demodulate 1001110101
amp filter sample amp threshold
codeword bits out 16 samples per bit
ADC
602 Fall 2012 Lecture 8 Slide 602 Fall 2012 LeLeLeLeLectctctctcturururururuuuuuurrrruuuu ruuruuu ruuuuu eeeeeeeeee eeeeeeeeeeeeeeeee eeeeee 88888 SSSSSlililiiiliiliii de 6
1 0 0 1 1 1 0 1 0 1
b[j]
Threshold
For now assume no distortion only
Additive ZeroshyMean Noise
bull Received signal y[n] = x[n] + w[n]
ie received samples y[n] are the transmitted samples x[n] + zeroshymean noise w[n] on each sample assumed iid (independent and identically distributed at each n)
bull SignalshytoshyNoise Ratio (SNR) ndash usually denotes the ratio of
(timeshyaveraged or peak) signal power ie squared amplitude of x[n]
to
noise variance ie expected squared amplitude of w[n]
602 Fall 2012 Lecture 8 Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values
bull Lower A lower SNR bull Signal quality degrades with
lower SNR
602 Fall 2012 Lecture 8 Slide 8
SignalshytoshyNoise Ratio (SNR) 10logX X
100 10000000000
90 1000000000
80 100000000
70 10000000
60 1000000
50 100000
40 10000
30 1000
20 100
10 10
0 1
shy10 01
shy20 001
shy30 0001
shy40 00001
shy50 0000001
shy60 00000001
shy70 000000001
The SignalshytoshyNoise ratio (SNR) is useful in judging the impact of noise on system performance
PPsignalSNR = PPnoise SNR for power is often measured in decibels (dB)
SNR (db) =10 log10
⎛ ⎜⎜⎝
PPsignal PPnoise
⎞ ⎟⎟⎠
Caution For measuring ratios of amplitudes rather than powers take 20 log10 (ratio) shy80 0000000001
shy90 00000000001
shy100 000000000001301db is a factor of 2 in power ratio602 Fall 2012 Lecture 8 Slide 9
Noise Characterization From Histogram to PDF
Experiment create histograms of sample values from independent trials of increasing lengths
Histogram typically converges to a shape that is known ndash after normalization to unit area ndash as a probability density function (PDF)
602 Fall 2012 Lecture 8 Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range of x1 to x2 with a probability calculated from the PDF of Xas
int x2p(x1 lt X lt x2 ) = fX (x)dx x1
A PDF is not a probability ndash its associated integrals are Note that probability values are always in the range of 0 to 1
602 Fall 2012 Lecture 8 Slide 11
The Ubiquity of Gaussian Noise The net noise observed at the receiver is often the sum of many small independent random contributions from many factors If these independent random variables have finite mean and variance the Central Limit Theorem says their sum will be a Gaussian
The figure below shows the histograms of the results of 10000 trials of summing 100 random samples drawn from [shy11] using two different distributions
1shy1
1 Triangular Uniform
1shy1
05 PDF PDF
602 Fall 2012 Lecture 8 Slide 12
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
For now assume no distortion only
Additive ZeroshyMean Noise
bull Received signal y[n] = x[n] + w[n]
ie received samples y[n] are the transmitted samples x[n] + zeroshymean noise w[n] on each sample assumed iid (independent and identically distributed at each n)
bull SignalshytoshyNoise Ratio (SNR) ndash usually denotes the ratio of
(timeshyaveraged or peak) signal power ie squared amplitude of x[n]
to
noise variance ie expected squared amplitude of w[n]
602 Fall 2012 Lecture 8 Slide 7
SNR Example
Changing the amplification factor (gain) A leads to different SNR values
bull Lower A lower SNR bull Signal quality degrades with
lower SNR
602 Fall 2012 Lecture 8 Slide 8
SignalshytoshyNoise Ratio (SNR) 10logX X
100 10000000000
90 1000000000
80 100000000
70 10000000
60 1000000
50 100000
40 10000
30 1000
20 100
10 10
0 1
shy10 01
shy20 001
shy30 0001
shy40 00001
shy50 0000001
shy60 00000001
shy70 000000001
The SignalshytoshyNoise ratio (SNR) is useful in judging the impact of noise on system performance
PPsignalSNR = PPnoise SNR for power is often measured in decibels (dB)
SNR (db) =10 log10
⎛ ⎜⎜⎝
PPsignal PPnoise
⎞ ⎟⎟⎠
Caution For measuring ratios of amplitudes rather than powers take 20 log10 (ratio) shy80 0000000001
shy90 00000000001
shy100 000000000001301db is a factor of 2 in power ratio602 Fall 2012 Lecture 8 Slide 9
Noise Characterization From Histogram to PDF
Experiment create histograms of sample values from independent trials of increasing lengths
Histogram typically converges to a shape that is known ndash after normalization to unit area ndash as a probability density function (PDF)
602 Fall 2012 Lecture 8 Slide 10
Using the PDF in Probability Calculations
We say that X is a random variable governed by the PDF fX(x) if X takes on a numerical value in the range of x1 to x2 with a probability calculated from the PDF of Xas
int x2p(x1 lt X lt x2 ) = fX (x)dx x1
A PDF is not a probability ndash its associated integrals are Note that probability values are always in the range of 0 to 1
602 Fall 2012 Lecture 8 Slide 11
The Ubiquity of Gaussian Noise The net noise observed at the receiver is often the sum of many small independent random contributions from many factors If these independent random variables have finite mean and variance the Central Limit Theorem says their sum will be a Gaussian
The figure below shows the histograms of the results of 10000 trials of summing 100 random samples drawn from [shy11] using two different distributions
1shy1
1 Triangular Uniform
1shy1
05 PDF PDF
602 Fall 2012 Lecture 8 Slide 12
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for i.i.d. Process w[n]
• Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide, but with mean 0, and independently of w[.] at all other times.
6.02 Fall 2012, Lecture 8, Slide 16
Estimating noise parameters
• Transmit a sequence of "0" bits, i.e., hold the voltage V_0 at the transmitter.
• Observe received samples y[n], n = 0, 1, ..., K-1.
  – Process these samples to obtain the statistics of the additive noise process, under the assumption of i.i.d. (independent, identically distributed) noise samples (or, more generally, an "ergodic" process – beyond our scope).
• Noise samples: w[n] = y[n] - V_0.
• For large K, we can use the sample mean m to estimate \mu, and the sample standard deviation s to estimate \sigma.
6.02 Fall 2012, Lecture 8, Slide 17
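A sketch of this estimation procedure (the values of V_0 and sigma are illustrative, not from the slides): subtract the known transmitted level from the received samples, then take the sample mean and standard deviation of what is left.

```python
import random
import statistics

random.seed(2)
V0 = -1.0      # held transmitter voltage for a run of "0" bits (illustrative)
sigma = 0.25   # true noise standard deviation (unknown to the receiver)

# Receiver observes y[n] = V0 + w[n]; since V0 is known,
# the noise samples are recovered as w[n] = y[n] - V0.
y = [V0 + random.gauss(0.0, sigma) for _ in range(100_000)]
w = [yn - V0 for yn in y]

m = statistics.mean(w)   # sample mean, estimates mu (should be ~0)
s = statistics.stdev(w)  # sample standard deviation, estimates sigma
print(round(m, 3), round(s, 3))
```

With K = 100,000 samples the estimates land very close to the true mu = 0 and sigma = 0.25, as the slide's "for large K" condition suggests.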
Back to distinguishing "1" from "0"
• Assume bipolar signaling:
  Transmit L samples x[.] at +V_p (= V_1) to signal a "1".
  Transmit L samples x[.] at -V_p (= V_0) to signal a "0".
• Simple-minded receiver: take a single sample value y[n_j] at an appropriately chosen instant n_j in the j-th bit interval. Decide between the following two hypotheses:
  y[n_j] = +V_p + w[n_j]  (==> "1")
  or
  y[n_j] = -V_p + w[n_j]  (==> "0")
  where w[n_j] is Gaussian, zero-mean, with variance \sigma^2.
6.02 Fall 2012, Lecture 8, Slide 18
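A simulation of this single-sample receiver (V_p and sigma are illustrative values): with equally likely bits, the natural decision rule is a threshold at 0 volts, and counting disagreements gives an empirical error rate.

```python
import random

random.seed(3)
Vp, sigma = 1.0, 0.5   # illustrative signaling level and noise std dev
bits = [random.randint(0, 1) for _ in range(50_000)]

errors = 0
for b in bits:
    level = Vp if b == 1 else -Vp
    y = level + random.gauss(0.0, sigma)  # y[n_j] = +/-Vp + w[n_j]
    decision = 1 if y > 0.0 else 0        # threshold midway between the levels
    errors += (decision != b)

print(errors / len(bits))  # empirical bit error rate
```

The measured rate should sit near the theoretical value (1/2) erfc(V_p/(sigma*sqrt(2))) derived on the next slide.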
Connecting the SNR and BER
S/N = E_s/N_0 = V_p^2 / (2\sigma^2)
[Figure: the two conditional PDFs of the received sample – Gaussians of equal variance centered at -V_p (for "0") and +V_p (for "1"), with priors P("0") = P("1") = 0.5 – and the decision threshold at 0.]
P(error) = BER = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_p}{\sigma\sqrt{2}}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{E_s/N_0}\right)
6.02 Fall 2012, Lecture 8, Slide 19
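The BER formula can be evaluated directly with the standard library's `math.erfc`. This sketch tabulates the single-sample BER at a few SNR values (fixing sigma = 1 and solving for V_p from E_s/N_0 = V_p^2/(2*sigma^2)):

```python
import math

def ber_single_sample(Vp, sigma):
    # BER = 1/2 * erfc(Vp / (sigma * sqrt(2))) = 1/2 * erfc(sqrt(Es/N0))
    return 0.5 * math.erfc(Vp / (sigma * math.sqrt(2.0)))

for snr_db in (0, 5, 10):
    es_n0 = 10.0 ** (snr_db / 10.0)   # convert dB back to a power ratio
    Vp = math.sqrt(2.0 * es_n0)       # Es/N0 = Vp^2 / (2*sigma^2), sigma = 1
    print(snr_db, ber_single_sample(Vp, 1.0))
```

At 0 dB the BER is about 0.079; it falls off steeply as the SNR grows, which is the shape of the curve on the next slide.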
Bit Error Rate for Bipolar Signaling Scheme with Single-Sample Decision
[Plot: BER = 0.5 erfc(\sqrt{E_s/N_0}) versus E_s/N_0 in dB.]
Source: http://www.dsplog.com, "Bit error probability for BPSK modulation". Courtesy of Krishna Sankar Madhavan Pillai. Used with permission.
6.02 Fall 2012, Lecture 8, Slide 20
But we can do better
• Why just take a single sample from a bit interval?
• Instead, average M (≤ L) samples:
  y[n] = +V_p + w[n], so avg y[n] = +V_p + avg w[n]
• avg w[n] is still Gaussian and still has mean 0, but its variance is now \sigma^2/M instead of \sigma^2, so the SNR is increased by a factor of M.
• Same analysis as before, but now with bit energy E_b = M E_s instead of sample energy E_s:
  BER = P(error) = \frac{1}{2}\,\mathrm{erfc}\!\left(\sqrt{E_b/N_0}\right) = \frac{1}{2}\,\mathrm{erfc}\!\left(\frac{V_p\sqrt{M}}{\sigma\sqrt{2}}\right)
6.02 Fall 2012, Lecture 8, Slide 21
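The variance reduction from averaging is easy to verify empirically. With M = 16 (an illustrative choice), the variance of the averaged noise should drop to sigma^2/16:

```python
import random
import statistics

random.seed(4)
sigma, M, trials = 1.0, 16, 20_000   # illustrative parameters

# Average M i.i.d. Gaussian noise samples per trial; the variance of
# the average drops from sigma^2 to sigma^2 / M.
avgs = [statistics.mean(random.gauss(0.0, sigma) for _ in range(M))
        for _ in range(trials)]
print(round(statistics.variance(avgs), 3))  # expect ~ sigma^2 / M = 0.0625
```

This factor-of-M variance reduction is exactly the SNR gain the slide claims, which is why E_b = M E_s appears in the improved BER formula.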
Implications for Signaling Rate
• As the noise intensity increases, we need to slow down the signaling rate, i.e., increase the number of samples per bit (K), to get higher energy in the M (≤ K) samples extracted from a bit interval, if we wish to maintain the same error performance.
  – E.g., Voyager 2 was transmitting at 115 kbits/s when it was near Jupiter in 1979. Last month it was over 9 billion miles away (13 light-hours from the sun, twice as far from the sun as Pluto), and it is now transmitting at only 160 bits/s. The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(-16) watts, 20 billion times smaller than what an ordinary digital watch consumes. The power now is estimated at 10^(-19) watts.
6.02 Fall 2012, Lecture 8, Slide 22
Flipped bits can have serious consequences
• "On November 30, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer, in a random error, as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. It has not been possible to fully diagnose and correct for the damage caused to the Voyager 2's magnetometer, although efforts to do so are proceeding."
• "On April 22, 2010, Voyager 2 encountered scientific data format problems, as reported by the Associated Press on May 6, 2010. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the issue, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit."
6.02 Fall 2012  http://en.wikipedia.org/wiki/Voyager_2  Lecture 8, Slide 23
What if the received signal is distorted?
• Suppose y[n] = ±x_0[n] + w[n] in a given bit slot (L samples), where x_0[n] is known and w[n] is still i.i.d. Gaussian, zero mean, variance \sigma^2.
• Compute a weighted linear combination of the y[.] in that bit slot:
  \sum_n a_n y[n] = \pm \sum_n a_n x_0[n] + \sum_n a_n w[n]
  This is still Gaussian, with mean \pm \sum_n a_n x_0[n], but now the variance is \sigma^2 \sum_n a_n^2.
• So what choice of the a_n will maximize the SNR? Simple answer: a_n = x_0[n], "matched filtering".
• The resulting SNR for receiver detection and error performance is \sum_n x_0[n]^2 / \sigma^2, i.e., again the ratio of bit energy E_b to noise power.
6.02 Fall 2012, Lecture 8, Slide 24
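A small simulation comparing matched-filter weights a_n = x_0[n] against plain averaging (the ramp-shaped pulse x_0 here is a hypothetical distorted pulse, chosen only to make the two weightings differ). The SNR of the decision statistic is estimated as mean² / variance over many trials:

```python
import random
import statistics

random.seed(5)
L, sigma = 16, 1.0
# Hypothetical known (distorted) pulse shape x0[n] within one bit slot
x0 = [0.2 + 0.05 * n for n in range(L)]

def statistic(a):
    # Receive a "1": y[n] = +x0[n] + w[n]; form sum(a[n] * y[n])
    y = [x0[n] + random.gauss(0.0, sigma) for n in range(L)]
    return sum(a[n] * y[n] for n in range(L))

def snr(a, trials=20_000):
    vals = [statistic(a) for _ in range(trials)]
    return statistics.mean(vals) ** 2 / statistics.variance(vals)

uniform = [1.0] * L   # plain averaging weights
matched = x0          # matched filter: a[n] = x0[n]
su, sm = snr(uniform), snr(matched)
print(round(su, 2), round(sm, 2))
```

The matched weights give the larger SNR, consistent with the slide's claim that a_n = x_0[n] is the maximizing choice.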
The moral of the story is ...
... if you're doing appropriate (optimal) processing at the receiver, your signal-to-noise ratio (and therefore your error performance) in the case of i.i.d. Gaussian noise is the ratio of bit energy (not sample energy) to noise variance.
6.02 Fall 2012, Lecture 8, Slide 25
MIT OpenCourseWare
http://ocw.mit.edu

6.02 Introduction to Electrical Engineering and Computer Science, Fall 2012

For information about citing these materials or our Terms of Use, visit: http://ocw.mit.edu/terms.
SNR Example
Changing the amplification factor (gain) A leads to different SNR values:
• Lower A, lower SNR.
• Signal quality degrades with lower SNR.
6.02 Fall 2012, Lecture 8, Slide 8
Signal-to-Noise Ratio (SNR)
The signal-to-noise ratio (SNR) is useful in judging the impact of noise on system performance:
SNR = P_signal / P_noise
SNR for power is often measured in decibels (dB):
SNR (dB) = 10 log_10 (P_signal / P_noise)
Caution: for measuring ratios of amplitudes, rather than powers, take 20 log_10 (ratio).
3.01 dB is a factor of 2 in power ratio.

  10 log_10 X | X
  100  | 10,000,000,000
  90   | 1,000,000,000
  80   | 100,000,000
  70   | 10,000,000
  60   | 1,000,000
  50   | 100,000
  40   | 10,000
  30   | 1,000
  20   | 100
  10   | 10
  0    | 1
  -10  | 0.1
  -20  | 0.01
  -30  | 0.001
  -40  | 0.0001
  -50  | 0.00001
  -60  | 0.000001
  -70  | 0.0000001
  -80  | 0.00000001
  -90  | 0.000000001
  -100 | 0.0000000001

6.02 Fall 2012, Lecture 8, Slide 9
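The dB conversions on this slide, including the power-versus-amplitude caution, can be captured in two small helpers:

```python
import math

def snr_db(p_signal, p_noise):
    # SNR (dB) = 10 * log10(P_signal / P_noise), for POWER ratios
    return 10.0 * math.log10(p_signal / p_noise)

def amplitude_ratio_db(a_signal, a_noise):
    # Power goes as amplitude squared, so for AMPLITUDES use 20 * log10
    return 20.0 * math.log10(a_signal / a_noise)

print(snr_db(100.0, 1.0))           # 20.0 dB (power ratio of 100)
print(round(snr_db(2.0, 1.0), 2))   # 3.01 dB: a factor of 2 in power
print(amplitude_ratio_db(10.0, 1.0))  # 20.0 dB (amplitude ratio of 10)
```

Note that a factor of 10 in amplitude and a factor of 100 in power are the same 20 dB, which is the table's pattern.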
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
The Ubiquity of Gaussian Noise

The net noise observed at the receiver is often the sum of many small, independent, random contributions from many factors. If these independent random variables have finite mean and variance, the Central Limit Theorem says their sum will be approximately Gaussian.

The figure below shows the histograms of the results of 10,000 trials of summing 100 random samples drawn from [-1, 1] using two different distributions.

[Figure: histograms of the summed samples for a uniform and a triangular source PDF on [-1, 1]; both histograms are bell-shaped.]

6.02 Fall 2012 Lecture 8, Slide 12
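The Central Limit Theorem claim above is easy to check numerically. A minimal sketch in plain Python, using the same trial and sample counts as the slide's experiment (the one-standard-deviation check is an added illustration, not from the slide):

```python
import random
import statistics

# Sum 100 random samples drawn uniformly from [-1, 1], repeated over
# 10,000 independent trials, as in the histogram experiment on the slide.
TRIALS, N = 10_000, 100
sums = [sum(random.uniform(-1, 1) for _ in range(N)) for _ in range(TRIALS)]

# Each uniform sample has mean 0 and variance (b - a)^2 / 12 = 1/3, so the
# sum of 100 of them should be near-Gaussian with mean 0 and variance
# 100/3 (standard deviation about 5.77).
m = statistics.mean(sums)
s = statistics.stdev(sums)
print(f"sample mean {m:.2f}, sample std {s:.2f}")

# Rough bell-shape check: about 68% of the sums should lie within one
# standard deviation of the mean if the distribution is Gaussian.
frac = sum(abs(x - m) < s for x in sums) / TRIALS
print(f"fraction within 1 std: {frac:.2f}")
```

Swapping `random.uniform` for a triangular draw (`random.triangular(-1, 1)`) reproduces the slide's second histogram; the sum looks Gaussian either way.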
Mean and Variance of a Random Variable X

The mean or expected value μX is defined and computed as

    μX = ∫ from −∞ to ∞ of x fX(x) dx

The variance σX² is the expected squared variation, or deviation, of the random variable around the mean, and is thus computed as

    σX² = ∫ from −∞ to ∞ of (x − μX)² fX(x) dx

The square root of the variance is the standard deviation, σX.

6.02 Fall 2012 Lecture 8, Slide 13
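As a concrete check of these two definitions, the integrals can be evaluated numerically for a simple example PDF. The uniform density on [-1, 1] used here (fX(x) = 1/2, with analytic mean 0 and variance 1/3) is an illustrative choice, not one from the slide:

```python
# Numerically evaluate the mean and variance integrals for a
# uniform PDF on [-1, 1]: f_X(x) = 1/2 there, 0 elsewhere.
def f_X(x):
    return 0.5 if -1.0 <= x <= 1.0 else 0.0

def integrate(g, lo=-2.0, hi=2.0, steps=200_000):
    # Simple midpoint rule; the integrand vanishes outside [-1, 1].
    dx = (hi - lo) / steps
    return sum(g(lo + (i + 0.5) * dx) * dx for i in range(steps))

mu = integrate(lambda x: x * f_X(x))               # mean: analytically 0
var = integrate(lambda x: (x - mu) ** 2 * f_X(x))  # variance: analytically 1/3
print(round(mu, 4), round(var, 4))
```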
Visualizing Mean and Variance

Changes in mean shift the center of mass of the PDF; changes in variance narrow or broaden the PDF (but the area under the PDF is always equal to 1).

6.02 Fall 2012 Lecture 8, Slide 14
The Gaussian Distribution

A Gaussian random variable W with mean μ and variance σ² has a PDF described by

    fW(w) = (1 / √(2πσ²)) e^(−(w − μ)² / (2σ²))

6.02 Fall 2012 Lecture 8, Slide 15
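The density above transcribes directly into code. A small sketch (plain Python; the particular μ = 0, σ = 1.5 are illustrative values) that also sanity-checks that the total area under the PDF is 1:

```python
import math

def gaussian_pdf(w, mu=0.0, sigma=1.5):
    # f_W(w) = (1 / sqrt(2*pi*sigma^2)) * exp(-(w - mu)^2 / (2*sigma^2))
    return math.exp(-((w - mu) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

# The PDF peaks at w = mu and falls off symmetrically.
assert gaussian_pdf(0.0) > gaussian_pdf(1.0) > gaussian_pdf(2.0)

# Riemann-sum check that the density integrates to 1 (the tails
# beyond +/-20 are negligible for sigma = 1.5).
dx = 0.001
area = sum(gaussian_pdf(-20.0 + i * dx) * dx for i in range(40_000))
print(round(area, 4))
```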
Noise Model for an i.i.d. Process w[n]

• Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide, but with mean 0, and independently of w[.] at all other times.

6.02 Fall 2012 Lecture 8, Slide 16
Estimating Noise Parameters

• Transmit a sequence of "0" bits, i.e., hold the voltage V0 at the transmitter.
• Observe received samples y[n], n = 0, 1, ..., K−1.
  – Process these samples to obtain the statistics of the additive noise process, under the assumption of i.i.d. (independent, identically distributed) noise samples (or, more generally, an "ergodic" process, which is beyond our scope).
• Noise samples: w[n] = y[n] − V0.
• For large K, we can use the sample mean m to estimate μ, and the sample standard deviation s to estimate σ.

6.02 Fall 2012 Lecture 8, Slide 17
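The estimation procedure above can be sketched end to end in plain Python. V0, the true noise parameters, and K below are illustrative values, not from the slide; in the simulation we know the true σ, so we can see how close the estimates land:

```python
import random
import statistics

V0 = -1.0          # transmitter holds the "0" voltage (illustrative value)
SIGMA_TRUE = 0.25  # true noise std dev, unknown to the receiver
K = 50_000         # number of received samples observed

# Received samples y[n] = V0 + w[n], with w[n] i.i.d. zero-mean Gaussian.
random.seed(602)
y = [V0 + random.gauss(0.0, SIGMA_TRUE) for _ in range(K)]

# Recover the noise samples and estimate the noise parameters.
w = [yn - V0 for yn in y]   # w[n] = y[n] - V0
m = statistics.mean(w)      # sample mean, estimates mu = 0
s = statistics.stdev(w)     # sample std dev, estimates sigma
print(f"estimated mean {m:.3f}, estimated sigma {s:.3f}")
```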
Back to Distinguishing "1" from "0"

• Assume bipolar signaling:
  – Transmit L samples x[.] at +Vp (= V1) to signal a "1".
  – Transmit L samples x[.] at −Vp (= V0) to signal a "0".
• Simple-minded receiver: take a single sample value y[nj] at an appropriately chosen instant nj in the j-th bit interval, then decide between the following two hypotheses:

    y[nj] = +Vp + w[nj]  (==> "1")
or
    y[nj] = −Vp + w[nj]  (==> "0")

where w[nj] is Gaussian, zero-mean, with variance σ².

6.02 Fall 2012 Lecture 8, Slide 18
Connecting the SNR and BER

    S/N = Vp² / (2σ²) = Es/N0

[Figure: the two conditional PDFs of the received sample — a Gaussian centered at μ = −Vp when a "0" is sent (P("0") = 0.5) and a Gaussian centered at μ = +Vp when a "1" is sent (P("1") = 0.5), with the decision threshold at 0.]

    BER = P(error) = (1/2) erfc(√(Es/N0)) = (1/2) erfc(Vp / (σ√2))

6.02 Fall 2012 Lecture 8, Slide 19
Bit Error Rate for Bipolar Signaling Scheme with Single-Sample Decision

[Figure: plot of BER = 0.5 erfc(sqrt(Es/N0)) versus Es/N0 in dB.]

Source: http://www.dsplog.com/bit-error-probability-for-bpsk-modulation
Courtesy of Krishna Sankar Madhavan Pillai. Used with permission.

6.02 Fall 2012 Lecture 8, Slide 20
But We Can Do Better

• Why just take a single sample from a bit interval?
• Instead, average M (≤ L) samples:

    y[n] = +Vp + w[n], so avg y[n] = +Vp + avg w[n]

• avg w[n] is still Gaussian and still has mean 0, but its variance is now σ²/M instead of σ², so the SNR is increased by a factor of M.
• Same analysis as before, but now with bit energy Eb = M·Es instead of sample energy Es:

    BER = P(error) = (1/2) erfc(√(Eb/N0)) = (1/2) erfc(Vp√M / (σ√2))

6.02 Fall 2012 Lecture 8, Slide 21
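The factor-of-M improvement claimed above can be checked by Monte Carlo simulation against the closed-form erfc expression. A sketch in plain Python (Vp, σ, M, and the bit count are illustrative choices):

```python
import math
import random

def ber_sim(Vp, sigma, M, nbits=100_000, rng=random.Random(8)):
    """Monte Carlo BER for bipolar signaling, deciding each bit from the
    average of M noisy samples (M = 1 is the single-sample receiver)."""
    errors = 0
    for _ in range(nbits):
        bit = rng.getrandbits(1)
        level = Vp if bit else -Vp
        avg = sum(level + rng.gauss(0.0, sigma) for _ in range(M)) / M
        errors += (avg > 0) != bool(bit)   # threshold at 0
    return errors / nbits

def ber_theory(Vp, sigma, M):
    # BER = (1/2) erfc(Vp * sqrt(M) / (sigma * sqrt(2)))
    return 0.5 * math.erfc(Vp * math.sqrt(M) / (sigma * math.sqrt(2)))

Vp, sigma = 1.0, 1.0
results = {M: (ber_sim(Vp, sigma, M), ber_theory(Vp, sigma, M)) for M in (1, 4)}
for M, (sim, th) in results.items():
    print(f"M={M}: simulated BER {sim:.4f}, theory {th:.4f}")
```

With these parameters the theory predicts roughly 0.16 for M = 1 and 0.023 for M = 4; the simulated rates should track those values closely.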
Implications for Signaling Rate

• As the noise intensity increases, we need to slow down the signaling rate, i.e., increase the number of samples per bit (K) to get higher energy in the M (≤ K) samples extracted from a bit interval, if we wish to maintain the same error performance.
  – E.g., Voyager 2 was transmitting at 115 kbits/s when it was near Jupiter in 1979. Last month it was over 9 billion miles away, 13 light-hours from the sun, twice as far from the sun as Pluto, and now transmitting at only 160 bits/s. The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(−16) watts, 20 billion times smaller than what an ordinary digital watch consumes. The power now is estimated at 10^(−19) watts.

6.02 Fall 2012 Lecture 8, Slide 22
Flipped Bits Can Have Serious Consequences

• "On November 30, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer, in a random error, as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. It has not been possible to fully diagnose and correct for the damage caused to the Voyager 2's magnetometer, although efforts to do so are proceeding."
• "On April 22, 2010, Voyager 2 encountered scientific data format problems, as reported by the Associated Press on May 6, 2010. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the issue, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit."

6.02 Fall 2012    http://en.wikipedia.org/wiki/Voyager_2    Lecture 8, Slide 23
What If the Received Signal Is Distorted?

• Suppose y[n] = ±x0[n] + w[n] in a given bit slot (L samples), where x0[n] is known and w[n] is still i.i.d. Gaussian, zero-mean, with variance σ².
• Compute a weighted linear combination of the y[.] in that bit slot:

    Σ an y[n] = ± Σ an x0[n] + Σ an w[n]

  This is still Gaussian, with mean ± Σ an x0[n], but now the variance is σ² Σ an².
• So what choice of the an will maximize the SNR? Simple answer: an = x0[n], i.e., "matched filtering".
• The resulting SNR for receiver detection and error performance is (Σ x0[n]²) / σ², i.e., again the ratio of bit energy Eb to noise power.

6.02 Fall 2012 Lecture 8, Slide 24
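The matched-filter claim above can be made concrete by comparing the detection SNR of plain averaging (an = 1) with the matched choice (an = x0[n]). The distorted pulse shape x0[n] and the noise level below are illustrative assumptions, not values from the slide:

```python
import math

L = 16       # samples per bit
SIGMA = 0.5  # noise std dev (illustrative)

# Illustrative known-but-distorted pulse shape x0[n]: a rounded rising
# edge rather than a clean rectangular pulse.
x0 = [1.0 - math.exp(-n / 3.0) for n in range(L)]

def detection_snr(a, x0, sigma):
    """SNR of the statistic sum(a[n]*y[n]): signal term (sum a[n] x0[n])^2
    divided by its noise variance sigma^2 * sum(a[n]^2)."""
    gain = sum(an * xn for an, xn in zip(a, x0))
    return gain ** 2 / (sigma ** 2 * sum(an * an for an in a))

uniform = [1.0] * L      # plain averaging weights
matched = list(x0)       # matched filter: a[n] = x0[n]
snr_avg = detection_snr(uniform, x0, SIGMA)
snr_mf = detection_snr(matched, x0, SIGMA)
print(round(snr_avg, 2), round(snr_mf, 2))

# The matched filter attains sum(x0[n]^2) / sigma^2, the bit-energy-to-
# noise-power ratio; by Cauchy-Schwarz no other weights do better.
assert math.isclose(snr_mf, sum(xn * xn for xn in x0) / SIGMA ** 2)
assert snr_mf >= snr_avg
```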
The Moral of the Story Is ...

... if you're doing appropriate (optimal) processing at the receiver, your signal-to-noise ratio (and therefore your error performance) in the case of i.i.d. Gaussian noise is determined by the ratio of bit energy (not sample energy) to noise variance.

6.02 Fall 2012 Lecture 8, Slide 25
MIT OpenCourseWare
http://ocw.mit.edu

Introduction to Electrical Engineering and Computer Science, Fall 2012

For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.
Mean and Variance of a Random Variable X
σx
The mean or expected value JX is defined and computed as
=infin x fX (x)dxmicroX intminusinfin
The variance σX2 is the expected squared variation or deviation
of the random variable around the mean and is thus computed as
2 infin σ X = intminusinfin
(x minusmicroX )2 fX (x)dx The square root of the variance is the standard deviation σX
602 Fall 2012 Lecture 8 Slide 13
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
Visualizing Mean and Variance
Changes in mean shift the Changes in variance narrow center of mass of PDF or broaden the PDF (but
area is always equal to 1) 602 Fall 2012 Lecture 8 Slide 14
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
The Gaussian Distribution
A Gaussian random variable W with mean J and variance σ2 has a PDF described by
1fW (w) = 2πσ 2
602 Fall 2012 Lecture 8 Slide 15
e minus wminusmicro( )2
2σ 2
Lecture 8 Slide 15
minusmicro 2
σ 2
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
602 Fall 2012 ampRXUWHVRIULVKQD6DQNDU0DGKDYDQ3LOODL8VHGZLWKSHUPLVVLRQ Lecture 8 Slide 20
But we can do better bull Why just take a single sample from a bit interval bull Instead average M (leL) samples
y[n] = +Vp + w[n] so avg y[n] = +Vp + avg w[n]
bull avg w[n] is still Gaussian still has mean 0 but its variance is now σ2M instead of σ2 SNR is increased by a factor of M
bull Same analysis as before but now bit energy Eb = MEs instead of sample energy Es
1 Eb 1 MBER = P(error) = erfc( ) = erfc(Vp )602 Fall 2012 2 N0 2 σ 2 Lecture 8 Slide 21
Implications for Signaling Rate bull As the noise intensity increases we need to slow
down the signaling rate ie increase the number of samples per bit (K) to get higher energy in the (MleK) samples extracted from a bit interval if we wish to maintain the same error performance
ndash eg Voyager 2 was transmitting at 115 kbitss when it was near Jupiter in 1979 Last month it was over 9 billion miles away 13 light hours away from the sun twice as far away from the sun as Pluto And now transmitting at only 160 bitss The received power at the Deep Space Network antennas on earth when Voyager was near Neptune was on the order of 10^(shy16) watts shyshyshy 20 billion times smaller than an ordinary digital watch consumes The power now is estimated at 10^(shy19) watts
602 Fall 2012 Lecture 8 Slide 22
Flipped bits can have serious consequences
bull ldquoOnNovember 30 2006 a telemetered command to Voyager 2was incorrectly decoded by its onshyboard computermdashin a random errormdash as a command to tur n on the electrical heaters of the spacecrafts magnetometer These heaters remained tur ned on until December 4 2006 and during that time there was a resulting high temperature above 130 degC (266 degF) significantly higher than the magnetometers were designed to endure and a sensor rotated away from the correct orientation It has not been possible to fully diagnose and correct for the damage caused to theVoyager 2smagnetometer although efforts to do so are proceedingrdquo
bull ldquoOnApril 22 2010Voyager 2encountered scientific data format problems as reported by theAssociated Press on May 6 2010 OnMay 17 2010JPLengineers revealed that a flipped bit in an onshyboard computer had caused the issue and scheduled a bit reset for May 19 OnMay 23 2010Voyager 2has resumed sending science data from deep space after engineers fixed the flipped bitrdquo
602 Fall 2012 httpenwikipediaorgwikiVoyager_2 Lecture 8 Slide 23
What if the received signal is distorted bull Suppose y[n] = plusmnx0[n] + w[n] in a given bit slot (L
samples) where x0[n] is known and w[n] is still iid Gaussian zero mean variance σ2
bull Compute a weighted linear combination of the y[] in that bit slot
sum any[n] = plusmn sum anx0[n] + sum anw[n]
This is still Gaussian mean plusmn sum anx0[n] but now the 2variance is σ2 sum (an)
bull So what choice of the an will maximize SNR Simple answer an = x0[n] ldquomatched filteringrdquo
bull Resulting SNR for receiver detection and error performance is sum (x0[n])2 σ2 ie again the ratio of bit energy Eb to noise power
602 Fall 2012 Lecture 8 Slide 24
The moral of the story is hellip hellip if yoursquore doing appropriateoptimal processing at the receiver your signalshytoshynoise ratio (and therefore your error performance) in the case of iid Gaussian noise is the ratio of bit energy (not sample energy) to noise variance
602 Fall 2012 Lecture 8 Slide 25
MIT OpenCourseWarehttpocwmitedu
QWURGXFWLRQWR(OHFWULFDO(QJLQHHULQJDQGampRPSXWHU6FLHQFHFall 201
For information about citing these materials or our Terms of Use visit httpocwmiteduterms
Noise Model for iid Process w[n]
bull Assume each w[n] is distributed as the Gaussian random variable W on the preceding slide but with mean 0 and independently of w[] at all other times
602 Fall 2012 Lecture 8 Slide 16
Estimating noise parameters bull Transmit a sequence of ldquo0rdquo bits ie hold the
voltage V0 at the transmitter
bull Observe received samples y[n] n = 0 1 K 1 ndash Process these samples to obtain the statistics of the noise
process for additive noise under the assumption of iid (independent identically distributed) noise samples (or more generally an ldquoergodicrdquo process ndash beyond our scope)
bull Noise samples w[n] = y[n] V0
bull For large K can use the sample mean m to estimate 1 and sample standard deviation s to estimate σ
602 Fall 2012 Lecture 8 Slide 17
Back to distinguishing ldquo1rdquo from ldquo0rdquo bull Assume bipolar signaling
Transmit L samples x[] at +Vp (=V1) to signal a ldquo1rdquo
Transmit L samples x[] at ndashVp (=V0) to signal a ldquo0rdquo
bull Simpleshyminded receiver take a single sample value y[nj] at an appropriately chosen instant nj in the
bull jshyth bit interval Decide between the following two hypotheses
y[nj] = +Vp + w[nj] (==gt ldquo1rdquo)
or
y[nj] = ndashVp + w[nj] (==gt ldquo0rdquo)
where w[nj] is Gaussian zeroshymean variance σ2
602 Fall 2012 Lecture 8 Slide 18
Connecting the SNR and BER
EV S = p
2σ 2 = N
P(ldquo0rdquo)=05 1=shyVp =
1=Vp
noise
P(ldquo1rdquo)=05
= noise
0 shyVp +Vp
)( S
N E V 1 1 erfc == )( P error
602 Fall 2012 Lecture 8 Slide 19
BER erfc ( p ) = σ 2 2 2 0
Bit Error Rate for Bipolar Signaling Scheme with
SingleshySample Decision
05 erfc(sqrt (EsN0))
Es N0 dB 6RXUFHKWWSZZZGVSORJFRPELWHUURUSUREDELOLWIRUESVNPRGXODWLRQ
Estimating noise parameters
• Transmit a sequence of "0" bits, i.e., hold the voltage V0 at the transmitter.
• Observe received samples y[n], n = 0, 1, …, K−1.
  – Process these samples to obtain the statistics of the additive noise process, under the assumption of i.i.d. (independent, identically distributed) noise samples (or, more generally, an "ergodic" process – beyond our scope).
• Noise samples: w[n] = y[n] − V0.
• For large K, we can use the sample mean m to estimate the noise mean μ, and the sample standard deviation s to estimate σ.
6.02 Fall 2012 Lecture 8, Slide 17
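The estimation procedure on this slide can be sketched in a few lines of Python. The function name, the simulated V0, σ, and K are illustrative choices for the demo, not values from the lecture:

```python
import math
import random

def estimate_noise(y, v0):
    """Estimate additive-noise statistics from K received samples y[n]
    taken while the transmitter holds a "0" at voltage v0."""
    w = [yn - v0 for yn in y]          # noise samples w[n] = y[n] - V0
    k = len(w)
    m = sum(w) / k                     # sample mean (estimates the noise mean)
    s = math.sqrt(sum((wn - m) ** 2 for wn in w) / (k - 1))  # sample std dev (estimates sigma)
    return m, s

# Simulated link: V0 = -1.0 V held at the transmitter, Gaussian noise with sigma = 0.25.
random.seed(2)
V0, SIGMA, K = -1.0, 0.25, 100_000
received = [V0 + random.gauss(0.0, SIGMA) for _ in range(K)]
m_hat, s_hat = estimate_noise(received, V0)
```

For K this large, m_hat should land close to 0 and s_hat close to the true σ, which is the "for large K" claim on the slide.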
Back to distinguishing "1" from "0"
• Assume bipolar signaling:
  Transmit L samples x[.] at +Vp (= V1) to signal a "1".
  Transmit L samples x[.] at −Vp (= V0) to signal a "0".
• Simple-minded receiver: take a single sample value y[nj] at an appropriately chosen instant nj in the j-th bit interval, and decide between the following two hypotheses:
  y[nj] = +Vp + w[nj]  (⇒ "1")
  or
  y[nj] = −Vp + w[nj]  (⇒ "0")
  where w[nj] is Gaussian, zero-mean, with variance σ².
6.02 Fall 2012 Lecture 8, Slide 18
Connecting the SNR and BER

S/N = Es/N0 = Vp²/(2σ²)

[Figure: two Gaussian PDFs for the received sample, one centered at −Vp (a transmitted "0", prior P("0") = 0.5) and one centered at +Vp (a transmitted "1", prior P("1") = 0.5); the overlap around the threshold at 0 is where decision errors occur.]

BER = P(error) = (1/2) erfc(Vp/(σ√2)) = (1/2) erfc(√(Es/N0))

6.02 Fall 2012 Lecture 8, Slide 19
Bit Error Rate for Bipolar Signaling Scheme with Single-Sample Decision

[Figure: semilog plot of BER = 0.5·erfc(√(Es/N0)) versus Es/N0 in dB.]

Source: http://www.dsplog.com/bit-error-probability-for-bpsk-modulation
6.02 Fall 2012. Courtesy of Krishna Sankar Madhavan Pillai. Used with permission. Lecture 8, Slide 20
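The curve on this slide is easy to evaluate directly. A minimal sketch (function names are mine) of the single-sample BER formula, in both the Vp/σ form and the Es/N0-in-dB form used on the plot's axis:

```python
from math import erfc, sqrt

def ber_single_sample(vp, sigma):
    """Single-sample bipolar BER: P(error) = 0.5*erfc(Vp/(sigma*sqrt(2))),
    which equals 0.5*erfc(sqrt(Es/N0)) with Es = Vp**2 and N0 = 2*sigma**2."""
    return 0.5 * erfc(vp / (sigma * sqrt(2)))

def ber_from_es_n0_db(es_n0_db):
    """Same curve parameterized by Es/N0 in dB, as on the plot's x-axis."""
    es_n0 = 10.0 ** (es_n0_db / 10.0)
    return 0.5 * erfc(sqrt(es_n0))
```

For example, with Vp = σ (an SNR of Es/N0 = 1/2), ber_single_sample gives roughly 0.159, i.e., nearly one bit in six is flipped.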
But we can do better
• Why just take a single sample from a bit interval?
• Instead, average M (≤ L) samples:
  y[n] = +Vp + w[n], so avg{y[n]} = +Vp + avg{w[n]}
• avg{w[n]} is still Gaussian and still has mean 0, but its variance is now σ²/M instead of σ², so the SNR is increased by a factor of M.
• Same analysis as before, but now with bit energy Eb = M·Es instead of sample energy Es:

  BER = P(error) = (1/2) erfc(√(Eb/N0)) = (1/2) erfc(Vp√M/(σ√2))

6.02 Fall 2012 Lecture 8, Slide 21
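The factor-of-M claim can be checked with a small Monte Carlo sketch (parameter values are illustrative): a receiver that averages M samples per bit and thresholds at 0 should track the predicted 0.5·erfc(Vp√M/(σ√2)) curve.

```python
import random
from math import erfc, sqrt

def simulate_ber(vp, sigma, m, nbits=20_000, seed=1):
    """Monte Carlo BER for bipolar signaling when the receiver averages
    M noisy samples per bit and thresholds the average at 0."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(nbits):
        level = vp if rng.random() < 0.5 else -vp          # transmit +Vp or -Vp
        avg = sum(level + rng.gauss(0.0, sigma) for _ in range(m)) / m
        if (avg >= 0.0) != (level > 0.0):                  # threshold decision
            errors += 1
    return errors / nbits

# Averaging M samples cuts the noise variance to sigma**2/M, so the
# predicted BER is 0.5*erfc(Vp*sqrt(M)/(sigma*sqrt(2))).
VP, SIGMA = 1.0, 2.0
predicted_m1 = 0.5 * erfc(VP * sqrt(1) / (SIGMA * sqrt(2)))
predicted_m4 = 0.5 * erfc(VP * sqrt(4) / (SIGMA * sqrt(2)))
sim_m1 = simulate_ber(VP, SIGMA, 1)
sim_m4 = simulate_ber(VP, SIGMA, 4)
```

Quadrupling M here buys the same BER improvement as doubling Vp would, which is exactly the bit-energy (Eb = M·Es) view of the slide.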
Implications for Signaling Rate
• As the noise intensity increases, we need to slow down the signaling rate, i.e., increase the number of samples per bit (K) to get higher energy in the (M ≤ K) samples extracted from a bit interval, if we wish to maintain the same error performance.
  – E.g., Voyager 2 was transmitting at 115 kbits/s when it was near Jupiter in 1979. Last month it was over 9 billion miles away, 13 light-hours from the sun and twice as far from the sun as Pluto, and transmitting at only 160 bits/s. The received power at the Deep Space Network antennas on Earth when Voyager was near Neptune was on the order of 10^(−16) watts, about 20 billion times smaller than what an ordinary digital watch consumes. The power now is estimated at 10^(−19) watts.
6.02 Fall 2012 Lecture 8, Slide 22
Flipped bits can have serious consequences
• "On November 30, 2006, a telemetered command to Voyager 2 was incorrectly decoded by its on-board computer – in a random error – as a command to turn on the electrical heaters of the spacecraft's magnetometer. These heaters remained turned on until December 4, 2006, and during that time there was a resulting high temperature above 130 °C (266 °F), significantly higher than the magnetometers were designed to endure, and a sensor rotated away from the correct orientation. It has not been possible to fully diagnose and correct for the damage caused to Voyager 2's magnetometer, although efforts to do so are proceeding."
• "On April 22, 2010, Voyager 2 encountered scientific data format problems, as reported by the Associated Press on May 6, 2010. On May 17, 2010, JPL engineers revealed that a flipped bit in an on-board computer had caused the issue, and scheduled a bit reset for May 19. On May 23, 2010, Voyager 2 resumed sending science data from deep space after engineers fixed the flipped bit."
6.02 Fall 2012, http://en.wikipedia.org/wiki/Voyager_2, Lecture 8, Slide 23
What if the received signal is distorted?
• Suppose y[n] = ±x0[n] + w[n] in a given bit slot (L samples), where x0[n] is known and w[n] is still i.i.d. Gaussian, zero mean, variance σ².
• Compute a weighted linear combination of the y[·] in that bit slot:
  Σ a_n y[n] = ±Σ a_n x0[n] + Σ a_n w[n]
  This is still Gaussian, with mean ±Σ a_n x0[n], but now the variance is σ² Σ (a_n)².
• So what choice of the a_n will maximize the SNR, i.e., maximize (Σ a_n x0[n])² / (σ² Σ (a_n)²)? Simple answer, by the Cauchy–Schwarz inequality: a_n = x0[n], which is "matched filtering".
• The resulting SNR for receiver detection and error performance is Σ (x0[n])² / σ², i.e., again the ratio of bit energy Eb to noise power.
6.02 Fall 2012 Lecture 8, Slide 24
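The matched-filter receiver described above fits in a few lines. This sketch uses a made-up ramp pulse shape and noise level to stand in for a distorted x0[n]; the function name and all parameters are illustrative:

```python
import random

def matched_filter_bit(y, x0):
    """Matched-filter decision: correlate the received bit slot y[n] with
    the known pulse shape x0[n] (weights a_n = x0[n]); the sign of the
    correlation decides the bit."""
    return sum(an * yn for an, yn in zip(x0, y)) >= 0.0

# Illustrative distorted pulse shape: a ramp over L = 10 samples.
random.seed(7)
x0 = [0.1 * n for n in range(1, 11)]
SIGMA = 0.3
TRIALS = 5_000
errors = 0
for _ in range(TRIALS):
    bit = random.random() < 0.5
    sign = 1.0 if bit else -1.0
    y = [sign * s + random.gauss(0.0, SIGMA) for s in x0]   # y[n] = +-x0[n] + w[n]
    if matched_filter_bit(y, x0) != bit:
        errors += 1
ber = errors / TRIALS
```

With these numbers the post-correlation SNR, Σ(x0[n])²/σ², is large enough that essentially no bits are flipped, even though several individual samples of x0[n] are smaller than the noise standard deviation.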
The moral of the story is …
… if you're doing appropriate/optimal processing at the receiver, your signal-to-noise ratio (and therefore your error performance) in the case of i.i.d. Gaussian noise is governed by the ratio of bit energy (not sample energy) to noise variance.
6.02 Fall 2012 Lecture 8, Slide 25
MIT OpenCourseWare
http://ocw.mit.edu

6.02 Introduction to Electrical Engineering and Computer Science, Fall 2012

For information about citing these materials or our Terms of Use, visit http://ocw.mit.edu/terms.