Digital Communication Lectures

DIGITAL COMMUNICATIONS

Transcript of Digital Communication Lectures

Page 1: Dgital Communication Lectures

DIGITAL COMMUNICATIONS

Page 2: Dgital Communication Lectures

Lecture 1

BASICS OF COMMUNICATION SYSTEMS

Page 3: Dgital Communication Lectures

Introduction

Electronic Communication – the transmission, reception, and processing of information with the use of electronic circuits

Information – knowledge or intelligence that is communicated (i.e., transmitted or received) between two or more points

Page 4: Dgital Communication Lectures

Introduction

Digital Modulation – the transmittal of digitally modulated analog signals (carriers) between two or more points in a communications system

Sometimes referred to as digital radio because digitally modulated signals can be propagated through Earth’s atmosphere and used in wireless communications systems

Page 5: Dgital Communication Lectures

Introduction

Digital Communications – include systems where relatively high-frequency analog carriers are modulated by relatively low-frequency digital signals (digital radio) and systems involving the transmission of digital pulses (digital transmission)

Page 6: Dgital Communication Lectures

Introduction

ASK, FSK, PSK, and QAM

Page 7: Dgital Communication Lectures

Applications

1. Relatively low-speed voice-band data communications modems, such as those found in most personal computers
2. High-speed data transmission systems, such as broadband digital subscriber lines (DSL)
3. Digital microwave and satellite communications systems
4. Cellular telephone Personal Communications Systems (PCS)

Page 8: Dgital Communication Lectures

Basic Telecommunication System

[Block diagram: Source → Transducer → Transmission Medium (attenuation) → Transducer → Sink]

In an electrical communication system, at the transmitting side, a transducer converts the real-life information into an electrical signal. At the receiving side, a transducer converts the electrical signal back into real-life information.

Page 9: Dgital Communication Lectures

Basic Telecommunication System

[Block diagram: Source → Transducer → Transmission Medium (noise) → Transducer → Sink]

Note: As the electrical signal passes through the transmission medium, the signal gets attenuated. In addition, the transmission medium introduces noise and, as a result, the signal gets distorted.

Page 10: Dgital Communication Lectures

Basic Telecommunication System

The objective of designing a communication system is to reproduce the electrical signal at the receiving end with minimal distortion.

Page 11: Dgital Communication Lectures

Basic Telecommunication System

[Diagram: two computers connected RS-232 port to RS-232 port over a channel]

Note: The serial ports of two computers can be connected directly using a copper cable. However, due to signal attenuation, the distance cannot be more than 100 meters.

Page 12: Dgital Communication Lectures

Basic Telecommunication System

Two computers can communicate with each other through the telephone network, using a modem at each end. The modem converts the digital signals generated by the computer into analog form for transmission over the medium at the transmitting end and the reverse at the receiving end.

Page 13: Dgital Communication Lectures

Basic Telecommunication System

(a) Transmitting side: Source → Baseband Signal Processing → Medium Access Processing → Transmitter → Medium

(b) Receiving side: Medium → Receiver → Decoding of Data → Baseband Signal Processing → Sink

Page 14: Dgital Communication Lectures

Basic Telecommunication System

Depending on the type of communication, the distance to be covered, etc., a communication system will consist of a number of elements, each element carrying out a specific function. Some important elements are:

1. Multiplexer
2. Multiple access
3. Error detection and correction
4. Source coding
5. Signaling

Page 15: Dgital Communication Lectures

Basic Telecommunication System

Note:

Two voice signals cannot be mixed directly because it will not be possible to separate them at the receiving end. The two voice signals can be transformed into different frequencies to combine them and send over the medium.

Page 16: Dgital Communication Lectures

Types of Communication

1. Point-to-point communication
2. Point-to-multipoint communication
3. Broadcasting
4. Simplex communication
5. Half-duplex communication
6. Full-duplex communication

Page 17: Dgital Communication Lectures

Transmission Impairments

1. Attenuation – the amplitude of the signal wave decreases as the signal travels through the medium.
2. Delay distortion – occurs as a result of different frequency components arriving at different times in guided media such as copper wire or coaxial cable.
3. Noise – thermal noise, intermodulation, crosstalk, impulse noise.

Page 18: Dgital Communication Lectures

Transmission Impairments

Thermal Noise – occurs due to the thermal agitation of electrons in a conductor (white noise); N = kTB, where k is Boltzmann’s constant, T is the absolute temperature, and B is the bandwidth.

Intermodulation Noise – when two signals of different frequencies are sent through the medium, the nonlinearity of the transmitters produces frequency components such as f1 + f2 and f1 – f2, which are unwanted components and need to be filtered out.

Page 19: Dgital Communication Lectures

Transmission Impairments

Crosstalk – unwanted coupling between signal paths

Impulse Noise – occurs due to external electromagnetic disturbances such as lightning. This also causes bursts of errors.

Page 20: Dgital Communication Lectures

Analog Versus Digital Transmission

Analog Communication

The signal, whose amplitude varies continuously, is transmitted over the medium. Reproducing the analog signal at the receiving end is very difficult due to transmission impairments.

Digital Communication

1s and 0s are transmitted as voltage pulses. So, even if a pulse is distorted due to noise, it is not very difficult to detect the pulses at the receiving end. Digital communication is much more immune to noise.

Page 21: Dgital Communication Lectures

Advantages of Digital Transmission

More reliable transmission

• Because only discrimination between ones and zeros is required

Less costly implementation

• Because of the advances in digital logic chips

Ease of combining various types of signals (voice, video, etc.)

Ease of developing secure communication systems

Page 22: Dgital Communication Lectures

Lecture 2

Information theory

Page 23: Dgital Communication Lectures

Claude Shannon

Laid the foundation of information theory in 1948. His paper “A Mathematical Theory of Communication,” published in the Bell System Technical Journal, is the basis for the telecommunications developments that have taken place during the last five decades. A good understanding of the concepts proposed by Shannon is a must for every budding telecommunication professional.

Page 24: Dgital Communication Lectures

Requirements of a Communication System

The requirement of a communication system is to transmit the information from the source to the sink without errors, in spite of the fact that noise is always introduced in the communication medium.

Page 25: Dgital Communication Lectures

The Communication System

[Generic communication system: Information Source → Transmitter → Channel → Receiver → Information Sink, with a noise source feeding the channel]

Page 26: Dgital Communication Lectures

Symbols Produced

A B B A A A B A B A

Bit stream produced

1 0 0 1 1 1 0 1 0 1

Bit stream received

1 0 0 1 1 1 1 1 0 1

In a digital communication system, due to the effect of noise, errors are introduced. As a result, 1 may become a 0 and 0 may become a 1.

Page 27: Dgital Communication Lectures

Generic Communication System as proposed by Shannon

[Block diagram: Information Source → Source Encoder → Channel Encoder → Modulator (modulating signal → modulated signal) → Channel → Demodulator → Channel Decoder → Source Decoder → Information Sink]

Page 28: Dgital Communication Lectures

Explanation of Each Block

Information Source: produces the symbols
Source Encoder: converts the signal produced by the information source into a data stream
Channel Encoder: adds redundant bits to the source-encoded data
Modulator: transforms the signal so that it can be transmitted over the channel
Demodulator: performs the inverse operation of the modulator

Page 29: Dgital Communication Lectures

Explanation of Each Block

Channel Decoder: analyzes the received bit stream and detects and corrects errors
Source Decoder: converts the bit stream back into the actual information
Information Sink: absorbs the information

Page 30: Dgital Communication Lectures

Types of Source Encoding

Source encoding is done to reduce the redundancy in the signal.

1. Lossless coding
2. Lossy coding

The compression utilities we use to compress data files use lossless encoding techniques. JPEG image compression is a lossy technique because some information is lost.

Page 31: Dgital Communication Lectures

Channel Encoding

Redundancy is introduced so that at the receiving end, the redundant bits can be used for error detection or error correction

Page 32: Dgital Communication Lectures

Entropy of an Information Source

What is information?

How do we measure information?

Page 33: Dgital Communication Lectures

Information Measure

I(i) = –log2 P(i)  bits

Where:
P(i) = probability of the ith message

Page 34: Dgital Communication Lectures

Entropy of an Information Source

H = log2 N bits per symbol

Where:N = number of symbols

Note: This applies to symbols with equal probability.

Page 35: Dgital Communication Lectures

Entropy of an Information Source

Example: Assume that a source produces the 27 symbols consisting of the English letters A to Z plus the space character, all with equal probability. Determine the entropy.

Ans. H = log2 27 ≈ 4.75 bits/symbol
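The answer can be checked with a one-line calculation (an illustrative sketch; the function name is ours):

```python
import math

def entropy_equiprobable(n_symbols: int) -> float:
    """Entropy H = log2(N) for N equally likely symbols."""
    return math.log2(n_symbols)

# 26 letters + space = 27 equally likely symbols
print(f"H = {entropy_equiprobable(27):.2f} bits/symbol")  # H = 4.75 bits/symbol
```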

Page 36: Dgital Communication Lectures

Entropy of an Information Source

If a source produces the ith symbol with a probability of P(i), the entropy is

H = –Σ P(i) log2 P(i)  bits per symbol

Where:
H = entropy in bits per symbol

Page 37: Dgital Communication Lectures

Entropy of an Information Source

Example: Consider a source that produces four symbols with probabilities of 1/2, 1/4, 1/8, and 1/8, all independent of each other. Determine the entropy.

Ans. 7/4 bits/symbol

Page 38: Dgital Communication Lectures

Entropy of an Information Source

Example: A telephone touch-tone keypad has the digits 0 to 9, plus the * and # keys. Assume the probability of sending * or # is 0.005 and the probability of sending 0 to 9 is 0.099 each. If the keys are pressed at a rate of 2 keys/s, compute the entropy and data rate for this source.

Ans: H = 3.38 bits/key; R = 6.76 bps
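The same entropy formula, applied to the keypad probabilities, can be evaluated directly (a sketch; the function name is ours):

```python
import math

def entropy(probs):
    """H = -sum(p * log2(p)), in bits per symbol."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# Touch-tone keypad: ten digits at 0.099 each, * and # at 0.005 each
probs = [0.099] * 10 + [0.005] * 2
H = entropy(probs)   # bits per key press
R = 2 * H            # data rate at 2 keys/s
print(f"H = {H:.2f} bits/key, R = {R:.2f} bps")
```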

Page 39: Dgital Communication Lectures

Channel Capacity

The limit at which data can be transmitted through a medium:

C = W log2(1 + S/N)

Where:
C = channel capacity (bps)
W = bandwidth of the channel (Hz)
S/N = signal-to-noise ratio (SNR) (unitless)

Page 40: Dgital Communication Lectures

Channel Capacity

Example: Consider a voice-grade line for which W = 3100 Hz and SNR = 30 dB (i.e., a signal-to-noise ratio of 1000:1). Determine the channel capacity.

Ans: 30.898 kbps
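The capacity formula for this example works out as follows (a sketch; the function name is ours):

```python
import math

def channel_capacity(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon channel capacity C = W * log2(1 + S/N), in bps."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Voice-grade line: W = 3100 Hz, SNR = 30 dB, i.e., 1000:1
c = channel_capacity(3100, 1000)
print(f"C = {c / 1000:.3f} kbps")  # about 30.898 kbps
```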

Page 41: Dgital Communication Lectures

Shannon’s Theorems

In a digital communication system, the aim of the designer is to convert any information into a digital signal, pass it through the transmission medium, and, at the receiving end, reproduce the digital signal exactly.

Page 42: Dgital Communication Lectures

Shannon’s Theorems

Requirements:

To code any type of information into digital format

To ensure that the data sent over the channel is not corrupted.

Page 43: Dgital Communication Lectures

Source Coding Theorem

States that “the number of bits required to uniquely describe an information source can be approximated to the information content as closely as desired.”

NOTE: Assigning short code words to high-probability symbols and long code words to low-probability symbols results in efficient coding.

Page 44: Dgital Communication Lectures

Channel Coding Theorem

States that “the error rate of data transmitted over a bandwidth limited noisy channel can be reduced to an arbitrary small amount if the information rate is lower than the channel capacity.”

Page 45: Dgital Communication Lectures

Example: Consider a source producing the symbols A and B. A is coded as 1 and B as 0, and (as the streams below show) each bit is repeated three times for channel coding.

Symbols produced: A B B A B
Bit stream: 1 0 0 1 0
Transmitted: 111000000111000
Received: 101000010111000
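The streams above can be reproduced with a 3× repetition code and a majority-vote decoder, which corrects the two channel errors (a sketch under that assumption; the function names are ours):

```python
def repeat_encode(bits: str, n: int = 3) -> str:
    """Channel-encode by repeating each bit n times."""
    return "".join(b * n for b in bits)

def majority_decode(stream: str, n: int = 3) -> str:
    """Recover each bit by majority vote over each n-bit group."""
    return "".join(
        "1" if stream[i:i + n].count("1") > n // 2 else "0"
        for i in range(0, len(stream), n)
    )

sent = repeat_encode("10010")        # A B B A B with A = 1, B = 0
received = "101000010111000"         # two bit errors from the channel
print(majority_decode(received))     # errors corrected: 10010
```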

Page 46: Dgital Communication Lectures

NOTE

Source coding is used mainly to reduce the redundancy in the signal, whereas channel coding is used to introduce redundancy to overcome the effect of noise.

Page 47: Dgital Communication Lectures

Lecture 3

Review of Probability

Page 48: Dgital Communication Lectures

Probability Theory

Probability theory is rooted in situations that involve performing an experiment with an outcome that is subject to chance.

Random Experiment – if the experiment is repeated, the outcome can differ because of the influence of an underlying random phenomenon or chance mechanism.

Page 49: Dgital Communication Lectures

Features of a Random Experiment

1. The experiment is repeatable under identical conditions.
2. On any trial of the experiment, the outcome is unpredictable.
3. For a large number of trials of the experiment, the outcomes exhibit statistical regularity. That is, a definite average pattern of outcomes is observed if the experiment is repeated a large number of times.

Page 50: Dgital Communication Lectures

Axioms of Probability

Sample point, sk
Sample space, S – totality of sample points corresponding to the aggregate of all possible outcomes of the experiment (sure event)
Null set – the null or impossible event
Elementary event – a single sample point

Page 51: Dgital Communication Lectures

Consider an experiment, such as rolling a die, with a number of possible outcomes. The sample space S of the experiment consists of the set of all possible outcomes. S = { 1, 2, 3, 4, 5, 6 }

Event – subset of S and may consist of any number of sample points

A = { 2 , 4 }

Page 52: Dgital Communication Lectures

A probability system consists of the triple:

1. A sample space S of elementary events (outcomes)
2. A class E of events that are subsets of S
3. A probability measure P(·) assigned to each event A in the class E, with the following properties:
   (a) P(S) = 1
   (b) 0 ≤ P(A) ≤ 1
   (c) If A + B is the union of two mutually exclusive events in the class E, then P(A + B) = P(A) + P(B)

Page 53: Dgital Communication Lectures

Elementary Properties of Probabilities

Property 1: P(A’) = 1 – P(A). This property helps us investigate the nonoccurrence of an event.

Property 2: If M mutually exclusive events A1, A2, …, AM have the exhaustive property A1 + A2 + … + AM = S, then P(A1) + P(A2) + … + P(AM) = 1.

Property 3: When events A and B are not mutually exclusive, the probability of the union event “A or B” equals P(A + B) = P(A) + P(B) – P(AB), where P(AB) is the joint probability.

Page 54: Dgital Communication Lectures

Example:

1. Consider an experiment in which two coins are thrown. What is the probability of getting one head and one tail?

Ans. 1/2

Page 55: Dgital Communication Lectures

Principles of Probability

Probability of an event: suppose that we now consider two different events, A and B, with probabilities Pr{A} and Pr{B}.

Disjoint events – A and B are disjoint if they cannot possibly occur at the same time.

04/11/2023mambunquin 55

For disjoint events, Pr{A or B} = Pr{A} + Pr{B}. This expresses the additivity concept: if two events are disjoint, the probability of their “sum” is the sum of the probabilities.

Page 56: Dgital Communication Lectures


Principles of Probability

Example: Consider the experiment of flipping a coin twice. List the outcomes, events, and their respective probabilities.

Answers:
Outcomes: HH, HT, TH, and TT
Events: {HH}, {HT}, {TH}, {TT}, {HH, HT}, {HH, TH}, {HH, TT}, {HT, TH}, {HT, TT}, {TH, TT}, {HH, HT, TH}, {HH, HT, TT}, {HH, TH, TT}, {HT, TH, TT}, {HH, HT, TH, TT}, and {0}
Note: the comma within the curly brackets is read as “or”.

Probabilities:
Pr{HH} = Pr{HT} = Pr{TH} = Pr{TT} = 1/4
Pr{HH, HT} = Pr{HH, TH} = Pr{HH, TT} = Pr{HT, TH} = Pr{HT, TT} = Pr{TH, TT} = 1/2
Pr{HH, HT, TH} = Pr{HH, HT, TT} = Pr{HH, TH, TT} = Pr{HT, TH, TT} = 3/4
Pr{HH, HT, TH, TT} = 1
Pr{0} = 0

Page 57: Dgital Communication Lectures


Principles of Probability

Random Variable – the mapping (function) that assigns a number to each outcome

Conditional Probability – the probability of event A given that event B has occurred:

Pr{A|B} = Pr{AB} / Pr{B}

Two events, A and B, are said to be independent if

Pr{AB} = Pr{A} Pr{B}

In set theory, the joint event AB is known as the intersection of A and B.

Page 58: Dgital Communication Lectures


Principles of Probability

Example: A coin is flipped twice. Four different events are defined.

A is the event of getting a head on the first flip.
B is the event of getting a tail on the second flip.
C is the event of a match between the two flips.
D is the elementary event of a head on both flips.

Find Pr{A}, Pr{B}, Pr{C}, Pr{D}, Pr{A|B}, and Pr{C|D}. Are A and B independent? Are C and D independent?

Page 59: Dgital Communication Lectures


Principles of Probability

Answers: The events are defined by the following combinations of outcomes:

A = {HH, HT}
B = {HT, TT}
C = {HH, TT}
D = {HH}

Therefore, Pr{A} = Pr{B} = Pr{C} = 1/2 and Pr{D} = 1/4

Pr{A|B} = 0.5 and Pr{C|D} = 1

Since Pr{A|B} = Pr{A} , the event of a head on the first flip is independent of that of a tail on the second flip.

Since Pr{C|D} ≠ Pr{C} , the event of a match and that of two heads are not independent.
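The whole example can be verified by enumerating the four equally likely outcomes (a sketch; the helper name is ours):

```python
from itertools import product

# The four equally likely outcomes of two coin flips: HH, HT, TH, TT
outcomes = ["".join(p) for p in product("HT", repeat=2)]

def pr(event):
    """Probability of an event given as a set of outcomes."""
    return len(set(event) & set(outcomes)) / len(outcomes)

A = {"HH", "HT"}   # head on the first flip
B = {"HT", "TT"}   # tail on the second flip
C = {"HH", "TT"}   # a match between the two flips
D = {"HH"}         # head on both flips

pr_A_given_B = pr(A & B) / pr(B)   # 0.5, equal to Pr{A}: independent
pr_C_given_D = pr(C & D) / pr(D)   # 1.0, not equal to Pr{C}: dependent
```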

Page 60: Dgital Communication Lectures

Lecture 4

Coding

Page 61: Dgital Communication Lectures

Consider the code M1 = 1, M2 = 10, M3 = 01, M4 = 101, and suppose the sequence Rx = 101 is received. It could be decoded as M4, as M2 followed by M1, or as M1 followed by M3 – the code is not uniquely decipherable.

Page 62: Dgital Communication Lectures

Unique Decipherability

No code word forms the starting sequence (known as prefix) of any other code word.

• M1 = 1, M2 = 01, M3 = 001, M4 = 0001

Note: The prefix restriction property is sufficient but not necessary for unique decipherability.

• M1 = 1, M2 = 10, M3 = 100, M4 = 1000

• not instantaneous

Page 63: Dgital Communication Lectures

Example 3.1

Which of the following codes are uniquely decipherable? For those that are uniquely decipherable, determine whether they are instantaneous.

(a) 0, 01, 001, 0011, 101
(b) 110, 111, 101, 01
(c) 0, 01, 011, 0110111

Page 64: Dgital Communication Lectures

Entropy Coding

A fundamental theorem exists in noiseless coding theory. The theorem states that:

For binary-coding alphabets, the average code word length is greater than, or equal to, the entropy.

Page 65: Dgital Communication Lectures

Example 3.2

Find the minimum average length of a code with four messages with probabilities 1/8, 1/8, 1/4, and 1/2, respectively.

Ans. 7/4 bits (equal to the entropy, since the probabilities are powers of 1/2)

Page 66: Dgital Communication Lectures

Variable-length Codes

One way to derive variable-length codes is to start with constant-length codes and expand subgroups.

Ex. 0, 1 (expanding this to five code words by expanding the word 1): 0, 100, 101, 110, 111

Page 67: Dgital Communication Lectures

Ex. 00, 01, 10, 11 (expanding any one of these four words into two words, say we choose 01): 00, 010, 011, 10, 11

Page 68: Dgital Communication Lectures

Two techniques for finding efficient variable-length codes

1. Huffman codes – provide an organized technique for finding the best possible variable-length code for a given set of messages

2. Shannon-Fano codes – similar to the Huffman, a major difference being that the operations are performed in a forward direction.

Page 69: Dgital Communication Lectures

Huffman Codes

Suppose that we wish to code five words, s1, s2, s3, s4, and s5 with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.

Procedure:
1. Arrange the messages in order of decreasing probability.
2. Combine the bottom two entries to form a new entry with probability that is the sum of the original probabilities.
3. Continue combining in pairs until only two entries remain.
4. Assign code words by starting at the right with the most significant bit, then move to the left, appending a bit wherever a split occurred.

Page 70: Dgital Communication Lectures

Example 3.3

Find the Huffman code for the following seven messages with probabilities as indicated:

S1 S2 S3 S4 S5 S6 S7

0.05 0.15 0.2 0.05 0.15 0.3 0.1
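The combining procedure above can be sketched in code; this version tracks only the code word lengths, which is enough to get the average length for Example 3.3 (the function name and heap-based structure are ours):

```python
import heapq

def huffman_lengths(probs):
    """Return the Huffman code word length for each symbol."""
    # Heap entries: (probability, unique id, member symbol indices).
    # The id breaks ties so the lists are never compared.
    heap = [(p, i, [i]) for i, p in enumerate(probs)]
    heapq.heapify(heap)
    lengths = [0] * len(probs)
    uid = len(probs)
    while len(heap) > 1:
        p1, _, s1 = heapq.heappop(heap)
        p2, _, s2 = heapq.heappop(heap)
        for s in s1 + s2:          # each merge adds one bit to its members
            lengths[s] += 1
        heapq.heappush(heap, (p1 + p2, uid, s1 + s2))
        uid += 1
    return lengths

probs = [0.05, 0.15, 0.2, 0.05, 0.15, 0.3, 0.1]   # S1 ... S7
lengths = huffman_lengths(probs)
avg = sum(p * l for p, l in zip(probs, lengths))
print(f"average length = {avg:.2f} bits")          # 2.60 bits
```

Tie-breaking may change the individual code words, but every Huffman code for a given set of probabilities achieves the same minimum average length.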

Page 71: Dgital Communication Lectures

Shannon-Fano Codes

1. Suppose that we wish to code five words, s1, s2, s3, s4, and s5 with probabilities 1/16, 1/8, 1/4, 1/16, and 1/2, respectively.

2. Find the Shannon-Fano code for the following seven messages with probabilities as indicated:

S1 S2 S3 S4 S5 S6 S7
0.05 0.15 0.2 0.05 0.15 0.3 0.1

Page 72: Dgital Communication Lectures

Lecture 6

Digital Transmission

Page 73: Dgital Communication Lectures


Information Capacity

Information capacity is a measure of how much information can be propagated through a communications system and is a function of bandwidth and transmission time.

Information Theory – a highly theoretical study of the efficient use of bandwidth to propagate information through electronic communications systems

Page 74: Dgital Communication Lectures


Information Capacity

In 1928, R. Hartley of Bell Telephone Laboratories developed a useful relationship among bandwidth, transmission time, and information capacity

Hartley’s Law:

I ∝ B × t

Where:
I = information capacity (bits per second)
B = bandwidth (hertz)
t = transmission time (seconds)

Page 75: Dgital Communication Lectures


Information Capacity

Shannon limit for information capacity:

I = B log2(1 + S/N)   or   I = 3.32 B log10(1 + S/N)

Where:
I = information capacity (bps)
B = bandwidth (hertz)
S/N = signal-to-noise power ratio (unitless)

Page 76: Dgital Communication Lectures


M-ary Encoding

M-ary is a term derived from the word binary. M represents a digit that corresponds to the number of conditions, levels, or combinations possible for a given number of binary variables.

It is advantageous to encode at a level higher than binary (beyond binary, or higher-than-binary, encoding) where more than two conditions are possible.

Page 77: Dgital Communication Lectures


M-ary Encoding

N = log2 M

Where:
N = number of bits necessary
M = number of conditions, levels, or combinations possible with N bits

Rearranging the above expression: M = 2^N

Page 78: Dgital Communication Lectures

Example

Calculate the number of levels if the number of bits per sample is:

(a) 8 (as in telephony)
(b) 16 (as in compact disc audio systems)

Ans. (a) 256 levels (b) 65 536 levels

Page 79: Dgital Communication Lectures

Information Capacity

Shannon-Hartley Theorem:

C = 2B log2 M

Where:
C = information capacity in bits per second
B = the channel bandwidth in hertz
M = number of levels transmitted

Page 80: Dgital Communication Lectures

Example

A telephone line has a bandwidth of 3.2 kHz and a signal-to-noise ratio of 35 dB. A signal is transmitted down this line using a four-level code. What is the maximum theoretical data rate?

Ans. 12.8 kbps
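The rate is limited by the four-level code rather than by the Shannon limit, which the numbers below make explicit (a sketch; variable names are ours):

```python
import math

B = 3200          # channel bandwidth, Hz
snr_db = 35       # signal-to-noise ratio, dB
M = 4             # levels in the line code

shannon = B * math.log2(1 + 10 ** (snr_db / 10))  # absolute Shannon limit
nyquist = 2 * B * math.log2(M)                    # limit set by the 4-level code

rate = min(shannon, nyquist)
print(f"max data rate = {rate / 1000:.1f} kbps")  # 12.8 kbps
```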

Page 81: Dgital Communication Lectures

Advantages of Digital Transmission

Noise Immunity – digital systems are more resistant than analog systems to additive noise because they use signal regeneration rather than signal amplification

It is easier to compare the error performance of one digital system to another digital system

Transmission errors can be detected and corrected more easily and more accurately

Page 82: Dgital Communication Lectures

Disadvantages of Digital Transmission

Requires significantly more bandwidth than simply transmitting the original analog signal

Analog signals must be converted to digital pulses prior to transmission and converted back to their original analog form at the receiver

Requires precise time synchronization between the clocks in the transmitters and receivers

Incompatible with older analog transmission systems

Page 83: Dgital Communication Lectures

Pulse Modulation

Pulse modulation consists of sampling analog information signals, converting the samples into discrete pulses, and transporting the pulses from a source to a destination over a physical medium.

Page 84: Dgital Communication Lectures

Sampling

In 1928, Harry Nyquist showed mathematically that it is possible to reconstruct a band-limited analog signal from periodic samples, as long as the sampling rate is at least twice the frequency of the highest frequency component of the signal.

Page 85: Dgital Communication Lectures

Sampling

Natural Sampling, Flat-topped Sampling

Aliasing (foldover distortion) – distortion created by using too low a sampling rate when coding an analog signal for digital transmission:

fa = fs – fm   (for fs/2 < fm < fs)

Where:
fa = the frequency of the aliasing distortion
fs = the sampling rate
fm = the modulating (baseband) frequency

Page 86: Dgital Communication Lectures

Example

An attempt is made to transmit a baseband frequency of 30 kHz using a digital audio system with a sampling rate of 44.1 kHz. What audible frequency would result?

Ans. 14.1 kHz
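Since the 30 kHz baseband frequency exceeds half the 44.1 kHz sampling rate, the alias lands at fs – fm (a minimal check; variable names are ours):

```python
fs = 44_100   # sampling rate, Hz
fm = 30_000   # baseband frequency, Hz (above fs/2, so it aliases)

fa = fs - fm  # alias frequency for fs/2 < fm < fs
print(fa)     # 14100 Hz = 14.1 kHz
```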

Page 87: Dgital Communication Lectures

Pulse Modulation

Methods of Pulse Modulation:
1. Pulse Width Modulation (PWM)
2. Pulse Position Modulation (PPM)
3. Pulse Amplitude Modulation (PAM)
4. Pulse Code Modulation (PCM)

Page 88: Dgital Communication Lectures
Page 89: Dgital Communication Lectures

Pulse Code Modulation (PCM)

Dynamic Range (DR) of a system is the ratio of the strongest possible signal that can be transmitted to the weakest discernible signal.

DR = 1.76 + 6.02M dB
D = fs × M

Where:
DR = dynamic range in dB
M = number of bits per sample
D = data rate in bits per second
fs = sample rate in samples per second

Page 90: Dgital Communication Lectures

Example

Find the maximum dynamic range for a linear PCM system using 16-bit quantizing.

Calculate the minimum data rate needed to transmit audio with a sampling rate of 40 kHz and 14 bits per sample.

Ans. 98.08 dB, 560 kbps
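Both answers follow directly from the two formulas above (a sketch; the function names are ours):

```python
def pcm_dynamic_range_db(bits: int) -> float:
    """DR = 1.76 + 6.02*M dB for linear PCM with M bits per sample."""
    return 1.76 + 6.02 * bits

def pcm_data_rate(sample_rate: float, bits: int) -> float:
    """D = fs * M, in bits per second."""
    return sample_rate * bits

print(pcm_dynamic_range_db(16))    # 98.08 dB for 16-bit quantizing
print(pcm_data_rate(40_000, 14))   # 560000 bps = 560 kbps
```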

Page 91: Dgital Communication Lectures

Alternative Formula for DR:

DR = Vmax / Vmin

Where:
DR = dynamic range (unitless ratio)
Vmin = the quantum value (resolution)
Vmax = the maximum voltage magnitude that can be discerned by the DAC in the receiver

Page 92: Dgital Communication Lectures

Resolution – the magnitude of one quantum, i.e., the voltage of the least significant bit: resolution = Vmax / (2^N – 1)

Quantization error – the error introduced by rounding a sample to the nearest quantization level; at most one-half the resolution: Qe = resolution / 2

Page 93: Dgital Communication Lectures

Relationship between DR and N in a PCM code:

2^N – 1 ≥ DR

Where:
N = number of bits in a PCM code, excluding the sign bit
DR = absolute value of dynamic range

For the minimum number of bits: N = log2(DR + 1)

Page 94: Dgital Communication Lectures

Example: For a PCM system with the following parameters, determine (a) minimum sample rate, (b) minimum number of bits used in the PCM code, (c) resolution, and (d) quantization error.

Maximum analog input frequency = 4 kHz
Maximum decoded voltage at the receiver = ±2.55 V
Minimum dynamic range = 46 dB
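All four parts of the example follow from the relationships on the preceding slides (a sketch; variable names are ours):

```python
import math

f_max = 4_000    # Hz, maximum analog input frequency
v_max = 2.55     # V, maximum decoded voltage at the receiver
dr_db = 46       # dB, minimum dynamic range

fs_min = 2 * f_max                       # (a) Nyquist: 8 kHz
dr = 10 ** (dr_db / 20)                  # ~199.5 as a voltage ratio
n_bits = math.ceil(math.log2(dr + 1))    # (b) 2^N - 1 >= DR  ->  8 bits
resolution = v_max / (2 ** n_bits - 1)   # (c) 2.55 / 255 = 0.01 V
q_error = resolution / 2                 # (d) 5 mV
```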

Page 95: Dgital Communication Lectures

Companding

Combination of compression at the transmitter and expansion at the receiver of a communications system

The transmission bandwidth varies directly with the bit rate. In order to keep the bit rate and thus required bandwidth low, companding is used.

Involves using a compressor amplifier at the input, with greater gain for low-level than for high-level signals. The compressor reduces the quantizing error for small signals.

Page 96: Dgital Communication Lectures

µ-Law (mu-law)

Characteristic applied to the system used by the North American telephone system:

vo = Vo ln(1 + µvi/Vi) / ln(1 + µ)

Where:
vo = output voltage from the compressor
Vo = maximum output voltage
Vi = maximum input voltage
vi = actual input voltage
µ = a parameter that defines the amount of compression (contemporary systems use µ = 255)

Page 97: Dgital Communication Lectures

A Law

Characteristic applied to the system used by the European telephone system (with A = 87.6):

vo = Vo A(vi/Vi) / (1 + ln A)   for 0 ≤ vi/Vi ≤ 1/A
vo = Vo (1 + ln(A vi/Vi)) / (1 + ln A)   for 1/A ≤ vi/Vi ≤ 1

Page 98: Dgital Communication Lectures

Example

A signal at the input to a mu-law compressor is positive with its voltage one-half the maximum value. What proportion of the maximum output voltage is produced?

Ans. 0.876 Vo
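Plugging vi = Vi/2 and µ = 255 into the compression characteristic confirms the answer (a sketch; the function name is ours):

```python
import math

def mu_law_compress(vi_over_Vi: float, mu: float = 255) -> float:
    """Mu-law compressor output as a fraction of the maximum output Vo."""
    return math.log(1 + mu * vi_over_Vi) / math.log(1 + mu)

print(f"{mu_law_compress(0.5):.3f}")  # 0.876 of the maximum output
```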

Page 99: Dgital Communication Lectures

Coding and Decoding

The process of converting an analog signal into a PCM signal is called coding and the inverse operation, converting back from digital to analog, is known as decoding.

Both procedures are often accomplished in a single IC device called a codec.

Page 100: Dgital Communication Lectures

Mu-Law Compressed PCM Coding

Segment  Voltage Range (mV)  Step Size (mV)
0        0 – 7.8             0.488
1        7.8 – 15.6          0.488
2        15.6 – 31.25        0.9772
3        31.25 – 62.5        1.953
4        62.5 – 125          3.906
5        125 – 250           7.813
6        250 – 500           15.625
7        500 – 1000          31.25

Page 101: Dgital Communication Lectures

Example

Code a positive-going signal with amplitude 30% of the maximum allowed as a PCM sample.

Ans: 11100011

Convert the 12-bit sample 100110100100 into an 8-bit compressed code.

Ans: 11011010

Page 102: Dgital Communication Lectures

Example

1. Suppose an input signal to a µ-law compressor has a positive voltage and amplitude 25% of the maximum possible. Calculate the output voltage as a percentage of the maximum output.

2. How would a signal with 50% of the maximum input voltage be coded in 8-bit PCM, using digital compression?

3. Convert a sample coded (using mu-law compression) as 11001100 to a voltage with the maximum sample voltage normalized at 1 V.

Page 103: Dgital Communication Lectures

4. Convert the 12-bit PCM sample 110011001100 to an 8-bit compressed sample.

5. Suppose a composite video signal with a baseband frequency range from dc to 4 MHz is transmitted by linear PCM, using eight bits per sample and a sampling rate of 10 MHz.
(a) How many quantization levels are there?
(b) Calculate the bit rate, ignoring overhead.
(c) What would be the maximum signal-to-noise ratio, in decibels?
(d) What type of noise determines the answer to part (c)?

Page 104: Dgital Communication Lectures

Example

The compact disc system of digital audio uses two channels with TDM. Each channel is sampled at 44.1 kHz and coded using linear PCM with sixteen bits per sample. Find:
(a) the maximum audio frequency that can be recorded (assuming ideal filters)
(b) the maximum dynamic range in decibels
(c) the bit rate, ignoring error correction and framing bits
(d) the number of quantizing levels

Page 105: Dgital Communication Lectures


Digital Modulation/Demodulation

Page 106: Dgital Communication Lectures


Digital Modulation

The transmittal of digitally modulated analog signals (carriers) between two or more points in a communications system

Sometimes referred to as digital radio because digitally modulated signals can be propagated through Earth’s atmosphere and used in wireless communications systems

Page 107: Dgital Communication Lectures

Introduction

ASK, FSK, PSK, and QAM

Page 108: Dgital Communication Lectures


Information Capacity, Bits, Bit Rate, Baud, and M-ary Encoding

Page 109: Dgital Communication Lectures


Baud and Minimum Bandwidth

Baud – rate of change of a signal on the transmission medium after encoding and modulation have occurred

• Unit of transmission rate, modulation rate, or symbol rate
• Symbols per second
• Reciprocal of the time of one output signaling element:

baud = 1 / ts

Where:
baud = symbol rate (symbols per second)
ts = time of one signaling element (seconds)

Page 110: Dgital Communication Lectures


Baud and Minimum Bandwidth

Signaling element – symbol that could be encoded as a change in amplitude, frequency, or phase

Note: Bit rate and baud rate will be equal only if timing is uniform throughout and all pulses are used to send information (i.e. no extra pulses are used for other purposes such as forward error correction.)

Page 111: Dgital Communication Lectures


Baud and Minimum Bandwidth

According to H. Nyquist, binary digital signals can be propagated through an ideal noiseless transmission medium at a rate equal to two times the bandwidth of the medium

The minimum theoretical bandwidth necessary to propagate a signal is called the minimum Nyquist bandwidth or minimum Nyquist frequency

Page 112: Dgital Communication Lectures


Baud and Minimum Bandwidth

Nyquist formulation of channel capacity:

fb = 2B log2 M

Where:
fb = channel capacity (bps)
B = minimum Nyquist bandwidth (hertz)
M = number of discrete signal or voltage levels

Page 113: Digital Communication Lectures

Baud and Minimum Bandwidth

With digital modulation, the baud and the ideal minimum Nyquist bandwidth have the same value and are equal to the bit rate divided by the number of bits encoded per symbol: B = baud = fb / N

This is true for all forms of digital modulation except FSK.

Page 114: Digital Communication Lectures

Example 1: A modulator transmits symbols, each of which has sixty-four different possible states, 10,000 times per second. Calculate the baud rate and bit rate.

Given: M = 64; symbol rate = 10,000 symbols per second
Required: baud rate and bit rate
Solution:
Baud rate = 10,000 baud or 10 kbaud
fb = baud x N = 10,000 x log2 64 = 60 kbps
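The relationship fb = baud x log2 M used in Example 1 can be checked with a short script (a sketch; the function name is illustrative):

```python
import math

def bit_rate(baud: float, M: int) -> float:
    """Bit rate fb = baud x N, where N = log2(M) bits per symbol."""
    N = math.log2(M)
    return baud * N

# Example 1 from the text: M = 64 states, 10,000 symbols per second.
fb = bit_rate(10_000, 64)
print(fb)  # 60000.0 bps, i.e. 60 kbps
```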

Page 115: Digital Communication Lectures

Amplitude-Shift Keying

Simplest digital modulation technique. A binary information signal directly modulates the amplitude of an analog carrier. Sometimes called digital amplitude modulation (DAM).

vask(t) = [1 + vm(t)] [(A/2) cos(ωc t)]

Where: vask(t) = amplitude-shift keying wave; vm(t) = digital information (modulating) signal (volts); A/2 = unmodulated carrier amplitude (volts); ωc = analog carrier radian frequency (radians per second, 2πfc)

Page 116: Digital Communication Lectures

Amplitude-Shift Keying

For logic 1, vm (t) = +1 V, and the modulated wave is vask(t) = A cos(ωc t)

Page 117: Digital Communication Lectures

Amplitude-Shift Keying

For logic 0, vm (t) = -1 V, and the modulated wave is vask(t) = 0

Page 118: Digital Communication Lectures

Amplitude-Shift Keying

The modulated wave is either A cos(ωc t) or 0. The carrier is either “on” or “off,” which is why ASK is sometimes referred to as on-off keying (OOK)
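The on-off behavior can be sketched numerically (an illustrative helper, not from the lecture; fc here is expressed in carrier cycles per bit period purely for plotting convenience):

```python
import numpy as np

def ook_waveform(bits, fc=8.0, samples_per_bit=100, A=1.0):
    """On-off keying: emit the carrier A*cos(2*pi*fc*t) during a 1 bit,
    and zero output during a 0 bit."""
    t = np.arange(samples_per_bit) / samples_per_bit
    carrier = A * np.cos(2 * np.pi * fc * t)
    return np.concatenate([carrier if b else np.zeros(samples_per_bit)
                           for b in bits])

wave = ook_waveform([1, 0, 1, 1])  # carrier, silence, carrier, carrier
```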

Page 119: Digital Communication Lectures

Amplitude-Shift Keying

Binary Input

DAM output

Page 120: Digital Communication Lectures

Amplitude-Shift Keying

The rate of change of the ASK waveform (baud) is the same as the rate of change of the binary input (bps); thus the bit rate equals the baud rate

Page 121: Digital Communication Lectures

Example 2

Determine the baud and minimum bandwidth necessary to pass a 10 kbps binary signal using amplitude-shift keying.

Given: fb = 10,000 bps; N = 1 (for ASK)
Required: baud and B
Solution:
B = fb / N = 10,000 / 1 = 10,000 Hz
Baud = fb / N = 10,000 / 1 = 10,000 baud

Page 122: Digital Communication Lectures

Frequency-Shift Keying

Low-performance type of digital modulation. A form of constant-amplitude angle modulation similar to standard frequency modulation (FM), except the modulating signal is a binary signal that varies between two discrete voltage levels rather than a continuously changing analog waveform.

Sometimes called binary FSK (BFSK)

Page 123: Digital Communication Lectures

Frequency-Shift Keying

vfsk(t) = Vc cos{2π[fc + vm(t) Δf]t}

Where: vfsk(t) = binary FSK waveform; Vc = peak analog carrier amplitude (volts); fc = analog carrier center frequency (hertz); Δf = peak change (shift) in the analog carrier frequency (hertz); vm(t) = binary input (modulating) signal (volts)

Page 124: Digital Communication Lectures

Frequency-Shift Keying

Frequency-shift keying (FSK) is the oldest and simplest form of modulation used in modems.

In FSK, two sine-wave frequencies are used to represent binary 0s and 1s.

binary 0, usually called a space binary 1, referred to as a mark

Page 125: Digital Communication Lectures

Frequency-Shift Keying

For vm(t) = +1 V, the output frequency shifts to fc + Δf

For vm(t) = -1 V, the output frequency shifts to fc - Δf

Page 126: Digital Communication Lectures

Frequency-Shift Keying

Logic 1

Logic 0

Page 127: Digital Communication Lectures

Frequency-Shift Keying

Frequency deviation is defined as the difference between either the mark or space frequency and the center frequency, or half the difference between the mark and space frequencies:

Δf = |fm - fs| / 2

Where: Δf = frequency deviation (hertz); |fm - fs| = absolute difference between the mark and space frequencies (hertz)

Page 128: Digital Communication Lectures

Frequency-Shift Keying

Frequency-shift keying. (a) Binary signal. (b) FSK signal.

Page 129: Digital Communication Lectures

Frequency-Shift Keying

FSK Bit Rate, Baud, and Bandwidth

Baud = fb / N; if N = 1, then baud = fb

B = 2(Δf + fb)

Where: B = minimum Nyquist bandwidth (hertz); Δf = frequency deviation (hertz); fb = input bit rate (bps)

Page 130: Digital Communication Lectures

Frequency-Shift Keying

Example: Determine (a) the peak frequency deviation, (b) minimum bandwidth, and (c) baud for a binary FSK signal with a mark frequency of 49 kHz, a space frequency of 51 kHz, and an input bit rate of 2 kbps.

Solution:
(a) Δf = |fm - fs| / 2 = |49 kHz - 51 kHz| / 2 = 1 kHz
(b) B = 2(Δf + fb) = 2(1,000 + 2,000) = 6,000 Hz = 6 kHz
(c) Baud = fb / N = 2,000 / 1 = 2,000 baud
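The three formulas used in this example can be bundled into one helper (a sketch; names are illustrative):

```python
def fsk_parameters(f_mark, f_space, fb):
    """Peak deviation, minimum Nyquist bandwidth, and baud for binary FSK,
    using df = |fm - fs| / 2 and B = 2 * (df + fb)."""
    df = abs(f_mark - f_space) / 2
    B = 2 * (df + fb)
    baud = fb  # N = 1 for binary FSK
    return df, B, baud

df, B, baud = fsk_parameters(49_000, 51_000, 2_000)
print(df, B, baud)  # 1000.0 6000.0 2000
```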

Page 131: Digital Communication Lectures

Frequency-Shift Keying

Gaussian Minimum-Shift Keying

Special case of FSK used in the GSM cellular radio and PCS systems. In a minimum-shift system, the mark and space frequencies are separated by half the bit rate:

fm - fs = fb / 2

Where: fm = frequency transmitted for mark (binary 1); fs = frequency transmitted for space (binary 0); fb = bit rate

Page 132: Digital Communication Lectures

Frequency-Shift Keying

If we use the conventional FM terminology, we see that GMSK has a deviation, each way from the center (carrier) frequency, of

Δf = fb / 4

which corresponds to a modulation index of

m = Δf / (fb / 2) = 0.5

Page 133: Digital Communication Lectures

Frequency-Shift Keying

Example 4: The GSM cellular radio system uses GMSK in a 200-kHz channel, with a channel data rate of 270.833 kb/s. Calculate:

(a) the frequency shift between mark and space

(b) the transmitted frequencies if the carrier (center) frequency is exactly 880 MHz

(c) the bandwidth efficiency of the scheme in b/s/Hz

Page 134: Digital Communication Lectures

Frequency-Shift Keying

Solution:
(a) fm - fs = 0.5 fb = 0.5 x 270.833 kb/s = 135.4165 kHz
(b) fmax = fc + 0.25 fb = 880 MHz + 67.708 kHz = 880.0677 MHz
fmin = fc - 0.25 fb = 880 MHz - 67.708 kHz = 879.93229 MHz
(c) The GSM system has a bandwidth efficiency of 270.833 / 200 = 1.35 b/s/Hz, comfortably under the theoretical maximum of 2 b/s/Hz for a two-level code.
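The same arithmetic can be scripted (illustrative helper names, not part of the lecture):

```python
def gmsk_parameters(fc, fb, channel_bw):
    """Mark/space shift fb/2, transmitted frequencies fc +/- fb/4, and
    bandwidth efficiency in b/s/Hz for a minimum-shift system."""
    shift = fb / 2
    f_max = fc + fb / 4
    f_min = fc - fb / 4
    efficiency = fb / channel_bw
    return shift, f_max, f_min, efficiency

# GSM numbers from Example 4: 880 MHz carrier, 270.833 kb/s, 200 kHz channel.
shift, f_max, f_min, eff = gmsk_parameters(880e6, 270.833e3, 200e3)
```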

Page 135: Digital Communication Lectures

Phase-Shift Keying

Used when somewhat higher data rates are required in a band-limited channel than can be achieved with FSK

Another form of angle-modulated, constant-amplitude digital modulation

An M-ary digital modulation scheme similar to conventional phase modulation, except with PSK the input is a binary digital signal and there is a limited number of output phases possible

Page 136: Digital Communication Lectures

Phase-Shift Keying

The input binary information is encoded into groups of bits before modulating the carrier.

The number of bits in a group ranges from 1 to 12 or more.

The number of output phases is defined by M (as described previously) and determined by the number of bits in the group (n).

Page 137: Digital Communication Lectures

Phase-Shift Keying

Binary PSK, Quaternary PSK, Offset QPSK, 8-PSK, 16-PSK

Page 138: Digital Communication Lectures

Binary Phase-Shift Keying

Simplest form of PSK, where N = 1 and M = 2. Two phases are possible (2^1 = 2) for the carrier. As the input digital signal changes state, the phase of the output carrier shifts between two angles that are separated by 180°.

Other terms: phase-reversal keying (PRK) and biphase modulation

Form of square-wave modulation of a continuous-wave (CW) signal

Page 139: Digital Communication Lectures

Delta Phase-Shift Keying

Most modems use a four-phase system (QPSK or DQPSK).

Each symbol represents two bits, so the bit rate is twice the baud rate (a dibit system).

Such a system can carry twice as much data in the same bandwidth as a single-bit system like FSK, provided the SNR is high enough

Page 140: Digital Communication Lectures

Delta Phase-Shift Keying

DQPSK Coding:

Symbol   Phase shift (deg)
00       0
01       +90
10       -90
11       180

Pi/4 DQPSK Coding:

Symbol   Phase shift (deg)
00       +45
01       +135
10       -45
11       -135

(Constellation diagrams for DQPSK and pi/4 DQPSK, showing the quadrature axes and the dibit labels 00, 01, 10, 11.)
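The two coding tables can be exercised with a small script (a sketch; the dictionaries mirror the tables above):

```python
# Dibit-to-phase-shift maps from the two tables above (degrees).
DQPSK = {"00": 0, "01": +90, "10": -90, "11": 180}
PI4_DQPSK = {"00": 45, "01": 135, "10": -45, "11": -135}

def phase_track(bits: str, table: dict) -> list:
    """Accumulate the differential phase, two input bits per symbol."""
    phase, history = 0, []
    for i in range(0, len(bits), 2):
        phase = (phase + table[bits[i:i + 2]]) % 360
        history.append(phase)
    return history

print(phase_track("0111", DQPSK))  # [90, 270]: shifts of +90 then +180
```

Note that with the pi/4 table every symbol produces a nonzero phase change, which is commonly cited as an aid to clock recovery at the receiver.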

Page 141: Digital Communication Lectures

Lecture 8

Error Control

Page 142: Digital Communication Lectures

Background: Simple Codes

Code – a set of rules that assigns a code word to every message drawn from a dictionary of acceptable messages. The code words must consist of symbols from an acceptable alphabet.

1. Baudot Code
2. ASCII Code
3. Selectric Code

Page 143: Digital Communication Lectures

Baudot Code

One of the earliest, and now essentially obsolete, paper-tape codes used in Teletype machines. Assigns a 5-bit binary number to each letter of the alphabet.

Shift instruction – provided to circumvent the shortcoming of so primitive a code: 5 bits give only 32 combinations, enough for the 26 capital letters plus space, line feed, and carriage return, but not for the digits as well

Page 144: Digital Communication Lectures

ASCII Code

American Standard Code for Information Interchange. Has become the standard for digital communication of individual alphabet symbols. Also used for very short-range communications, such as from the keyboard to the processor of a computer.

Consists of code words of 7-bit length, thus providing 128 dictionary words. An eighth bit is often added as a parity-check bit for error detection

Page 145: Digital Communication Lectures

Selectric Code

One of many specialized codes that have been widely used in the past. The Selectric typewriter was the standard of the industry before the days of electronic typewriters. Uses a 7-bit code to control the position of the typing ball. Although this permits 128 distinct code symbols, only 88 of these are used

Page 146: Digital Communication Lectures

Example

Write the ASCII codes for the characters below: B, b

Answer: B = 1000010, b = 1100010
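The answer can be reproduced with a one-line helper (illustrative name):

```python
def ascii_bits(ch: str) -> str:
    """7-bit ASCII code of a character as a bit string."""
    return format(ord(ch), "07b")

print(ascii_bits("B"), ascii_bits("b"))  # 1000010 1100010
```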

Page 147: Digital Communication Lectures

Asynchronous Transmission

Synchronizing the transmitter and receiver clocks at the start of each character

Simpler but less efficient than synchronous communication, in which the transmitter and receiver clocks are continuously locked together

Page 148: Digital Communication Lectures

TRANSMISSION MODES

The transmission of binary data across a link can be accomplished in either parallel or serial mode. In parallel mode, multiple bits are sent with each clock tick. In serial mode, 1 bit is sent with each clock tick. While there is only one way to send parallel data, there are three subclasses of serial transmission: asynchronous, synchronous, and isochronous.

Page 149: Digital Communication Lectures

Data transmission and modes

Page 150: Digital Communication Lectures

Parallel transmission

Page 151: Digital Communication Lectures

Serial transmission

Page 152: Digital Communication Lectures

Note: In asynchronous transmission, we send 1 start bit (0) at the beginning and 1 or more stop bits (1s) at the end of each byte. There may be a gap between each byte.

Page 153: Digital Communication Lectures

Example: For the following sequence of bits, identify the ASCII-encoded characters, the start and stop bits, and the parity bits (assume even parity and two stop bits).

11111101000001011110001000

Answer: AD
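The frame structure used in this example can be sketched in code (an illustrative helper, assuming 7 data bits sent LSB first, one parity bit, then the stop bits):

```python
def frame(ch: str, stop_bits: int = 2, even_parity: bool = True) -> str:
    """Asynchronous frame: start bit 0, 7 ASCII data bits LSB first,
    one parity bit, then stop bits (1s)."""
    data = format(ord(ch), "07b")[::-1]          # LSB transmitted first
    ones = data.count("1")
    parity = str(ones % 2) if even_parity else str((ones + 1) % 2)
    return "0" + data + parity + "1" * stop_bits

print(frame("A"))  # 01000001011
```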

Page 154: Digital Communication Lectures

Note: Asynchronous here means “asynchronous at the byte level,” but the bits are still synchronized; their durations are the same.

Page 155: Digital Communication Lectures

Asynchronous transmission

Page 156: Digital Communication Lectures

Note: In synchronous transmission, we send bits one after another without start or stop bits or gaps. It is the responsibility of the receiver to group the bits.

Page 157: Digital Communication Lectures

Example: For the following string of ASCII-encoded characters, identify each character (assume odd parity):

01001111010101000001011011

Answer: OT

Page 158: Digital Communication Lectures

Synchronous transmission

Page 159: Digital Communication Lectures

Parallel and Serial Transmission

There are two ways to move binary bits from one place to another:

1. Transmit all bits of a word simultaneously (parallel transfer).

2. Send only 1 bit at a time (serial transfer).

Page 160: Digital Communication Lectures

Parallel and Serial Transmission

Parallel Transfer

Parallel data transmission is extremely fast because all the bits of the data word are transferred simultaneously.

Parallel data transmission is impractical for long-distance communication because of cost and signal attenuation

Page 161: Digital Communication Lectures

Parallel and Serial Transmission

Serial Transfer

Data transfers in communication systems are made serially; each bit of a word is transmitted one after another.

The least significant bit (LSB) is transmitted first, and the most significant bit (MSB) last. Each bit is transmitted for a fixed interval of time t

Page 162: Digital Communication Lectures

Parallel and Serial Transmission

Serial data transmission.

Page 163: Digital Communication Lectures

Parallel and Serial Transmission

Serial-Parallel Conversion

Serial data can typically be transmitted faster over longer distances than parallel data. Serial buses are now replacing parallel buses in computers, storage systems, and telecommunication equipment where very high speeds are required.

Serial-to-parallel and parallel-to-serial data conversion circuits are also referred to as serializer-deserializers (serdes)

Page 164: Digital Communication Lectures

Parallel and Serial Transmission

Parallel-to-serial and serial-to-parallel data transfers with shift registers.

Page 165: Digital Communication Lectures

The Channel

Page 166: Digital Communication Lectures

Introduction

Channel – what separates the transmitter from the receiver in a communication system.

Affects communication in two ways: it can alter the form of a signal during its movement from transmitter to receiver, and it can add noise waveforms to the original transmitted signal

Page 167: Digital Communication Lectures

The Memoryless Channel

A channel is memoryless if each element of the output sequence depends only upon the corresponding input sequence element and upon the channel characteristics

Channel

Page 168: Digital Communication Lectures

The Memoryless Channel

Can be characterized by a transition matrix composed of conditional probabilities

Example: Consider the binary channel, where s_in can take on either of two values, 0 or 1. For a particular input, s_out can equal either 0 or 1.

Page 169: Digital Communication Lectures

The Memoryless Channel

The transition probability matrix is then

Note: In the absence of noise and distortion, one would expect [T] to be the identity matrix. The sum of the entries of any column of the transition matrix must be unity, since, given the value of the input, the output must take one of the two possible values.

Page 170: Digital Communication Lectures

The Memoryless Channel

Example: A digital communication system has a symbol alphabet composed of four entries, and a transition matrix given by the following:

a. Find the probability of a single transmitted symbol being in error, assuming that all four input symbols are equally probable at any time.
b. Find the probability of a correct symbol transmission.
c. If the symbols are denoted as A, B, C, and D, find the probability that the transmitted sequence BADCAB will be received as DADDAB.

Page 171: Digital Communication Lectures

The Memoryless Channel

Solution:
a. Pe | 0 sent = P10 + P20 + P30 = 1/4 + 1/4 + 1/4 = 3/4
Pe | 1 sent = P01 + P21 + P31 = 1/2 + 1/6 + 1/6 = 5/6
Pe | 2 sent = P02 + P12 + P32 = 1/6 + 1/2 + 1/6 = 5/6
Pe | 3 sent = P03 + P13 + P23 = 1/6 + 1/6 + 1/3 = 2/3
With all inputs equally likely, Pe = (1/4)(3/4 + 5/6 + 5/6 + 2/3) = 37/48

Page 172: Digital Communication Lectures

The Memoryless Channel

b. P(correct) = 1 - Pe = 1 - 37/48 = 11/48

c. P(DADDAB) = P31 P00 P33 P32 P00 P11

Page 173: Digital Communication Lectures

The Memoryless Channel

An alternative way of displaying transition probabilities is by use of the transition diagram. The summation of probabilities leaving any node must be unity.

(Transition diagrams: a general binary channel with branch probabilities P00 and P11 on the direct paths 0→0 and 1→1, and P01 and P10 on the crossover paths; and a binary symmetric channel with 1 - p on the direct paths and p on the crossovers.)

Binary Symmetric Channel (BSC) – a special case of the binary memoryless channel, one in which the two conditional error probabilities are equal.

Page 174: Digital Communication Lectures

The Memoryless Channel

The probability of error, or bit error rate (BER), following one hop is simply the crossover probability: Pe = p

Page 175: Digital Communication Lectures

The Memoryless Channel

Overall probability of correct transmission over two hops: a transmitted 1 will be received as 1 provided that no errors occur in either hop, or if an error occurs in each of the two hops (the second error cancels the first), so the 1 will still be correctly received.

Page 176: Digital Communication Lectures

The Memoryless Channel

Probability of correct transmission: Pc = (1 - p)^2 + p^2

Probability of error: Pe = 2p(1 - p)

Page 177: Digital Communication Lectures

The Memoryless Channel

For small p, the probability of error goes up approximately linearly with the number of hops.

Thus, for n binary symmetric channels in tandem, the overall probability of error is approximately n times the bit error rate for a single BSC.

Page 178: Digital Communication Lectures

The Memoryless Channel

Example: Suppose you were to design a transmission system to cover a distance of 500 km. You decide to install a repeater station every 10 km, so you require 50 such segments in your overall transmission path. You find that the bit error rate for each segment is p = 10^-6. Therefore, the overall bit error rate is approximately 50 x 10^-6 = 5 x 10^-5.

Page 179: Digital Communication Lectures

The Memoryless Channel

Example: Consider a binary symmetric channel for which the conditional probability of error p = 10^-4, and symbols 0 and 1 occur with equal probability. Calculate the following probabilities:

a. The probability of receiving symbol 0
b. The probability of receiving symbol 1
c. The probability that symbol 0 was sent, given that symbol 0 is received
d. The probability that symbol 1 was sent, given that symbol 0 is received

Page 180: Digital Communication Lectures

Answers:
a. P(B0) = 1/2
b. P(B1) = 1/2
c. P(A0|B0) = 1 - 10^-4
d. P(A1|B0) = 10^-4

Page 181: Digital Communication Lectures

Distance between code words

The distance between two equal-length binary code words is defined as the number of bit positions in which the two words differ.

Example: The distance between 000 and 111 is 3. The distance between 010 and 011 is 1.

Page 182: Digital Communication Lectures

Distance between code words

Suppose that the dictionary of code words is such that the distance between any two words is at least 2, e.g. the 4-bit words of even parity:

0000, 0011, 0101, 0110, 1001, 1010, 1100, 1111

Page 183: Digital Communication Lectures

Distance Relationships for 3-bit code

(Figure: the eight 3-bit words 000 through 111 placed at the vertices of a cube; the distance between two code words equals the number of cube edges on the shortest path between them.)

Page 184: Digital Communication Lectures

Minimum Distance Between Codes, Dmin

Up to Dmin - 1 bit errors can be detected

If Dmin is even, up to (Dmin / 2) - 1 bit errors can be corrected

If Dmin is odd, up to (Dmin - 1) / 2 bit errors can be corrected

Page 185: Digital Communication Lectures

Example: Find the minimum distance for the following code consisting of four code words:

0111001, 1100101, 0010111, 1011100

How many bit errors can be detected? How many bit errors can be corrected?
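The minimum distance of the four-word code can be computed directly (a sketch; `hamming_distance` is an illustrative helper):

```python
from itertools import combinations

def hamming_distance(a: str, b: str) -> int:
    """Number of bit positions in which two equal-length words differ."""
    return sum(x != y for x, y in zip(a, b))

words = ["0111001", "1100101", "0010111", "1011100"]
d_min = min(hamming_distance(a, b) for a, b in combinations(words, 2))
detect = d_min - 1            # detectable bit errors
correct = (d_min - 1) // 2    # correctable: floor((Dmin - 1)/2) covers
                              # both the even and odd rules above
print(d_min, detect, correct)  # 4 3 1
```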

Page 186: Digital Communication Lectures

Code Length

Where

Page 187: Digital Communication Lectures

Algebraic Codes

One simple form of this is known as the single-parity-bit check code.

Message Code Word

000 0000

001 0011

010 0101

011 0110

100 1001

101 1010

110 1100

111 1111

Page 188: Digital Communication Lectures

Error Detection

Redundancy checking: VRC (character parity), LRC (message parity), checksum, and CRC

Page 189: Digital Communication Lectures

Consider code words that add n parity bits to the m message bits to end up with code words of length m + n bits.

ai = original message bits; ci = parity-check bits:

Code word = a1 a2 a3 . . . am c1 c2 c3 . . . cn

Note: out of 2^(m+n) possible words, only 2^m are used as code words.

A valid code word satisfies [H] x (code word)^T = 0, where [H] is the parity-check matrix.

Page 190: Digital Communication Lectures

Linear Block Codes

Generator matrix: code word = message x [G]

Syndrome: [S] = received word x [H]^T

Where [G] is the generator matrix and [H] the parity-check matrix; [S] = 0 when the received word is a valid code word

Page 191: Digital Communication Lectures

Linear Block Codes

Page 192: Digital Communication Lectures

Linear Block Codes

Page 193: Digital Communication Lectures

CRC

Determine the BCS for the following data and CRC generating sequence:

Data G = 10110111; CRC generator P = 110011

Answer: BCS = 1011011101001
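The modulo-2 long division behind this answer can be sketched as follows (illustrative helper; the remainder is appended to the data to form the BCS):

```python
def crc_remainder(data: str, generator: str) -> str:
    """Modulo-2 long division: append len(generator)-1 zeros to the data
    and XOR the generator down the dividend wherever a 1 leads."""
    n = len(generator) - 1
    bits = list(data + "0" * n)
    for i in range(len(data)):
        if bits[i] == "1":
            for j, g in enumerate(generator):
                bits[i + j] = str(int(bits[i + j]) ^ int(g))
    return "".join(bits[-n:])

data, gen = "10110111", "110011"
crc = crc_remainder(data, gen)
print(data + crc)  # 1011011101001, matching the answer above
```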

Page 194: Digital Communication Lectures

Cyclic Codes

Page 195: Digital Communication Lectures

LRC and VRC

Determine the VRCs and LRC for the following ASCII-encoded message: THE CAT. Use odd parity for the VRCs and even parity for the LRC

Page 196: Digital Communication Lectures

Solution

Char T H E SP C A T LRC

HEX 54 48 45 20 43 41 54 2F

B6 1 1 1 0 1 1 1 0

B5 0 0 0 1 0 0 0 1

B4 1 0 0 0 0 0 1 0

B3 0 1 0 0 0 0 0 1

B2 1 0 1 0 0 0 1 1

B1 0 0 0 0 1 0 0 1

B0 0 0 1 0 1 1 0 1

VRC 0 1 0 0 0 1 0 0
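The parity computations in the table can be reproduced with a short script (illustrative helper names; odd parity per character, even parity per bit column):

```python
def vrc_lrc(message: str):
    """Odd-parity VRC for each 7-bit ASCII character, and an even-parity
    LRC computed column-wise (XOR of all character codes)."""
    codes = [ord(c) & 0x7F for c in message]
    vrcs = [(bin(c).count("1") + 1) % 2 for c in codes]  # odd parity bit
    lrc = 0
    for c in codes:
        lrc ^= c                                         # even column parity
    return codes, vrcs, lrc

codes, vrcs, lrc = vrc_lrc("THE CAT")
print(hex(lrc))  # 0x2f, matching the LRC column of the table
```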

Page 197: Digital Communication Lectures

Checksum

Page 198: Digital Communication Lectures

Sender site:
1. The message is divided into 16-bit words.
2. The value of the checksum word is set to 0.
3. All words including the checksum are added using one’s complement addition.
4. The sum is complemented and becomes the checksum.
5. The checksum is sent with the data.

Page 199: Digital Communication Lectures

Receiver site:
1. The message (including checksum) is divided into 16-bit words.
2. All words are added using one’s complement addition.
3. The sum is complemented and becomes the new checksum.
4. If the value of the checksum is 0, the message is accepted; otherwise, it is rejected.
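The sender and receiver procedures can be sketched together (illustrative helpers; the sample 16-bit words are arbitrary, not from the lecture):

```python
def ones_complement_sum(words):
    """16-bit one's complement addition with end-around carry."""
    total = 0
    for w in words:
        total += w
        total = (total & 0xFFFF) + (total >> 16)  # fold carry back in
    return total

def checksum(words):
    """Sender side: sum all words (checksum field 0), then complement."""
    return ~ones_complement_sum(words) & 0xFFFF

data = [0x4500, 0x0073, 0x0000, 0x4000]
cs = checksum(data)
# Receiver side: summing data plus checksum and complementing gives 0.
assert (~ones_complement_sum(data + [cs]) & 0xFFFF) == 0
```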

Page 200: Digital Communication Lectures

Error Correction

Retransmission – ARQ (automatic repeat request)

Forward error correction (FEC) – Hamming code

Page 201: Digital Communication Lectures

Example: For a 12-bit data string of 101100010010, determine the number of Hamming bits required, arbitrarily place the Hamming bits into the data string, determine the logic condition of each Hamming bit, assume an arbitrary single-bit transmission error, and prove that the Hamming code will successfully detect the error.

4, 8, 9, 13, 17
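The required number of Hamming bits follows from the condition 2^n >= m + n + 1, which can be checked with a small helper (a sketch; the function name is illustrative):

```python
def hamming_bits_needed(m: int) -> int:
    """Smallest n with 2**n >= m + n + 1 (m data bits, n Hamming bits)."""
    n = 1
    while 2 ** n < m + n + 1:
        n += 1
    return n

print(hamming_bits_needed(12))  # 5: a 12-bit string needs 5 Hamming bits
```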

Page 202: Digital Communication Lectures

Example: Determine the Hamming bits for the ASCII character “B”. Insert the Hamming bits into every other bit location starting from the left.

Determine the Hamming bits for the ASCII character “C” (use odd parity and two stop bits). Insert the Hamming bits into every other location starting from the right.

0010