
Principles of Communications
Chapter I: Introduction

Yongchao Wang

Email: ychwang@mail.xidian.edu.cn

Xidian University State Key Lab. on ISN

September 24, 2014

Textbooks and References

◮ Zhang Hui, Cao Lina, Modern Communication Principles and Technologies (《现代通信原理与技术》), Xi'an: Xidian University Press.

◮ Fan Changxin, Principles of Communications (English edition), Beijing: Publishing House of Electronics Industry, 2010.7.

◮ Fan Changxin, Cao Lina, Principles of Communications (《通信原理》), Beijing: National Defense Industry Press.

◮ John Proakis, Digital Communications (Sixth Edition).

◮ The slides used in this class can be downloaded from web.xidian.edu.cn/ychwang


Historical Review

Communication: transmission and exchange. In this class, we focus on transmission technology.

◮ Origin of ancient communication: Beacon-fire

◮ Two modes of information communication:
  ◮ transferred by manpower or mechanical methods, such as the postal service;
  ◮ transferred by electricity or optics, named telecommunication.

◮ Development of modern communication:
  ◮ (land) mobile communication: 3G/4G;
  ◮ satellite communication;
  ◮ fiber communication;
  ◮ ..., etc.


Message, Information, and Signal

◮ Message: speech, letters, figures, images etc...

◮ Information: the effective content of a message. “Quantity of message” ≠ “information content”.

◮ Signal: the carrier of message.

Measurement of Information

The information content I contained in a message should have the following attributes:

◮ I is a function of the occurrence probability P(x) of the message, i.e., I = I[P(x)].

◮ The smaller P(x) is, the larger I is.

◮ The information content of a message consisting of independent events equals the sum of the information contents of the individual events, i.e., I[P(x1), · · · , P(xn)] = I[P(x1)] + · · · + I[P(xn)].


Definition of Information

Usually, we use the following definition of information:

I = loga(1/P(x)) = − loga P(x).

About parameter a

◮ If a = 2, the unit of I is bit;

◮ If a = e (the natural logarithm base, e = 2.71828...), the unit of I is nat;

◮ If a = 10, the unit of I is hartley (see the sketch below).
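As a quick numerical illustration (a minimal sketch, not from the slides; the function name is our own), the same event can be measured in all three units:

import math

def information_content(p, base=2.0):
    # I = -log_a P(x): information content of an event with probability p
    return -math.log(p, base)

# A fair coin flip (P(x) = 1/2) in the three units above:
print(information_content(0.5, 2))         # 1.0 bit
print(information_content(0.5, math.e))    # ~0.693 nat
print(information_content(0.5, 10))        # ~0.301 hartley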

5 / 25

Difference between analog/digital signals

◮ Analog: the information-bearing parameter of the signal varies continuously;

◮ Digital: the information-bearing parameter of the signal takes discrete values.

Figure: Comparison of analog signals and digital signals (waveforms s(t) versus t for analog signals and for digital signals/symbols).


Advantages of digital communication

1. Error correcting techniques can be used.

2. Digital encryption can be used.

3. Digital communication equipment:
  ◮ design and manufacture are easier;
  ◮ weight and size are smaller.

4. Digital signals can be compressed by source coding to reduce redundancy.

5. Output S/N increases with bandwidth according to an exponential law.

6. Correct decisions can be regenerated at each hop, which makes digital transmission suitable for relay systems.

7. Different kinds of analog and digital messages can be integrated for transmission.

8. Finite number of possible values of signals.


Digital Communication Model

Figure: Digital communication system model. Transmitter: information source → source coding (compression, encryption) → channel coding → modulation. Channel: with additive noise. Receiver: demodulation → channel decoding → source decoding (decryption, decompression) → information sink. Synchronization spans both transmitter and receiver.


Functions of the Units

◮ Source coding and decoding: Compression (PCM and ∆M , inchapter 4) and Encryption.

◮ Channel coding: error control coding: convolutional coding,turbo coding, LDPC coding.

◮ Modulation: ASK, PSK, FSK, APK (in chapter 6).

◮ Channel: fading channel, additive white Gaussian noise (AWGN) channel.

◮ Synchronization: carrier sync, bit sync, group sync (in chapter 7).


Efficiency and Reliability of Digital Communication System

Efficiency measures

◮ Transmission rate
  ◮ Symbol rate (RB): the number of symbols transmitted per unit time (s); units: Baud, symbols/s.
  ◮ Information rate (Rb): the number of bits transmitted per unit time (s); unit: bit/s.
  ◮ For an M-ary system, Rb = log2 M · RB.
  ◮ For a discrete information source whose symbols are independent, each occurring with probability P(xi), the entropy (average information content per symbol) is

    H(x) = − ∑_{i=1}^{n} P(xi) log2 P(xi).

    Then we have Rb = H(x) · RB (see the sketch after this list).

◮ The utilization factor of the frequency band: η = RB/B (Baud/Hz) or η = Rb/B (bit/s/Hz).
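A minimal numeric sketch of these definitions (the symbol rate and probabilities are illustrative values, not from the slides):

import math

def entropy(probs):
    # H(x) = -sum P(xi) * log2 P(xi), in bits per symbol
    return -sum(p * math.log2(p) for p in probs if p > 0)

RB = 1000.0                         # symbol rate in Baud (assumed)
probs = [0.5, 0.25, 0.125, 0.125]   # quaternary source probabilities (assumed)
H = entropy(probs)                  # 1.75 bits/symbol
print(H * RB)                       # Rb = H * RB = 1750 bit/s
print(math.log2(4) * RB)            # equiprobable bound: Rb = log2(M) * RB = 2000 bit/s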


Efficiency and Reliability of Digital Communication System

Reliability measures: Error Probability

◮ Symbol error probability Pe (two definitions)
  ◮ obtained in practice (simulation):

    Pe = (the number of received symbols in error) / (the total number of transmitted symbols).

  ◮ obtained in theory (analytic expression): the probability that a transmitted symbol is received in error over the channel.

◮ Bit error probability Pb.

◮ Relation between Pe and Pb: Pe ≥ Pb. If a Gray code is applied to the M-ary symbols, we have

  Pb ≈ Pe / log2 M.

A code sketch of the empirical definition follows.
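A toy Monte Carlo sketch of the empirical definition (the flip probability, alphabet, and names here are our own illustration, not the lecture's):

import math, random

def estimate_pe(tx, rx):
    # empirical Pe: received symbols in error / total transmitted symbols
    errors = sum(1 for a, b in zip(tx, rx) if a != b)
    return errors / len(tx)

M = 4                                    # quaternary alphabet (assumed)
random.seed(0)
tx = [random.randrange(M) for _ in range(100_000)]
# hypothetical channel: each symbol is replaced by a different one w.p. 0.01
rx = [s if random.random() > 0.01
      else random.choice([x for x in range(M) if x != s]) for s in tx]
pe = estimate_pe(tx, rx)
print(pe, pe / math.log2(M))             # Pe, and the Gray-mapping estimate of Pb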


Wireless Channel

Signals are transmitted and received via the propagation of electromagnetic waves in space.

◮ Division of frequency band.

◮ Ground wave propagation: frequency ≤ 2 MHz; good diffraction ability.

◮ Sky wave propagation
  ◮ reflected by the ionosphere: frequency 2 ∼ 30 MHz.
  ◮ short-distance line-of-sight: ground relay, ≤ 50 km.
  ◮ long-distance line-of-sight: satellite communication, stratosphere communication (HAPS: high altitude platform station, 3 ∼ 22 km).

◮ Scattering: troposphere scattering / meteor-trail scattering.

◮ Besides, cellular communications.


Wired Channel

◮ open wires: rarely used now.

◮ symmetrical cables: telephone.

◮ coaxial cables: CATV.

◮ fiber: invented by Charles K. Kao (Chinese-born Nobel Prize laureate, also named the “Father of Fiber Communications”). Transmission loss is about 0.2 dB/km; the transmission rate can exceed 100 Gbit/s.


Channel Models

There are different definitions of channels for the discussion of communication system performance.

Figure: Channel definitions. Input → encoder → modulator → media (between T-transformer and R-transformer) → demodulator → decoder → output. The modulation channel spans everything between the modulator and the demodulator; the coding channel spans everything between the encoder and the decoder (i.e., it also contains the modulator and demodulator).


Modulation Channel

Figure: Modulation channel model: input ei(t) enters f[·]; noise n(t) is added; the output is eo(t).

The relationship between the input ei(t) and the output eo(t) can be expressed as

eo(t) = f[ei(t)] + n(t).

Normally, f[ei(t)] can be expressed as the convolution of the channel's unit impulse response with the input signal, i.e.,

eo(t) = ei(t) ∗ c(t, τ) + n(t).

According to the characteristics of c(t, τ) ↔ C(ω) within the signal frequency band, we have:

◮ C(ω) is constant, i.e., eo(t) = c · ei(t) + n(t);
◮ C(ω) is time-invariant, i.e., eo(t) = ei(t) ∗ c(t) + n(t);
◮ C(ω) is time-variant, i.e., eo(t) = ei(t) ∗ c(t, τ) + n(t);
◮ examples in practice (see the sketch below).
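A minimal sketch of the time-invariant case (sample rate, input, impulse response, and noise level are all assumed for illustration):

import numpy as np

rng = np.random.default_rng(0)
fs = 1000                                  # sample rate in Hz (assumed)
t = np.arange(0, 1, 1 / fs)
e_i = np.cos(2 * np.pi * 50 * t)           # input signal ei(t)
c = np.array([1.0, 0.5])                   # two-tap impulse response c(t) (assumed)
n = 0.01 * rng.standard_normal(t.size)     # additive noise n(t)
e_o = np.convolve(e_i, c)[: t.size] + n    # eo(t) = ei(t) * c(t) + n(t)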


Coding Channel

Figure: Binary coding channel: priors P(0), P(1); correct transitions P(0/0), P(1/1); error transitions P(1/0), P(0/1).

◮ The coding channel is a digital (discrete) channel: its inputs and outputs are discrete/digital signals. Its influence on the signal is to change the input digital sequence into another output digital sequence. Channel noise and other factors cause output errors, so the relationship between the input and output digital sequences is represented by a group of transition probabilities.

◮ P(0) and P(1) are the prior probabilities of sending “0” and “1”; P(0/0) and P(1/1) are the correct transition probabilities; P(1/0) and P(0/1) are the error transition probabilities.

◮ The bit error rate (BER) is Pb = P(0)P(1/0) + P(1)P(0/1) (evaluated in the sketch below).
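Evaluating the BER formula directly (the probabilities are illustrative numbers, not from the slides):

def ber(p0, p10, p01):
    # Pb = P(0)P(1/0) + P(1)P(0/1), with P(1) = 1 - P(0)
    return p0 * p10 + (1.0 - p0) * p01

print(ber(0.5, 0.01, 0.02))   # 0.015 for these assumed probabilities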

Constant Parameter Channel

◮ The influence of a constant parameter channel on the signal is deterministic, or changes extremely slowly. The amplitude-frequency characteristic and the phase-frequency characteristic are used to describe this influence: H(ω) = |H(ω)|e−jϕ(ω).

◮ Ideal constant parameter channel characteristics:
  ◮ |H(ω)| = K;
  ◮ ϕ(ω) = ωτ, or ∂ϕ(ω)/∂ω = τ.

◮ What do they mean? The impulse response is h(t) = Kδ(t − τ): the output is a scaled, delayed replica of the input (distortionless transmission).

◮ Amplitude-frequency distortion: the attenuation differs for different frequency components.

◮ Phase-frequency distortion: the delay differs for different frequency components.


Random Parameter Channel

◮ the transmission attenuation varies with time;

◮ transmission delay varies with time;

◮ multipath effect.

Assume the transmitted signal is A cos ω0t and let it propagate to the receiver over n paths; the received signal is

R(t) = ∑_{i=1}^{n} ri(t) cos ω0[t − τi(t)] = ∑_{i=1}^{n} ri(t) cos[ω0t + ϕi(t)].

Let Xc(t) = ∑_{i=1}^{n} ri(t) cos ϕi(t) and Xs(t) = ∑_{i=1}^{n} ri(t) sin ϕi(t); we obtain

R(t) = Xc(t) cos ω0t − Xs(t) sin ω0t = V(t) cos[ω0t + ϕ(t)].

◮ What are the characteristics of the resultant R(t)? (A numeric sketch follows.)
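A small sketch of the quadrature decomposition at one instant (path gains and phases drawn at random, purely illustrative):

import numpy as np

rng = np.random.default_rng(1)
n_paths = 8
r = rng.uniform(0.1, 1.0, n_paths)         # path gains ri (assumed)
phi = rng.uniform(0, 2 * np.pi, n_paths)   # path phases phi_i = -w0 * tau_i (assumed)

Xc = np.sum(r * np.cos(phi))               # in-phase component
Xs = np.sum(r * np.sin(phi))               # quadrature component
V = np.hypot(Xc, Xs)                       # envelope V(t); for many paths it tends toward a Rayleigh distribution
print(Xc, Xs, V)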


Random Parameter Channel

Figure: Two-path channel model (ei(t) split into a direct path and a path delayed by τ, then summed to give eo(t)), and the magnitude response |H(jω)| with nulls at π/τ, 3π/τ, ... The two-path channel is a frequency-selective fading channel.

◮ Impulse response: h(t) = δ(t) + δ(t − τ);

◮ Transfer function: H(jω) = 1 + e−jωτ ;

◮ Fading nulls (response falls to zero) at the frequencies π/τ, 3π/τ, · · · ;

◮ To avoid fading, the transmitted signal spectrum should be located between two adjacent zero points;

◮ Coherence bandwidth: ∆f = 1/τ. Pulse width: 3τ < Ts < 5τ.

◮ How about multipath? (See the sketch below.)
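Evaluating the two-path magnitude response numerically (τ is an assumed value):

import numpy as np

tau = 1e-3                                 # differential path delay in seconds (assumed)
w = np.linspace(0, 4 * np.pi / tau, 4001)  # angular frequency axis
H = 1 + np.exp(-1j * w * tau)              # H(jw) = 1 + exp(-j*w*tau)
mag = np.abs(H)                            # equals 2|cos(w*tau/2)|
idx = np.argmin(np.abs(w - np.pi / tau))   # sample nearest the first predicted null
print(mag[idx])                            # ~0: the response indeed fades at w = pi/tau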


Noise

Noise always exists in communication systems. Since such noise is superimposed on the signal, it is called additive noise.

Category

◮ According to the sources: man-made noise and natural noise.

◮ According to the characteristics (temporal/frequency):
  ◮ impulse noise;
  ◮ narrowband noise;
  ◮ fluctuation noise, e.g., thermal noise with available power

    P = kTB,

    where k = 1.38 × 10^−23 J/K is the Boltzmann constant, T is the thermodynamic temperature (K), and B is the bandwidth (Hz). (A numeric check follows after this list.)

◮ In communication systems, the noise is usually modeled as additive white Gaussian noise.
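A one-line check of the thermal noise power (assuming the standard available-power form P = kTB; the temperature and bandwidth are illustrative):

k = 1.380649e-23      # Boltzmann constant, J/K
T = 290.0             # room temperature, K (assumed)
B = 1e6               # bandwidth, Hz (assumed)
P = k * T * B         # ~4.0e-15 W, i.e. about -114 dBm in 1 MHz
print(P)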


Channel Capacity

Definition

◮ Channel capacity is the tightest upper bound on the rate of information that can be reliably transmitted over a communications channel.

◮ We have defined two kinds of generalized channel models: the modulation channel and the coding channel.

◮ The modulation channel is continuous and is characterized by the channel capacity of a continuous channel;

◮ the coding channel is discrete and is characterized by the channel capacity of a discrete channel.

◮ Here we only study the former.


Shannon Formula

◮ For a continuous channel, the frequency bandwidth is B (Hz), the input signal is x(t), the noise n(t) is Gaussian and white, and the output is

y(t) = x(t) + n(t)

◮ The power of the input signal is S and the power of the channel noise is N; the mean and variance of n(t) are zero and σn², respectively, and its power spectral density is n0. The channel capacity is characterized by

  C = B log2(1 + S/N)  (b/s),  with N = n0B.

◮ The above is the Shannon formula.

◮ It gives the maximum transmission rate under the given parameters (a numeric evaluation follows).

◮ If the information transmission rate is less than or equal to the channel capacity, then there always exists some channel coding scheme that realizes error-free transmission; if the transmission rate exceeds the channel capacity, error-free transmission cannot be realized.
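Evaluating the Shannon formula numerically (the function name is ours; the 3 kHz / 30 dB values anticipate the example at the end of the chapter):

import math

def capacity(B, snr):
    # C = B * log2(1 + S/N), in bit/s
    return B * math.log2(1 + snr)

print(capacity(3e3, 1000))   # ~2.99e4 bit/s for B = 3 kHz, S/N = 30 dB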

Some results from Shannon formula

◮ Channel capacity C increases as the signal power S increases:

  lim_{S→∞} C = lim_{S→∞} B log2(1 + S/(n0B)) → ∞.

◮ Channel capacity C increases as the noise power N is reduced:

  lim_{N→0} C = lim_{N→0} B log2(1 + S/N) → ∞.

◮ Channel capacity C increases as the signal bandwidth B increases, but only toward a finite limit:

  lim_{B→∞} C = lim_{B→∞} B log2(1 + S/(n0B)) ≈ 1.44 S/n0.

◮ The Shannon formula only gives the upper bound of the information transmission rate; it does not show how to realize it. (A numeric check of the bandwidth limit follows.)
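A quick numeric check of the bandwidth asymptote (S and n0 are assumed values):

import math

S, n0 = 1.0, 1e-3                      # signal power and noise PSD (assumed)
for B in (1e2, 1e4, 1e6):
    C = B * math.log2(1 + S / (n0 * B))
    print(B, C)                        # C grows with B but saturates
print((S / n0) * math.log2(math.e))    # asymptote 1.44*S/n0 ≈ 1443 bit/s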


Applications

◮ For a desired channel capacity C, B and S/N admit different choices.

◮ If we increase the channel frequency bandwidth, we can use a smaller signal-to-noise ratio S/N, and vice versa.

◮ If the signal-to-noise ratio S/N is fixed, increasing B can reduce the transmission time.

◮ If the channel capacity C is fixed, we can use either (B1, S1/N1) or (B2, S2/N2), i.e.,

  B1 log2(1 + S1/N1) = B2 log2(1 + S2/N2).

◮ This gives great freedom in designing the system (see the sketch below).
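A sketch of this trade-off, solving S/N = 2^(C/B) − 1 for several bandwidths at a fixed C (all values assumed):

import math

def snr_needed(C, B):
    # from C = B*log2(1 + S/N): S/N = 2^(C/B) - 1
    return 2 ** (C / B) - 1

C = 3e4                                # target capacity in bit/s (assumed)
for B in (3e3, 6e3, 12e3):
    snr = snr_needed(C, B)
    print(B, 10 * math.log10(snr))     # required S/N in dB falls as B grows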


Example

A picture consists of 2.25 × 10^6 pixels. Each pixel is expressed by 16 levels, and each level is equally probable. Assume the channel has B = 3 kHz and S/N = 30 dB. Determine the minimum time for transmitting the picture.

◮ Information in each pixel:

  Ii = log2(1/P(Ni)) = log2 16 = 4 bit.

◮ Information of the whole picture:

  Itotal = 2.25 × 10^6 × Ii = 9 × 10^6 bit.

◮ Channel capacity:

  C = B log2(1 + S/N) = 3 × 10^3 × log2(1 + 10^3) ≈ 3 × 10^4 bit/s.

◮ Minimum time:

  tmin = Itotal/C = 300 s.
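The same computation as a script (a check of the arithmetic above):

import math

pixels = 2.25e6
Ii = math.log2(16)                         # 4 bits per pixel
Itotal = pixels * Ii                       # 9e6 bits in the picture
C = 3e3 * math.log2(1 + 10 ** (30 / 10))   # ~3e4 bit/s for B = 3 kHz, S/N = 30 dB
print(Itotal / C)                          # ~300 s minimum transmission time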
