Copyrights @ Higher Education Commission


Islamabad

Lahore Karachi Peshawar Quetta

All rights are reserved. No part of this publication may be reproduced or transmitted, in any form or by any means – including, but not limited to, electronic, mechanical, photocopying, recording, or otherwise – or used for any commercial purpose whatsoever without the prior written permission of the publisher and, if the publisher considers it necessary, a formal license agreement with the publisher may be executed.

Project: “Monograph and Textbook Writing Scheme” aims to develop a culture of writing and to build an authorship cadre among the teaching and research community of higher education institutions in the country. For information, please visit: www.hec.gov.pk

HEC – Cataloging in Publication (CIP Data):

Umrani, Fahim Aziz.

Rudiments of Electronic Communication Systems/ Fahim Aziz Umrani,

Saima Mehar and Abdul Waheed Umrani

1. Electronic Communication. 2. Communication Systems-Basics.

I. TITLE. II. Saima Mehar. III. Umrani, Abdul Waheed

ISBN: 978-969-417-216-3

621.382 ddc23.

First Edition: 2019

Electronic Version Only

Published By: Higher Education Commission – Pakistan

Disclaimer: The publisher has used its best efforts for this publication through a rigorous system of evaluation and quality standards, but does not assume, and hereby disclaims, any liability to any person for any loss or damage caused by errors or omissions in this publication, whether such errors or omissions result from negligence, accident, or any other cause.


Contents

Preface . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 9

Organization of Text . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 12

Learning Features . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

Acknowledgments . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15

1 Introduction to Communication Systems . . . . . . . . . . . . . . . . . . . . . . . 17

1.1 What is Telecommunication? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

1.2 History of Communication Systems . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 18

1.3 Communication Systems Terminology . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19

1.4 The Electromagnetic Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 20

1.5 Modes of Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 23

1.6 Data Transmission Mediums . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24

1.6.1 Guided Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25

1.6.2 Unguided Media . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 27

1.7 The dB in Communication Systems Application . . . . . . . . . . . . . . . . . . . . . 28

1.8 Scope and Application Areas of Telecommunication . . . . . . . . . . . . . . . . . 28

I Part ONE - Telecommunication Techniques

2 Amplitude Modulation & Demodulation . . . . . . . . . . . . . . . . . . . . . . . . . 37

2.1 What is Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38


2.2 Need of Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38

2.3 Modulation Schemes . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39

2.4 Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40

2.4.1 How AM Works? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41

2.4.2 Modulation Index and Percentage of Modulation . . . . . . . . . . . . . . . . 42

2.4.3 The AM Equation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

2.4.4 Bandwidth of AM Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 46

2.4.5 Power of AM Signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 47

2.5 Types of Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 48

2.5.1 Double-Sideband Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

2.5.2 Single Sideband Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

2.5.3 Vestigial Sideband Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 49

2.5.4 Quadrature Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2.6 Amplitude Demodulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2.6.1 AM Detection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 50

2.6.2 Superheterodyne Receivers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

2.6.3 Balanced Modulator . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 51

2.7 Pros and Cons of AM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3 Angle Modulation & Demodulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 59

3.1 Introduction to Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.2 Introduction to Frequency Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 60

3.2.1 How Frequency Modulation Works? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 61

3.2.2 Frequency Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 62

3.2.3 Power in the FM Wave . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.2.4 Direct & Indirect FM Generation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

3.3 Introduction to Phase Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 66

3.3.1 Working of Phase Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 67

3.3.2 Mathematical Representation of PM . . . . . . . . . . . . . . . . . . . . . . . . . . 68

3.3.3 Phase Deviation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

3.4 Modulation Index in Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 68

3.5 Sidebands in Angle Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 69

3.6 Relationship Between PM and FM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 70

3.7 FM & PM Demodulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.7.1 Slope Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 71

3.7.2 Ratio Detector . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

3.7.3 Phase Locked Loop (PLL) Demodulator . . . . . . . . . . . . . . . . . . . . . . . . 72

3.8 AM and FM: A Comparison . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72


4 Digital Modulation & Demodulation . . . . . . . . . . . . . . . . . . . . . . . . . . . 79

4.1 Introduction to Digital Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.1.1 What is Digital Communication System . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.1.2 Why Digital Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 80

4.2 Elements of a Digital Communication System . . . . . . . . . . . . . . . . . . . . . . 82

4.3 Introduction to Digital Signal Processing . . . . . . . . . . . . . . . . . . . . . . . . . . . 84

4.3.1 Key Components of a Digital Signal Processor . . . . . . . . . . . . . . . . . . . 84

4.3.2 Analog-to-Digital Conversion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 85

4.3.3 Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 86

4.3.4 Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

4.4 Serial and Parallel Transmission . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 87

4.4.1 Serial Transmission of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 88

4.4.2 Parallel Transmission of Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

4.5 Analog-to-Digital (Pulse) Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 89

4.5.1 Pulse Amplitude Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 91

4.5.2 Pulse Duration Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

4.5.3 Pulse Position Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

4.5.4 Pulse Code Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 92

4.5.5 Differential Pulse Code Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.5.6 Delta Modulation . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 93

4.6 Companding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 94

4.7 Digital-Digital Modulation: Line Coding Schemes . . . . . . . . . . . . . . . . . . . 94

4.7.1 Unipolar Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 95

4.7.2 Polar Line Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97

4.8 Comparison Between Line Coding Schemes . . . . . . . . . . . . . . . . . . . . . . . 98

4.9 Digital-Analog Modulation: Keying Techniques . . . . . . . . . . . . . . . . . . . . . 98

4.9.1 Amplitude Shift Keying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.9.2 Frequency Shift Keying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 99

4.9.3 Phase Shift Keying . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 100

4.10 Constellation Diagram . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

5 Multiplexing and Multiple Access Techniques . . . . . . . . . . . . . . . . . 109

5.1 Introduction to Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

5.2 Need of Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 110

5.3 Multiple Access . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 111

5.4 Types of Multiplexing/Multiple Access . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

5.5 Frequency Division Multiplexing (FDM) . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

5.5.1 Working of FDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 112

5.5.2 Features of FDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

5.5.3 Applications . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115


5.6 Time Division Multiplexing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

5.6.1 Working of TDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115

5.6.2 Features of TDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5.7 Code Division Multiplexing/Multiple Access . . . . . . . . . . . . . . . . . . . . . . 118

5.7.1 Working Principles of CDMA . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118

5.7.2 Code Spreading and De-Spreading . . . . . . . . . . . . . . . . . . . . . . . . . . 120

6 Spread Spectrum Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . 125

6.1 Introduction to Spread Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6.2 Why Use Spread Spectrum? . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

6.2.1 Working Principle of Spread Spectrum . . . . . . . . . . . . . . . . . . . . . . . . 128

6.3 Classification of Spread Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

6.4 Direct Sequence Spread Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 128

6.4.1 Working Principle of DSSS . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

6.4.2 Performance of DSSS in the Presence of Interference . . . . . . . . . . . . . 130

6.5 Frequency Hopping Spread Spectrum . . . . . . . . . . . . . . . . . . . . . . . . . . . 131

6.6 Introduction to Pseudo-Noise Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . 132

6.6.1 Pseudo Random Properties . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 132

6.6.2 Maximal Length Sequence . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

7 Orthogonal Frequency Division Multiplexing . . . . . . . . . . . . . . . . . . . 137

7.1 Basics of Wideband and Narrowband Communication . . . . . . . . . . . . . 138

7.1.1 Narrowband Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

7.1.2 Wideband Communication . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

7.2 Challenges in Wireless Communication . . . . . . . . . . . . . . . . . . . . . . . . . . 138

7.2.1 Coherence Bandwidth . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

7.3 Delay Spread & Inter Symbol Interference . . . . . . . . . . . . . . . . . . . . . . . . 139

7.4 History of OFDM . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

7.5 Basics of Orthogonal Frequency Division Multiplexing . . . . . . . . . . . . . . 141

7.5.1 Description of an OFDM System . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 142

7.5.2 Orthogonality of Sub-Carriers . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

7.6 OFDM Based 4G Technologies . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 143

7.6.1 Long Term Evolution-LTE . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

7.6.2 WiMAX . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 144

8 Introduction to Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . 149

8.1 Introduction to Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

8.1.1 History of Information Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 150

8.2 Information Content of a Message . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

8.2.1 Common Sense Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151

8.2.2 Technical Approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 151


8.3 Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

8.4 Source Encoding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 154

8.5 Huffman Coding . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

8.6 Shannon-Hartley Capacity Theorem . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 155

II Part TWO - Telecommunication Systems

9 The Basics of Satellite Communication . . . . . . . . . . . . . . . . . . . . . . . . 161

9.1 Introduction to Satellite Communication . . . . . . . . . . . . . . . . . . . . . . . . . . 162

9.2 Terms Related to Satellite Communication . . . . . . . . . . . . . . . . . . . . . . . . 162

9.3 Satellite Orbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

9.3.1 Low Earth Orbit (LEO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 163

9.3.2 Medium Earth Orbit (MEO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

9.3.3 Geostationary Earth Orbit (GEO) . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 164

9.3.4 Comparison of LEO, MEO and GEO . . . . . . . . . . . . . . . . . . . . . . . . . . 165

9.4 Position of Satellite Orbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 165

9.4.1 Gravity and Orbit Positioning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

9.4.2 Circular and Elliptical Orbits . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 166

9.4.3 Concept of Longitude and Latitude . . . . . . . . . . . . . . . . . . . . . . . . . . 168

9.4.4 Concept of Elevation and Azimuth . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

9.5 How Satellite Communication Works? . . . . . . . . . . . . . . . . . . . . . . . . . . . . 168

9.6 Application of Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

9.6.1 Communication Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

9.6.2 Digital Satellite Radio . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

9.6.3 Navigation Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 170

9.6.4 Surveillance Satellite . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 171

10 Introduction to Optical Communication System . . . . . . . . . . . . . . . 175

10.1 Introduction to Optical Communication . . . . . . . . . . . . . . . . . . . . . . . . . 176

10.2 Basics of Light . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 176

10.2.1 Reflection . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

10.2.2 Refraction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 177

10.3 Block Diagram of Optical Fiber Communication . . . . . . . . . . . . . . . . . . . 178

10.4 Optical Fiber Waveguide . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

10.4.1 Optical Fiber Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 179

10.4.2 Formation of Fiber Optic Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

10.4.3 Characteristics of Fiber Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 180

10.5 Transmission of Light through Fibre Optics . . . . . . . . . . . . . . . . . . . . . . . . . 181

10.5.1 Ray Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 181

10.5.2 Limitations of Ray Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 182

10.5.3 Mode Theory . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 183


10.6 Types of Fiber Optic Cable . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

10.6.1 Single Mode Step Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 184

10.6.2 Multimode Step Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

10.6.3 Multimode Grade Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 185

10.7 Optical Sources . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

10.7.1 Light Emitting Diode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 186

10.7.2 LASERs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 187

10.8 Optical Detectors . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

10.8.1 PN Photodiode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

10.8.2 PIN Photodiode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

10.8.3 Avalanche Photodiode . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 189

10.9 Pros and Cons of Optical Fiber Technology . . . . . . . . . . . . . . . . . . . . . . . 190

11 Introduction to RADARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

11.1 Introduction to RADARs . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

11.2 Operation of RADAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 197

11.2.1 Speed Measurement . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

11.3 RADAR Frequency Bands . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198

11.4 Pulse RADAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

11.4.1 Pulse Repetition Frequency . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 199

11.4.2 Maximum Unambiguous Range . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

11.5 Continuous Wave RADAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 200

11.6 Applications of RADAR . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201

Glossary . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 205

Index . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 211


Preface

The first edition of Rudiments of Electronic Communication Systems is devised to provide the basic knowledge that is useful for the study of the modern principles of communication systems. This book is the result of a number of years of experience of teaching at the Mehran University of Engineering & Technology, Jamshoro. The contents of this edition target the courses on communication systems taught in the Department of Telecommunication, the Department of Computer Systems Engineering, and the Department of Electrical Engineering of Mehran UET, Jamshoro.

Communication systems is a vast field, and with the rapid advancement in its different study areas it becomes challenging for students at the undergraduate level to grasp the basics of the subject. Rudiments of Electronic Communication Systems aims to build the foundation of the subject in the most effective manner.

The text of the book is organized in such a way that the reader finds it interesting to read and understand. Moreover, the language of the book is kept as simple as possible so that the text is largely self-explanatory. This book can be used as a prerequisite for many of the advanced courses in communication systems, or simply as a mini-refresher for a quick review of the concepts related to communication systems and its fundamental study areas.


Organization of Text

The first edition of the book is organized into two major parts comprising 11 chapters. The first chapter, which is placed outside the two parts, serves as an introductory chapter to telecommunication technology.

• Chapter-1 Introduction to Electronic Communication: This chapter provides the basics of communication and telecommunication.

This is followed by the first part, Part One - Telecommunication Techniques, which contains seven chapters describing the basic telecommunication techniques in a manner suitable for a fresh student taking a course on communication systems.

• Chapter-2 starts the first part by discussing amplitude modulation and demodulation. Chapter-3 and Chapter-4 continue with different variants of modulation: Angle Modulation is discussed in Chapter 3 and Digital Modulation in Chapter 4.

• Chapter-5 discusses the basic concepts of multiplexing and multiple access along with their variants.

• Chapter-6 discusses spread spectrum techniques in detail, and Chapter-7 provides the basics of Orthogonal Frequency Division Multiplexing.

• Chapter-8 discusses a very important field of study, i.e., Information Theory.

Part Two - Telecommunication Systems contains three chapters describing major telecommunication systems: satellite, fiber optic, and radar systems. Each chapter is devised carefully to make sure it fully explains the corresponding concepts.

• Chapter-9 provides the basics of Satellite Communication.

• Chapter-10 discusses Fiber Optic Communication.

• Chapter-11 discusses the basic concepts of Radar systems, namely pulse radars and continuous wave radars.


Learning Features

The book is accompanied with the following learning features:

• Multiple Choice Questions/Best Choice Questions: The MCQs included at the end of each chapter allow students to evaluate themselves on what they have learned in each chapter.

• Review Questions: These are sets of questions, related to each chapter, that test the student's theoretical grasp of the material.

• Miscellaneous Questions: These are questions whose answers may not be explicitly present in the text but can be found after some research on the internet.

• Brain Buzzers: These are thought-provoking questions that judge the student's ability to think out of the box.

• Remarks: Important comments are highlighted at the end of each topic.

• Chapter Summary: Each chapter ends with a comprehensive summary reinforcing the main concepts.


Acknowledgments

All the praises and thanks be to God who is the Lord of the universe.

O Allah, let Your Blessings come upon Muhammad (Peace Be Upon Him) andthe family of Muhammad (Peace Be Upon Him), as you have blessed Ibrahimand his family.

After thanking the Almighty and sending salutations to the Holy Prophet (peace be upon him), we would like to thank our students, our teachers, our friends, our neighbours and, last but not least, our families for always supporting, guiding, and being kind to us so that we were able to do this humble work. Special thanks to our student Mr. Faisal Ahmed Dahri, who helped tremendously in bringing this work into LaTeX. Thanks also to Dr. Faheem Yar Khuawar, who helped with setting up the book structure.


1. Introduction to Communication Systems

Learning Focus

This chapter attempts to build the foundation of communication and telecommunications by keeping things to the point yet beneficial to the reader. The chapter covers the following objectives:

1. To familiarize the reader with the concept of communication in general and that of telecommunication in particular.

2. To provide a brief history of telecommunications, so that the reader knows where it all started and where it is likely headed.

3. To present the technical terms that the reader will come across during the entire study of the subject.

4. To provide the reader with the knowledge of what a communication system comprises, i.e., its basic elements.

5. To provide a brief yet informative overview of the electromagnetic spectrum.
6. To discuss the available modes of communication and the types of transmission media.
7. To acquaint the reader with the scope of the subject and its value in the market.


1.1 What is Telecommunication?

A quality that separates human beings from other living creatures is their ability to communicate their thoughts and feelings. Speaking in a broader sense, communication is the simple act of transferring information from one place to another, or from one entity to another.

The word communication is derived from the Latin words Communis and Communicare. Communis is a noun meaning common, commonality, or sharing, whereas Communicare is a verb meaning to make something common.

Merriam Webster defines Communication as:

"A process by which information is exchanged as between individualsthrough a common system of symbols, signs or behaviors".

Another term often used in conjunction with communication is Telecommunication. The word telecommunication is mostly used in a technological context. It can be defined as:

"The exchange of information over significant distances by electronic means".

Telecommunication makes use of channels to carry information over a wired or wireless medium.

1.2 History of Communication Systems

The art of communication is as old as humanity itself. Not only humans but every living being communicates or connects with other living beings in one way or another. It is quite difficult to mark where exactly the process of communication began; however, this section covers brief yet important landmarks that the subject has gone through.

The Ancient Era: The first method of communication that prehistoric man adopted was the use of fire or smoke signals and drum beats to convey messages to neighboring tribes or clans. The signals used for communication were predefined, and each signal had some special meaning that could deliver information such as "safety", "danger", "triumph", or an attack by the enemy.

The First Ever Mail: During the 6th century BC, the Persian emperor Cyrus introduced the first ever postal system to communicate across the entire empire.

The Pigeon Post: This well-known method of communication was introduced back in the 5th century by the Persians and Syrians. Pigeons were chosen to carry messages because they can find their way back to their nests irrespective of the distance.

Maritime Flag Semaphore: Before the 15th century, it was quite cumbersome to communicate between ships. To overcome this problem, flag semaphores were introduced, consisting of a special code based on the positions of two hand-held flags. Each position represented a letter or number, which made it easy for ships to communicate.

Optical Telegraphs: This form of communication was introduced by two French inventors, the Chappe brothers, in the 1790s. An optical telegraph consisted of pivoting arms mounted on a high tower; by swinging its mechanical arms, the telegraph would relay messages from one tower to the next.

The Telephone: The first ever telephone, as we all know, was invented by Alexander Graham Bell in 1876. Since then, the telephone has been constantly evolving.


1.3 Communication Systems Terminology

The entire study of communication systems is based on certain terms which, if understood properly, can make the study of the subject quite interesting.

1. Channel: Any medium through which information is sent and received.
2. Noise: A random or undesired signal that exists in the communication system.
3. Transmitter: An electronic device used to transmit signals for communication. It converts the signal into a form that is suitable for transmission over the communication channel.
4. Receiver: The receiver is the mirror image of the transmitter; it converts the signal back into its original form.
5. Information: The message, originating from some information source, that is to be conveyed to the receiving end of the communication system.
6. Modulation: In telecommunications, modulation is the process of superimposing the information signal onto a carrier wave, thereby changing certain characteristics of the carrier signal, which is usually a high-frequency signal.
7. Demodulation: The process of recovering the information signal from the modulated carrier wave.
8. Multiplexing: In telecommunications, multiplexing is the process of combining various signals and sending them over a communication medium as a single signal.
9. De-multiplexing: The process in which the multiplexed signals are separated at the receiver's end.
10. Multiple Access: In telecommunications, multiple access, as the name suggests, is the process by which various users share access to the same channel by dividing the channel bandwidth based on certain parameters.
11. Amplifier: An electronic device used to raise the amplitude of signals.
12. Modem: A blend of two terms, modulator and demodulator. It is an electronic device used to transfer data to or from a computer over cables or telephone lines.
13. Antenna: The part of the telecommunication equipment used to send and receive signals.
14. Gain: The ratio of the signal's output to the signal's input.
15. Bit Error Rate: A parameter calculated by dividing the number of erroneous bits by the total number of bits transferred during some time interval.
16. Baud Rate: The rate at which symbols are conveyed over the communication channel, measured in symbols per second.
17. Bandwidth: In telecommunications, bandwidth is the range of frequencies (forming a band) through which data can be transferred.
18. Crosstalk: The electric or magnetic field disturbance from one channel that affects the signal of an adjacent channel.

Communication systems have undergone various changes and are likely to undergo more in the coming years, but whatever form communication takes, whether between two persons conversing face to face or between two computers, the basic building blocks remain the same. Typically, a communication system has:

1. Transmitter
2. Channel
3. Receiver

For example, in the case of two people conversing face to face, the person initiating the communication can be viewed as the transmitter (it transmits the information or message), the channel is the air through which the message travels, and the receiver is the second person, who receives the information or message. Keeping this scenario in mind, we can now proceed to the formal functional block diagram of a communication system shown in Fig. 1.1.

Figure 1.1: Block diagram of a communication system.

Information Source: As the name suggests, the information source is the entity that contains the message to be conveyed. The information source can be thought of as a database of messages. To elaborate, an information source may be a physical item whose information can be read (such as the tags or labels attached to an object in a typical Radio Frequency Identification system) and transmitted for further processing.

Transmitter: A formal definition of the transmitter was presented in Section 1.3. Simply put, this part of the communication system transforms the signal into a form suitable for transmission over the communication channel.

Channel: A channel is the pathway, either wired or wireless, used to carry the information signals sent to it by the transmitter (a formal definition of the channel was presented in Section 1.3). Humans naturally use air as the communication medium to speak and listen; however, signals can also be propagated through vacuum, water, glass, plastic, the human body and various other media.

Receiver: This part of the communication system receives the information from the channel and delivers it to the destination entity.

Destination Entity: This is the last section of any communication system. The destination entity can be similar to the information source in that it also contains messages, but the difference is that the destination entity only receives them. However, one destination entity can serve as the information source for another communication system.

Noise: This term was presented in Section 1.3. It is an unwanted signal whose presence in the communication channel may alter certain parameters of the information signal, so that the signal at the destination may differ slightly from the signal at the source. An ideal communication system is noise free.

1.4 The Electromagnetic Spectrum

When we hear the word 'light', we picture something visible to our eyes, and many of us think only of visible light. However, visible light is not the only form of light that exists. If we observe our daily activities, such as tuning a radio, sending text messages, using a remote to switch TV channels or using a microwave oven to quickly heat food, we realize that we are surrounded by many different types of light that are simply not perceptible to the bare eye. Together, these different forms of light make up the Electromagnetic Spectrum, or simply the EM spectrum, as shown in Fig. 1.2.

The theory of the EM spectrum was put forward by James Clerk Maxwell, a Scottish scientist. He showed that electric and magnetic fields combine to form electromagnetic waves. Furthermore, a fluctuating magnetic field causes a fluctuating electric field which, in turn, causes a fluctuating magnetic field, and the process keeps on going. This theory suggests that EM waves are oscillatory and vary sinusoidally. The oscillations can occur at extremely low or extremely high frequencies.

Figure 1.2: Electromagnetic Spectrum.

The discussion of the EM spectrum usually entails two concepts, frequency and wavelength. These concepts are related to the definition of a signal and are discussed below:

Signal: A signal is a function which represents a physical quantity. It can be a function of one, two or more variables. For example, the temperature in Jamshoro, Sindh during a week is a function of time; the rainfall across Pakistan during a year is a function of both time and location; and your voice signal is a function of both time and frequency. Typically a signal is a function of time, frequency and space. A signal is graphically represented by a sinusoidal wave as shown in Fig. 1.3.

Remark: A signal is a description of the evolution of a physical phenomenon.

According to Fourier analysis, all signals can be built up from sinusoids, and therefore the sine wave is considered the fundamental wave. Mathematically it is represented as Eq. (1.1):

x(t, f, φ) = A × sin(2π × f × t + φ)    (1.1)

where A is the amplitude, f is the frequency and φ is the phase of the sine wave.

Frequency: Considered from a common point of view, frequency is the number of occurrences of a specific phenomenon. In electronics, however, frequency is defined as the number of cycles of a repetitive wave that occur in a certain time period, where a cycle is one positive and one negative alternation of the sinusoidal wave. The unit of frequency is the Hertz (Hz), named after Heinrich Hertz, the German physicist who applied Maxwell's theory to the production and reception of radio waves. Thus, if 1000 cycles occur in 1 second, the frequency is 1000 Hz.
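The sine wave of Eq. (1.1) and the cycles-per-second definition of frequency can be checked numerically. The following Python sketch is illustrative only (the sampling rate and the small phase offset are choices made here, not values from the text); it samples one second of a 1000 Hz sine wave and counts its cycles by counting negative-to-positive zero crossings:

```python
import math

def sine_wave(A, f, phi, t):
    """Evaluate x(t) = A * sin(2*pi*f*t + phi), as in Eq. (1.1)."""
    return A * math.sin(2 * math.pi * f * t + phi)

# Sample one second of a 1000 Hz sine wave (100 samples per cycle).
fs = 100_000  # samples per second; an illustrative choice
samples = [sine_wave(1.0, 1000, 0.1, n / fs) for n in range(fs)]

# Each negative-to-positive zero crossing marks the start of a new cycle.
cycles = sum(1 for a, b in zip(samples, samples[1:]) if a < 0 <= b)
print(cycles)  # 1000 cycles in 1 second -> frequency is 1000 Hz
```

The count of 1000 cycles in 1 second agrees with the definition above: the frequency of this wave is 1000 Hz.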

Wavelength: This parameter defines the length of one cycle and is measured in meters. Wavelength is usually denoted by the Greek letter lambda, λ. To put it simply, the wavelength is the distance between adjacent peaks of the wave.


Figure 1.3: Sinusoidal Wave in Time domain.

Relationship between Frequency f and Wavelength λ: The frequency of a wave, denoted by f and measured in Hertz (Hz), and its wavelength, denoted by λ and measured in meters, are related by Eq. (1.2):

c = f λ (1.2)

where c = speed of light, i.e. 3 × 10^8 m/s.
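Eq. (1.2) can be applied directly to find the wavelength of any EM wave. A minimal Python sketch (the example frequencies are illustrative values chosen here, not taken from the text):

```python
C = 3e8  # speed of light in m/s, as used in Eq. (1.2)

def wavelength(frequency_hz):
    """Wavelength in meters from Eq. (1.2): lambda = c / f."""
    return C / frequency_hz

print(wavelength(100e6))  # 100 MHz FM radio signal -> 3.0 m
print(wavelength(2.4e9))  # 2.4 GHz microwave signal -> 0.125 m
```

Note how the higher-frequency microwave signal has the shorter wavelength, consistent with the ordering of the EM spectrum discussed below.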

Relationship between Frequency f and Energy: The frequency and the energy E are related by Eq. (1.3):

E = h f (1.3)

where h is Planck's constant, equal to 6.626176 × 10^-34 J·s.

Exercise 1.1 A sine wave has a time period of 5 seconds. Find its frequency, wavelength, energy and power.
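A sketch of how the first three quantities in Exercise 1.1 follow from Eqs. (1.2) and (1.3). Note that the power cannot be determined from the period alone (it also depends on the amplitude), so it is omitted here:

```python
T = 5.0           # time period in seconds, from Exercise 1.1
c = 3e8           # speed of light in m/s
h = 6.626176e-34  # Planck's constant in J*s, value as given in the text

f = 1 / T    # frequency: f = 1/T = 0.2 Hz
lam = c / f  # wavelength from Eq. (1.2): lambda = c/f = 1.5e9 m
E = h * f    # energy from Eq. (1.3): E = h*f, roughly 1.33e-34 J

print(f, lam, E)
```

A 5-second period gives a very low frequency (0.2 Hz), so the wavelength is enormous (1.5 billion meters) and the photon energy is vanishingly small.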

Waves in the EM Spectrum

As discussed earlier, the EM spectrum consists of many kinds of waves, some of which can serve the purpose of communication and others which cannot. The waves in the EM spectrum can be studied either in order of lowest frequency and longest wavelength, or vice versa. This section presents a brief discussion of each wave in the EM spectrum, along with its dangers.

Radio Waves: Radio waves have the longest wavelengths; some are longer than a football field. These waves are often used to transmit data, carrying signals for television and cellular phones, and hence can be used for communication purposes. Radio waves are also given off by stars and gases in space. However, prolonged exposure to radio waves may cause leukemia, damage tissues and harm biological functions.

Microwaves: The next waves in the series are microwaves. These have comparatively shorter wavelengths and higher frequencies than radio waves. These are the same waves that are used to cook food. Microwaves can also be used for communication: they can transfer information from one place to another and can penetrate light rain, snow, clouds and smoke, although they require line of sight to communicate. Prolonged exposure to these waves can cause cataracts, and microwave radiation from mobile phones may influence various parts of the brain.

Infrared Waves: These waves are called IR for short. Infrared waves lie between the visible and microwave portions of the EM spectrum. Infrared radiation is experienced every day in the form of heat, because hot bodies such as the sun give it off. Short and near IR waves are found in remote controls and do not give off heat the way far IR radiation does. Cameras can also make use of IR waves instead of ordinary light, and passive infrared sensing can be used for security purposes. Nevertheless, prolonged exposure to these waves can cause overheating.

Visible Waves: These waves are visible to the human eye and can be experienced as the colors of the rainbow. Each color has a particular wavelength: red has the longest wavelength and violet the shortest. The rainbow colors, known as ROYGBIV, together form what is known as white light. Visible waves are used in smart lighting and light bulbs, and the sun itself is a source of visible light. Long exposure to intense visible light can harm the retina of the eye.

Ultraviolet Waves: Ultraviolet waves are known as UV for short. They possess shorter wavelengths and higher frequencies than visible waves. These waves cannot be seen by the human eye; however, certain kinds of insects, bumblebees for instance, can see them. Scientists divide UV waves into near UV, which is closest to optical light; extreme UV, which is close to X-rays; and far UV, which lies between the near and extreme UV. UV waves are used to kill microbes, tiny living organisms such as fungi and bacteria that cannot be seen with the naked eye, and are also used for the sterilization of surgical equipment in hospitals. However, large doses can cause skin cancer and sunburn.

X-Rays: X-rays tend to behave more like particles than waves because they have very short wavelengths and very high frequencies. These waves can pass through almost anything and carry tremendous amounts of energy. Their use can be seen at airport security checks, to see through luggage. However, large doses can cause cell damage and cancer.

Gamma Rays: Gamma rays have the shortest wavelength and highest frequency. These waves are generated by nuclear explosions and radioactive atoms. They, too, can be used to kill microbes and hence to sterilize equipment. One common use of gamma rays is in radiotherapy for cancer patients, because gamma rays can kill living cells and thereby remove the need for painful surgery. However, prolonged exposure can cause mutations.

1.5 Modes of Communication

Depending on one's requirements and needs for data transmission, electronic communication can take different forms. One way of communicating might require sending data from the transmitter to the receiver without requiring any data back; another might involve sending data to the destination and getting data back, but not simultaneously; or transmission and reception can occur simultaneously at both ends. This discussion suggests that electronic communication can take one of three basic forms (see Fig. 1.4):

Figure 1.4: Types of modes of communication.

Simplex: As the name suggests, this is the simplest form of electronic communication. Simplex is also called one-way communication because the data or message is sent in only one direction: the message is sent by the transmitter, and the receiver is its destination. The receiver cannot send any message back to the transmitter. Examples of simplex communication include radio stations and TV broadcasts.

Half-Duplex: In this case, the exchange of information is two-way; the transmitter can send a message to the receiver and the receiver, in turn, can communicate back, but the two cannot transmit at the same time. During the course of communication, the communicating parties take turns transmitting and receiving data. The process of communication can be interrupted or halted if both parties attempt to send at the same time. The most common example of half duplex is the walkie-talkie.

Full-Duplex: Full-duplex, like half-duplex, supports two-way communication and, in addition, allows communicating simultaneously. Both the transmitter and receiver can take part in the communication at the same time without interruption. The best example of full duplex is two people communicating over the telephone: both persons can talk and listen simultaneously.

1.6 Data Transmission Mediums

As discussed earlier, communication is the exchange of data between two entities. To transmit data from one place to another, however, it is necessary to establish a transmission path between the transmitter and the receiver. The transfer of data along this path is known as data transmission (or, for digital signals, digital transmission or digital communication) and is defined as the physical transfer of data over the communication channel. Data transmission takes place by converting the data into electromagnetic signals and then transferring those signals through some transmission medium.

A transmission medium is considered suitable for the transmission of data if it possesses the following characteristics:

1. It should be capable of providing an appropriate quality of data exchange over long distances. For instance, in voice communication, the quality of the communication depends on the quality of the received voice.


2. It should be able to maintain the original form of data before and after the transmission. Thatis, there shouldn’t be any substantial changes in the data when it reaches its destination.

3. It should be free of transmission impairments. Practically, it is not possible for a transmissionmedia to be free of transmission impairments, but an appropriate transmission media wouldhave minimum number of impairments.

In order to possess the above-mentioned characteristics, every type of transmission medium must address the following challenges:

Bandwidth: The greater the bandwidth of the signal, the higher the data rate that can be achieved.

Transmission Impairments: These include noise and attenuation. Such losses can limit the distance the data can travel.

Interference: Signals in coinciding or overlapping frequency bands can hamper and garble a signal.

Receivers: The greater the number of receivers present in the system, the greater the chances of signal distortion. This, too, can limit the achievable data rates.

Now that we are clear about what a transmission medium is, what challenges it must address and what properties it should possess, we can proceed to the classification of transmission media shown in Fig. 1.5.

Figure 1.5: Classification of transmission media.

1.6.1 Guided Media

As the name suggests, these communication media guide the transmission of data: the electrical or optical signals travel within the confined boundary of a cable or wire. The wire or cable can be made of copper or can be a fibre optic cable. A signal travelling in a guided medium cannot travel outside the cable. Guided media can be further classified into:

1. Twisted pair cable
2. Co-axial cable
3. Fibre optic cable


Twisted Pair Cable

The name of this cable tells a lot about its physical appearance. A twisted pair (TP) cable consists of two insulated wires twisted around one another in a regular spiral pattern. The wires are twisted to diminish crosstalk and electromagnetic interference. In practice, a number of these pairs, each with a unique color code, are packaged together and wrapped in another protective layer. Older telephone networks and local area network (LAN) installations made use of twisted pair cable. The characteristics of TP cable include:

1. They cost less and are widely used.
2. They are used for the transmission of both analog and digital signals.
3. To transmit analog signals, amplifiers are needed every 5 km to 6 km; for digital signal transmission, repeaters are needed every 2 km to 3 km.
4. The transmission distance, bandwidth and data rate are limited.

The twisted pair cable has two variants: shielded TP and unshielded TP.

1. Shielded Twisted Pair Cable (STP): As the name implies, this variant of TP has a shield or outer covering around the ordinary twisted pair wires, which functions as a ground. This extra layer reduces interference and crosstalk, but it also adds weight, making the cable bulky.

2. Unshielded Twisted Pair Cable (UTP): This variant does not use any outer covering or coating to prevent interference and crosstalk; instead it relies on media filters to block interference. Unshielded TP cable is lightweight, less expensive and flexible, and is thus preferred over shielded TP cable. Unshielded TP cable has the following categories, as identified by the Electronic Industries Association (EIA):

(a) Cat-3: This category includes all UTP cables with transmission characteristics specified up to 16 MHz; the twist length ranges from 7.5 to 10 cm.

(b) Cat-4: This category includes all UTP cables with transmission characteristics specified up to 20 MHz.

(c) Cat-5: This category includes all UTP cables with transmission characteristics specified up to 100 MHz; the twist length ranges from 0.6 to 0.85 cm.

Co-axial Cable

Co-axial cable is also known as co-ax cable for short. It was conceived by Oliver Heaviside, an English engineer and mathematician, and is used for cable TV. The coaxial cable consists of thickly insulated copper wire and can carry comparatively larger volumes of data than a twisted pair. A cross-section of the cable reveals an inner copper conductor and an outer shield, separated by a dielectric insulating medium to avoid signal losses. The cable guides the transmission of electrical signals using the inner conductor and the insulating layer enclosed by the shield. The diameter of a coaxial cable ranges from 1 to 2.5 cm. These cables can transmit both analog and digital signals. Compared to twisted pair cables, coaxial cables are less susceptible to interference and crosstalk. Coaxial cable is widely used in long distance telephone transmission, local area networks (LANs) and cable internet.

Fibre Optic Cable

A detailed discussion of fibre optic cable is available in Chapter 10. The name comes from the fact that the cable is built using optical, or light-based, technology. The fibre optic cable makes use of glass threads to carry data over long distances. These glass threads are as thin as a human hair and are capable of transmitting light. The glass threads are packaged together into what are called optical cables. A fibre optic cable typically has three parts:

1. Core: The innermost section of the cable, consisting of thin strands of glass or plastic through which the light signals travel. The core has a diameter of 8 to 100 micrometers.


2. Cladding: Each fibre is surrounded by its own cladding. It, too, is made of glass or plastic, but possesses optical properties different from those of the core. The cladding causes the light to remain within the boundaries of the core by the mechanism of total internal reflection; in short, the cladding makes the light reflect back into the core rather than refract out of it.

3. Jacket: The jacket can be thought of as a covering for the bundles of cladded fibre. The jacket is a polymer layer, usually PVC or polyurethane, that provides protection against moisture, crushing and other environmental factors that might otherwise harm the data transmission.

A single strand can carry around 25,000 telephone calls, so an entire cable can transmit millions of calls. Moreover, fibre optic cables have significantly larger bandwidth than metal cables and can thus provide higher data transmission rates, typically in Gbps. Fibre optic cable is used in cable TV, subscriber loops and LANs. Fibre optic cables are more secure than coaxial or twisted pair cables and are easy to install. They are less prone to interference and crosstalk and are lightweight. They support only digital transmission of data. Fibre optic cable is available in two modes:

1. Single Mode Cable: In this variant, the light travels straight down the fibre without bouncing off its walls. Single mode cable has a core diameter of around 5 to 10 microns and is typically used by cable TV, internet and telephone networks.

2. Multimode Cable: This variant can carry significantly more data than a single mode cable; its core is about ten times bigger than that of a single mode cable. In the multimode variant, the light can take more than one path through the cable, bouncing off its walls. These cables carry information over relatively shorter distances.

Twisted Pair:
- It is inexpensive and already in use.
- TP wires are slow; high transmission rates cause interference.
- Limiting factors: skin effect and radiation effect.

Coaxial Cable:
- Often used in place of TP for important links in a network, since it is faster.
- A more interference-free transmission medium than TP.
- It is thick and hard to wire in many buildings.

Fibre Optics:
- Faster, lighter and suitable for transferring large amounts of data.
- Light transmission has wider bandwidth, enabling transmission rates of hundreds of megabits per second.
- Difficult to install.

Table 1.1: Comparison of Twisted Pair, Coaxial Cable and Fibre Optic Cable

1.6.2 Unguided Media

Unguided media, or unbounded media, as the name infers, do not guide the transmission of data. Data travelling via unguided media do not use any cable, but free space. The data are sent and received by antennas. An antenna can be mounted in various ways so as to send or receive data preferably from one direction (unidirectional antennas) or from all directions (omnidirectional antennas). Since unguided media do not make use of any wires, this is also known as wireless transmission. Unguided transmission of data can be done by:

1. Radio waves
2. Microwaves
3. Optical wireless

Radio waves: This kind of transmission makes use of radio waves to transmit data from one place to another. These waves are broadcast through the air and can cover large distances. The frequency of radio waves used here typically ranges from 30 MHz to 1 GHz. Cordless phones, AM and FM radio and television make use of radio waves.

Microwaves: These are usually used in unicast communication and have higher frequencies than radio waves. Microwaves are suitable for one-to-one communication and are used in cellular phones. Microwave transmission can be either terrestrial or satellite. Terrestrial microwave makes use of the radio frequency spectrum; the transmitter is a parabolic dish mounted at a significant height to achieve line of sight. Satellite microwave, on the other hand, allows transmission and reception of data between two earth stations that are incapable of communicating via conventional means.

Optical Wireless: Recently, due to the overcrowding of the radio and microwave spectrum and technological advancements in the use of light as a communication medium, optical wireless has emerged as a complementary technology. Optical wireless operates in the visible or infrared spectrum. Though not as mature as its radio/microwave counterpart, optical wireless nonetheless offers significant advantages, including licence-free spectrum, freedom from radio interference, high bandwidth and data rates, inherent security, and low power and cost. Applications of optical wireless include infrared-based wireless local area networks, Li-Fi (Light Fidelity) and free space optics.

1.7 The dB in Communication Systems Application

The decibel (dB) is used to measure sound level, but it is also widely used in electronics, signals and communication. The dB is a logarithmic way of describing a ratio. The ratio may be of power, sound pressure, voltage, intensity or several other things. Any ratio or linear value can be converted into dB using Eq. (1.4):

dB = 10 log10 (Output / Input)    (1.4)
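Eq. (1.4) can be sketched in Python in a few lines; the example power ratios below are illustrative values, not taken from the text:

```python
import math

def to_db(output, input_):
    """Power ratio in decibels, per Eq. (1.4): dB = 10 * log10(output / input)."""
    return 10 * math.log10(output / input_)

print(round(to_db(100, 1), 1))  # a 100x power gain is 20.0 dB
print(round(to_db(2, 1), 2))    # doubling the power is about 3.01 dB
print(round(to_db(0.5, 1), 2))  # halving the power is about -3.01 dB
```

Because the scale is logarithmic, multiplying ratios corresponds to simply adding their dB values, which is why the dB is so convenient for chaining gains and losses in a communication link.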

1.8 Scope and Application Areas of Telecommunication

The world has now shrunk to the size of a palm: one can connect to peers living at the other end of the world in no time. A few decades back, long distance communication took a tremendous amount of time, and where faster means existed, they cost a great deal of money. As technology gained ground, the ways we communicate changed astonishingly. Large telephone sets have been replaced by mobile sets as small as one's palm; computers that once occupied an entire room have shrunk to the size of a laptop and smaller; microchips have been devised that can be injected into human beings to keep track of their locations. These changes have occurred so rapidly, and are still taking place, that it is almost impossible to believe we belong to the same human race that used pigeons and drumbeats to convey information.

The word telecommunication in older days was confined to the use of telephones only, but it now includes a whole range of other technologies. The telecom industry today has many service providers, such as telephone companies, mobile service providers, internet service providers and cable operators. With the growing number of service providers in every nook and corner of town, the demand for telecommunication engineering and engineers has grown. A telecommunication engineer should be skilled enough to design communication equipment and systems, and can also work in the areas of testing, quality control, manufacturing, management and more.

The field of telecom is so widespread that it has room for technicians, technical writers, technical salespeople and trainers. Technicians mostly have expertise in practical applications, troubleshooting and repair. Technical writers should possess professional writing skills and are responsible for providing the documentation that gives consumers insight into the product they have purchased. A technical salesperson should have profound knowledge of the equipment being sold and should be able to communicate with customers and address their demands. A trainer is a person who learns about the market and the equipment under the supervision of other engineers and technicians. This prevalent scope of telecom has applications in the following areas:

1. Internet Service Provider Companies: These companies are responsible for providing internet access for personal or business use. They provide subscribers with an internet connection that allows them to access the web. Not all subscribers in a particular town necessarily share the same internet service provider.

2. Landline Phone Service Providers: These companies provide phone service using wires, either metallic or fibre optic. However, these providers are increasingly being displaced by mobile phone service providers.

3. Cable TV: A cable TV provider gives you the facility to watch TV via radio frequency signals transmitted over coaxial or fibre optic cable.


Chapter Summary

• Communication is the act of transferring information from one place to another. Telecommunication is the act of conveying information using electronic means.
• A communication system consists of a transmitter, which transmits the information; a channel, the path along which the data actually travels; and a receiver, the mirror image of the transmitter and the destination of the information.
• The theory of the EM spectrum was put forward by the Scottish scientist James Clerk Maxwell, who showed that electric and magnetic fields combine to form electromagnetic waves.
• Frequency is the number of repetitive cycles of a wave that occur in a given time period.
• Wavelength defines the length of one cycle and is measured in meters.
• Frequency and wavelength are related by the equation c = f λ.
• The EM spectrum includes radio waves, microwaves, infrared waves, visible waves, ultraviolet waves, X-rays and gamma rays.
• Radio waves have the longest wavelength and lowest frequency, whereas gamma rays have the shortest wavelength and highest frequency.
• There are three modes of communication: simplex (one-way), half duplex (two-way but not simultaneous) and full duplex (two-way and simultaneous).
• To carry data from one place to another, some path is needed; such a path is called the transmission path or transmission medium.
• Transmission media can be guided or unguided.
• Guided media are also called wired transmission; the signals remain confined to the boundaries of the cable. Coaxial cable, twisted pair and optical fibre cable come under the umbrella of guided media.
• Unguided media, on the other hand, provide wireless transmission of data, allowing signals to travel in free space. Radio waves and microwaves (terrestrial and satellite) come under the category of unguided media.
• The field of telecommunications has developed greatly over the years, and with the widespread use of mobiles, the internet and computers, the industry has a high demand for telecom engineers, technicians, technical writers and more.


Multiple Choice Questions

1. Communication is the process of:
(a) Exchanging information between two entities.
(b) Merging the information obtained from two entities.
(c) Delivering information to another entity.
(d) Receiving information from another entity.

2. Exchange of information through electronic means is:
(a) Telepathy
(b) Telecommunication
(c) Teledensity
(d) Communication

3. The speed of light in km/h is:
(a) 3 × 10^8 km/h
(b) 1.07 × 10^9 km/h
(c) 3 × 10^9 km/h
(d) 1.07 × 10^8 km/h

4. The Chappe brothers introduced:
(a) Optical Telegraph
(b) Optical Fibre Communication
(c) Optical Telecommunication
(d) Optical Teledensity

5. Modulation is the process of:
(a) Increasing the frequency of the information signal.
(b) Increasing the frequency of the carrier signal only.
(c) Superimposing the information signal onto the carrier.
(d) Decreasing the amplitude of the information signal.

6. In a communication system, noise is not likely to affect the signal:
(a) At the transmitter
(b) In the channel
(c) In the information source
(d) At the destination

7. A modem transfers data to or from a computer through:
(a) Telephone lines
(b) Cables
(c) Telephone lines or cables
(d) Optical fibre cable

8. Cross-talk is:
(a) Electronic or electric disturbance
(b) Electronic or magnetic disturbance
(c) Electric or magnetic disturbance
(d) Electromagnetic or electronic disturbance

9. Microwaves have:
(a) Shorter wavelength and greater frequency
(b) Shorter wavelength and shorter frequency
(c) Greater wavelength and greater frequency
(d) Greater wavelength and shorter frequency

10. Gamma rays have:
(a) Shortest wavelength and highest frequency
(b) Highest wavelength and greater frequency
(c) Shorter wavelength and highest frequency
(d) Higher frequency and shortest wavelength

11. Full duplex allows:
(a) Two-way communication only
(b) Two-way and simultaneous communication
(c) Simultaneous communication

12. Indicate the false statement. Modulation is used to:
(a) Reduce the bandwidth used
(b) Separate different kinds of transmission
(c) Ensure that the information signal is transferred over long distances
(d) Allow the use of practicable antennas

Review Questions

1. State and explain the origin and meaning of the word 'Communication'.
2. Define the term 'Telecommunication'.


3. Briefly discuss the methods of communication of the ancient era.
4. Who invented the first postal system, and when?
5. How did the Maritime Flag Semaphore overcome the problem of communication between ships?
6. What is a transceiver?
7. What is the name for the unwanted and undesirable interference that is added to the signal being transmitted?
8. What is another name for the communication medium? Also, state its definition.
9. Define 'Bit Error Rate' and 'Baud Rate'.
10. Illustrate and explain the block diagram of a simple communication system.
11. State and explain the two major processes associated with the EM spectrum.
12. What are radio waves? Also mention their limitations.
13. Define short and near infrared waves, accompanied with an example of each type.
14. List the uses and drawbacks of the following:
(a) Microwaves
(b) Ultraviolet waves
(c) X-rays
(d) Gamma rays

Miscellaneous Questions

1. What type of electronic signals are continuously varying voice and data signals?
2. What terms are often used to refer to the original voice or data signals?
3. What technique can sometimes be used to make an information signal compatible with the medium over which it is being transmitted?
4. What is the approximate frequency range of the human voice?
5. What are frequencies above 1 GHz called?
6. Mention two forms in which intelligence signals can exist.
7. State three common sources of interference or noise.
8. Name two methods of transmitting visual data over the telephone network.
9. Mention two ways radio is used in a telephone system.
10. How can cross-talk be eliminated?

Brain Buzzer

1. What is the speed of light in inches per nanosecond, feet per microsecond and meters per second?
2. Brainstorm two approaches, either guided or unguided, that would be pragmatic for communication.
3. List two common household remote control units and state the frequency ranges used for each.



Part One - Telecommunication Techniques

2 Amplitude Modulation & Demodulation . . . 37
3 Angle Modulation & Demodulation . . . 59
4 Digital Modulation & Demodulation . . . 79
5 Multiplexing and Multiple Access Techniques . . . 109
6 Spread Spectrum Communication . . . 125
7 Orthogonal Frequency Division Multiplexing . . . 137
8 Introduction to Information Theory . . . 149


2. Amplitude Modulation Demodulation

Learning Focus

This chapter starts off with a discussion of one of the most fundamental concepts in communication systems, i.e. modulation. The primary focus of this chapter is amplitude modulation. The chapter will focus on the following objectives:

1. To acquaint the reader with the term modulation, its definition and the concept.
2. To emphasize and elaborate the need for modulation.
3. To familiarize the reader with the variety of analog modulation schemes available.
4. To acquaint the reader with the simplest kind of modulation scheme, i.e. amplitude modulation, and its working.
5. To elaborate the concept of amplitude modulation in mathematical terms.
6. To introduce the concepts of modulation index, percentage of modulation, sidebands and types of amplitude modulation.


2.1 What is Modulation

As discussed in Chapter 1, communication systems are used to carry information from one place to another. Moreover, modes of communication (refer to Section 1.5) and data transmission media (refer to Section 1.6) were also discussed in Chapter 1. However, the question of how information is actually sent over the communication medium remains unanswered. Note that this should not be confused with the transmission medium, because the two concepts are entirely different.

Transmission media, as discussed in Chapter 1, simply refers to the transmission path needed to carry information from the transmitter to the receiver. The answer to the question of how information is carried, however, points to one of the important concepts of communication systems, i.e. modulation.

This section only gives the formal definition of modulation; the detailed study of the concept is provided in the upcoming sections.

Modulation, in the simplest terms, can be defined as the mixing of signals, i.e. the superimposition of the information or message signal (such as audio, text, video or images) on a carrier signal. Modulation can also be defined as:

The process of superimposing a low frequency signal, usually called the message or information signal, on a high frequency signal, known as a carrier signal, such that certain characteristics of the carrier signal are varied.

2.2 Need of Modulation

If you look closely at these definitions, you can conclude the following important points regarding modulation:

1. Information or message signals are low frequency signals and thus cannot cover long distances. This means that if these signals were transmitted directly over a communication channel, they might not reach the receiving end, and the information would not be conveyed. Therefore, these signals should be mixed with, or superimposed on, a high frequency signal called the carrier signal. The name carrier refers to the signal that actually carries, i.e. conveys or transports, the information signal.

2. In the modulation process, certain characteristics of the carrier signal are varied. This leads to a very important topic discussed in the sections to come. Readers are often confused about whether the characteristics of the information signal or of the carrier are changed. A simple way to remember this is that if the characteristics of the information signal were varied, the information at the transmitter would no longer be the same as what the receiver intends to receive; it is therefore the carrier that is varied.

The purpose of mentioning the above two points was to build in the reader's mind a picture of what modulation entails and why it is important. Now that the concept of modulation is clear, we can proceed to why it is considered the backbone of all communication systems. Before a formal discussion of the need for modulation, let us consider a daily life example that may help paint a clearer picture:

Suppose you want to send a note to a friend who lives opposite you. You obviously can't just stand in front of your friend's bedroom window and throw the note inside; it will not land in the room that way. What will you do? You need to convey this important piece of information, but the paper is too light to fly into the room. A paper plane won't work either, because your friend's window might be closed, or you might not be a pro at flying paper planes. The only way to deliver your information is by wrapping your note around something that


can travel a longer distance, e.g. a stone. Once you do this, your information will travel a longer distance and you will be able to convey what you wanted to.

Similarly, information or message signals such as audio, text, video and images have low frequencies and thus cannot cover long distances. To overcome this, some way must be devised to let the information or message signal be transmitted over longer distances. This is done by mixing the information signal with a high frequency signal. This high frequency signal, as stated earlier, is called the carrier signal.

There are some other reasons that account for the need for modulation:

Antenna Height: One important reason modulation is needed is to reduce the size of the antenna. To transmit efficiently, the transmitting antenna should have a length at least equal to a quarter of the wavelength (λ) of the signal being transmitted. That is, the antenna height must be at least λ/4, where:

λ = c / f        (2.1)

where:
c = velocity of light
f = frequency of the signal
λ = wavelength

Example 2.1.

Suppose an EM signal has a frequency of 15 kHz; what would be the required antenna height?
Solution:

minimum antenna length = λ/4 = c/(4f) = 3 × 10^8 / (4 × 15 × 10^3) = 5000 m
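The quarter-wave calculation in Example 2.1 can be checked with a short script. This is only a sketch; the function name is our own:

```python
# Quarter-wave antenna length for a given signal frequency.
C = 3e8  # speed of light in m/s (the approximation used in the text)

def quarter_wave_length(frequency_hz):
    """Return the minimum antenna length (lambda/4) in metres, per Eq. (2.1)."""
    wavelength = C / frequency_hz  # lambda = c / f
    return wavelength / 4

print(quarter_wave_length(15e3))  # 15 kHz signal -> 5000.0 m
```

The result matches the worked example: an unmodulated 15 kHz signal would need a 5 km antenna, which is why the information is moved to a much higher carrier frequency.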

Exercise 2.1 Consider a modulated signal at 1 MHz and at 20 MHz. Calculate the height of the antenna required to transmit each signal.

R It can therefore be summarized that modulation is required:
1. To enable long distance communication.
2. To make communication system design simpler, i.e. to reduce the antenna height.
3. To enable simultaneous multiuser communication.

2.3 Modulation Schemes

From what has been discussed so far, it is evident that the process of modulation involves two signals. One is the information or message signal, which can be audio, video, text etc. and is the low frequency signal; the other is the carrier signal, which carries the information signal and is of higher frequency, and is thus capable of covering a longer distance than the information signal. The information signal is also known as the baseband signal or the modulating signal. The waveform of such a signal is unpredictable; for example, you cannot predict the waveform of your own speech signal. During the process of modulation, the baseband or information signal is mixed with a high frequency sinusoidal wave called the carrier wave. The carrier wave then carries the information signal to the receiving end of the communication system. As a result of the modulation process, certain characteristics of the carrier wave, such as its amplitude,


frequency or phase, are varied. This accounts for the fact that there are three fundamental types of modulation schemes:

1. Amplitude Modulation: This is the simplest form of modulation. The amplitude of the carrier is varied according to the instantaneous amplitude of the modulating signal.

2. Frequency Modulation: As the name suggests, in this modulation scheme the frequency of the carrier wave is varied according to the amplitude of the information signal. In other words, the amplitude of the information signal dictates the change in frequency of the carrier wave. Note that the amplitude of the carrier wave remains constant.

3. Phase Modulation: In this case, the frequency and amplitude of the high frequency carrier wave remain unchanged; the phase of the carrier signal is varied in accordance with the amplitude of the information signal.

Essentially, all modulation types involve varying one of the characteristics of the carrier signal (amplitude, frequency or phase) with respect to the amplitude of the information signal. Another classification of modulation schemes, however, is based on whether the input and output are analog or digital. On this basis there are four types of modulation schemes, as shown in Fig. 2.1.

Figure 2.1: Taxonomy of Modulation Schemes

This chapter is solely dedicated to the discussion of amplitude modulation; frequency and phase modulation are discussed in the next chapter.

2.4 Amplitude Modulation

Amplitude modulation, AM for short, is, as discussed earlier, the simplest kind of modulation scheme. Amplitude modulation has been in use since the earliest days of radio technology; it was the first modulation scheme used to carry voice by radio. Amplitude modulation grew out of the radiotelephone experiments carried out by Roberto Landell


de Moura and Reginald Aubrey Fessenden. AM is typically used to transmit information over a radio carrier.

2.4.1 How AM Works

Amplitude modulation, or AM, follows a very simple working principle:

The amplitude of the carrier signal, which is a high frequency signal, changes in accordance with the amplitude of the modulating or information signal.

Figure 2.2: Amplitude Modulation - (A) Information Signal (B) Carrier Signal (C) AM Modulated Signal

Keeping this simple principle in mind, we can elaborate the working of AM further. During amplitude modulation, the information signal is imposed onto the carrier signal such that the amplitude of the carrier signal changes with the amplitude of the information signal.

R Note that during AM the carrier frequency and phase remain constant; only its amplitude is varied, in accordance with the amplitude of the modulating signal.

The information signal, as shown in Fig. 2.2A, has both positive and negative peaks, and so does the carrier wave. An increase in the amplitude of the information signal results in a corresponding increase in the amplitude of the carrier signal (see Fig. 2.2C). Similarly, a decrease in the amplitude of the information signal causes a corresponding decrease in the amplitude of the carrier wave. In short, both the positive and negative peaks of the carrier are varied according to the amplitude of the modulating wave.

The AM Waveform: Fig. 2.3 shows a simple AM circuit. Here a radio frequency carrier is applied at A and a modulating audio tone at B. The circuit consists of a non-linear device (e.g. a diode or transistor). The two signals "mix" in this circuit and produce the AM waveform shown at C. Notice that both the negative and positive peaks of the output waveform correspond exactly to the modulating tone's waveform.


Figure 2.3: The basic method of obtaining amplitude modulation

Concept of an Envelope: A term usually encountered in discussions of AM is the envelope. The envelope of the carrier wave is simply an imaginary line connecting the positive peaks, and another connecting the negative peaks, of the carrier wave. Connecting the peaks in this way traces the exact shape of the modulating signal.
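The working principle above can be sketched numerically: the carrier's peaks rise and fall with the modulating tone, so the envelope traces the message. The parameter values and function name below are illustrative choices of ours, not from the text:

```python
import math

# A minimal AM waveform sketch (illustrative values):
fm, fc = 1e3, 50e3   # modulating tone and carrier frequencies in Hz
Vc, m = 1.0, 0.5     # carrier peak amplitude and modulation index

def am_sample(t):
    """Instantaneous AM voltage: envelope Vc(1 + m sin wm t) times the carrier."""
    envelope = Vc * (1 + m * math.sin(2 * math.pi * fm * t))
    return envelope * math.sin(2 * math.pi * fc * t)

# Sample 2 ms of signal; the peaks of the waveform trace the modulating tone.
samples = [am_sample(n * 1e-7) for n in range(20000)]
print(round(max(samples), 3))  # near Vc * (1 + m) = 1.5
```

The largest sample sits near Vc(1 + m) and the smallest near −Vc(1 + m), exactly what Fig. 2.2C shows: the carrier swells and shrinks with the message while its frequency stays fixed.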

2.4.2 Modulation Index and Percentage of Modulation

Two important concepts related to amplitude modulation are the modulation index and the percentage of modulation. In amplitude modulation, the amplitude of the modulating waveform should be less than the amplitude of the carrier to avoid distortion. This point is explained later in this chapter.

Mathematically, this is given as:

Vm < Vc

Therefore, it is important to establish some relationship between Vm and Vc. This relationship is known as the modulation index. It is represented as:

m = Vm / Vc        (2.2)

The modulation index, also known as the modulating factor, varies from 0 to 1 (i.e., 0 < m < 1). Multiplying the modulation index by 100 yields the percentage of modulation: the degree of modulation, normally expressed as a percentage from 0 to 100.

Fig. 2.2B shows an unmodulated carrier, which has 0% modulation. For a modulated waveform such as Fig. 2.4, the modulation can be measured as:

Percentage of Modulation = (Emax − Emin) / (Emax + Emin) × 100        (2.3)

R For a given transmitter power, a high percentage of modulation will produce a stronger audio tone at the receiver.

In Fig. 2.4, if the value of Emin is zero, the carrier is modulated 100%. Here the amplitude of the waveform falls to zero for an instant during each cycle of the modulating wave, and the amplitude rises to 100 volts peak-to-peak once during each cycle.

Figure 2.4: Measuring the percent of modulation

If instead the value of Emin is, let's say, 20 volts and the value of Emax is 60 volts, the carrier is modulated to 50%. The average peak-to-peak amplitude remains the same, i.e. 40 volts. For example:

Percentage of Modulation = (60 − 20) / (60 + 20) × 100 = 50%

Example 2.2.

Consider an AM signal whose Vmax(p−p) displayed on an oscilloscope screen is 6 divisions and whose Vmin(p−p) is 1.2 divisions. Work out the modulation index.
Solution:

m = (Vmax − Vmin) / (Vmax + Vmin) = (6 − 1.2) / (6 + 1.2) = 4.8 / 7.2 ≈ 0.67

Thus, the signal is modulated to approximately 67%.
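The oscilloscope measurement can be scripted directly from Eq. (2.3). Because the modulation index is a ratio, the scale factor of the oscilloscope divisions cancels, so the raw division counts can be used as-is. The function name below is our own:

```python
def modulation_index(v_max, v_min):
    """Modulation index from the maximum and minimum envelope readings,
    m = (Vmax - Vmin) / (Vmax + Vmin); multiply by 100 for percent."""
    return (v_max - v_min) / (v_max + v_min)

m = modulation_index(6, 1.2)           # oscilloscope divisions cancel out
print(round(m, 3), round(m * 100, 1))  # 0.667 and 66.7 (percent)
```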

2.4.3 The AM Equation

Now that the basic concept of amplitude modulation is clear, we can proceed with the mathematical explanation. Before deriving the AM equation, note that we consider the amplitude of the carrier signal to vary with time; the signals are therefore said to be in the time domain. Consider an information signal superimposed onto a high frequency carrier signal, as shown in Fig. 2.5.

Now consider a high frequency carrier signal, represented by Eq. (2.4):

vc = Vc sin 2π fc t        (2.4)

In the equation above,

Page 44: Copyrights @ Higher Education Commission

44 Chapter 2. Amplitude Modulation Demodulation

Figure 2.5: Information signal superimposed onto Carrier signal

vc = instantaneous value of the carrier sine wave voltage at some specific time in the cycle.
Vc = the constant peak value of the unmodulated carrier sine wave, measured between zero and the maximum amplitude of either the positive-going or negative-going alternation.
fc = the frequency of the carrier sine wave.
t = a point in time.

Similarly, the modulating or information signal is also a sine wave and can be given as:

vm = Vm sin 2π fm t        (2.5)

In the equation above:
vm = instantaneous value of the information or modulating signal at some specific time in the cycle.
Vm = the peak value of the information signal.
fm = the frequency of the information signal.

For the derivation of the amplitude modulation equation, note that the relative amplitudes of the carrier sine wave and the information signal matter, because the information signal uses the peak value of the carrier signal as its reference point instead of zero. Also, the concept of the envelope tells us that the envelope varies above (positive peak) and below (negative peak) the carrier sine wave. This means that the zero reference line of the information or modulating signal coincides with the peak of the unmodulated carrier. Generally, the amplitude of the modulating wave is kept less than the amplitude of the carrier, i.e.,

Vm < Vc        (2.6)

This is because if Vm exceeds Vc, distortion results. Therefore, to avoid this, the amplitude of the modulating wave is always kept less than the amplitude of the carrier wave. Speaking in terms of waveforms, the peak value of the modulating signal should be less than the peak value of the carrier. As discussed, the modulating signal uses the peak value of the carrier as its reference point, which implies that the value of the modulating signal is added to


or subtracted from the carrier sine wave. Let A be the instantaneous value at either the top or the bottom of the envelope. We can compute A as:

A = Vc + vm        (2.7)

Substituting the value of vm from Eq. (2.5):

A = Vc + Vm sin 2π fm t        (2.8)

Using Eq. (2.2), Eq. (2.8) can be written as:

A = Vc + mVc sin 2π fm t = Vc(1 + m sin 2π fm t) = Vc(1 + m sin ωm t)        (2.9)

Note that the product term (Vm sin 2π fm t)(sin 2π fc t), which appears when the expression is expanded in Eq. (2.12), must be present for amplitude modulation to occur. This also reflects the fact that AM is the product of the carrier and the modulating signal.

Eq. (2.9) expresses the fact that the instantaneous value of the modulating or information signal adds to the peak value of the carrier sine wave. The instantaneous voltage of the resulting amplitude-modulated wave is:

vAM = A sin θ = A sin ωc t        (2.10)

Substituting (2.9) into (2.10):

vAM = Vc(1 + m sin ωm t) sin ωc t        (2.11)

Therefore,

vAM = Vc sin ωc t + mVc sin ωm t sin ωc t        (2.12)

Using the trigonometric identity

sin x sin y = ½ [cos(x − y) − cos(x + y)]

the equation can be rearranged as:

vAM = Vc sin ωc t + (mVc/2) cos(ωc − ωm)t − (mVc/2) cos(ωc + ωm)t        (2.13)
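The product-to-sum step is easy to verify numerically: by the identity sin x sin y = ½[cos(x − y) − cos(x + y)], the product term in Eq. (2.12) splits into two half-amplitude sideband terms. A small sketch with illustrative frequencies and amplitudes of our own choosing:

```python
import math

# Check that the product form of the AM wave equals its sideband form.
Vc, m = 1.0, 0.5
wc = 2 * math.pi * 10e3   # carrier angular frequency (rad/s)
wm = 2 * math.pi * 1e3    # modulating angular frequency (rad/s)

for n in range(1000):
    t = n * 1e-6
    product = Vc * math.sin(wc * t) + m * Vc * math.sin(wm * t) * math.sin(wc * t)
    sidebands = (Vc * math.sin(wc * t)
                 + (m * Vc / 2) * math.cos((wc - wm) * t)
                 - (m * Vc / 2) * math.cos((wc + wm) * t))
    assert abs(product - sidebands) < 1e-9  # the two forms agree at every sample
print("product form and sideband form agree")
```

The agreement at every sample confirms that the modulated wave is exactly a carrier plus two sideband components of amplitude mVc/2 each.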

If you observe closely, the right-hand side of Eq. (2.13) contains the concept of sidebands in its second and third terms, as shown in Fig. 2.6. Two sidebands are obtained, lying above and below the carrier frequency. The sideband that lies above the carrier is called the upper sideband (USB) and the one that lies below it is called the lower sideband (LSB). The lower sideband is given by LSB = fc − fm; it is named so because it contains the range of frequencies that appear below the carrier frequency. Similarly, the upper sideband is given by USB = fc + fm, where fc is the frequency of the carrier sine wave and fm is the frequency of the modulating wave; it is named so because it contains frequencies that appear above the carrier frequency. These sidebands are important because they carry the actual information being transmitted.


Figure 2.6: Frequency Content of AM wave

R Note that the upper sideband and lower sideband are mirror images of one another, so both carry the same information.

It is readily apparent from the bar chart shown in Fig. 2.6 that, with amplitude modulation, the transmitted signal is actually a band of frequencies rather than just a carrier. The carrier itself contains no information; if we transmitted or received just the carrier, no information would be conveyed. In AM systems, both the carrier and the sidebands must be transmitted and received.

Example 2.3.

A sinusoidal carrier of frequency 1.1 MHz is amplitude modulated by a sinusoidal voltage of frequency 20 kHz. Work out the frequencies of the upper and lower sidebands.
Solution:
Given: fc = 1.1 MHz and fm = 20 kHz.
Upper sideband: USB = fc + fm = 1.1 × 10^6 + 20 × 10^3 = 1120 kHz
Lower sideband: LSB = fc − fm = 1.1 × 10^6 − 20 × 10^3 = 1080 kHz

2.4.4 Bandwidth of AM Signal

The bandwidth of an AM signal extends from the lowest sideband frequency to the highest sideband frequency. Therefore, the bandwidth is always twice the highest modulating frequency, i.e.,

BW = 2 fm

Thus, if the highest modulating frequency is 15 kHz, the bandwidth will be 30 kHz. In the case of a square wave, the bandwidth is twice the highest harmonic contained in the wave.

Example 2.4.


An AM station communicates modulating frequencies up to 20 MHz. If the AM station is communicating at a carrier frequency of 970 MHz, work out the total bandwidth occupied by the AM station.

Solution:
Upper sideband, USB = fc + fm = 970 + 20 = 990 MHz
Lower sideband, LSB = fc − fm = 970 − 20 = 950 MHz
BW = USB − LSB = 990 − 950 = 40 MHz, or BW = 2 fm = 2 × 20 = 40 MHz.
The bandwidth occupied by the AM station is 40 MHz.
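The sideband and bandwidth arithmetic used in Examples 2.3 and 2.4 is simple enough to capture in a small helper function. The sketch below is illustrative only; the function name is not from the text:

```python
def am_sidebands(fc_hz, fm_hz):
    """Return (USB, LSB, bandwidth) in Hz for tone-modulated AM."""
    usb = fc_hz + fm_hz        # upper sideband: fc + fm
    lsb = fc_hz - fm_hz        # lower sideband: fc - fm
    return usb, lsb, usb - lsb  # bandwidth = USB - LSB = 2*fm

# Example 2.4: fc = 970 MHz, fm = 20 MHz
assert am_sidebands(970e6, 20e6) == (990e6, 950e6, 40e6)

# Example 2.3: fc = 1.1 MHz, fm = 20 kHz
assert am_sidebands(1.1e6, 20e3) == (1.12e6, 1.08e6, 40e3)
```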

2.4.5 Power of AM Signal

We know that the modulated wave is composed of three sine components, namely the carrier, the upper sideband and the lower sideband. Thus, the total power in the modulated wave is

PT = PC + PUSB + PLSB

From Eq. (2.13), this can be written in terms of voltages as:

PT = VC²/R + VUSB²/R + VLSB²/R

where all three voltages are r.m.s. values and R is the resistance (e.g. the antenna resistance) in which the power is dissipated. With VC denoting the peak carrier amplitude,

PC = power of carrier = (VC/√2)²/R = VC²/2R

Similarly, since each sideband has peak amplitude mVC/2,

PUSB = PLSB = ((mVC/2)/√2)²/R = (m²/4)(VC²/2R)

so the combined sideband power is

PSB = 2 (m²/4)(VC²/2R)

By substituting these values, we get:

PT = VC²/2R + 2 (m²/4)(VC²/2R)

Since PC = VC²/2R,

PT = PC + 2 (m²/4) PC

PT = PC (1 + m²/2)

Hence,

PT/PC = 1 + m²/2 (2.14)
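Eq. (2.14) translates directly into a one-line function. The sketch below (function name assumed for illustration) checks it against Example 2.5 and against the 100%-modulation case noted in the remark that follows:

```python
import math

def am_total_power(pc_watts, m):
    """Total power of a tone-modulated AM wave: PT = PC * (1 + m^2/2), Eq. (2.14)."""
    return pc_watts * (1 + m ** 2 / 2)

# At 100% modulation (m = 1) the total power is 1.5 times the carrier power
assert am_total_power(1000, 1.0) == 1500.0

# Example 2.5: a 50 kW carrier modulated to 85% radiates about 68.06 kW in total
assert math.isclose(am_total_power(50e3, 0.85), 68062.5)
```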


Remark: It is interesting to note that the maximum power in an AM wave is PT = 1.5 PC when the wave is 100% modulated (m = 1).

Current in AM Wave

A situation that often arises in AM is that the modulated and unmodulated currents are easily measurable, and it is then easy to calculate the modulation index from them. Let IC be the carrier (unmodulated) current and IT the total, or modulated, current of an AM transmitter, both being r.m.s. values. If R is the resistance in which these currents flow, then

PT/PC = IT²R / IC²R = (IT/IC)² = 1 + m²/2

From this, we get

IT/IC = √(1 + m²/2) (2.15)

Example 2.5.

A 50 kW carrier is to be amplitude modulated to a level of 85%. What is the carrier power after modulation?

Solution: The total power is given by:

Pt = Pc (1 + m²/2) = 50 × 10³ (1 + 0.85²/2) = 68,062.5 W

Back-solving from the total power confirms the carrier power:

68,062.5 = Pc (1.36125), so Pc = 50,000 W = 50 kW.

The carrier power is unchanged by modulation and remains 50 kW.

2.5 Types of Amplitude Modulation

It has been discussed in the previous section that the overall amplitude modulation process generates sidebands: the upper sideband (USB) and the lower sideband (LSB). The frequency-domain spectrum of an amplitude modulated waveform is plotted with frequency on the x-axis and amplitude on the y-axis, with the carrier in the middle, the LSB to its left and the USB to its right. At full modulation the carrier contains about two-thirds of the transmitted power, yet it contains no information; the actual information is contained in the sidebands.

Based on how these sidebands are dealt with, to improve the efficiency of an amplitude modulated waveform, the overall amplitude modulation process can have the following types:


2.5.1 Double-Sideband Modulation

Double Sideband with Suppressed Carrier (DSB-SC)

Double sideband with suppressed carrier, DSB-SC for short, is an AM variant in which the sidebands produced by the AM process lie above and below the carrier frequency, but the carrier itself is suppressed. In other words, the carrier wave is not transmitted in DSB-SC. This allows the power to be distributed among the two sidebands alone, so the transmission of information is more efficient than with double sideband with carrier. DSB-SC therefore has the advantage of low power consumption. It is used in analog TV systems to transmit color information and in stereo FM radio. A disadvantage of DSB-SC is that it is normally a low-level modulation technique: it is generated at a low signal level and must then be amplified to a level suitable for transmission.
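The suppressed carrier can be verified numerically: multiplying the message directly by the carrier (a product modulator) leaves energy only at fc ± fm, with nothing at fc itself. A sketch with assumed illustrative values:

```python
import numpy as np

fs, N = 1_000_000, 1000            # sample rate and FFT length (1 kHz resolution)
fc, fm = 100_000, 5_000            # illustrative carrier and message frequencies
t = np.arange(N) / fs

m_t = np.cos(2 * np.pi * fm * t)               # single-tone message
dsb_sc = m_t * np.cos(2 * np.pi * fc * t)      # product modulator: no carrier term

spectrum = np.abs(np.fft.rfft(dsb_sc)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)

# Energy sits only at fc +/- fm; the carrier bin itself is (numerically) empty
assert spectrum[freqs == fc][0] < 1e-9
assert spectrum[freqs == fc - fm][0] > 0.2
assert spectrum[freqs == fc + fm][0] > 0.2
```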

Double Sideband with Reduced Carrier (DSB-RC)

Double sideband with reduced carrier, DSB-RC for short, is quite similar to DSB-SC in that the sidebands produced lie above and below the carrier. The difference is that the carrier is not fully suppressed but is reduced to a fixed level below that provided by the modulator.

2.5.2 Single Sideband Modulation

Single sideband modulation, SSB for short, is the AM variant that makes use of only one sideband. SSB makes even more efficient use of the power spectrum: the carrier is suppressed or significantly reduced, and one of the sidebands is removed. One sideband can be removed because both sidebands are mirror images of one another and carry the same information. This allows the entire transmitted power to be concentrated in a single sideband. Moreover, whereas amplitude modulation with the carrier and both sidebands occupies twice the bandwidth of the original baseband signal, suppressing the carrier and one sideband avoids this doubling of bandwidth. The forms of SSB modulation are discussed below:

• Single Sideband with Suppressed Carrier (SSB-SC): As the name implies, SSB-SC transmits a single sideband while the carrier and the other sideband are suppressed. Because the carrier is suppressed during transmission, it must be regenerated at the receiver so that the exact signal can be reproduced. It should be noted that slight errors while reinserting the carrier lead to changes in the reproduced signal; for example, if audio is transmitted using SSB-SC, a slight variation during carrier reinsertion may cause the pitch of the audio signal to be raised. SSB-SC has applications in two-way radio and frequency division multiple access.

• Single Sideband with Reduced Carrier (SSB-RC): As discussed above, SSB-SC requires the carrier to be reinserted at the receiver end, and slight variation during the reinsertion process may change the reproduced signal. This problem can be resolved by transmitting a small amount of the original carrier along with the signal, giving a form of SSB called single sideband with reduced carrier, or SSB-RC. The residual carrier allows the receiver to reproduce the signal exactly as it was at the transmitter.
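For a single tone, an SSB signal can be generated by the phasing method, which the text does not derive but which follows directly from the product identities used earlier: multiplying the message and its 90°-shifted copy by quadrature carriers and combining them leaves only one sideband. A sketch with assumed illustrative values:

```python
import numpy as np

fs, N = 1_000_000, 1000
fc, fm = 100_000, 5_000            # illustrative carrier and message frequencies
t = np.arange(N) / fs
wc, wm = 2 * np.pi * fc, 2 * np.pi * fm

# Phasing method for a single tone: message and its 90-degree shifted copy,
# each multiplied by one of two quadrature carriers, then combined
usb = np.cos(wm * t) * np.cos(wc * t) - np.sin(wm * t) * np.sin(wc * t)

# The sum-angle identity collapses this to a single component at fc + fm:
# the lower sideband (and the carrier) never appear
assert np.allclose(usb, np.cos((wc + wm) * t))
```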

2.5.3 Vestigial Sideband Modulation

The term vestigial in VSB says a lot about the scheme itself. In this modulation scheme one sideband is removed completely. The other sideband, however, is not transmitted in full but is filtered so that only a specific range of its frequencies is transmitted. This filtering is done to transmit the signal in the most efficient way. The need for vestigial sideband arises when both double sideband and single sideband fail to provide efficient use of power. Double sideband, as discussed earlier, does not make efficient use of the power spectrum because it transmits both sidebands, which is unnecessary since they are mirror images of one another and carry the same information. Single sideband, on the other hand, runs into difficulties when the information signal has a wide bandwidth or contains low-frequency components. Thus, VSB can be thought of as a solution to the problems raised by double sideband and single sideband modulation. Vestigial sideband modulation is used in TV signals.

2.5.4 Quadrature Amplitude Modulation

This form of AM is quite different from the others discussed so far. In QAM, two carriers shifted in phase by 90° are modulated. Since QAM both is an AM technique and involves a phase shift, it can be thought of as a mix of two modulation schemes, amplitude modulation and phase modulation (discussed in the next chapter). QAM exists in both analog and digital forms. With analog QAM, one can send two analog signals on one carrier. Digital QAM has applications in radio communication ranging from cellular systems such as LTE (Long Term Evolution) to wireless systems including WiMAX and Wi-Fi; it is also called quantized QAM. Digital QAM can carry higher data rates than simple amplitude or phase modulation. A disadvantage of QAM is that it is more prone to noise.

2.6 Amplitude Demodulation

The process of modulation, as discussed earlier, is employed to carry a low-frequency signal from transmitter to receiver: a sinusoidal carrier is modulated by the low-frequency signal. Once the signal arrives at the destination, the process must be reversed to extract the original data. To accomplish this, demodulation is employed. Demodulation, or AM detection, is the process by which the original information signal is extracted from the incoming modulated signal. Demodulation can be achieved by a number of methods, each having its own pros and cons.

2.6.1 AM Detection

Fig. 2.7 depicts the spectrum of an AM signal, which is helpful for understanding the demodulation process.

The form of an amplitude modulated waveform was discussed in detail in the previous sections. The AM waveform consists of a carrier frequency that acts as a reference and two sidebands, the upper sideband and the lower sideband, each the mirror image of the other. To demodulate, or detect, an AM signal, two steps are employed:

1. Recovering the baseband signal: the baseband signal can be obtained in a number of ways, but the simplest is to employ a simple diode to rectify the signal.

2. Filtering: the filtering step removes any unwanted high-frequency components left over from the rectification.
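These two steps can be sketched in software: a "diode" becomes a clamp to positive values, and the filter a crude moving average. This is an illustrative model with assumed parameter values, not a circuit design; the recovered envelope of a half-wave-rectified sine is scaled by 1/π:

```python
import numpy as np

fs, N = 200_000, 4000              # illustrative sample rate and signal length
fc, fm, m = 20_000, 500, 0.5       # carrier, message frequency, modulation index
t = np.arange(N) / fs

am = (1 + m * np.sin(2 * np.pi * fm * t)) * np.sin(2 * np.pi * fc * t)

# Step 1: "diode" rectification keeps only the positive half of the wave
rectified = np.maximum(am, 0.0)

# Step 2: a moving average over one carrier period removes the carrier ripple
taps = fs // fc
envelope = np.convolve(rectified, np.ones(taps) / taps, mode="same")

# The output tracks the (1/pi-scaled) message envelope, not the carrier
expected = (1 + m * np.sin(2 * np.pi * fm * t)) / np.pi
assert np.abs(envelope[taps:-taps] - expected[taps:-taps]).max() < 0.05
```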


Figure 2.7: Spectrum of an Amplitude Modulated, AM Signal

2.6.2 Superheterodyne Receivers

Invented around 1918, during the First World War, by the US engineer Edwin Armstrong, the superheterodyne receiver is used in almost all radio receivers. A superheterodyne receiver, sometimes called a superhet, utilizes frequency mixing to convert the received signal to an intermediate frequency. In fact, the literal meaning of heterodyning is to mix two different frequencies in such a way that they produce the difference between the two frequencies, commonly known as the beat frequency. The intermediate frequency is typically fixed and is more easily processed than the originally received carrier frequency.

Since in AM the carrier signal is mixed with the information signal, AM is itself in a sense a heterodyne process, and the sidebands produced in AM are the beat frequencies. In superheterodyning the difference (beat) frequency is used, creating a signal at a frequency lower than the original. Using a low frequency makes it possible to process the signal with simpler electronic components.
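Mixing can be demonstrated with a simple product of two cosines: the output spectrum contains the sum and difference (beat) frequencies. The sketch below uses assumed illustrative values, with the local oscillator placed 455 kHz above the received carrier (455 kHz being a common AM intermediate frequency):

```python
import numpy as np

fs, N = 10_000_000, 10_000         # illustrative sample rate and FFT length
f_rf = 1_000_000                   # received carrier
f_lo = 1_455_000                   # local oscillator, 455 kHz above the carrier
f_if = f_lo - f_rf                 # 455 kHz intermediate (beat) frequency
t = np.arange(N) / fs

# The mixer multiplies the received signal by the local oscillator
mixed = np.cos(2 * np.pi * f_rf * t) * np.cos(2 * np.pi * f_lo * t)

spectrum = np.abs(np.fft.rfft(mixed)) / N
freqs = np.fft.rfftfreq(N, 1 / fs)

# Mixing produces the sum and difference frequencies
assert spectrum[freqs == f_if][0] > 0.2
assert spectrum[freqs == f_rf + f_lo][0] > 0.2
```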

2.6.3 Balanced Modulator

From the discussion in the previous sections, we know that an amplitude modulated waveform consists of a carrier and two sidebands, and that the information is carried in the two sidebands. We further discussed various types of AM. To quickly recap: single sideband (SSB) modulation is where only one of the sidebands is transmitted; double sideband suppressed carrier (DSB-SC) is where both sidebands are transmitted and the carrier is suppressed or filtered out; double sideband with carrier (DSB-C), or full AM, is where the carrier and both sidebands are transmitted. The primary function of a balanced modulator is to suppress the unwanted carrier component of the AM wave, leaving behind the sum (upper sideband) and difference (lower sideband) frequencies at the output. Put simply, a balanced modulator produces an AM waveform that contains the two sidebands without the carrier. Moreover, since the sidebands are mirror images of one another, the output of the balanced modulator can be phase-shifted to suppress one of the sidebands, yielding a single sideband AM waveform.

Fig. 2.8 shows the block diagram of a balanced modulator. Two identical AM modulators are arranged in a balanced configuration to suppress the carrier. One input to each modulator is the carrier signal, represented as c(t) = Ac cos(2π fc t). The modulating signal m(t) is applied to the upper AM modulator, while the lower AM modulator receives m(t) with opposite polarity, i.e. −m(t). The output of the upper modulator


Figure 2.8: Block Diagram of a Balanced Modulator

is represented as:

s1(t) = Ac [1 + ka m(t)] cos(2π fc t)

The output of the lower modulator is represented as:

s2(t) = Ac [1 − ka m(t)] cos(2π fc t)

When s2(t) is subtracted from s1(t), the DSB-SC signal s(t) is obtained. The sum block in the diagram takes s1(t) as its positive input and s2(t) as its negative input to yield s(t):

s(t) = Ac [1 + ka m(t)] cos(2π fc t) − Ac [1 − ka m(t)] cos(2π fc t)

s(t) = 2 Ac ka m(t) cos(2π fc t)

The standard equation of DSB-SC is

s(t) = Ac m(t) cos(2π fc t)

Comparing the output of the sum block with the standard DSB-SC equation, a scaling factor of 2ka is obtained.
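The balanced-modulator cancellation above can be checked numerically. The sketch below models the two AM modulators and the sum block with assumed illustrative values, and confirms that the carrier cancels, leaving DSB-SC scaled by 2ka:

```python
import numpy as np

fs, N = 1_000_000, 1000
fc, fm = 100_000, 5_000            # illustrative carrier and message frequencies
Ac, ka = 1.0, 0.5                  # carrier amplitude and modulator sensitivity
t = np.arange(N) / fs

m_t = np.cos(2 * np.pi * fm * t)
carrier = np.cos(2 * np.pi * fc * t)

s1 = Ac * (1 + ka * m_t) * carrier   # upper AM modulator, driven by +m(t)
s2 = Ac * (1 - ka * m_t) * carrier   # lower AM modulator, driven by -m(t)
s = s1 - s2                          # sum block: the carrier terms cancel

# The result is DSB-SC with a scaling factor of 2*ka
assert np.allclose(s, 2 * Ac * ka * m_t * carrier)
```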

2.7 Pros and Cons of AM

By now we have a fairly clear picture of amplitude modulation. From what has been discussed so far, it is a very simple modulation technique that is easy to understand and implement and has a wide range of applications, and its advantages and disadvantages should already be apparent. The purpose of this section is to gather them in one place. Table 2.1 below shows the advantages and disadvantages of using AM:


ADVANTAGES:
• Quite simple to implement.
• AM receivers are inexpensive and consist of only a few components.
• An amplitude modulated waveform can be demodulated using very few components.

DISADVANTAGES:
• Makes inefficient use of power and thus has a small operating range.
• Requires a bandwidth equal to twice the highest audio frequency.
• AM detectors are sensitive to noise; therefore, an AM signal is prone to noise.

Table 2.1: Advantages and Disadvantages of AM

Chapter Summary

• Communication is the act of transferring information from one place to another. Telecommunication, on the other hand, is the act of conveying information using electronic means.
• A communication system consists of a transmitter that transmits the information, a channel, which is the path along which the data actually travels, and a receiver, which is the mirror image of the transmitter and is the destination of the information.
• The theory of the EM spectrum was put forward by the Scottish scientist James Clerk Maxwell, who showed that electric and magnetic fields combine to form electromagnetic waves.
• Frequency is the number of repetitive cycles of a wave that occur in a given time period.
• Wavelength defines the length of one cycle and is measured in meters.
• Frequency and wavelength are related by the equation c = f λ.
• The EM spectrum includes radio waves, microwaves, infrared waves, visible light, ultraviolet waves, X-rays and gamma rays.
• Radio waves have the longest wavelength and lowest frequency, whereas gamma rays have the shortest wavelength and highest frequency.
• There are three modes of communication: simplex, which is one-way; half duplex, which is two-way but not simultaneous; and full duplex, which is two-way and simultaneous.
• To carry data from one place to another some path is needed. Such a path is called the transmission path or transmission media.
• Transmission media can be guided or unguided.
• Guided media is also called wired transmission; signals in this type of transmission remain confined to the boundaries of the cable. Coaxial cable, twisted pair and optical fibre cable come under the umbrella of guided media.
• Unguided media, on the other hand, is the wireless transmission of data, which travels in free space. Radio waves, microwaves, terrestrial microwave and satellite microwave come under the category of unguided media.
• The field of telecommunications has developed greatly over the years, and with the widespread use of mobiles, the internet and computers, the telecommunication industry has a high demand for telecom engineers, technicians, technical writers, etc.


Multiple Choice Questions

1. Modulation is done in:
(a) Transmitter (b) Radio receiver (c) Between transmitter and radio receiver (d) None of them

2. In an AM wave, useful power is carried by:
(a) Carrier (b) Sidebands (c) Both (a) and (b) (d) None of them

3. In AM, the —————— of the carrier is varied according to the strength of the signal.
(a) Amplitude (b) Phase (c) Frequency (d) None of them

4. Quadrature Amplitude Modulation (QAM) is a combination of:
(a) ASK and FSK (b) ASK and PSK (c) FSK and PSK (d) None of the above

5. In an AM wave, the majority of the power is in the ——————
(a) Lower sideband (b) Upper sideband (c) Carrier (d) None of them

6. In a communication system, noise is not likely to affect the signal:
(a) At the transmitter (b) In the channel (c) In the information source (d) At the destination

7. If the modulation index is greater than 1:
(a) The baseband signal is not preserved in the envelope of the AM signal (b) The recovered signal is distorted (c) It is called over-modulation (d) All of them

8. In AM, which of the following conveys no information?
(a) Lower sideband (b) Carrier (c) Upper sideband (d) None of them

9. With AM, how much of the power is in the carrier?
(a) One-third (b) Three-quarters (c) Two-thirds (d) One-fourth

10. A 50 kW carrier is to be amplitude modulated to a level of 85%. What is the carrier power after modulation?
(a) 50 kW (b) 5 kW (c) 8 kW (d) 25 kW

11. An AM broadcast station is allowed to transmit modulating frequencies up to 10 kHz. If the AM station is transmitting at a frequency of 975 kHz, what is the total bandwidth occupied by the AM station?
(a) 10 kHz (b) 15 kHz (c) 20 kHz (d) Cannot be determined

12. The frequency of an AM radio signal whose wavelength is 96 m is
(a) 3125 kHz (b) 3.2 × 10^-7 Hz (c) 3125 MHz (d) None of them

13. The antenna length for a signal modulated at 10 MHz is
(a) 3700 meters (b) 375 meters (c) 37.5 meters (d) 3750 meters

Review Questions

1. Define modulation and state its importance.
2. Explain how the carrier varies in accordance with the information signal in AM.
3. What happens when the modulation percentage exceeds 100?
4. State and explain the AM working principle.
5. What is the ideal relationship between the modulating signal voltage Vm and the carrier voltage Vc?
6. State the names of the signals generated by the modulation process.
7. State the main difference between double sideband with suppressed carrier and double sideband with reduced carrier.
8. Single sideband makes efficient use of the power spectrum. Support this statement with a valid explanation. Also state how single sideband modulation is more beneficial than conventional AM.
9. Derive the equation for AM power.
10. How does the vestigial sideband modulation scheme make efficient use of power compared to the double sideband and single sideband modulation schemes?

Miscellaneous Questions

1. An AM wave displayed on an oscilloscope shows Vmax = 4.9 V and Vmin = 2.0 V. Compute the percentage of modulation.
2. Compute the amplitude of the modulating signal needed to obtain 70 percent modulation of a carrier of Vc = 45 V.
3. Work out the carrier power when a broadcast radio transmitter radiates 15 kW at a modulation percentage of 65.
4. A 300 W carrier is modulated to a level of 70%. Calculate the total power in the modulated wave.
5. A 45 kW carrier is to be modulated to a depth of 80%. What is the carrier power after modulation?
6. The antenna current of an AM transmitter is 7 A when only the carrier is sent, but it increases to 7.93 A when the carrier is modulated by a sine wave. Work out:
• the percentage of modulation;
• the antenna current when the modulation changes to 0.7.
7. An unmodulated carrier yields an antenna current of 2.5 A into an antenna with a resistance of 90 Ω. When the carrier is amplitude modulated, the antenna current rises to 2.7 A. What is the percentage of modulation?
8. Given the following frequencies, work out the wavelength in each case:
• 1535 kHz AM radio signal;
• 104.1 MHz FM radio signal;
• 1.90 GHz cell phone signal.
9. Work out the antenna length for each of the following signals if modulated at:
• 15 MHz
• 10 MHz
• 20 MHz
10. A 400 W, 100 kHz carrier is modulated to a depth of 60% by a modulating frequency of 1 kHz. Work out:
• the total power transmitted;
• the sideband components of the AM wave.
11. An AM station can transmit modulating frequencies up to 15 kHz. If the AM station is transmitting at a frequency of 970 kHz, work out:
• the sideband components of the AM wave;
• the total bandwidth.


Brain Buzzer

1. What are the bandwidth requirements of a voice signal of 2 kHz and of a binary data signal with a rate of 2 kHz?
2. For maximum amplitude of the transmitted information, what is the ideal percentage of modulation?
3. With AM, can the information or message signal have a higher frequency than that of the carrier signal?



3. Angle Modulation & Demodulation

Learning Focus

The chapter begins with an in-depth overview of the concept of 'modulation' and then proceeds to discuss angle modulation, which includes both frequency modulation and phase modulation. The chapter has the following objectives:

1. To acquaint the reader with the concept of frequency modulation and its working.
2. To familiarize the reader with the concept of frequency deviation.
3. To introduce the concepts of modulation index and percentage of modulation in the context of frequency modulation and phase modulation.
4. To acquaint the reader with the basics of phase modulation, its working and its mathematical derivation.
5. To introduce the concept of sidebands in FM and PM.
6. To describe the relationship between PM and FM.


3.1 Introduction to Angle Modulation

Before starting off with the details of frequency and phase modulation, let us have a quick recap of modulation. As discussed in Chapter 02, modulation is the process of superimposing a low-frequency signal, also called the baseband signal (such as audio, video or text), on a high-frequency signal called the carrier wave so that the information can be carried over longer distances. Two signals take part in the modulation process: the baseband signal, which is a low-frequency signal, and the carrier signal, which is a high-frequency signal. The waveform of an information signal is usually unpredictable. During the modulation process the information or baseband signal is superimposed on the high-frequency carrier wave, which simply carries it. As a result of the modulation process, certain characteristics of the carrier wave are varied in accordance with the information signal.

The process of modulation is employed because audio signals are usually low-frequency signals and thus cannot travel over long distances. To make them cover longer distances, they are sent along with a high-frequency signal which carries them, hence the name carrier signal.

Apart from the basics of modulation, Chapter 02 presented a brief description of the various modulation schemes available and focused mainly on amplitude modulation. This chapter focuses on the other variants of analog modulation, i.e. frequency and phase modulation, collectively known as angle modulation. Angle modulation is a non-linear analog modulation variant. It is employed because it is more resistant to noise and interference than amplitude modulation. Amplitude modulation provides only limited transmission capability, whereas angle modulation allows better transmission performance by making efficient use of the bandwidth. Although angle modulation provides improved transmission quality and resistance to noise, the overall design of the transmitter and receiver circuits becomes complex, making implementation more difficult. Furthermore, the spectrum of an angle modulated waveform is more cumbersome to analyze.

Remark: Generally, in angle modulation the frequency or phase of the carrier wave is modulated in accordance with the amplitude of the information signal.

3.2 Introduction to Frequency Modulation

As discussed earlier, the carrier signal is varied according to the information or baseband signal. These variations can appear as a change in the amplitude of the carrier wave (amplitude modulation, discussed in Chapter 02) or as a change in the frequency or phase of the carrier wave.

Frequency modulation, or FM (along with phase modulation), provides an alternative to amplitude modulation for superimposing the information signal on the carrier wave. Frequency modulation, as the name suggests, varies the frequency of the carrier signal in accordance with the amplitude of the information signal: the amplitude of the information or baseband signal dictates the change in the frequency of the carrier wave. It should be noted that the amplitude and the phase of the carrier wave remain constant during the frequency modulation process. Amplitude modulation had its impact for quite a long time; frequency modulation existed but stayed behind the curtains for many years, because it was believed that the narrower the bandwidth, the more noise- and interference-free the system. This myth kept amplitude modulation dominant for an extensive period of time.


However, the efforts of the American engineer Major Edwin Armstrong changed the scientists' minds. He did a great deal of practical as well as theoretical work on frequency modulation circuit design around 1935, and with these efforts he was able to come up with a noise-resistant type of radio transmission.

3.2.1 How Frequency Modulation Works

As discussed earlier, frequency modulation works by superimposing the baseband or message signal on the carrier wave in such a way that the frequency of the carrier varies according to the amplitude of the baseband or message signal. However, the working of FM entails more detail than just mentioned. The entire working of FM is as follows:

Figure 3.1: How frequency modulation works

The information or modulating waveform is shown in Fig. 3.1(A), while the unmodulated carrier is shown in Fig. 3.1(B). The resultant frequency modulated waveform is shown in Fig. 3.1(C).

As shown in the figure, at time T0, when the baseband signal is not yet applied to the carrier, the frequency of the carrier remains the same as before: the carrier is nothing but a constant-amplitude sine wave at its resting frequency. In other words, the modulated waveform appears at its center frequency. When the modulating signal, which is a low-frequency signal, is applied to the carrier wave, variations in the carrier frequency can then be seen.

The carrier reaches its maximum frequency when the modulating signal reaches its maximum amplitude, at time T1. At time T2, the modulating signal returns to 0 and the carrier returns to its center frequency. After T2, the modulating signal goes negative. This forces the carrier below its center frequency. The carrier returns to its center frequency when the modulating signal returns to 0 volts at time T4. Between time T4 and time T8 the modulating signal repeats its cycles. It is worth noting that the carrier returns to its center frequency each time the modulating signal passes through 0 volts.

To recap the entire process: a rise in the amplitude of the information signal causes a corresponding increase in the frequency of the carrier wave, i.e. the cycles of the carrier wave appear quite close to one another. When the information signal reaches its negative peak, there is a corresponding decrease in the frequency of the carrier wave, i.e. the cycles of the carrier wave appear far apart from one another. When no information signal is applied, the original carrier frequency is obtained at the output.

R Angle modulation is generally more resistant to noise as compared to amplitude modulation.

3.2.2 Frequency Deviation

From the process mentioned above, it can be observed that every time the baseband signal passes through 0 volts, the modulated waveform appears at its center frequency; i.e., the signal starts off at the resting position, goes all the way up, strikes back through 0 volts and then goes all the way down, causing corresponding changes in the frequency of the carrier wave.

The process is revisited here to set the basis for a very important concept related to frequency modulation, i.e. Frequency Deviation. We know by now that the carrier frequency varies equally above and below the center frequency. The amount by which this frequency changes is called the Frequency Deviation, denoted by δ.

This definition may sound quite vague at first glance. But if you focus for a moment on the two words separately, you will grasp the concept more easily. Frequency deviation, in its simplest terms, is the amount by which the carrier frequency is varied from its unmodulated value. Frequency deviation can also be thought of as the shift in the carrier frequency above or below the center frequency. For example, let's assume that a carrier continuously swings from 100 MHz down to 99.9 MHz, back to 100 MHz, then up to 100.1 MHz and back to 100 MHz. The frequency deviation is ±0.1 MHz, or ±100 kHz. It should be noted that the rate at which the frequency deviation occurs, called the deviation rate, is determined by the frequency of the modulating signal. For example, if the modulating signal is a 1-kHz audio tone, the carrier will swing above and below its center frequency 1000 times each second. A 10-kHz audio tone of the same amplitude will cause the carrier to deviate by the same ±100 kHz, but this time at a rate of 10,000 times each second. Thus, the frequency of the modulating signal determines the rate of frequency deviation, but not the amount of deviation. The amount by which the carrier deviates from its center frequency is determined by the amplitude of the modulating signal. A high-amplitude audio tone may cause a deviation of ±100 kHz; a lower-amplitude tone of the same frequency may cause a deviation of only ±50 kHz. Maximum frequency deviation occurs at the maximum amplitude of the modulating signal, and vice versa.
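The swing described above can be sketched numerically. Below is a minimal Python sketch using the illustrative figures from the example (a 100 MHz carrier, ±100 kHz deviation, a 1-kHz tone); the function name is my own.

```python
import math

fc = 100e6     # carrier center frequency, Hz (from the example above)
delta = 100e3  # frequency deviation, Hz -- set by the modulating AMPLITUDE
fm = 1e3       # modulating frequency, Hz -- sets the deviation RATE

def instantaneous_frequency(t):
    """Carrier frequency at time t for a single-tone message."""
    return fc + delta * math.cos(2 * math.pi * fm * t)

f_max = instantaneous_frequency(0.0)     # message at positive peak -> 100.1 MHz
f_min = instantaneous_frequency(0.5e-3)  # half a modulating period later -> 99.9 MHz
```

Doubling `delta` (a louder tone) widens the swing; doubling `fm` (a higher-pitched tone) only makes the same swing happen twice as often.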

R Thus, we can sum up the characteristics of an FM waveform as:
1. It is constant in amplitude but varies in frequency.
2. The rate of the carrier deviation is the same as the frequency of the modulating signal.
3. The amount of carrier deviation is directly proportional to the amplitude of the modulating signal.

Mathematical Representation of FM

From Fig. 3.2, it is seen that the instantaneous frequency f of the FM wave is given by:

f = fc(1 + k Vm cos ωmt)    (3.1)

Where,
fc = unmodulated carrier frequency
k = proportionality constant


Figure 3.2: Frequency vs. time in FM

Vm cos ωmt = instantaneous modulating voltage

Note: cosine is preferred for simplicity in calculations.

The maximum deviation for this particular system will occur when the cosine term has its maximum value, ±1. Under these conditions the instantaneous frequency will be:

f = fc(1 + k Vm)    (3.2)

So that the maximum deviation will be given by:

δ = kVm fc (3.3)

The instantaneous amplitude of the FM signal will be given by a formula of the form:

v = Asinθ (3.4)

Figure 3.3: Frequency modulated vectors

As Fig. 3.3 shows, θ is the angle traced out by the vector A in time t. If A were rotating with a constant angular velocity, for example ρ, this angle θ would be given by ρt (in radians). In this instance the angular velocity is anything but constant; it is governed by the formula for ω obtained from Equation 3.1. In order to find θ, ω must be integrated with respect to time. Thus,

θ = ∫ ω dt = ∫ ωc(1 + k Vm cos ωmt) dt    (3.5)


θ = ωc t + (k Vm ωc / ωm) sin ωmt    (3.6)

θ = ωc t + (k Vm fc / fm) sin ωmt    (3.7)

θ = ωc t + (δ / fm) sin ωmt    (3.8)

The derivation uses, in turn, the fact that ωc is constant, the formula ∫ cos nx dx = (sin nx)/n, and Eq. (3.3), which showed that δ = k Vm fc. Eq. (3.8) may now be substituted into Eq. (3.4) to give the instantaneous value of the FM voltage; therefore,

v = A sin(ωct + (δ / fm) sin ωmt)

v = A sin(ωct + mf sin ωmt)    (3.9)

It is important to note that as the modulating frequency decreases, while the modulating voltage amplitude (and hence the deviation δ) remains constant, the modulation index increases. This will be the basis for distinguishing FM from PM. Note that mf, being the ratio of two frequencies, is measured in radians.

Equation (3.9) can be expanded using Bessel functions, as:

v = A { J0(mf) sin ωct
      + J1(mf)[sin(ωc + ωm)t − sin(ωc − ωm)t]
      + J2(mf)[sin(ωc + 2ωm)t − sin(ωc − 2ωm)t]
      + J3(mf)[sin(ωc + 3ωm)t − sin(ωc − 3ωm)t]
      + J4(mf)[sin(ωc + 4ωm)t − sin(ωc − 4ωm)t] + ... }    (3.10)
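The coefficients Jn(mf) in (3.10) can be evaluated without a special-function library, using the integral form Jn(x) = (1/π) ∫₀^π cos(nθ − x sin θ) dθ. The sketch below is illustrative; the index mf = 2 is an assumed value:

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) via (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt, trapezoidal rule."""
    h = math.pi / steps
    total = 0.0
    for k in range(steps + 1):
        t = k * h
        w = 0.5 if k in (0, steps) else 1.0  # trapezoidal end-point weights
        total += w * math.cos(n * t - x * math.sin(t))
    return total * h / math.pi

mf = 2.0  # assumed modulation index
sidebands = [bessel_j(n, mf) for n in range(5)]  # carrier (n=0) and first 4 pairs
```

For mf = 2 the carrier term J0 ≈ 0.224 is actually smaller than the first sideband J1 ≈ 0.577 — the power has moved out of the carrier and into the sidebands.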

Example 3.1.

For the frequency modulated signal v(t) = 10 cos(10×10^8 t + 10 sin 1220t), find the carrier frequency and the modulation frequency.

Solution: The expression for an FM signal is:

v(t) = A cos(ωct + mf sin ωmt)

Carrier frequency:

ωc = 2π fc
fc = ωc / 2π = 10×10^8 / 2π = 159.2 MHz

Modulation frequency:

ωm = 2π fm
fm = ωm / 2π = 1220 / 2π = 194.2 Hz

The carrier frequency and the modulating frequency are 159.2 MHz and 194.2 Hz respectively.
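The arithmetic of Example 3.1 can be reproduced in a couple of lines of Python:

```python
import math

omega_c = 10e8    # rad/s, read off v(t) = 10 cos(10*10^8 t + 10 sin 1220t)
omega_m = 1220.0  # rad/s, the argument of the inner sine

f_c = omega_c / (2 * math.pi)  # carrier frequency, Hz  (about 159.2 MHz)
f_m = omega_m / (2 * math.pi)  # modulating frequency, Hz (about 194.2 Hz)
```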


3.2.3 Power in the FM Wave

The total power in an FM wave is distributed among the carrier and the sideband components. If we sum the power in the carrier and all the sidebands for any given modulation index, it will equal the total power of the unmodulated carrier. Thus, it can be shown that for an unmodulated carrier (mf = 0):

PT = (Vc)² / R

where,
PT = the total rms power of the unmodulated wave
Vc = the rms voltage of the carrier
R = the load resistance

For a modulated carrier, the total power is unchanged, since the amplitude of the FM wave (and hence its rms voltage) remains constant; the power is simply redistributed between the carrier and the sidebands.

Example 3.2.

Given the FM signal v(t) = 10 cos(15×10^8 t + 15 sin 3000t), find the total power dissipated by the FM wave in a resistor of 20 Ω.

Solution: The FM equation is given as:

v(t) = A cos(ωct + mf sin ωmt)

Comparing with the given equation, A = 10. The total power is given by:

P = (Vrms)² / R = (10/√2)² / 20

P = 2.5

The power dissipated by the FM wave is 2.5 W.
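The same computation in Python (the peak amplitude A = 10 and load R = 20 Ω come from the example; recall that rms = peak/√2 for a sinusoid):

```python
import math

A = 10.0  # peak amplitude from the given FM equation
R = 20.0  # load resistance, ohms

v_rms = A / math.sqrt(2)  # rms voltage of a sinusoid
P = v_rms ** 2 / R        # equivalently A**2 / (2 * R) -> 2.5 W
```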

3.2.4 Direct & Indirect FM Generation

FM modulators can be divided into direct FM modulators and indirect FM modulators. Direct FM involves changing or varying the frequency of the carrier directly with the modulating input. It generates a wideband FM signal by using a voltage-controlled oscillator. The name direct FM comes from the fact that the wideband FM is generated directly. The output frequency produced by the voltage-controlled oscillator is proportional to the input signal. The block diagram of direct FM generation is shown in Fig. 3.4:

Figure 3.4: Block Diagram of Direct FM Generation


The voltage-controlled oscillator receives the modulating signal m(t) as its input and produces a wideband FM (WBFM) wave.

Indirect FM generation works by varying the phase of the carrier based on the input. The name indirect comes from the fact that it produces the wideband FM wave by first producing a narrowband FM wave; frequency multipliers are then used to generate the wideband FM wave. The block diagram of indirect FM generation is shown in Fig. 3.5:

Figure 3.5: Diagram of Indirect FM Generation

The process proceeds in two stages. During the first stage, the modulating signal is fed into the narrowband FM modulator to generate an NBFM wave. It should be noted that the modulation index of narrowband FM is less than one. To obtain the required modulation index of greater than one, the frequency multiplier value is chosen appropriately.

If a narrowband FM wave with modulation index β < 1 is applied as the input to a frequency multiplier of factor n, the multiplier produces a wideband FM signal whose modulation index is n times β and whose carrier frequency is also n times that of the narrowband wave.
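A quick numeric sketch of the indirect (Armstrong-style) scaling described above; the NBFM index β, multiplier factor n, and NBFM carrier below are assumed, illustrative values:

```python
beta = 0.2      # NBFM modulation index (assumed; must be < 1)
n = 25          # overall frequency-multiplier factor (assumed)
f_nbfm = 200e3  # NBFM carrier frequency, Hz (assumed)

mf_wbfm = n * beta   # the multiplier scales the modulation index by n -> 5.0
f_wbfm = n * f_nbfm  # ...and scales the carrier frequency by n -> 5 MHz
```

In practice a mixer stage usually follows, to shift the multiplied carrier to the desired final frequency without altering the deviation.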

3.3 Introduction to Phase Modulation

Once the ideas of Amplitude Modulation and Frequency Modulation are clear, it is easy to figure out what Phase Modulation, or PM for short, will entail. Phase Modulation occurs when the phase of an unmodulated constant-frequency carrier wave is varied in accordance with the amplitude of the message signal. A change in the amplitude of the modulating signal dictates a corresponding change in the phase of the carrier wave. Stated simply, the more the amplitude of the message signal changes, the more phase shift can be seen in the carrier wave. At this point, you might be wondering how a change in the amplitude of the baseband signal causes a corresponding change in the phase of the carrier wave. The answer is quite simple: when the amplitude of the information signal goes high, or positive, there is a lagging phase shift in the carrier wave; similarly, when the information signal goes negative, there is a leading phase shift in the carrier wave. The process of phase modulation can be related to the process of frequency modulation in that the amount of phase lag, and hence the delay of the carrier output, increases as the modulating or baseband signal goes positive. This causes the constant-frequency carrier wave to appear as widely spaced cycles, i.e. as having a lower frequency. The reverse happens when there is a leading phase shift as the baseband or modulating signal goes negative: the constant-frequency carrier wave appears as compressed cycles, i.e. as having a relatively higher frequency. This concept may sound a little ambiguous to some readers, but try to grasp it for now; we will return to the relationship between FM and PM shortly, where the concept is discussed in much more detail.


3.3.1 Working of Phase Modulation

Up till now we have discussed that a change in the amplitude of the baseband or modulating signal causes a change in the phase of the carrier. This concept seems quite simple, but it still does not explain the entire working of the process. Moreover, many of us remain confused by the term phase, so let's first see what phase actually is. We know by now that the oscillating carrier wave is in the form of a sine wave. The amplitude of this wave starts from an initial point, goes all the way up (i.e. positive), then goes all the way down (i.e. negative) and returns to the starting point. This movement is termed one complete cycle of the sine wave; obviously, there are many other such cycles that make up the entire sine wave. This motion can be closely related to the movement of a point around a circle. The phase at a given point, in this case, is the angle between the initial or starting point and the point on the waveform. This explanation defines the concept of phase quite clearly, and we are now in a position to move on to the detailed working of phase modulation. For the sake of simplicity, we will consider a square wave as our modulating signal, because it then becomes quite easy to demonstrate the change in phase. Consider the modulating signal shown in Fig. 3.6:

Figure 3.6: Phase Modulation

From Fig. 3.6(a), it can be seen that there are variations in amplitude at times t1, t2, t3, t4 and t5. Now, keeping in mind that the phase of the carrier will change according to the amplitude of the modulating signal, let's take this discussion a step further. Between the initial point, i.e. t0, and t1, there is no change in the amplitude of the baseband signal; thus, there is no change in the phase of the carrier signal either (see Fig. 3.6(b)). At time t1, the amplitude of the modulating wave rises from 0 to E1. This causes a corresponding phase change in the carrier wave. This phase change stays until the amplitude is varied again at t2, rising from E1 to E2 and causing a corresponding phase change in the carrier wave.

The phase attained at t2 stays until the modulating signal goes negative at time t3 and its amplitude strikes E3. This causes the phase of the carrier wave to decrease from what was attained at t2. This happens because the amplitude of the modulating signal goes negative. The phase stays the same between t3 and t4. From t4, the amplitude of the modulating signal goes positive, i.e. rises up to E1, causing a corresponding change in the phase of the carrier. During the interval from t4 to t5, the phase remains constant as there is no change in the amplitude of the modulating signal. However, the amplitude decreases at time t5 and so does the phase of the carrier wave. Beyond t5, no change in the phase of the carrier is observed, because there is no change in the modulating signal.

3.3.2 Mathematical Representation of PM

Once the theoretical knowledge about phase modulation is clear, let's have a look at the mathematical description. Consider the equation below:

v = A sin(ωct + φ)

If the phase φ in the equation above is varied in such a way that its magnitude is proportional to the amplitude of the baseband signal, the resulting waveform is phase modulated. We can write a phase modulated waveform as:

v = A sin(ωct + φm sin ωmt)    (3.11)

where φm is the maximum value of the phase shift caused by a particular modulating or baseband signal. For PM, the modulation index, denoted by Mp, is proportional to the modulating signal amplitude, i.e.,

Mp = φm

Thus, we can write:

v = A sin(ωct + Mp sin ωmt)    (3.12)

3.3.3 Phase Deviation

Phase deviation is the maximum difference between the instantaneous phase angle of the modulated wave and the phase angle of the unmodulated carrier. Phase deviation is proportional to the modulating signal m(t).

3.4 Modulation Index in Angle Modulation

An important concept, the Modulation Index, was introduced in Chapter 02 for amplitude modulation. It should be noted that the term Modulation Index does not have a generic definition. For Amplitude Modulation, the modulation index was represented as the relationship between the amplitude of the modulating waveform and the amplitude of the carrier wave, and it was stated in Chapter 02 that the amplitude of the modulating waveform should be less than the amplitude of the carrier wave.

However, when speaking in terms of Angle Modulation, the definition of the modulation index changes. In the context of Angle Modulation, the modulation index is defined as the ratio of the frequency deviation to the modulating frequency, mathematically given as:

mf = δ / fm    (3.13)


Here δ is the frequency deviation; as defined in Section 3.2.2, it is the amount by which the carrier frequency is varied from its unmodulated value. While the modulation factor in AM is limited between 0 and 1, it must be emphasized that the modulation index in angle modulation can reach quite high numerical values.

A few communication systems, particularly those that are FM based, impose limits on both the frequency deviation and the modulating frequency. In FM broadcasting, the maximum allowed frequency deviation is 75 kHz and the maximum allowed modulating frequency is 15 kHz, yielding a modulation index of 5. In cases such as this, where limits are imposed on the maximum allowable frequency deviation and modulating frequency, the modulation index is called the deviation ratio. It should also be kept in mind that since the modulation index is simply a ratio, it has no unit.

Example 3.3.

An FM broadcast system allows a maximum deviation of 70 kHz, and the maximum audio frequency is 15 kHz. Work out the deviation ratio.

Solution: The deviation ratio is given by:

deviation ratio = fd(max) / fm(max)

deviation ratio = 70 / 15 = 4.66

The deviation ratio is 4.66.
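The deviation-ratio computations from this section in Python (both the 75/15 broadcast limits and the 70/15 figures of Example 3.3 appear in the text above):

```python
# FM broadcasting limits quoted in the text: 75 kHz deviation, 15 kHz audio
dr_broadcast = 75e3 / 15e3  # maximum deviation / maximum modulating frequency -> 5

# Example 3.3: 70 kHz deviation, 15 kHz audio
dr_example = 70e3 / 15e3    # -> about 4.67, a dimensionless ratio
```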

3.5 Sidebands in Angle Modulation

The discussion of sidebands in the context of Amplitude Modulation was presented in Chapter 02. Sidebands are also known as side frequencies. They occur when the carrier wave is modulated by the information or modulating signal. In the case of Amplitude Modulation, the modulation process yields two sidebands that lie above and below the carrier frequency. The sideband that lies above the carrier signal is called the upper sideband and the one that lies below is called the lower sideband. These side frequencies are represented as the sum and difference of the carrier and the modulating frequency. The reason for this recap is to refresh in the reader's mind the concept of sidebands in general, and of the sidebands that result from Amplitude Modulation, and so build the foundation for sidebands in angle modulation. It was discussed at the very start of the chapter that amplitude modulation provides a bandwidth of limited scope, whereas angle modulation overcomes this limitation by providing better transmission performance and making efficient use of the bandwidth. At this point, you might wonder how this is possible. The answer lies in the concept currently under discussion, i.e. sidebands. Sidebands in FM and PM also appear as the sum and difference of the carrier and the modulating frequency; the only difference is that several pairs of upper and lower sidebands are generated. Suppose the modulating frequency is 2 kHz; the first pair of sidebands is above and below the carrier by 2000 Hz. The second pair would be at 2 × 2000 Hz, i.e. 4000 Hz or 4 kHz, and so on. This group of upper and lower sidebands gives FM and PM a much wider bandwidth than AM. Not only this, these sidebands can also be made to produce a narrowband FM (still wider than AM).

If we keep on calculating the pairs of sidebands generated in this manner, the process would obviously continue indefinitely, which means a frequency modulated signal has an infinitely large bandwidth. This seems fine in a theoretical context, but it is clearly impractical to think that an infinitely large bandwidth exists.

The answer to this confusion is quite simple. The information is carried only in those sidebands that have the largest amplitudes. Any sideband whose amplitude is less than one percent of the unmodulated carrier is considered insignificant and can be ignored.

The number of sidebands generated and the spacing between them depend on two things: the frequency deviation and the modulating frequency (the amplitude of the carrier remains constant). The frequency deviation in turn depends on the amplitude of the modulating signal; a variation in the amplitude of the modulating signal causes the frequency deviation to change.
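The one-percent rule above can be applied numerically. The sketch below (illustrative, with assumed values mf = 5 and fm = 15 kHz, matching the FM-broadcast figures used earlier) counts the sideband pairs whose Bessel amplitude |Jn(mf)| is at least 1% of the unmodulated carrier, then takes the bandwidth as twice the highest significant pair number times fm:

```python
import math

def bessel_j(n, x, steps=2000):
    """J_n(x) via (1/pi) * integral_0^pi cos(n*t - x*sin(t)) dt (trapezoidal)."""
    h = math.pi / steps
    s = sum((0.5 if k in (0, steps) else 1.0) * math.cos(n * k * h - x * math.sin(k * h))
            for k in range(steps + 1))
    return s * h / math.pi

mf, fm = 5.0, 15e3  # assumed: modulation index 5, 15 kHz audio

# pairs whose amplitude is at least 1% of the unmodulated carrier
significant = [n for n in range(1, 20) if abs(bessel_j(n, mf)) >= 0.01]
bandwidth = 2 * max(significant) * fm  # Hz occupied by the significant pairs
```

For mf = 5 the highest significant pair is n = 8, giving a 240 kHz bandwidth — far more than the 2·fm = 30 kHz an AM signal with the same audio would need.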

3.6 Relationship Between PM and FM

It has been discussed in the sections above that Frequency Modulation and Phase Modulation are together called Angle Modulation. We have also discussed the working of both FM and PM in quite some detail: in FM, the amplitude of the carrier signal remains constant and the frequency of the carrier is varied in accordance with the amplitude of the baseband or message signal, whereas in PM, the phase of the carrier wave is varied as the amplitude of the modulating signal changes. The concepts of modulation index and sidebands associated with them have also been discussed in some depth. We are now in a position to discuss what actually relates PM and FM and why they are both called Angle Modulation. It should be noted that the frequency and the phase of any signal are closely related parameters. It can also be said that frequency is the rate at which the phase changes, i.e. frequency is the rate of change of phase. This statement clearly indicates that Frequency and Phase Modulation have many similarities and are therefore termed Angle Modulation. It is also quite possible to obtain frequency modulation from phase modulation, as in the Armstrong system. Phase modulation and frequency modulation are interconvertible: we can obtain a frequency modulated waveform by integrating the baseband or message signal m(t) with respect to time and using the resulting signal as the input to a phase modulator. Conversely, a phase modulated waveform can be obtained by differentiating the modulating or message signal m(t) with respect to time and using the resulting signal as the input to a frequency modulator. This suggests that the characteristics of phase modulation can be inferred from those of frequency modulation, and vice versa.
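The integrate-then-phase-modulate route described above can be verified numerically. The sketch below (all parameter values are assumed, for illustration) integrates a cosine message, feeds the running integral to a plain phase modulator, and compares the result against a directly generated FM wave of the same tone; the two agree to within the small error of the numerical integration:

```python
import math

fs, fc, fm = 48_000.0, 1_000.0, 50.0  # sample rate, carrier, message tone (assumed)
kf = 2 * math.pi * 200.0              # frequency sensitivity, rad/s per unit message (assumed)
N = 2000

message = [math.cos(2 * math.pi * fm * n / fs) for n in range(N)]

# running (rectangular) integral of the message signal
integral, acc = [], 0.0
for x in message:
    acc += x / fs
    integral.append(acc)

def phase_mod(phi, n):
    """A plain phase modulator: adds the phase offset phi to the carrier."""
    return math.cos(2 * math.pi * fc * n / fs + phi)

fm_via_pm = [phase_mod(kf * integral[n], n) for n in range(N)]

# direct FM of the same tone: the exact integral of cos is sin/(2*pi*fm)
fm_direct = [math.cos(2 * math.pi * fc * n / fs
                      + kf * math.sin(2 * math.pi * fm * n / fs) / (2 * math.pi * fm))
             for n in range(N)]

worst = max(abs(a - b) for a, b in zip(fm_via_pm, fm_direct))  # small residual
```

The residual comes only from approximating the integral by a running sum; a finer sample rate shrinks it further.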

FM is also said to be a variant of PM. This is because in FM the frequency deviation is proportional to the amplitude of the message or modulating signal. Furthermore, if the carrier frequency is related to a reference vector that rotates with a constant angular velocity, then the FM wave can be said to have a phase lead or a phase lag with respect to the reference, since its frequency oscillates between fc − δ and fc + δ.

After this entire discussion, you may wonder: if frequency and phase are so inter-related, why are they named differently, and what sets them apart? The two differ in the context of the Modulation Index (MI). The modulation index for frequency modulation is given by (3.13).

For FM, with the deviation δ held constant, the modulation index increases as the modulating frequency fm decreases. In the case of PM, the modulation index is directly proportional to the amplitude of the modulating waveform and is independent of the frequency of the modulating wave. This means that in PM the frequency deviation does not stay constant with modulating frequency as it does in FM.

If the carrier is modulated by a constant single-frequency tone, the difference between PM and FM cannot be seen, and therefore cannot be detected by the receiver. But such a carrier carries no information. Thus, as the modulating waveform varies, the difference between PM and FM becomes apparent.

To sum up:


1. With PM, the carrier frequency deviation is proportional to both the modulating frequency and the modulating amplitude.

2. With FM, the frequency deviation is proportional to the modulating amplitude, regardless of the signal's frequency.

3.7 FM & PM Demodulation

There are many different ways to demodulate an angle modulated signal. Some of these methods follow.

3.7.1 Slope Detector

The simplest FM/PM detector, called a slope detector, is shown in Fig. 3.7. The two parallel resonant tank circuits, L1C1 and L2C2, have an overall response curve like that shown in Fig. 3.7. Notice that the response curve does not peak at the center frequency of the IF signal; instead, the center frequency falls halfway up the side of the curve.

Figure 3.7: The slope detector and its response curve

When the IF signal is at its center frequency, a signal of average amplitude reaches diode D1. In effect, the tuned circuit changes the frequency modulated IF signal into a signal that varies in amplitude; that is, it changes the FM signal into an AM signal. The diode then detects the AM signal just like an AM detector.

However, this method requires taking into account the response curve of the IF amplifiers which precede this stage. Furthermore, any offset in the response curve is undesirable, because receiver gain would be reduced. It is also difficult to achieve a linear response during the FM-to-AM conversion, which limits the use of this technique in practical applications.


3.7.2 Ratio Detector

The ratio detector is shown in Fig. 3.8. This circuit is one of the most popular types of FM detectors because it has its own limiting action and does not require any preceding limiter stage.

Figure 3.8: The Ratio Detector

With D1 reversed, the two diodes are in series and conduct equally at the center frequency. A voltage builds up across the capacitors C4 and C5 in series.

The large-value capacitor C6 is the key to the whole circuit's operation. After several cycles of the input signal, this capacitor charges to a voltage proportional to the average received signal strength. C6 is large enough to keep the voltage across R2 constant, and it likewise keeps the voltage across the series combination of C4 and C5 constant, even under noisy conditions, making the ratio detector relatively insensitive to noise. At the center frequency, the sum of the voltages across C4 and C5 must always equal the voltage across C6.

3.7.3 Phase Locked Loop (PLL) Demodulator

Fig. 3.9 shows the basic phase-locked loop (PLL) block diagram. It consists of a phase detector, a voltage-controlled oscillator (VCO) and a low-pass filter, along with basic electronic components. The phase detector compares the VCO frequency with the input frequency; the VCO operates at the input frequency, i.e. the IF frequency in this case. An error voltage is produced depending on the amount and direction of the difference between the input and VCO frequencies. The error signal is then fed to the low-pass filter, which determines the frequency range over which the loop will acquire and hold its phase lock. The error voltage from the low-pass filter is then used to control the VCO.

The primary advantage of the PLL is its excellent performance at low cost, with the minimum of components. It also eliminates costly inductors and transformers, and it greatly simplifies tuning requirements.
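A working miniature of the loop can be sketched in a few lines. The version below (illustrative; the function name, gain, and test values are my own assumptions) runs at complex baseband with an ideal phase detector, whereas a hardware PLL uses the multiplier phase detector and low-pass filter described above. The per-sample correction plays the role of the error voltage and directly tracks the instantaneous frequency:

```python
import cmath

def pll_demod(iq, loop_gain=0.3):
    """First-order PLL on complex baseband samples.

    The NCO (software VCO) chases the input phase; the correction applied
    each sample is the 'error voltage', proportional to the instantaneous
    frequency in radians per sample.
    """
    nco_phase = 0.0
    out = []
    for z in iq:
        err = cmath.phase(z * cmath.exp(-1j * nco_phase))  # phase detector
        nco_phase += loop_gain * err                       # loop filter + NCO update
        out.append(loop_gain * err)                        # demodulated output
    return out

# a carrier offset by 0.05 rad/sample: the loop locks and reports the offset
tone = [cmath.exp(1j * 0.05 * n) for n in range(200)]
recovered = pll_demod(tone)
```

A larger `loop_gain` makes the loop acquire lock faster but track the input phase more nervously; the low-pass filter in a real PLL sets this trade-off.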

Figure 3.9: The Phase-Locked Loop (PLL) Detector

3.8 AM and FM: A Comparison

As we approach the end of the chapter, the basic concepts of the three variants of analog modulation should be quite clear. Chapter 02 was purely dedicated to the discussion of Amplitude Modulation, whereas this chapter has focused on the other two variants of analog modulation, Frequency Modulation and Phase Modulation. These three modulation schemes have a few things in common. In each, both the message or baseband signal and the carrier signal are analog in nature, hence the name analog modulation. Secondly, in all the variants of analog modulation, the modulating or baseband signal, which is a low-frequency signal, changes the amplitude, frequency or phase of the high-frequency carrier signal, corresponding to the three different modulation schemes, i.e. amplitude, frequency and phase modulation respectively. We have also discussed in previous sections that FM and PM are quite inter-related; this is not the case with AM. FM and AM are quite different from one another: AM works by changing the amplitude of the carrier sine wave in accordance with the amplitude of the message signal, while FM works by changing the frequency of a constant-amplitude carrier sine wave in accordance with the amplitude of the message signal. They differ in the following respects:

• Origin: AM was the first method introduced for audio transmission, carried out in the early 1900s, whereas FM was introduced in the 1930s by Edwin Armstrong.

• Amplitude Variations: Since the amplitude of a frequency modulated waveform remains constant, an amplitude limiter can be employed to suppress noise above the limiting amplitude. This reduces the overhead of eradicating noise from the received signal. An amplitude limiter cannot be employed for an amplitude modulated waveform, because the information is itself carried by the amplitude.

• Power Content: In FM, the major amount of the power content is contained in the sidebands, whereas the carrier has less power. Thus, in FM there is little loss of power, since most of the information is carried in sidebands that have more power content. This is the opposite of what happens in an AM system: in amplitude modulation, only one third of the total power resides in the sidebands, and two thirds of the power resides in the carrier, which contains no information.

• Noise Suppression: In an AM system, noise can be suppressed only by increasing the transmitted power of the signal, making the AM system costly. In FM, on the other hand, noise can be reduced or suppressed by increasing the frequency deviation.

• Interference: FM provides more resistance to interference by employing guard bands to separate the frequency channels. These guard bands do not allow any signal transmission; they simply act as a barrier between the signals in two adjacent channels. In AM systems there are no guard bands, hence the signal in one channel can interfere with the signal in the adjacent channel. The interference can only be suppressed if the signal strength of one channel is strong enough to suppress the other.

• Sidebands: An FM modulated waveform theoretically has an infinite number of sidebands, which makes the bandwidth of an FM waveform much larger than that of an amplitude modulated waveform, whereas in amplitude modulation the bandwidth is only twice the modulating frequency. This makes an FM system more costly than an AM system.

• Reception of Signals: AM signals have a wide receiving area. AM systems can be employed anywhere in the world without worrying about whether the signals will be received or not. On the other hand, the reception area of FM signals is limited; they are mostly employed in metropolitan areas.

Chapter Summary

• Angle modulation involves both phase and frequency modulation. Angle modulation is more resistant to noise and interference than amplitude modulation.

• In the Frequency Modulation scheme, or FM, the amplitude of the information signal dictates the change in the frequency of the carrier wave.

• During FM, the amplitude and the phase of the carrier wave remain constant.

• If the amplitude of the modulating signal is high, the carrier frequency increases. If the amplitude of the modulating signal is low, the carrier frequency decreases.

• Frequency deviation is the amount by which the carrier frequency is varied from its unmodulated value.

• In phase modulation, when the amplitude of the information signal goes high or positive, there is a lagging phase shift in the carrier wave. Similarly, when the information signal goes negative, there is a leading phase shift in the carrier wave.

• In the context of Angle Modulation, the modulation index is defined as the ratio of frequency deviation to modulating frequency, mathematically given as:

mf = δ / fm

• Sidebands in FM and PM also appear as the sum and difference of the carrier and the modulating frequency; the only difference is that several pairs of upper and lower sidebands are generated.

Multiple Choice Questions

1. Phase modulation and frequency modulation differ in:
(a) Different definitions of modulation index.
(b) Theory only; otherwise they are the same.
(c) The poorer radio response of PM.

2. In case of frequency modulation, the frequency deviation is:
(a) Zero
(b) Constant
(c) Proportional to the amplitude of the modulating signal
(d) Proportional to the modulating frequency

3. The frequency deviation of a phase-modulated signal depends on the modulating signal's:
(a) Frequency only.
(b) Amplitude only.
(c) Both (a) and (b).
(d) None of them.

4. Frequency modulation is preferred over amplitude modulation because:
(a) FM signals are more immune to noise.
(b) Amplitude limiters can be used to correct amplitude variations.
(c) FM systems have less adjacent-channel interference.
(d) All of them.

5. The ratio of actual frequency deviation to the maximum allowable frequency deviation is called:
(a) Percentage modulation
(b) Phase deviation
(c) Deviation ratio
(d) None of them

6. The modulation index of FM is given as:
(a) M = (Modulating Frequency) / (Frequency Deviation)
(b) M = (Frequency Deviation) / (Modulating Frequency)
(c) M = (Modulating Frequency) / (Carrier Frequency)
(d) M = (Carrier Frequency) / (Modulating Frequency)

7. Find the modulation index of a frequency-modulated signal, given that a carrier frequency of 100 MHz is modulated by a 10 kHz wave and the frequency deviation is 500 kHz.
(a) 100
(b) 70
(c) 50
(d) 60

8. Find the bandwidth of an FM wave, given that the maximum allowable deviation is 75 kHz and the frequency of the modulating signal is 10 kHz.
(a) 100 kHz
(b) 200 kHz
(c) 170 kHz
(d) The given information is not sufficient.

9. Given an FM signal:
v(t) = 15 cos(10×10^8 t + 10 sin 1220t)
The carrier frequency and the modulation frequency, respectively, are:
(a) 185.5 MHz and 200.15 Hz
(b) 159.1 MHz and 194.1 Hz
(c) 350.1 MHz and 200.1 Hz
(d) 159.1 Hz and 194.1 Hz

10. Find the frequency deviation for the following FM signal:
v(t) = 10 cos(6000t + 5 sin 2220t)
(a) 2200 Hz
(b) 6000 Hz
(c) 1750 Hz
(d) 1100 Hz
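For readers who want to check the arithmetic behind questions 7 to 10, the informal sketch below reproduces the calculations; note that the coefficients inside the cosine are angular frequencies in rad/s, so they are converted to hertz by dividing by 2π:

```python
import math

# Q7: FM modulation index m_f = deviation / modulating frequency.
m_f = 500e3 / 10e3
print(m_f)  # 50.0

# Q8: Carson's-rule bandwidth B = 2 * (deviation + modulating frequency).
bw = 2 * (75e3 + 10e3)
print(bw / 1e3)  # 170.0 (kHz)

# Q9: v(t) = 15 cos(10e8 t + 10 sin 1220 t); convert rad/s to Hz.
fc = 10e8 / (2 * math.pi)   # carrier frequency, ~159.15 MHz
fm = 1220 / (2 * math.pi)   # modulating frequency, ~194.2 Hz
print(round(fc / 1e6, 1), round(fm, 1))  # 159.2 194.2

# Q10: v(t) = 10 cos(6000 t + 5 sin 2220 t); deviation = m_f * f_m.
delta = 5 * (2220 / (2 * math.pi))
print(round(delta))  # ~1767 Hz, closest to the 1750 Hz option
```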

Review Questions

1. Elucidate the working principle of frequency modulation.
2. What happens to the amplitude of the carrier during: (a) frequency modulation, (b) phase modulation?
3. Why are FM and PM jointly known as angle modulation?
4. What are the disadvantages of FM over AM, and how can they be avoided?
5. List three practical applications of FM.
6. How does the modulation index in angle modulation differ from the modulation index in AM?
7. Elucidate the working principle of PM in your own words.
8. Elaborate, in your own words, the ways in which FM differs from PM.
9. State and explain the difference between modulation index and deviation ratio.

Numerical

1. A 300 Hz audio sine wave modulates a 20 MHz carrier. Given that the carrier voltage is 4 V and the maximum frequency deviation is 15 kHz, write the FM equation for the modulated wave.

2. Find the modulation index of the FM signal when:
(a) a 200 MHz carrier is modulated by a 10 kHz wave;
(b) a 110 MHz carrier is modulated by a 15 kHz wave.
Given that the frequency deviation is 500 kHz.

3. Given the FM signal v(t) = 30 cos(10×10^8 t + 5 sin 1550t), find:
(a) the modulation index;
(b) the maximum frequency deviation.

4. Given the FM signal v(t) = 20 cos(15×10^8 t + 5 sin 300t), find the total power dissipated by the FM wave in a resistor of 15 Ω.


5. For the FM signal given above, find:
(a) the carrier frequency;
(b) the modulation frequency.

Miscellaneous Questions

1. Define 'narrowband frequency modulation' and 'wideband frequency modulation'.
2. Elucidate the concept of guard bands in FM.
3. Define pre-emphasis and de-emphasis, and state the cut-off frequency of each.
4. What is Carson's Rule?

Brain Buzzer

1. Elucidate how pre-emphasis improves the performance of communication when noise is present.
2. Brainstorm and list practical applications of phase modulation.
3. Define the capture effect.
4. A 200 MHz carrier is frequency modulated by a 10 kHz wave. The frequency deviation is 110 kHz. Compute the carrier swing of the FM signal.

Bibliography

1. U. Madhow, Fundamentals of Digital Communication. Cambridge University Press, 2008.
2. L. E. Frenzel Jr., Principles of Electronic Communication Systems, 4th ed. ISBN-13: 978-0-07-337385-0.
3. J. G. Proakis and M. Salehi, Digital Communications. McGraw-Hill, 2007.
4. J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering. Wiley, 1965; reissued by Waveland Press, 1990.
5. B. P. Lathi, Linear Systems and Signals. Oxford University Press, 2004.
6. A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. McGraw-Hill, 1979.
7. R. E. Blahut, Digital Transmission of Information. Addison-Wesley, 1990.
8. J. D. Gibson, ed., The Mobile Communications Handbook. CRC Press, 2012.
9. G. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Technical Journal, vol. 1, no. 2, pp. 41–59, 1996.
10. K. Sayood, Introduction to Data Compression. Morgan Kaufmann, 2005.
11. D. P. Bertsekas and R. G. Gallager, Data Networks. Prentice Hall, 1991.
12. A. Kumar, D. Manjunath, and J. Kuri, Communication Networking: An Analytical Approach. Morgan Kaufmann, 2004.
13. S. K. Mitra, Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, 2010.
14. W. Tomasi, Electronic Communications System: Fundamentals Through Advanced, 5th ed. ISBN-13: 978-0-13-049492-4.
15. W. Hioki, Telecommunications, 4th ed. ISBN-13: 978-0130200310.
16. B. P. Lathi and Z. Ding, Modern Digital and Analog Communication Systems, 4th ed. ISBN-13: 978-0195331455.
17. I. A. Glover and P. M. Grant, Digital Communications, 3rd ed. ISBN-13: 978-0273718307.
18. S. Haykin and M. Moher, Communication Systems, 5th ed. ISBN-13: 978-8126521517.
19. K. Choi and H. Liu, "Amplitude Modulation," in Problem-Based Learning in Communication Systems Using MATLAB and Simulink. IEEE, 2016.
20. J. W. Leis, "Modulation and Demodulation," in Communication Systems Principles Using MATLAB. Wiley, 2019, pp. 155–267.
21. S. Haykin, Communications Systems. Wiley, 2000.
22. J. G. Proakis and M. Salehi, Fundamentals of Communication Systems. Prentice Hall, 2004.
23. M. B. Pursley, Introduction to Digital Communications. Prentice Hall, 2003.
24. R. E. Ziemer and W. H. Tranter, Principles of Communication: Systems, Modulation and Noise. Wiley, 2001.
25. J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication. Kluwer Academic Publishers, 2004.
26. S. Benedetto and E. Biglieri, Principles of Digital Transmission: With Wireless Applications. Springer, 1999.
27. R. W. Middlestead, "Amplitude Shift Keying Modulation, Demodulation and Performance," in Digital Communications with Emphasis on Data Modems: Theory, Analysis, Design, Simulation, Testing, and Applications. Wiley, 2017.


4. Digital Modulation & Demodulation

Learning Focus

Chapter 02 and Chapter 03 were solely dedicated to the discussion of the continuous wave modulation schemes. This chapter provides an in-depth discussion of the digital modulation schemes. Its objectives are to:

1. Build a foundation in the fundamentals of a digital communication system and its importance.
2. Familiarize the reader with the basic components of a digital communication system.
3. Familiarize the reader with the basics of digital signal processing.
4. Acquaint the reader with the concepts of serial and parallel transmission.
5. Familiarize the reader with various modulation schemes, such as:
(a) Analog-to-digital modulation schemes
(b) Digital-to-digital modulation schemes
(c) Line coding schemes


4.1 Introduction to Digital Communication

Modulation, as we know, is the process of sending information (usually a low-frequency signal) from one place to another by means of a high-frequency signal called the carrier signal. Our discussion so far, however, was confined to the analog modulation schemes: Amplitude Modulation (AM), Phase Modulation (PM) and Frequency Modulation (FM). Although amplitude modulation and angle modulation (frequency and phase) are poles apart from one another (frequency and phase modulation, too, have some minor differences), they all have one thing in common: both the message signal (baseband signal) and the high-frequency carrier signal are analog in nature. Such schemes are termed Analog Modulation schemes or Continuous Wave (CW) Modulation schemes. We also studied that by varying certain parameters of the analog carrier signal in accordance with the analog message signal, we could achieve amplitude and angle modulation. However, the classification of modulation schemes does not end here. Analog modulation schemes are the simplest of all the modulation schemes. The other set of modulation schemes, termed Digital Modulation schemes, is the focus of this chapter. Before proceeding to the details of the particular variants of the digital modulation schemes, let us first see what digital communication actually is.

4.1.1 What is a Digital Communication System?

The term digital communication is employed when referring to data that comes from computers, not necessarily your laptop or desktop but any kind of computing equipment. Information from computing devices usually creates a picture of data in the form of bits, or binary data. In this context, digital communication can be defined as "data communicated in the form of bits", or as "the electronic exchange of digitally encoded information from one place to another". However, it is not necessary that digital data always comes from a computing device. Analog data such as voice, video or audio can also be converted into a stream of bits by using appropriate modulation schemes, data compression or A/D conversion schemes.

4.1.2 Why Digital Transmission

In the world we live in, we see, hear and speak through the transmission and reception of analog signals; one can therefore conclude that the world around us is itself analog in nature. Digital signals, which are nothing but streams of 1s and 0s, are meaningless for human perception. So why is digital transmission preferred over analog transmission? The communication system receiver is essentially an estimating device: it does not know in advance what is being transmitted, and depending on the received signal characteristics it makes guesses about what could have been transmitted. Obviously, it is expected to make the right guess more than 90% of the time. This is a challenging task, but with digital systems it becomes simple. Unlike analog signals, which can take any of infinitely many waveforms due to their continuous-wave nature, a digital signal has only two possible waveforms, often represented by low and high voltage levels. In an analog system, the message (voice or video, say) produced by even a single individual can result in unique and infinitely many possible waveforms, making the job of the analog receiver extremely difficult. In a digital communication system, on the other hand, no matter who generates the data, and whether it is a voice or a video signal, it is first converted into bit form, restricting it to only one of two possible values before transmission. A digital communication receiver is therefore required to make a guess from only two possible waveforms. Furthermore, since a bit has a fixed shape, it can easily be regenerated after being corrupted during propagation through the channel, as shown in Fig. 4.1; this is not possible with analog signals.


Figure 4.1: Regeneration of a Corrupted Digital Signal

Earlier, it seemed that analog transmission was the prevailing kind of electrical communication. However, this was later proved wrong. The first applications of digital transmission were telegraphy, introduced in 1809, and teletypewriters, introduced in 1906; both made use of digital signals. As time passed, the need and demand for digital communication grew considerably. Demand increased further when engineers realized several differences between the two variants of communication. For example, when communicating data through analog means, repeaters (amplifiers) are employed to strengthen the signals so that they can cover longer distances. Although amplifiers worked quite well in making the data travel over a long communication path, they also strengthened the noise along with the original signal. In the late 1940s, however, it was seen that by using regenerative repeaters the digital signal could be recreated, error free, at appropriately spaced intervals. This allowed the noise to be completely removed from the communication path. To state it clearly, the following are the reasons which make digital transmission a much better choice for communicating data than analog means:

1. Error Detection and Correction: With digital transmission systems, it is much easier to identify the errors that occur during the transmission of data and to correct them at the same time. Digital transmission systems identify errors by making use of special circuitry; moreover, error detection schemes have been developed that can identify both the type of an error and its location.

2. Noise Resistance: Digital transmission systems are more resistant to noise than analog transmission systems. Analog systems use amplifiers to strengthen the signal, but these amplifiers also amplify the noise that comes along with the signal; this decreases the signal-to-noise ratio, making the entire transmission erroneous.

3. Greater Bandwidth: Transmission via digital means overcomes bandwidth limitations too. With digital transmission of data, the bandwidth is two or more times greater than the bandwidth with analog transmission.

R Essentially, the receiver of a digital communication system only has to make a choice from a set of two symbols, while the receiver of an analog communication system has to make a choice from a set of infinitely many symbols, making its job extremely difficult.


4.2 Elements of a Digital Communication System

If you recall, back in Chapter 1 we discussed the block diagram of a simple communication system. The same diagram (see Fig. 4.2) is reproduced here for reference:

Figure 4.2: Block Diagram of a Simple Communication System

A detailed discussion of all the components of the above block diagram was presented in Chapter 1. The reason for reproducing the diagram here is to lay the foundation for the block diagram of a digital communication system. A digital communication system, like every other communication system, will also have these fundamental units, but it will have certain additional components which set it apart from an analog communication system. This does not mean that the block diagram shown above refers to an analog communication system; as clearly stated in Chapter 1, it is the generic block diagram that acts as the building block for all communication systems. Now that we have recalled the basic block diagram of a generic communication system, let us move a step further. Fig. 4.3 shows the block diagram of a general digital communication system. This means that every communication system falling in the category of digital communication systems will have these components; however, additional components can be added depending on the specific type of digital communication system.

Figure 4.3: Basic Elements of a Digital Communication System

We will now discuss each section of the block diagram in detail.

1. Information Source and Input Transducer: The information source is usually analog in nature, such as voice or video, and has to be converted from analog to digital format before it can be accepted by a digital communication system. This block is therefore also sometimes called the Format block. The conversion of analog signals such as voice into electrical signals (digital format) is done through devices called transducers. A transducer is a device which converts one form of energy into another; for example, a voice signal is converted into an electrical signal through a transducing device, the microphone. In case the input is already in digital format, like an image from a digital camera, then no conversion is required. The process of analog-to-digital conversion is described in later sections.

2. Source Encoder: Source encoding performs analog-to-digital (A/D) conversion (for analog sources) and is required to serve two basic requirements:

(a) to remove redundant data (unneeded information) obtained from the information source after formatting, and
(b) to help achieve synchronization.

Representing the information using fewer binary digits allows the digital communication system to achieve higher transmission rates; it also reduces the error rate, reduces storage requirements and consumes less power to transmit, rendering the system energy efficient. Removing the unneeded information is conceptually equivalent to compression. Moreover, source encoding ensures that the encoded signal fluctuates rapidly between high and low states, which is key to synchronization.

R Since source encoders include analog-to-digital converters, they are not used together with Format blocks, because the former already include the essential step of digitizing the information.

Examples of source encoding techniques include the line coding, or digital-to-digital modulation, schemes discussed in a later chapter.

Exercise 4.1 Explain how source encoding helps in achieving synchronization, and how removing unneeded information can improve transmission rates, reduce the error rate, reduce storage requirements and consume less power to transmit, rendering the system energy efficient.

3. Channel Encoder: The channel encoder has a function complementary to that of the source encoder. That is, instead of removing extra bits, it is employed to insert, in a controlled manner, certain redundant bits into the information sequence. The reason for inserting redundant bits is to overcome the effects of noise and interference at the receiver during the transmission of information through the channel. Example 4.1 explains this concept further.

Figure 4.4: Example of Channel Encoding


Example 4.1

Consider the following packet to be transmitted over a noisy channel, with and without channel encoding. Fig. 4.4(a) and (b) show the two cases; we have taken the Hamming (even) parity bit technique as an example of a channel encoding technique. In the first case, a packet of five bits is transmitted over a noisy channel and bit four is corrupted on the receiver side. Without channel coding, there is no way for the receiver to know whether this bit is corrupted or not.

In the second case, shown in Fig. 4.4(b), with Hamming (even) parity bit channel encoding, an extra bit is added to the five data bits in the overall packet. The value of that bit depends on the number of 1s already in the data bits: if the number of 1s is even, the value of the parity bit is 0, as in this example; otherwise the value is 1. This ensures that every time a packet is transmitted, the number of 1 bits is always even. Therefore, when an erroneous packet containing an odd number of 1s is received, the receiver immediately knows that the data is corrupted and can request a retransmission.
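The even-parity mechanism of Example 4.1 can be sketched as follows; the helper names and the bit pattern are illustrative, not taken from the text:

```python
# Even-parity sketch: the parity bit makes the total count of 1s even, so a
# single flipped bit leaves an odd count that the receiver can detect.

def add_even_parity(data_bits):
    """Append a parity bit so that the packet's 1s count is even."""
    return data_bits + [sum(data_bits) % 2]

def parity_ok(packet):
    """A received packet passes the check if its 1s count is even."""
    return sum(packet) % 2 == 0

packet = add_even_parity([1, 0, 1, 1, 0])   # three 1s, so the parity bit is 1
print(packet)                                # [1, 0, 1, 1, 0, 1]
print(parity_ok(packet))                     # True

corrupted = packet[:]
corrupted[3] ^= 1                            # bit four is flipped in transit
print(parity_ok(corrupted))                  # False: receiver requests retransmission
```

Note that, as with any single parity bit, two flipped bits would cancel out and go undetected; the scheme detects odd numbers of errors only.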

4. Digital Modulator: This part is responsible for modulating the digital sequence using any one of the analog-to-digital or digital-to-digital techniques; i.e., it maps the information sequence into signal waveforms which are then passed through the channel. These techniques are discussed later in this chapter and in the next chapters.

4.3 Introduction to Digital Signal Processing

The analog-to-digital converted signals are analyzed and modified to optimize the performance of digital communication systems; this field is known as Digital Signal Processing. Digital signals are those that can only be in one of two possible states, 0 or 1. A common example is your fan or bulb: it is either in the ON state or the OFF state, and can never be in both states at the same time. Because of the finite set of waveforms available with digital signals, the processing becomes much easier and the communication system becomes much more accurate. For this reason most devices, such as computing devices (desktops, laptops, smartphones, tablets, phablets and calculators), are digital.

The sole purpose of digital signal processing is to take analog data, such as sound, video or images that have been digitized, and then perform high-speed data manipulation on this acquired data. An analog-to-digital converter is required to convert the analog information into bits. Once the required processing of the data is done, a digital-to-analog converter is required to convert the data back to analog form so that it can be represented in real time.

4.3.1 Key Components of a Digital Signal Processor

A digital signal processor is required to perform the digital signal processing. A digital signal processor is nothing but an integrated circuit that performs data manipulations, such as manipulating sound or an image, at a very high speed. Fig. 4.5 shows the key components of a digital signal processor:

1. Computing Engine: As the name suggests, the computing engine is used to perform all the computational tasks. It gets its data from the Data Memory and its instructions from the Program Memory, and it interacts with the outside world using I/O ports.


Figure 4.5: Key Components of a Digital Signal Processor

2. Data Memory: The Data Memory, as the name suggests, contains all the data to be processed. Instructions from the program memory are used to perform operations on the data memory.

3. Program Memory: A program is nothing but a set of instructions to carry out tasks. In this context, the program memory section of the digital signal processor contains all the tasks, or sets of instructions, that will be used for the manipulation of the data.

4. I/O: A system is said to be efficient if it can communicate with the outside world. In this context, a digital signal processor has an I/O section to interact with the data that comes from the external world.

4.3.2 Analog-to-Digital Conversion

The process of converting analog data into digital is called Analog-to-Digital conversion, or simply A/D conversion. Analog-to-digital conversion changes continuously varying data, such as voltage, temperature or current, into discrete digital quantities. The best example of analog-to-digital conversion can be seen in digital sound recording: the audio signals are converted into digital data, which is then recorded on some magnetic or optical tape. Once the data has been recorded, it must be converted back into analog form to be used by the sound system. The devices that accomplish the task of converting analog data into digital data are called Analog-to-Digital Converters, or ADCs for short. Similarly, Digital-to-Analog conversion, or D/A conversion, is the process of converting digital data into analog form. The most common example of digital-to-analog conversion is the MODEM, or dataset. MODEM is an acronym for Modulator-Demodulator. A MODEM converts the digital signals produced by a computer into analog signals to be carried by the telephone circuits; these signals are converted back to digital signals at the other end of the communication link.

The entire analog-to-digital conversion is a three-step process. The steps involved in convertinganalog signals into digital are sampling, quantization and encoding. Each step is discussed below:

Sampling

The very first step in A/D conversion is sampling of the analog signal. We know quite well that an analog signal is a continuously varying signal. Sampling an analog signal simply means measuring it at regular time intervals. During sampling, the instantaneous value of the analog signal is measured and a corresponding binary number is generated to represent that particular sample. Dividing an analog signal in such a way produces a series of binary numbers that represent the signal. Sampling may be achieved by passing the analog signal through a switch that turns on at regular intervals, as shown in Fig. 4.6.

Figure 4.6: Sampling

There are certain key points that should be considered while sampling an analog signal. The first important thing is that the samples taken should be equally spaced. The other important concept is the sampling frequency, denoted by f; the sampling frequency f is the reciprocal of the sampling interval t. In order to obtain the high-frequency information in the analog signal, a very important criterion, called the Nyquist criterion, should be followed.

Nyquist Criterion

The Nyquist criterion states that the minimum sampling frequency fs should be twice the highest frequency content of the analog signal. For example, if the analog signal has a maximum frequency fm of 2000 Hz, then to capture the maximum possible information it should be sampled at a rate of at least twice the maximum frequency, i.e. 4000 Hz. This minimum frequency is called the Nyquist frequency. The Nyquist criterion gives

fs = 2 fm (4.1)

Although the maximum possible information from the analog signal can in principle be obtained if it is sampled at a rate which is twice the highest input frequency, in practice it should be sampled at a much higher rate, usually 2.5 to 3 times more. In general,

fN ≥ 2B (4.2)

where B is the highest frequency component of the input analog signal. Suppose that an analog signal has a maximum frequency of 15 kHz. According to the Nyquist criterion, it should be sampled at 2 × 15 = 30 kHz. In practice, however, it should be sampled at a much higher rate, i.e. 3 to 10 times the maximum frequency.
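A minimal sketch of the rule above (the signal frequencies are example values, and the 2.5x practical margin is one of the factors the text mentions):

```python
# Nyquist criterion: the sampling rate must be at least twice the highest
# frequency component of the analog signal.

def nyquist_rate(max_signal_freq_hz: float) -> float:
    """Minimum sampling frequency f_s = 2 * f_m."""
    return 2 * max_signal_freq_hz

def practical_rate(max_signal_freq_hz: float, factor: float = 2.5) -> float:
    """In practice a margin of roughly 2.5x-3x the top frequency is used."""
    return factor * max_signal_freq_hz

print(nyquist_rate(15e3))    # 30000.0 Hz, i.e. 30 kHz for a 15 kHz signal
print(nyquist_rate(2000))    # 4000.0 Hz for a 2 kHz signal
print(practical_rate(2000))  # 5000.0 Hz with the 2.5x practical margin
```

Sampling below the Nyquist rate causes aliasing: high-frequency content folds back and appears as spurious lower frequencies.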

4.3.3 Quantization

Sampling can be thought of as dividing the signal along the horizontal axis, while quantization is the process of dividing the sampled signal along the vertical axis to assign it digital values. It is the process of adjusting the sampled values to the nearest available levels, and hence it results in the so-called quantization error. The quantization noise, or quantization distortion, is the difference between the actual analog value and the value assigned to it. The number of quantization levels q can only be a number equal to

q = 2^n (4.3)

where
q = number of quantization levels
n = length, in bits, of the binary code words

Fig. 4.7 shows the quantization process. Quantizing is achieved by rounding the signal to the nearest available value on the scale. It is natural to suspect that rounding or truncating the signal in this way clips off some of the information contained in the signal. The entire process of quantization results in a stepped waveform that is similar to the source signal.

Figure 4.7: Quantization

4.3.4 Encoding

Encoding is the process of assigning binary values to the quantized levels, as shown in Fig. 4.8. The encoding can be simple binary or Gray-code encoding.

Figure 4.8: Encoding
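The three steps of A/D conversion just described (sampling, quantization, encoding) can be strung together in a short sketch. The 3-bit word length, the signal frequency and the sampling rate below are arbitrary illustrative choices, not values from the text:

```python
import math

# End-to-end A/D sketch for one cycle's worth of a sine wave.
n_bits = 3
levels = 2 ** n_bits                 # q = 2^n quantization levels
f_signal = 100.0                     # Hz, highest frequency in the "analog" input
f_sample = 800.0                     # Hz, comfortably above the 200 Hz Nyquist rate

# Sampling: measure the signal at equally spaced instants k / f_sample.
samples = [math.sin(2 * math.pi * f_signal * k / f_sample) for k in range(8)]

# Quantization: map the [-1, 1] amplitude range onto the nearest of `levels` steps.
codes = [min(levels - 1, int((s + 1) / 2 * levels)) for s in samples]

# Encoding: represent each quantized level as an n-bit binary code word.
words = [format(c, f"0{n_bits}b") for c in codes]

print(words)
```

Increasing `n_bits` shrinks the step size, and hence the quantization error, at the cost of more bits per sample.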

4.4 Serial and Parallel Transmission

Chapter 1 presented a great deal of information about data transmission. Data transmission, as the name suggests, is nothing but the exchange of data from one point to another. However, there are a few important things that should be considered while exchanging data between entities:

1. Direction of Transfer
2. Transmission Media


3. Method of Transfer
4. Synchronization

We have already discussed the direction of transfer and the transmission media in Chapter 1. The direction of data transfer simply refers to whether the communication takes place simultaneously (full duplex), the two parties take turns while transmitting data (half duplex), or the communication is initiated by a single entity while the other only receives (simplex). Transmission media, too, were discussed in quite some detail: the transmission medium is the path taken by the data, and it can be guided, i.e. the data is transmitted over some cable, or unguided, i.e. the data travels without being confined by the walls of a cable. So we now have a clear concept of the direction of transfer and the transmission media. The question to ponder is: how exactly is the data sent from one place to another? How exactly do your printer or the peripherals attached to your device send and receive information? There are only two ways to accomplish this: Serial Transfer of Data and Parallel Transfer of Data. Peripherals are attached to your machine via a physical cord or a wireless connection; the physical cord, for example, is the transmission path through which the signals are sent and received. Communication between your peripherals and your machine occurs when the computer sends electronic pulses to the attached device, and the attached device, too, sends signals back to your personal computer (PC). These electronic pulses are nothing but the actual message carried back and forth between your computer and the attached device. The way these signals are transmitted, i.e. either serially or in parallel, depends on the type of peripheral attached to your computer.

4.4.1 Serial Transmission of Data

What does your mind picture when you hear the word serial, or series? It might depict things moving one after another in a row, similar to a long queue of people at, for example, a ticket counter or a market. The same concept applies to serial data transfer in communication: each bit of data is transferred one after another from source to destination. Consider a stream of data represented by a series of 0s and 1s, as shown in Fig. 4.9:

Figure 4.9: Serial Transmission of Data

The serial interface transmits one bit at a time, starting from the Least Significant Bit (LSB) position of the bit stream. As indicated in Fig. 4.9, the bit in the LSB position is at the extreme right. The Most Significant Bit (MSB), that is, the bit on the extreme left of the bit stream, is transmitted after the LSB.
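The LSB-first ordering can be sketched as a short helper (hypothetical, not from the text); the byte 10011101 matches the bit stream used in the parallel-transfer example:

```python
# Serializing a byte LSB-first: shift the byte right one position per bit
# and mask off the lowest bit, so the rightmost (LSB) bit comes out first.

def serialize_lsb_first(byte: int):
    """Return the 8 bits of `byte`, least significant bit first."""
    return [(byte >> i) & 1 for i in range(8)]

# 0b10011101: the LSB-first order starts from the rightmost bit.
print(serialize_lsb_first(0b10011101))  # [1, 0, 1, 1, 1, 0, 0, 1]
```

Reversing this list gives the MSB-first order 1, 0, 0, 1, 1, 1, 0, 1, i.e. the bits as written.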


4.4.2 Parallel Transmission of Data

As is quite evident from the name itself, the data in this case is transmitted all at once. It is natural to expect that parallel data transfer is much faster than serial transfer of data.

Unlike serial transfer of data, which carries the data stream on a single wire, parallel transfer of data takes place over multiple wires, with each wire carrying a single bit of the data stream. So, for example, if we have the bit stream 10011101, then the transmission of this data stream will require an 8-wire cable, with each wire carrying a single bit. Fig. 4.10 shows the data to be sent from sender to receiver.

Figure 4.10: Parallel Data Transfer

Although parallel transfer allows data to be communicated simultaneously, it is not a good choice for transmitting data over longer distances. The reason for this limitation is quite obvious: parallel transfer requires one wire to carry each bit of the data stream. Imagine your data stream has hundreds and thousands of 1s and 0s and your target destination is miles away. Would it be wise to lay hundreds and thousands of wires to transmit a single data stream, given that there will be hundreds of other, similar data streams to transfer? Obviously, communicating data by parallel means in such a scenario is quite impractical and costly. Apart from being expensive, there would be a lot of distortion and interference among the wires, corrupting the information being transferred. Thus, parallel data transmission serves best when the source and destination are a few meters apart. For example, communication between your computer and printer can take place via parallel transfer.
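A minimal sketch of the one-bit-per-wire idea, with illustrative wire names rather than any real bus API:

```python
def parallel_transfer(byte_value, width=8):
    """Present every bit of the word simultaneously, one bit per wire."""
    return {f"wire_{i}": (byte_value >> i) & 1 for i in range(width)}

# All eight bits of 10011101 appear on the bus in the same clock cycle,
# so an 8-bit word needs an 8-wire cable.
lines = parallel_transfer(0b10011101)
```

One dictionary here models one clock cycle; a long-distance link would need this many physical wires for every cycle, which is exactly the cost argument made above.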

4.5 Analog-to-Digital (Pulse) Modulation

Chapters 2 and 3 were solely dedicated to modulation schemes in which both the input and the output signals are analog in nature. We have other variants too. This chapter focuses on modulation schemes that take an analog input signal and produce a modulated signal in digital form, hence called Analog-to-Digital modulation schemes. Such schemes are also commonly known as Pulse Modulation schemes, because their output takes the form of digital pulses. This section is confined to the discussion of Analog-to-Digital modulation techniques.

The concept of Analog-to-Digital modulation schemes is quite evident from the name. These modulation schemes have the message or information signal in analog form, and the carrier signal


is in digital form. As discussed in the previous section, in order to convert an analog signal into a digital signal, the analog signal must be sampled at regular time intervals. Analog-to-Digital modulation schemes usually go by the name Pulse Modulation schemes.

The modulation schemes falling in this category modulate a carrier that is usually represented by a train of pulses, hence the name pulse modulation schemes. The basic idea of pulse modulation is quite straightforward: a continuously varying signal modifies certain parameters of the carrier, which is represented by a train of pulses. The Nyquist criterion must be satisfied in order to reconstruct the original signal. Transmitting information via a series of pulses keeps distortion at the receiver output to a minimum. It also allows the average power of the carrier to remain low even when high peaks of power are involved.

Why Pulse Modulation?

The following reasons make pulse modulation schemes a better choice than analog modulation schemes in most applications:

1. Information carried in binary form is more resistant to noise, and a degraded signal can be regenerated.

2. These modulation schemes allow the clipping of noise that gets added to the binary signal during transmission.

3. These modulation schemes allow degraded signals to be reshaped using simple circuits.
4. These modulation schemes avoid the need to generate power continuously.

Types of Pulse Modulation Schemes

Pulse modulation schemes can be divided into two types: Analog Pulse Modulation schemes and Digital Pulse Modulation schemes. Each of these variants is further classified, as Fig. 4.11 depicts:

Figure 4.11: Classification of Pulse Modulation Schemes

The Analog Pulse Modulation schemes can be further classified into:
1. Pulse Amplitude Modulation


2. Pulse Duration Modulation
3. Pulse Position Modulation
On the other hand, Digital Pulse Modulation schemes include:

1. Pulse Code Modulation
2. Differential PCM

4.5.1 Pulse Amplitude Modulation

Pulse Amplitude Modulation (PAM) is the simplest form of pulse modulation. The information signal in this case is analog, and the carrier is represented by a train of pulses. The basic idea of this modulation is somewhat similar to amplitude modulation. If you recall, in amplitude modulation the analog information signal varies the amplitude of the analog carrier signal. Similarly, in pulse amplitude modulation the analog signal is first sampled at regular intervals of time, and the amplitude of each carrier pulse is then made proportional to the corresponding sample (see Fig. 4.12). Simply put, the amplitude of the carrier wave, which is a train of pulses, is varied in accordance with the amplitude of the sampled analog signal. Just as the frequency and phase of the carrier remain constant in amplitude modulation, the position and duration of the pulses remain unaltered in PAM. PAM imposes certain restrictions, somewhat similar to those of amplitude modulation:

Figure 4.12: Analog Pulse Modulation Schemes

1. It is difficult to remove noise from a pulse amplitude modulated waveform, because doing so can destroy the actual information.

2. The transmission bandwidth of a pulse amplitude modulated waveform is quite large.
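The sample-then-scale idea behind PAM can be sketched in a few lines of Python. The sampling rate and the message tone are assumed values chosen purely for illustration:

```python
import math

fs = 8   # assumed sampling rate: 8 samples per message period
fm = 1   # normalized message tone frequency

# Sample the analog tone at regular instants n/fs ...
samples = [math.sin(2 * math.pi * fm * n / fs) for n in range(fs)]

# ... and let each sample set the height of one carrier pulse.
# Pulse position and width stay fixed; only the amplitude varies.
pam_amplitudes = [round(s, 3) for s in samples]
```

Each entry of `pam_amplitudes` is the height of one transmitted pulse; a real modulator would also fix the pulse width and spacing, which this sketch leaves implicit.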

Some of the applications of PAM include:

Ethernet: Ethernet, a computer networking technology used in Local Area Networks, makes use of Pulse Amplitude Modulation. In particular, 100BASE-TX, a Fast Ethernet physical layer standard, and the BroadR-Reach Ethernet physical layer technology make use of three-level PAM.

Electronic Drivers for LED Lighting: PAM is used to control LEDs in lighting applications. PAM LED drivers are adept at synchronizing pulses across multiple LED channels, enabling seamless color matching.

Digital Television: The Advanced Television Systems Committee (ATSC) standard for digital television makes use of PAM to broadcast the data that makes up the television signal.


4.5.2 Pulse Duration Modulation

Pulse Duration Modulation, or PDM for short, is also called Pulse Width Modulation (PWM) or Pulse Length Modulation (PLM). As the name suggests, this modulation scheme alters the width of the carrier pulses in accordance with the amplitude of the analog signal. Unlike a PAM wave, a PWM wave has only two levels. When the amplitude of the analog signal goes high, a corresponding widening of the carrier pulses can be seen; in other words, at high signal voltages the pulses get wider. Conversely, when the amplitude of the modulating signal decreases, the pulses become narrower. Because of this narrowing and widening, a pulse duration modulated waveform has varying power content. Although a PWM waveform provides better noise immunity, it requires a much larger bandwidth than a PAM waveform. Fig. 4.12 depicts a waveform modulated by pulse duration modulation: a change in the amplitude of the message signal causes a corresponding change in the width of the carrier pulses. The pulses get wider during the positive peaks of the modulating signal and narrower during its negative peaks.
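The amplitude-to-width mapping can be sketched as follows. The 10-slot resolution per pulse period and the normalized input range are assumptions made for illustration:

```python
def pwm_pulse(amplitude, slots=10):
    """One carrier period: `width` high sub-slots followed by low sub-slots.

    `amplitude` is a sampled message value, assumed normalized to [0, 1].
    """
    width = round(amplitude * slots)
    return [1] * width + [0] * (slots - width)

# A high sample widens the pulse; a low sample narrows it.
wide, narrow = pwm_pulse(0.8), pwm_pulse(0.2)
```

Note that the output only ever takes the two levels 1 and 0; the information lives entirely in how long the level stays high.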

4.5.3 Pulse Position Modulation

In this case the position of the pulses is varied in accordance with the amplitude of the modulating wave, as can be seen in Fig. 4.12. PPM provides high noise immunity, but its generation and detection are complex.

4.5.4 Pulse Code Modulation

Pulse Code Modulation, also known as PCM for short, was introduced in 1939 and is the most widely used digital technique for communicating analog data. It is a standard technique in digital compact discs, digital audio and digital telephony. PCM is not, in principle, a unique technique in itself; it can be considered an extended version of pulse amplitude modulation. A PCM signal is obtained by sampling, quantizing and encoding a PAM signal. Fig. 4.13 shows the block diagram of Pulse Code Modulation. PCM signals can be generated by using a sample-and-hold circuit to sample the data, followed by an A/D converter to convert the analog signal into a series of binary words.

Figure 4.13: Block Diagram of Pulse Code Modulation

All frequencies above and below the desired frequency range are first filtered out of the analog signal. The lower frequencies are filtered to remove electrical noise, and the upper frequencies are filtered because representing them would require extra bits and make the design of the system costly. In order to


transmit the modulating signal, periodic samples of the modulating waveform are taken, and only those samples are transmitted. To elaborate, sampling is nothing but the process of reading or measuring the value of the filtered analog signal at a constant sampling frequency.
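The sample → quantize → encode chain can be sketched in Python. The 3-bit resolution, the mid-tread mapping and the test tone are all illustrative assumptions, not the book's worked example:

```python
import math

def pcm_encode(samples, bits=3):
    """Quantize samples (assumed in [-1, 1]) to 2**bits levels and
    encode each level as a fixed-length binary word."""
    levels = 2 ** bits
    words = []
    for s in samples:
        level = min(int((s + 1) / 2 * levels), levels - 1)  # quantize
        words.append(format(level, f"0{bits}b"))            # encode
    return words

# One cycle of a sampled tone, PCM-encoded into 3-bit words.
tone = [math.sin(2 * math.pi * n / 8) for n in range(8)]
codewords = pcm_encode(tone)
```

With only 8 levels the quantization error is visibly coarse; real telephony PCM uses 8 bits (256 levels), which is the figure the companding section below builds on.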

4.5.5 Differential Pulse Code Modulation

Differential Pulse Code Modulation (DPCM) is quite similar to ordinary Pulse Code Modulation; however, it works on the principle of prediction. As the name suggests, the binary words in a DPCM system indicate the difference in amplitude between the current sample and the previous sample. This is a very common technique in speech coding, where the value of the next sample is predicted from the previous samples.

The analog signal in DPCM is sampled, and then the difference between the actual value of the sample and its predicted value is quantized and encoded. The predicted value is based on the value of the previous sample or samples. Fig. 4.14 depicts the working of differential pulse code modulation.

The input signal is sampled at a constant sampling frequency. The samples are then modulated using Pulse Amplitude Modulation, and the sampled input signal is stored in the predictor. The predictor feeds the stored sample to the differentiator, which, as the name suggests, takes the difference between the current input sample and the previous input sample and then sends the output to the quantizing and coding stages of the PCM chain. At the receiver end, the difference signal is de-quantized and added to the sample stored in the predictor, and the result is sent to a low-pass filter that reconstructs the original signal.

Figure 4.14: Block Diagram of DPCM
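The encode/decode loop described above can be sketched with the simplest possible predictor, the previous reconstructed sample. The step size is an assumed design parameter:

```python
def dpcm_encode(samples, step=0.1):
    """Transmit only the quantized difference from the predicted value."""
    prev, diffs = 0.0, []
    for s in samples:
        d = round((s - prev) / step)   # quantize the prediction error
        diffs.append(d)
        prev += d * step               # track what the decoder will rebuild
    return diffs

def dpcm_decode(diffs, step=0.1):
    """Rebuild the signal by accumulating the transmitted differences."""
    prev, out = 0.0, []
    for d in diffs:
        prev += d * step
        out.append(round(prev, 3))
    return out

encoded = dpcm_encode([0.0, 0.3, 0.5, 0.4])
decoded = dpcm_decode(encoded)
```

The encoder deliberately accumulates the same quantized values the decoder will see, so quantization errors do not pile up between the two ends.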

4.5.6 Delta Modulation

PCM systems are costly and quite complex, in that they require several functional components at both the transmitter and the receiver end. To overcome this, communication systems offer an alternative approach which uses much less circuitry and is cost-efficient. This technique is known as Delta Modulation, or DM for short. Delta Modulation works on the principle that instead of quantizing the absolute value of the input analog signal, DM quantizes the difference between the current and the previous step. Delta modulation is basically 1-bit DPCM, and is sometimes called 1-bit modulation. In its simplest form, DM sends one bit per sample, and the polarity of the signal indicates whether the signal is larger or smaller than the previous sample. Simply put, the value of the present sample is compared to the previous sample, and an indication of whether the amplitude has increased or decreased is sent. The following two points should be noted for DM:

1. If the amplitude in the current sampling period is greater than the amplitude in the previous sampling period, the corresponding bit in the digital signal is 1.

2. Conversely, if the amplitude in the current sampling period is less than that of the previous sampling period, the corresponding bit is 0.

Delta Modulation yields a value of 1 if there is an increase in the input analog signal w.r.t. the previous sample, and yields 0 if there is a decrease in the input analog signal w.r.t. the previous


sample. Comparing the input analog signal with the previous samples in this way, and assigning 0s and 1s accordingly, produces a resultant function like a staircase that goes up and down in each sampling period to track the input analog signal. Another key point to note in DM is that the signal is quantized into discrete levels such that the step size between adjacent samples remains constant. Once the quantization process is accomplished, the signal is transmitted by sending a 1 for a positive transition and a 0 for a negative transition. For proper DM, the right step size and sampling period must be chosen; an incorrect sampling period means that the signal level can change faster than the staircase can follow. The problems with DM are that it produces granular noise and slope overload distortion.
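The staircase tracker described above fits in a few lines. The step size is an assumed design parameter, and the input samples are illustrative:

```python
def delta_modulate(samples, step=0.2):
    """1-bit DM sketch: emit 1 and step the staircase up when the input
    is above the current approximation, else emit 0 and step down."""
    approx, bits = 0.0, []
    for s in samples:
        if s > approx:
            bits.append(1)
            approx += step   # staircase steps up
        else:
            bits.append(0)
            approx -= step   # staircase steps down
    return bits

# A rising input produces a run of 1s; a falling input a run of 0s.
dm_bits = delta_modulate([0.1, 0.3, 0.5, 0.4, 0.1])
```

Choosing `step` too small here reproduces slope overload (the staircase cannot keep up with a fast input), while too large a step produces granular noise on a flat input, exactly the two distortions named above.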

4.6 Companding

We have discussed Pulse Code Modulation in quite some detail. To recap, PCM is a technique used to represent analog data in digital form so that it can be stored in a computer for the purpose of manipulation. Conversion of analog data into digital form is achieved by taking samples of the amplitude of the analog signal at regular time intervals. After sampling, the quantization process assigns each sample a binary value from a pool of available values. We also discussed that the samples taken are compared to a quantized scale with a fixed number of values; with 256 quantizing levels available, it is usually desirable to use the smallest number of bits that still keeps the error small. Working with a finite number of values and converting the samples into approximate digital values causes certain errors. In order to increase the accuracy of the signals, a technique called Companding is employed. Companding is a blend of two words: compression and expansion. It is a system in which the information is first compressed and then transmitted through a channel, usually a bandwidth-limited channel. During signal compression, the lower levels of the signal are emphasized (i.e. the signal is intentionally altered) and the higher levels are de-emphasized. The signal is then expanded at the receiving end, using an expanding circuit, to accomplish this task. This process of signal compression and expansion is employed to overcome the problems of noise and distortion during the transmission of audio signals. Telephone systems make use of two laws of companding:

1. µ-Law
2. A-Law
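The µ-law compressor can be written directly from its standard definition. The sketch below assumes samples normalized to [-1, 1] and uses µ = 255, the value used in North American telephony:

```python
import math

def mu_law_compress(x, mu=255):
    """mu-law compression of a normalized sample x in [-1, 1]:
    sign(x) * ln(1 + mu*|x|) / ln(1 + mu)."""
    return math.copysign(math.log1p(mu * abs(x)) / math.log1p(mu), x)

# Low-level samples are boosted far more than high-level ones, which is
# the "emphasize the lower levels" behaviour described above.
quiet = mu_law_compress(0.01)   # comes out much larger than 0.01
loud = mu_law_compress(1.0)     # full scale stays at full scale
```

The expander at the receiver applies the inverse mapping, restoring the original dynamic range while the quantization noise stays proportionally small at low levels.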

4.7 Digital-Digital Modulation: Line Coding Schemes

From what we have discussed so far, it is very clear that there are two possible forms in which data can exist: analog and digital. We also know by now that the computers we use require data in discrete, or digital, form. To convert digital data into digital signals, line coding schemes are used. Line coding schemes are a form of digital-to-digital modulation.

Line coding consists of representing the digital signal to be transported by an amplitude- and time-discrete signal that is optimally tuned to the specific properties of the physical channel (and of the receiving equipment). The waveform pattern of voltage or current used to represent the 1s and 0s of digital data on a transmission link is called line encoding. The common types of line encoding are unipolar, polar, bipolar, and Manchester encoding.

The terminology line coding originated in telephony, with the need to transmit digital information across a copper telephone line. The concept of line coding, however, readily applies to any transmission line or channel. Each line code has its own distinct properties; depending on the application, one property may be more important than another.

In the context of the above discussion, line coding schemes are simply a way to communicate binary information over a transmission medium. It is the process of choosing a code so that the


data can be carried over the transmission channel in a communication system. Line coding is also known as Digital Encoding.

To elaborate further, binary data is represented by some physical quantity whenever it is sent over a communication link. For example, if the data is being transmitted over an electrical link, it is usually represented by voltage and current. Similarly, optical systems make use of light intensity to represent the data on optical lines.

A line coding format consists of a formal definition of the line code that specifies how a string of binary digits is converted to a line code waveform. There are two major classes of binary line codes: level codes and transition codes. Level codes carry information in their voltage level, which may be high or low for a full bit period or part of the bit period. Level codes are usually instantaneous, since they typically encode a binary digit into a distinct waveform, independent of any past binary data; however, some level codes do exhibit memory. Transition codes carry information in the changes of level appearing in the line code waveform. Transition codes may be instantaneous, but they generally have memory, using past binary data to dictate the present waveform. There are two common forms of level line codes: one is called return-to-zero (RZ) and the other non-return-to-zero (NRZ). In RZ coding, the level of the pulse returns to zero for a portion of the bit interval. In NRZ coding, the level of the pulse is maintained during the entire bit interval. Line coding formats are further classified according to the polarity of the voltage levels used to represent the data. If only one polarity of voltage level is used, i.e. positive or negative (in addition to the zero level), it is called unipolar signaling. If both positive and negative voltage levels are used, with or without a zero voltage level, it is called polar signaling. The term bipolar signaling is used by some authors to designate a specific line coding scheme with positive, negative, and zero voltage levels.

4.7.1 Unipolar Coding

The simplest of the three is the unipolar line coding format. As the name suggests, unipolar line coding schemes make use of a single polarity only: the positive polarity, or positive voltage, represents 1, whereas binary 0 is represented by a simple straight line indicating that no voltage is transmitted. Long strings of 0s and 1s, however, lead to reduced spectral energy for data recovery. Also, the DC component in the pulse stream varies, which makes data recovery difficult in a noisy environment.

Unipolar Non-Return-to-Zero Coding

One of the popular variants of unipolar line coding is unipolar Non-Return-to-Zero, or NRZ for short. NRZ works on the principle that binary 0 is encoded as no transmission of voltage and binary 1 is encoded as a positive transmission of voltage. It is called Non-Return-to-Zero because the signal does not return to zero in the middle of the bit. This technique is also called Non-Return-to-Zero-Level (NRZ-L). Fig. 4.15 depicts how unipolar NRZ works:

Figure 4.15: Unipolar NRZ Coding

As you can see, we have a string of 0s and 1s i.e., 0100110110. The 0 in the given data stream


is represented by a straight line, meaning that no voltage is transmitted. On the other hand, whenever a binary 1 is encountered, a positive rise in voltage can be seen. This scheme is called Non-Return-to-Zero because the level does not return to zero at the middle of the bit (you will shortly see that some line coding schemes do return to zero at the mid-bit).

Despite being the simplest of the line coding schemes, NRZ has several drawbacks:
1. It has a high DC level.
2. NRZ consumes a large amount of bandwidth.
3. If the data stream contains long runs of 1s or 0s, no change in voltage is seen, which deprives the receiver of transitions to synchronize on.
4. The signal is polarized.

Unipolar Return-to-Zero Coding

In this line code, a binary 1 is represented by a non-zero voltage level during a portion of the bit duration, usually the first half of the bit period, and a zero voltage level for the rest of the bit duration. A binary 0 is represented by a zero voltage level during the entire bit duration. Thus, this is an instantaneous level code. Fig. 4.16 shows the unipolar RZ coding of the data stream 0100110110.
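Both unipolar formats can be sketched by emitting two half-bit samples per bit, which makes the mid-bit return visible. Voltage level and the half-bit resolution are assumptions for illustration:

```python
def unipolar_nrz(bits, v=1):
    """NRZ: the level is held for the whole bit (two equal half-bits)."""
    return [s for b in bits for s in (v * b, v * b)]

def unipolar_rz(bits, v=1):
    """RZ: bit 1 is high for the first half-bit and returns to zero
    for the second half; bit 0 stays at zero throughout."""
    return [s for b in bits for s in (v * b, 0)]

stream = [0, 1, 0, 0, 1, 1, 0, 1, 1, 0]
nrz_wave = unipolar_nrz(stream)
rz_wave = unipolar_rz(stream)
```

Comparing the two outputs for the same stream shows why RZ helps synchronization (every 1 contains a transition back to zero) at the cost of a higher pulse rate.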

Figure 4.16: Unipolar RZ Coding

Non-Return-to-Zero Inverted (NRZ-I)

Non-Return-to-Zero follows the basic principle that the level does not return to zero in the middle of the bit. The term inverted, however, changes the functionality considerably. NRZ-I changes its polarity whenever a bit 1 is encountered: based on the polarity used for the previous 1, it inverts the polarity for the next binary 1 encountered. For example, consider the bit stream 0100110110. If the first 1 encountered is represented by a positive rise in voltage, then the next 1 encountered will be encoded with the inverted voltage; hence the name NRZ-I. A bit 0, however, causes no change in the transmitted level. Fig. 4.17 depicts this:

Figure 4.17: Non-Return to Zero Inverted (NRZ-I)
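The invert-on-1, hold-on-0 rule is easy to express in code. The starting level is an assumption here; real standards define it explicitly:

```python
def nrzi(bits, start_level=0):
    """NRZ-I sketch: invert the line level on every 1, hold it on every 0."""
    level, wave = start_level, []
    for b in bits:
        if b == 1:
            level = 1 - level   # polarity flips only on a 1
        wave.append(level)      # level held for the whole bit period
    return wave

wave = nrzi([0, 1, 0, 0, 1, 1, 0, 1, 1, 0])
```

Because the information is carried in the change of level rather than the level itself, the decoder does not care which polarity the line happened to start at.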


4.7.2 Polar Line Coding

Polar line coding represents the string of 0s and 1s using two voltage levels: positive and negative. The specific types of polar line coding are discussed below.

Return-to-Zero (RZ)

In this scheme, a binary 1 is represented by a positive voltage level that returns to zero, generally at half the bit period, and a binary 0 is represented by a negative voltage level that likewise returns to zero at half the bit period. Return-to-Zero offers improved synchronization, because the signal drops to zero in the middle of each bit. However, RZ utilizes more bandwidth, because its pulse rate is twice that of NRZ. Fig. 4.18 shows the Return-to-Zero coding of the data stream 0100110110.

Figure 4.18: Polar or Bipolar Return to Zero (RZ) Scheme

Manchester Coding

The Manchester encoding scheme, used in the IEEE 802.3 (Ethernet) standard, performs an inversion at the middle of each bit and offers excellent synchronization. There are two variants of the Manchester coding scheme.

In this coding, a binary 1 is represented by a pulse that has positive voltage during the first half of the bit duration and negative voltage during the second half. A binary 0 is represented by a pulse that is negative during the first half of the bit duration and positive during the second half. The negative or positive mid-bit transition indicates a binary 1 or binary 0, respectively. Thus, Manchester code is classified as an instantaneous transition code; it has no memory. The code is also called diphase, because a square wave with 0° phase is used to represent a binary 1 and a square wave with 180° phase to represent a binary 0, or vice versa. This line code is used in Ethernet local area networks (LANs). Fig. 4.19 shows the Manchester coding of the data stream 0100110110.
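A sketch of the mapping just described, using two half-bit samples per bit. This follows the text's convention (1 is high-then-low); IEEE 802.3 uses the opposite mapping, and both appear in the literature:

```python
def manchester(bits):
    """Manchester sketch: 1 -> (+1, -1) half-bits, 0 -> (-1, +1)."""
    return [half for b in bits
            for half in ((1, -1) if b == 1 else (-1, 1))]

wave = manchester([0, 1, 0, 0, 1, 1, 0, 1, 1, 0])
# Every bit contains a mid-bit transition, which is what gives
# Manchester coding its self-synchronizing property.
```

Note that every pair of half-bits sums to zero, so the line carries no DC component regardless of the data, another practical advantage of this code.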

Differential Manchester Coding

Differential Manchester code (DMC) is a blend of Manchester and NRZ-I: it uses transitions in the middle of the bit, but the transition direction changes with every bit 1 in the data stream. In simple words, whenever a bit 1 is encountered, there is a mid-bit transition, represented by a drop in the signal level; no such transition occurs when a bit 0 is encountered. Fig. 4.20 depicts DMC:


Figure 4.19: Manchester Coding

Figure 4.20: Differential Manchester Coding

4.8 Comparison Between Line Coding Schemes

Return-to-Zero and Manchester line coding occupy more bandwidth than Non-Return-to-Zero. Manchester coding is not spectrum-efficient; rather, it is good for synchronization. In terms of power, Return-to-Zero is demanding because a pulse must be transmitted for every bit, and Manchester coding likewise requires a pulse for both bit 0 and bit 1. To be precise:

1. Synchronization: Manchester coding.
2. Power consumption: Return-to-Zero.
3. Spectrum efficiency: NRZ.

4.9 Digital-Analog Modulation: Keying Techniques

There are situations when one needs to convert digital data into analog form, for example, when computer information must be sent over transmission channels such as fibre-optic cables, computer modems, cellular phone networks or satellite systems. These channels require the data to be in analog form. Digital-to-analog communication is also important when multiple users share the same bandwidth.

The modulation schemes falling under digital-to-analog modulation are collectively known as Keying Techniques. In these modulation schemes, the information to be transmitted is in digital form, i.e. it is represented as a series of 0s and 1s, while the carrier that carries the information is analog. In other words, the carrier is an electromagnetic wave used to carry information over long distances and connect users at remote locations. The digital data is used to modulate the characteristics of the analog carrier, and there are three ways in which those characteristics can be varied:


1. Amplitude Shift Keying.
2. Phase Shift Keying.
3. Frequency Shift Keying.
Each is discussed below.

4.9.1 Amplitude Shift Keying

If you recall, in amplitude modulation the amplitude of the analog information signal varies the amplitude of the analog carrier wave. Amplitude Shift Keying, or ASK, is a digital modulation scheme that works in a similar manner: the amplitude of the analog carrier is varied in accordance with the digital bit stream, i.e. the digital input signal. In ASK, the amplitude level of the carrier signal is switched according to the binary information, keeping the phase and frequency fixed. Since the information signal is digital in nature, it can be in only one of two possible states, 0 or 1. In ASK, bit 1 is represented by the carrier signal at some amplitude, whereas bit 0 is represented by no carrier amplitude at all. In other words, if the input signal has a value of 1, the carrier is transmitted; otherwise no carrier is transmitted. This is shown in Fig. 4.21.

Figure 4.21: Amplitude Shift Keying

Mathematically, ASK is represented as:

A sin(2π f1 t + θ)    when b = 1
0                     when b = 0          (4.4)

Even though ASK is the simplest of the digital-to-analog modulation schemes, it has certain disadvantages:

1. The probability of error in ASK is high, and it offers a lower SNR.
2. It has low noise immunity.
3. Despite being bandwidth-efficient, ASK has low power efficiency.
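Equation (4.4) translates directly into an on-off keying sketch. The carrier frequency (cycles per bit) and the sample count per bit are assumed values for illustration:

```python
import math

def ask(bits, f1=2, samples_per_bit=16, amplitude=1.0):
    """ASK sketch: transmit the carrier for bit 1, silence for bit 0."""
    wave = []
    for b in bits:
        for n in range(samples_per_bit):
            t = n / samples_per_bit
            # b = 0 zeroes the carrier; b = 1 passes it through unchanged
            wave.append(b * amplitude * math.sin(2 * math.pi * f1 * t))
    return wave

ask_wave = ask([1, 0])
# The first bit period carries the sinusoid; the second is silent.
```

The multiplication by the bit value is exactly the two-case definition in (4.4) collapsed into one expression.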

4.9.2 Frequency Shift Keying

Frequency Shift Keying, or FSK, is quite similar to frequency modulation. If you recall, during frequency modulation the amplitude of the information signal dictated a corresponding change in the frequency of the analog carrier wave. A rise in the amplitude of the baseband signal caused the frequency of the analog carrier to increase, which was indicated by the close appearance of


waves. During the negative peaks of the baseband signal, there was a corresponding decrease in the frequency of the carrier wave. FSK works in a similar manner. When the input bit stream contains a 1, there is a rise in the carrier frequency: the cycles of the carrier wave appear close to one another. Similarly, when the input bit stream contains a 0, there is a corresponding decrease in the frequency of the carrier wave, indicated by carrier cycles that appear farther apart from one another. This is shown in Fig. 4.22.

Figure 4.22: Frequency Shift Keying

In other words, binary 0 and binary 1 represent two different frequencies: binary 1 uses the higher carrier frequency and binary 0 the lower. FSK differs from ASK in that the carrier is present during both bit 0 and bit 1. Mathematically, FSK is given as:

A sin(2π f1 t + θ)    when b = 1
A sin(2π f2 t + θ)    when b = 0          (4.5)

FSK is used in slow dial-up connections, caller ID and remote metering applications.
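Equation (4.5) can be sketched by switching between two tone frequencies per bit. The frequencies (expressed as cycles per bit period) and the sampling density are illustrative assumptions:

```python
import math

def fsk(bits, f1=4, f2=2, samples_per_bit=32):
    """FSK sketch: bit 1 selects the higher carrier frequency f1,
    bit 0 the lower frequency f2."""
    wave = []
    for b in bits:
        f = f1 if b == 1 else f2
        for n in range(samples_per_bit):
            wave.append(math.sin(2 * math.pi * f * n / samples_per_bit))
    return wave

fsk_wave = fsk([1, 0])
# The first bit period packs twice as many carrier cycles as the second,
# the "close together" vs. "far apart" picture described above.
```

Unlike the ASK sketch, the carrier amplitude never drops to zero; only the cycle spacing changes between bits.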

4.9.3 Phase Shift Keying

In phase modulation, the phase of the analog carrier is changed in accordance with the amplitude of the analog input signal. Phase Shift Keying, or PSK, works in a similar manner: the phase of the analog carrier is varied by the digital input signal. The 0s and 1s in the bit stream dictate how the phase of the carrier is varied. When a binary 1 is encountered in the input bit stream, the carrier takes a phase shift of 180°; during a binary 0, the phase of the carrier is shifted by 0°. Fig. 4.23 depicts this:

Mathematically, PSK is given as:

A sin(2π f t + θ1)    when b = 1
A sin(2π f t + θ2)    when b = 0          (4.6)

PSK has applications in wireless LANs (Local Area Networks), Bluetooth technology and Radio Frequency Identification (RFID) systems. PSK systems have high power efficiency and lower bandwidth efficiency. Unlike ASK, PSK systems have a high SNR, and the probability of error is low.
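Equation (4.6) with θ1 and θ2 chosen 180° apart gives binary PSK. The carrier frequency and sampling density below are illustrative:

```python
import math

def bpsk(bits, f=2, samples_per_bit=16):
    """BPSK sketch: the same carrier for both bits, phases 180 degrees apart."""
    wave = []
    for b in bits:
        phase = 0.0 if b == 1 else math.pi   # 180-degree shift for bit 0
        for n in range(samples_per_bit):
            angle = 2 * math.pi * f * n / samples_per_bit + phase
            wave.append(math.sin(angle))
    return wave

# The waveform for a 0 is the exact negative of the waveform for a 1.
one, zero = bpsk([1]), bpsk([0])
```

Because sin(x + π) = −sin(x), the two symbols are antipodal, which is why PSK achieves a lower error probability than ASK at the same power.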


Figure 4.23: Phase Shift Keying

4.10 Constellation Diagram

Constellation literally means an arrangement of stars (a group of stars forming a recognizable pattern). In digital communications, a constellation diagram helps us describe digital-to-analog modulation schemes, and such diagrams are often used for performance analysis. A constellation diagram uses two perpendicular axes, known as the in-phase and quadrature components, to represent digital modulation techniques. The symbols are represented by small circles, which can be placed anywhere on the constellation diagram. The number of such circles (a minimum of two, or any number of the form 2^n) determines the order of the modulation, while the distance between two symbols determines the quality of the modulation scheme in terms of error performance: the greater the distance between two symbols, the better the error performance of the modulation technique.

The X-axis represents the in-phase carrier and the Y-axis represents the quadrature carrier, as shown in Fig. 4.24.

Figure 4.24: Constellation Diagram
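The symbol-placement and symbol-distance ideas can be checked numerically for QPSK (order 2^2 = 4). The Gray-coded bit labels and the 45° starting angle are assumptions chosen for illustration:

```python
import cmath
import math

def qpsk_points():
    """Ideal QPSK constellation: four unit-energy symbols spaced
    90 degrees apart on the in-phase/quadrature plane."""
    labels = ["00", "01", "11", "10"]
    return {lab: cmath.exp(1j * (math.pi / 4 + k * math.pi / 2))
            for k, lab in enumerate(labels)}

points = qpsk_points()
# All four symbols sit on the unit circle (equal energy), and adjacent
# symbols are sqrt(2) apart; that distance governs error performance.
```

Doubling the order to 8 symbols on the same circle would shrink the neighbour distance, illustrating the trade-off between modulation order and error performance described above.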


Fig. 4.25 shows the constellation diagrams of ASK, PSK and QPSK.

Figure 4.25: Constellation Diagrams of ASK, PSK and QPSK

Chapter Summary

• Digital communication is data communicated in the form of bits, i.e. the electronic exchange of digitally encoded information from one place to another.
• Digital communication is important because:

- With digital transmission systems, it is much easier to identify the errors that occur during the transmission of data and to correct them at the same time.
- Digital transmission systems are more resistant to noise than analog transmission systems.
- Digital transmission of data requires two or more times the bandwidth of the corresponding analog transmission.
• The transmitter section of a digital communication system consists of:

Information source and input transducerSource encoder which converts the information captured from the information source intothe digital signal. This overall process of converting the information whether analog or digitalinto the binary digits results in a sequence of binary digits called the information sequence.Channel Encoder is used to insert in controlled manner, certain redundant bits into theinformation sequence.Digital Modulator is responsible for converting the digital sequence into the electricalsignals• Channel is a physical path through which the information travels. The path can either be

wired or wireless.• The receiver section consists of:

Channel Demodulator is used to process the information that has been altered or corruptedin the channel due to the noise or the interference introduced in the channel.Channel Decoder is responsible for reconstructing the original information from the knowl-edge of code used by the channel encoder and the redundancy contained in the received data.Output Transducer is used for providing desired output• Digital Signal Processing is to take the analog data such as sound, video or image that have

been digitized, and then perform high-speed data manipulation on this acquired data.• The serial interface transmits one bit at a time starting from the LSB position of the bit stream.

The MSB (bits on the extreme left) of the bit stream are transmitted after the LSB.• Serial transfer of data, which carries the data stream on a single wire,


• In parallel transfer, the entire data stream reaches the destination all at once.

• Parallel transfer requires one wire to carry each bit in the data stream, which makes it an inconvenient choice for transmitting data over longer distances.

• The process of converting analog data into digital form is called analog-to-digital (A/D) conversion.

• Digital-to-analog (D/A) conversion is the process of converting digital data into analog form.

• MODEM is an acronym for Modulator/Demodulator.

• A MODEM converts the digital signals produced by a computer into analog signals to be carried by telephone circuits. These signals are converted back to digital signals at the other end of the communication link.

• Sampling of an analog signal is the process of measuring the analog signal at regular time intervals.

• During sampling, the instantaneous value of the analog signal is measured and a corresponding binary number is generated to represent that particular sample.

• The Nyquist criterion states that the minimum sampling frequency should be twice the highest analog frequency content of the signal:

f_N ≥ 2 f_m

• In pulse modulation, a continuously varying signal modifies certain parameters of the carrier signal, which is represented by a train of pulses.

• PAM changes the amplitude of the carrier signal in accordance with the amplitude of the modulating signal.

• Pulse duration modulation changes the width, or duration, of the carrier pulses in accordance with the amplitude of the modulating signal.

• In PPM, the position of the pulses is changed in accordance with the amplitude of the modulating signal.

• PCM is used to send digital data by transmitting groups of pulses.

• Differential pulse code modulation (DPCM) also converts analog signals to digital form, but in DPCM the analog signal is sampled and the difference between the actual value of the sample and its predicted value is quantized and then encoded. The predicted value is based on the value of the previous sample or samples.

• In DM, instead of quantizing the absolute value of the input analog signal, the difference between the current and the previous step is quantized.

• Delta modulation yields 1 if there is an increase in the input analog signal with respect to the previous sample, and 0 if there is a decrease.

• Companding is a system in which the information is compressed first and then transmitted through a channel, usually a bandwidth-limited channel.

• Digital encoding, or line encoding, defines how binary data is represented over the communication link.

• Non-return-to-zero (NRZ) works on the principle that binary 0 is encoded as no voltage and binary 1 is encoded as a positive voltage.

• NRZ-Level (NRZ-L) encodes binary 1 as a positive rise in amplitude, whereas binary 0 is represented not by a simple straight line but by some voltage level lower than the one used to encode binary 1.

• NRZ-I changes its polarity whenever a bit 1 is encountered: based on the polarity used for the previous binary 1, it inverts the polarity for the next binary 1 encountered.

• In return-to-zero (RZ), binary 0 is represented by a low voltage level and binary 1 is encoded as a short high-voltage pulse. This means the signal level drops to zero, or returns to zero, immediately after the high level.

• In the first variant of Manchester coding, there is a positive-to-negative transition for data bit 0 and a negative-to-positive transition for data bit 1.

• In the other variant of Manchester coding, there is a high-to-low transition for data bit 1 and a low-to-high transition for data bit 0.

• In differential Manchester code (DMC), whenever bit 1 is encountered there is a mid-bit transition, represented by a drop in the signal level.

• In ASK, the amplitude of the analog carrier is varied in accordance with the digital bit stream, i.e. the digital input signal.

• In ASK, if the input bit has a value of 1 the carrier is transmitted; otherwise no carrier is transmitted.

• In FSK, when the input signal has 1s in its bit stream the carrier frequency rises and the cycles of the carrier wave appear close to one another; similarly, when the input signal has 0s in its bit stream, the carrier frequency decreases and the cycles appear farther apart.

• In PSK, when a binary 1 is encountered in the input bit stream there is a phase shift of 180° in the carrier, whereas for binary 0 the phase of the carrier is shifted by 0°.
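As a small illustration of the line-coding points above, here is a sketch (the function names and ±1 signal levels are assumptions of this example, not from the text) of the Manchester variant in which data bit 1 is a high-to-low transition and data bit 0 a low-to-high transition. The guaranteed mid-bit edge is what gives Manchester coding its good synchronization properties.

```python
def manchester_encode(bits):
    """Map each data bit to a pair of half-bit levels: 1 -> high,low; 0 -> low,high."""
    levels = []
    for b in bits:
        levels += [+1, -1] if b == 1 else [-1, +1]
    return levels

def manchester_decode(levels):
    """Recover each bit from the direction of its mid-bit transition."""
    return [1 if levels[i] > levels[i + 1] else 0
            for i in range(0, len(levels), 2)]

data = [1, 0, 1, 1, 0]
signal = manchester_encode(data)
print(signal)                             # [1, -1, -1, 1, 1, -1, 1, -1, -1, 1]
print(manchester_decode(signal) == data)  # True
```

Note that every bit occupies two signal levels, which reflects the bandwidth cost of Manchester coding relative to NRZ.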

Multiple Choice Questions

1. The process of converting an analog sample into digital form is called:

(a) Multiplexing
(b) Sampling
(c) Modulation
(d) Quantization

2. Pulse code modulation follows which of the following sequences of operations?

(a) Sampling, quantization and encoding.
(b) Encoding, sampling and quantization.
(c) Quantization, sampling and encoding.
(d) None of them.

3. In delta modulation:
(I) All coded bits used for sampling are transmitted.
(II) The step size is fixed.
(III) One bit per sample is transmitted.
Indicate the true statement:

(a) (I) and (II) only.
(b) (II) only.
(c) (II) and (III) only.
(d) (I) only.

4. In differential pulse code modulation:

(a) An analog signal is converted into digital form.
(b) Differences between successive samples of the analog signal are encoded into n-bit data streams.
(c) Digital codes are the quantized values of predicted values.
(d) All of them.

5. Which of the following is true for the transmission bandwidth of a line code?

(a) It should be as small as possible.
(b) It should have the maximum possible bandwidth.
(c) It depends on the signal being transmitted.
(d) None of them.

6. In the unipolar NRZ scheme,

(a) Binary 0 is encoded by a simple straight line.
(b) Binary 1 is encoded by a positive voltage.
(c) The waveform shows negative polarity for binary 0 and positive polarity for binary 1.
(d) Both (a) and (b).

7. Given the statements for NRZ:
(I) It has a high DC level.
(II) It consumes a large amount of bandwidth.
(III) The signal is polarized.
(IV) None of them.
Which of them are NOT true?

(a) (I) and (II) only.
(b) (I) only.
(c) (III) only.
(d) (IV) only.
(e) (I), (II) and (III).

8. Which coding scheme offers good synchronization?

(a) NRZ.
(b) Manchester.
(c) Return-to-zero.
(d) All of them.

9. Pulse length modulation is disadvantageous because:

(a) It requires larger bandwidth compared to PAM.
(b) It does not provide noise immunity.
(c) Both (a) and (b).
(d) None of them.

10. Pulse amplitude modulation has which of the following drawbacks?

(a) It is cumbersome to remove noise from a PAM system.
(b) The transmission bandwidth of a PAM system is quite large.
(c) Both (a) and (b).
(d) None of them.

Review Questions

1. Elaborate the term 'Digital Communication System' and explain its importance.
2. "Transmission via digital means overcomes bandwidth limitations." Comment.
3. Briefly explain the elements of a digital communication system.
4. Why is parallel transmission not suitable for communicating data over longer distances?
5. Define data conversion.
6. What is sampling?
7. Explain the Nyquist criterion.
8. Elaborate the phenomenon of Pulse Amplitude Modulation.
9. Enlist the uses of Pulse Duration Modulation.
10. Elaborate the procedure of Differential Pulse Code Modulation with the help of a suitable block diagram.
11. What is Delta Modulation?
12. Stress the importance of Digital-to-Analog conversion by providing valid reasons and explanations.
13. Elaborate the different kinds of keying techniques.


Miscellaneous Questions

1. Enlist at least two advantages of Pulse Amplitude Modulation.
2. Enlist at least two types of A/D converters.
3. Elaborate the concept of aliasing.
4. What is Adaptive Delta Modulation? Explain.

Brain Buzzer

1. Brainstorm two types of communication services that are not digital yet but might eventually be.
2. Brainstorm a few practical applications of Delta Modulation.
3. What are the conditions under which serial transfer of data can be faster than parallel transfer of data?

Bibliography

1. L. E. Frenzel Jr., Principles of Electronic Communication Systems, 4th ed. ISBN-13: 978-0-07-337385-0.
2. J. G. Proakis and M. Salehi, Digital Communications. McGraw-Hill, 2007.
3. J. M. Wozencraft and I. M. Jacobs, Principles of Communication Engineering. Wiley, 1965; reissued by Waveland Press, 1990.
4. B. P. Lathi, Linear Systems and Signals. Oxford University Press, 2004.
5. A. J. Viterbi and J. K. Omura, Principles of Digital Communication and Coding. McGraw-Hill, 1979.
6. R. E. Blahut, Digital Transmission of Information. Addison-Wesley, 1990.
7. J. D. Gibson, ed., The Mobile Communications Handbook. CRC Press, 2012.
8. G. Foschini, "Layered space-time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Technical Journal, vol. 1, no. 2, pp. 41–59, 1996.
9. K. Sayood, Introduction to Data Compression. Morgan Kaufmann, 2005.
10. D. P. Bertsekas and R. G. Gallager, Data Networks. Prentice Hall, 1991.
11. A. Kumar, D. Manjunath, and J. Kuri, Communication Networking: An Analytical Approach. Morgan Kaufmann, 2004.
12. S. K. Mitra, Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, 2010.
13. W. Tomasi, Electronic Communications System: Fundamentals Through Advanced, 5th ed. ISBN-13: 978-0-13-049492-4.
14. W. Hioki, Telecommunications, 4th ed. ISBN-10: 013020031X; ISBN-13: 978-0130200310.
15. B. P. Lathi and Z. Ding, Modern Digital and Analog Communication Systems, 4th ed. ISBN-10: 0195331451; ISBN-13: 978-0195331455.
16. I. A. Glover and P. M. Grant, Digital Communications, 3rd ed. ISBN-10: 0273718304; ISBN-13: 978-0273718307.
17. S. Haykin and M. Moher, Communication Systems, 5th ed. ISBN-10: 8126521511; ISBN-13: 978-8126521517.
18. K. Choi and H. Liu, "Amplitude Modulation," in Problem-Based Learning in Communication Systems Using MATLAB and Simulink. IEEE, 2016.
19. J. W. Leis, "Modulation and Demodulation," in Communication Systems Principles Using MATLAB. Wiley, 2019, pp. 155–267.
20. S. Haykin, Communications Systems. Wiley, 2000.
21. J. G. Proakis and M. Salehi, Fundamentals of Communication Systems. Prentice Hall, 2004.
22. M. B. Pursley, Introduction to Digital Communications. Prentice Hall, 2003.
23. R. E. Ziemer and W. H. Tranter, Principles of Communication: Systems, Modulation and Noise. Wiley, 2001.
24. J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication. Kluwer Academic Publishers, 2004.
25. S. Benedetto and E. Biglieri, Principles of Digital Transmission; with Wireless Applications. Springer, 1999.
26. R. W. Middlestead, "Amplitude Shift Keying Modulation, Demodulation and Performance," in Digital Communications with Emphasis on Data Modems: Theory, Analysis, Design, Simulation, Testing, and Applications. Wiley, 2017.
27. J. Abrahams, "… and parse trees for lossless source encoding," Comm. Info. and Syst., vol. 1, pp. 113–146, 2001 (http://www.ims.cuhk.edu.hk/cis/).
28. N. Abramson, Information Theory and Coding. McGraw-Hill, New York, 1963.
29. Y. S. Abu-Mostafa, ed., Complexity in Information Theory. Springer-Verlag, New York, 1988.
30. J. Aczel and Z. Daroczy, On Measures of Information and Their Characterizations. Academic Press, New York, 1975.
31. A. Argawal and M. Charikar, "… the advantage of network coding for improving network throughput," 2004 IEEE Information Theory Workshop, San Antonio, TX, Oct. 25–29, 2004.


5. Multiplexing and Multiple Access Techniques

Learning Focus

The chapter attempts to fulfil the following objectives:

1. To present the definition and importance of Multiplexing.
2. To introduce the term Multiple Access and explain the difference between the two terms, i.e. Multiplexing and Multiple Access.
3. To introduce and explain various types of Multiplexing techniques such as Frequency Division Multiplexing, Time Division Multiplexing and Code Division Multiplexing.
4. To acquaint the reader with the pros and cons of each Multiplexing technique discussed.
5. To introduce and explain in detail different types of Multiple Access techniques.


5.1 Introduction to Multiplexing

If you recall, back in Chapter 1 we discussed that in order to transmit information from one place to another, a communication channel is needed. A communication channel, or communication link, is nothing but a physical path through which data is carried from one place to the other, or more specifically from the transmitter to the receiver. Furthermore, we discussed the modes of communication in some detail. We studied that communication can be one-way, also known as simplex, or two-way with the communicating parties taking turns transmitting and receiving the information (half-duplex). Simplex and half-duplex are pretty much self-explanatory in that the data is either communicated in a single direction from transmitter to receiver or communicated in turns. But what about the full-duplex mode of communication? How is data carried simultaneously over a single communication link? There are two ways to transmit data simultaneously. One is the common-sense approach, i.e. setting up multiple cables, or a transmitter/receiver pair, for each channel. But it is quite impractical to have that bulk of wires running around. The second and much cleaner approach is to make use of a technique called multiplexing.

Multiplexing is a signal transmission technique that transfers multiple signals over a single communication link or channel. The process of multiplexing is quite simple: multiple signals are combined to form a single complex signal, which is then transmitted over the communication channel. At the receiving end, the signal is de-multiplexed, i.e. it is separated into its original components, each of which is fed to the appropriate receiver. Thus, each receiver at the end of the communication link gets its own desired information. This approach is much simpler than setting up a whole bunch of wires when multiple signals must be sent at the same time. The concept of multiplexing has made the design of communication systems more practical, simpler and more affordable.

5.2 Need of Multiplexing

Now that the concept of multiplexing is clear, the next important issue to consider is why it is needed in the first place. The answer to this question is quite simple and straightforward: we do not have an infinite range of frequencies, but we do have numerous users. This single statement explains a lot about why multiplexing is an important technique. Multiplexing allows various users to share the same communication link and frequency range without overlapping each other's frequency slots. There are many ways, discussed later in this chapter, which allow each user to send and receive data simultaneously over a single communication link without interference. Fig. 5.1 illustrates how multiplexing works.

A circuit that performs the multiplexing, i.e. combines the signals, is called a multiplexer (MUX), and the circuitry that separates the combined signal is called a demultiplexer (DEMUX). As shown in Fig. 5.1, inputs from various sources are fed to the multiplexer. The multiplexer combines these inputs and sends them over the communication channel in the form of a single complex signal. When this signal arrives at its destination, the demultiplexer circuitry separates the complex signal into its components, and the information is delivered to the appropriate receivers lined up at the receiving end. In this way, each receiving entity gets its own corresponding piece of information. Let's relate the working of the multiplexer to the postal system. You first write and deposit a letter in the collection box. The collection box holds various letters from different customers, all meant to be sent to different locations. The postal carrier takes all the letters from the collection box to the post office, where they are sent to the mail processing unit. After all the necessary stamping and postmarking is done, the carrier delivers the letters to the specified destinations. In this scenario, the letter that you, like all the other customers, wrote is the piece of information


Figure 5.1: How multiplexing works

that needs to be transmitted. One way to transmit all the letters would be to have a separate carrier for each letter, but that would be quite impractical because only a finite number of carriers is available. The use of a collection box overcomes this: all the letters are deposited there, which is similar to feeding the information to the multiplexer circuit. The carrier then acts like the single complex signal that carries the information over the communication link. When the carrier reaches its destination, it delivers all the letters to their corresponding recipients.

5.3 Multiple Access

In communication systems, bandwidth is a precious resource. One big challenge in assigning users communication services is that the bandwidth of a communication channel is limited, while the number of users wishing to use it is virtually unlimited. So there should be some way to allow this large number of users to make use of the channel bandwidth in the most efficient way. The technique which accomplishes this is called "Multiple Access." Multiplexing and multiple access are sometimes used interchangeably. Though both terms refer to making efficient use of bandwidth, there is a minor difference in the way they work. Multiplexing simply combines different signals into one and passes them along the communication link. Multiple access techniques, as the name suggests, define how multiple users will access the channel.

Multiplexing combines multiple signals from different users, through a circuit situated at the same geographical location, into one signal which is then sent over the communication channel. Multiple access, on the other hand, combines signals from users situated in different geographical locations into one signal and then sends them over the communication medium.

With multiplexing, the sharing of the communication resource is fixed or, at most, slowly changing. The resource allocation is assigned a priori, and the sharing usually takes place within the confines of a local site (e.g., a circuit board). Multiple access, however, usually involves the remote sharing of a resource. Multiplexing involves an algorithm that is known a priori; usually it is hard-wired into the system. Multiple access, on the other hand, is generally adaptive and may require some overhead to enable the algorithm to operate.


Multiple access techniques are used in wireless communication systems, where it is often required to allow the subscribers to send and receive information from the base station simultaneously.

5.4 Types of Multiplexing/Multiple Access

In order for different users' signals to share the same channel, it is essential that they are either physically or logically separated so that they do not interfere with each other. This separation can primarily be achieved on a time, frequency or spatial basis, resulting in three fundamental types of multiplexing techniques:

1. Time Division Multiplexing (or Multiple Access)
2. Frequency Division Multiplexing (or Multiple Access)
3. Spatial Division Multiplexing (or Multiple Access)

However, there are other techniques available as well which ensure that two signals can coexist without interfering with each other, for example:

1. Code Division Multiplexing (or Multiple Access)
2. Polarization Division Multiplexing (or Multiple Access)
3. Interleave Division Multiplexing (or Multiple Access)

Each technique has its pros and cons. This section presents only a brief description of each variant. The focus of this book is primarily on Frequency, Time and Code Division Multiplexing, which are therefore described in the following sections.

5.5 Frequency Division Multiplexing (FDM)

Frequency Division Multiplexing, or FDM, is one of the basic variants of multiplexing. The purpose of frequency division multiplexing is to combine signals from different sources and send them over a single shared bandwidth. The signals in frequency division multiplexing are multiplexed together and then carried over a physical communication link of some specified frequency range rather than simply being broadcast in the air.

The signals in FDM have different carrier frequencies and are transmitted simultaneously. The carrier frequency of one signal is different from the carrier frequency of every other signal being transmitted at the same time. These different carrier frequencies thus ensure that the signals do not interfere with one another during the simultaneous transfer of data over the communication link.

5.5.1 Working of FDM

Keeping in mind the basic idea of frequency division multiplexing, let's see at a glance how the entire procedure works. We know that the carrier frequencies of the signals in FDM differ from one another. Each input signal occupies a specific portion of the bandwidth; in other words, FDM divides a single bandwidth into equally spaced channels. The channel assignment is done in such a way that the information of one source does not interfere with the information of another source. The source signals are then fed to the multiplexer. After the multiplexed signal arrives at the receiving end, it is passed through a series of band-pass filters. The band-pass filters, tuned to different frequencies, each extract the signal at their own frequency and reject the others. In order to recover the original information at the receiver, the signals are passed through demodulators, which provide the actual signals. With this information as the foundation, we will now discuss the detailed working of FDM by dividing it into two halves: the transmitter multiplexer and the receiver de-multiplexer.

Consider Fig. 5.2:


Figure 5.2: Transmitter-Multiplexer

The figure shows N sources of signals to be transmitted. Each source signal, i.e. m1, m2, m3, ..., mn, is fed to a modulator circuit. Each modulator circuit is tuned to a different carrier frequency, indicated by f1, f2, f3, ..., fn. These carrier frequencies are called subcarrier frequencies and are spaced apart from one another over a specific frequency range. The input signals fed to the modulator circuitry are modulated using any one of the analog or digital modulation schemes. The outputs of the modulator circuits, denoted by s1, s2, s3, ..., sn, are then fed to the multiplexer. The signal mb(t) appears as the combined result of all the modulated subcarriers. In order to prevent adjacent-channel interference in frequency division multiplexing, guard bands are employed. A guard band is a small frequency gap between adjacent frequency slots. It should be noted that no information is carried in the guard band; this small range of frequencies is used only to prevent interference between the signals that are being transmitted simultaneously. There is no pre-defined frequency range for the guard band, but it is usually much smaller than the frequency range assigned to carry the actual information. This is done to utilize the frequency of a channel in the most efficient manner.

Consider Fig. 5.3:

Figure 5.3: Receiver De-Multiplexer

The multiplexed signal, denoted by s(t), is fed to the main receiver as shown in Fig. 5.3. The composite baseband signal mb, which is a mix of different frequencies, is passed to a series of band-pass filters. Each filter is set to some specified frequency; thus it extracts only the signal at its desired frequency and rejects all others. The outputs of the band-pass filters, after each extracts the signal at its desired frequency, appear as s1, s2, s3, ..., sn. Note that these are the modulator outputs and not the actual signals we had at the beginning. In order to obtain the actual signals, the output of each band-pass filter is fed to a demodulator circuit as shown above. Thus, the actual information m1, m2, m3, ..., mn is obtained as the result of demodulation.
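The separation performed by the band-pass filters can be sketched numerically. In the toy example below, the sample rate, subcarrier frequencies and the naive single-bin DFT helper are illustrative assumptions (not from the text): a composite signal is built from two subcarriers, and measuring the signal content at each subcarrier frequency recovers each component, while a frequency in the gap between them carries nothing.

```python
import math

fs = 1000          # sample rate in Hz (illustrative value)
n = 1000           # one second of samples
f1, f2 = 50, 120   # two subcarrier frequencies, spaced apart to avoid overlap

# Multiplexed signal: the sum of two modulated subcarriers, as produced
# by the combiner in Fig. 5.2 (amplitudes 1.0 and 0.5).
s = [math.sin(2 * math.pi * f1 * t / fs) + 0.5 * math.sin(2 * math.pi * f2 * t / fs)
     for t in range(n)]

def dft_magnitude(samples, freq_hz):
    """Normalized magnitude of one DFT bin: roughly what an ideal band-pass
    filter tuned to freq_hz would extract from the composite signal."""
    re = sum(x * math.cos(2 * math.pi * freq_hz * t / fs) for t, x in enumerate(samples))
    im = sum(x * math.sin(2 * math.pi * freq_hz * t / fs) for t, x in enumerate(samples))
    return math.hypot(re, im) / len(samples)

print(round(dft_magnitude(s, f1), 2))  # 0.5  (half the amplitude of subcarrier 1)
print(round(dft_magnitude(s, f2), 2))  # 0.25
print(round(dft_magnitude(s, 85), 2))  # 0.0  (the gap between channels is empty)
```

Real FDM receivers use analog or digital band-pass filters rather than a DFT, but the principle is the same: each branch responds only to the energy in its own frequency slot.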

Example 5.1.

Four channels, each with a 100-kHz bandwidth, are to be multiplexed together. What is the minimum bandwidth of the link if there is a need for a guard band of 10 kHz between the channels to prevent interference?

Solution:

For four channels, we need at least three guard bands. This means that the required bandwidth is at least

4 × 100 + 3 × 10 = 430 kHz
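The arithmetic of Example 5.1 generalizes to any channel count: n channels need n − 1 guard bands between them. A small sketch (the function name is my own, not from the text):

```python
def fdm_link_bandwidth(n_channels, channel_bw_khz, guard_bw_khz):
    """Minimum link bandwidth: n channels plus (n - 1) guard bands between them."""
    return n_channels * channel_bw_khz + (n_channels - 1) * guard_bw_khz

print(fdm_link_bandwidth(4, 100, 10))  # 430 (kHz), matching Example 5.1
```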

5.5.2 Features of FDM

1. Frequency division multiplexing is mostly used for analog data, and all users access the system simultaneously.
2. All the source signals in frequency division multiplexing are modulated at different carrier frequencies, called the sub-carrier frequencies.
3. FDM requires high-performing filters in the radio hardware.
4. The process of frequency division multiplexing divides the entire bandwidth into equally spaced channels that carry information.
5. The procedure ensures that no two adjacent frequency channels interfere with one another during the transmission of data.
6. Interference is prevented by making use of guard bands. A guard band is just a small portion of frequency assigned to separate one frequency slot from the other.
7. Preventing interference in this way, and allowing multiple users to send their information simultaneously over a single communication link, makes frequency division multiplexing a bandwidth-efficient technique.
8. Frequency division multiplexing is the simplest and most inexpensive form of multiplexing. It is popularly used in radio, cable TV and TV. Though the technique and working of FDM are simple, the filters used in the process make the design and construction of the circuitry difficult. Moreover, although guard bands are employed to prevent interference, some amount of frequency is still wasted in the guard band because this frequency is incapable of transmitting information.

5.5.3 Applications

Frequency division multiplexing has applications in stereo FM. These transmissions make use of sub-carrier frequencies which set apart the signals for the right and left channels, i.e. one for each speaker; more than two signals can be differentiated, depending on the number of speakers. Cable TV also makes use of FDM: numerous TV signals are combined and multiplexed onto a physical channel, i.e. a coaxial or fibre-optic cable, and then sent to the nearby residences. Each TV signal is allotted its own 6-MHz channel. FDM is also used in the Digital Subscriber Line, or DSL. DSL makes use of different sub-carrier frequencies, each for a specific transmission purpose such as upstream, downstream, voice, etc.

Ever wondered how you can flick through hundreds of channels on the TV when only a single cable is connected to it? The answer is quite simple and interesting. Cable TV makes use of a single coaxial cable, over which hundreds of audio and video channels are broadcast to different homes. This whole procedure makes use of frequency division multiple access. The coaxial cable has a bandwidth of about 4 MHz to 1 GHz. This bandwidth is split into 6-MHz channels which are then shared among the multiple TV channels. However, if the assigned channels are close in frequency, FDM produces crosstalk, which interrupts the transmission and causes interference between the users.

Time division multiple access was popularly used in Europe, Asia and Japan. This technique is popular in second-generation cellular networks such as GSM (Global System for Mobile Communications), Personal Digital Cellular and the Integrated Digital Enhanced Network (iDEN). With TDMA, the user can use fax services, SMS, multimedia and video conferencing. Although TDMA is a very cost-effective technique, it has some limitations too. Since each user has a fixed time slot, there is a chance the user might get disconnected while transferring from one cell to another.

5.6 Time Division Multiplexing

Another important variant of multiplexing is Time Division Multiplexing or TDM. Where frequency division multiplexing divides the communication bandwidth into equally spaced frequency ranges (frequency channels), time division multiplexing allows signals to be transmitted effectively simultaneously by assigning a fixed time frame to each signal. During the process, the signals at the transmitter end are assigned fixed-length time frames (time slots).

To state it more simply, FDM divides a single bandwidth of some specified range into equally spaced channels, depending on the number of source signals to be transmitted. TDM, on the other hand, achieves effectively simultaneous transmission by transmitting each source signal for a brief amount of time. All the source signals are given time slots of the same length, and the number of time slots depends on the number of source signals present at the transmitter end.

5.6.1 Working of TDM

The signals from all the sources lined up at the transmitter end are assigned a fixed-length time frame during which they travel through the communication channel. The information being transmitted usually consists of bytes. So one byte, i.e. 8 bits, from source 1, for example, is transmitted first. After it has been transmitted, one byte from source 2 is transmitted, and so on. The cycle then repeats itself for the second byte of each of the source signals. This continues until all the bytes from all the source signals are successfully transmitted. The signals are then de-multiplexed at the receiver end.
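The byte-by-byte round-robin described above can be sketched in a few lines of Python. The function names and sample data are illustrative, not from the text:

```python
# Sketch of byte-interleaved TDM: one byte from each source per cycle.

def tdm_multiplex(sources):
    """Interleave the byte streams in `sources`, one byte per source per cycle."""
    stream = []
    for cycle in zip(*sources):        # one byte from each source
        stream.extend(cycle)
    return stream

def tdm_demultiplex(stream, n_sources):
    """Recover each source's bytes from the interleaved stream."""
    return [stream[i::n_sources] for i in range(n_sources)]

sources = [b"AAAA", b"BBBB", b"CCCC"]
muxed = tdm_multiplex(sources)
print(bytes(muxed))                                  # b'ABCABCABCABC'
print([bytes(s) for s in tdm_demultiplex(muxed, 3)]) # [b'AAAA', b'BBBB', b'CCCC']
```

The de-multiplexer only needs to know the number of sources, because every cycle carries the slots in the same fixed order.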


116 Chapter 5. Multiplexing and Multiple Access Techniques

Consider Fig. 5.4:

Figure 5.4: Time Division Multiplexing-Working

In Fig. 5.4, there are N sources whose information is transferred through the communication channel. All the sources of information are multiplexed via the multiplexer.

The multiplexer breaks each signal into segments, and each segment is allowed to transmit for a brief amount of time. Suppose that the data being transmitted is composed of sequential bytes. One byte from each of the four data sources is transmitted in the time interval allotted to a specific channel. The figure shows four slots, each of which contains data from one of the four sources. For example, slot 1 has the data from source 1, slot 2 has the data from source 2, and so on. Source 1 transmits the 8 bits of its byte and halts, and then the 8 bits of source 2 are transmitted. When all the bytes from all the sources are transferred, a digital stream is formed, which is then de-multiplexed at the receiving end.

Example 5.2.

Four 1-kbps connections are multiplexed together. A unit is 1 bit. Find:
1. the duration of 1 bit before multiplexing,
2. the transmission rate of the link,
3. the duration of a time slot, and
4. the duration of a frame.

Solution:
1. The duration of 1 bit is 1/1 kbps, or 0.001 s (1 ms).
2. The rate of the link is 4 kbps.
3. The duration of each time slot is 1/4 ms, or 250 µs.
4. The duration of a frame is 1 ms.
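The arithmetic of Example 5.2 can be checked with a short script; the variable names are illustrative:

```python
# Checking Example 5.2: four 1-kbps connections, multiplexed 1 bit at a time.
n, rate = 4, 1_000                  # connections, bits/second each

bit_duration = 1 / rate             # before multiplexing: 0.001 s (1 ms)
link_rate = n * rate                # multiplexed link rate: 4000 bps (4 kbps)
slot_duration = 1 / link_rate       # one 1-bit slot: 0.00025 s (250 us)
frame_duration = n * slot_duration  # one frame of 4 slots: 0.001 s (1 ms)

print(bit_duration, link_rate, slot_duration, frame_duration)
```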

Time division multiplexing comes in two flavors:
1. Synchronous TDM.
2. Asynchronous TDM.


Synchronous Time Division Multiplexing

Synchronous TDM, as the name suggests, ensures that the data transmitted from all sources is coordinated. It does so by making the multiplexer give the same amount of time to each of the sources connected to it. The time slot is allocated even if the source has no data to transmit. Thus, although coordination among the source data is guaranteed, there are situations when a source has nothing to transmit and the time slot allotted to it goes unused. Synchronous TDM therefore fails to make efficient use of the communication link.

Asynchronous Time Division Multiplexing

In this case, the length of the time frame allocated is variable for each of the connected sources. Moreover, a time frame is allocated only to those sources which have some data to transmit. If a source has no data to transfer, its time frame is allocated to some other source which does.
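A minimal sketch of this slot-on-demand behaviour, assuming each slot carries a source-address tag so the receiver can de-multiplex; the function name and sample data are made up for illustration:

```python
# Sketch of statistical (asynchronous) TDM: slots go only to sources
# that currently have data, so each slot carries a source-address tag.

def stat_tdm_frame(queues, slots_per_frame):
    """Build one frame of (source_id, byte) slots, skipping idle sources."""
    frame, src = [], 0
    while len(frame) < slots_per_frame and any(queues):
        if queues[src]:                       # only sources with data get a slot
            frame.append((src, queues[src].pop(0)))
        src = (src + 1) % len(queues)
    return frame

queues = [list(b"AB"), [], list(b"XYZ")]      # source 1 is idle
print(stat_tdm_frame(queues, slots_per_frame=4))
# [(0, 65), (2, 88), (0, 66), (2, 89)] -- no slot is wasted on source 1
```

Unlike synchronous TDM, the idle source consumes no slots, at the cost of the per-slot address overhead.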

Example 5.3.

In a TDMA system with ten users, a frame length of 2 ms, a guard time of 10 µs per user and a peak bit rate of 1 Mbps, find

1. the total bit rate per frame, and
2. the bit rate per user per frame.

Solution:

Total guard time = 10 × 10 × 10⁻⁶ s = 100 µs
Total useful time = 2 ms − 100 µs = 1900 µs
We assume that for this useful time we always transmit at the peak bit rate, therefore:
Number of bits per frame = 1900 × 10⁻⁶ × 1 × 10⁶ = 1900 bits = 1.9 × 10³ bits
(a) Total bit rate per frame = 1.9 × 10³ bits / 2 × 10⁻³ s = 0.95 Mbps
Useful time per user = (2 × 10⁻³ s − 100 × 10⁻⁶ s)/10 = 190 × 10⁻⁶ s/user
Number of bits per frame per user = 190 × 10⁻⁶ s/user × 1 × 10⁶ bps = 190 bits/user/frame
(b) Bit rate per user per frame = (190 bits/frame/user)/(2 × 10⁻³ s/frame) = 95 kbps
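The numbers in Example 5.3 can be verified with a short script; the variable names are illustrative:

```python
# Checking Example 5.3: 10 users, 2 ms frame, 10 us guard time per user,
# 1 Mbps peak bit rate.
users, frame, guard, peak = 10, 2e-3, 10e-6, 1e6

useful = frame - users * guard           # 1900 us of useful time per frame
bits_per_frame = useful * peak           # 1900 bits
total_rate = bits_per_frame / frame      # (a) 0.95 Mbps

per_user_time = useful / users           # 190 us per user per frame
bits_per_user = per_user_time * peak     # 190 bits per user per frame
user_rate = bits_per_user / frame        # (b) 95 kbps

print(total_rate, user_rate)
```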


5.6.2 Features of TDM

The main features of time division multiplexing are listed below:

1. TDM multiplexes or combines the data from different sources by allocating each source's data a different time slot.
2. Each byte from the source data is transmitted for a brief time interval.
3. Time division multiplexing does not make use of guard bands to prevent interference. Prevention of interference is implicit in the procedure, in that each source's data is assigned a different time slot.

5.7 Code Division Multiplexing/Multiple Access

Both FDM and TDM have constraints of frequency and time respectively. Code Division Multiplexing/Multiple Access, or CDMA, is quite different from the time division and frequency division multiple access techniques which we discussed previously. The underlying idea was originally put forward during World War II, on the Allied side, to prevent the Germans from jamming transmissions. Code division multiple access divides the channel bandwidth neither into frequency slots nor into time slots; rather, it assigns a unique code to each subscriber of the channel. The assigned code makes one user's data orthogonal to all other users' data in order to avoid interference. This technique is widely used in cell phones, where all the subscribers have a distinctive code and have access to the same channel without dividing it into frequency or time slots. The codes are made unique so that the information or messages of one user do not interfere with those of another.

CDMA combines multiple signals from various users or subscribers over a single frequency channel. It accomplishes this using a spread spectrum technique. You don't have to worry about this term for now, as an entire chapter is dedicated to spread spectrum, where the concept is discussed in considerable detail.

CDMA works by assigning a pseudo-random code used to multiplex each signal, so that the bandwidth of the signal is spread out over the communication channel. The device at the receiving end knows the spreading code and uses it to de-multiplex (de-spread) the signal.

Multiplexing signals in this way provides a high level of security: the information from all the sources is mixed together, each with a separate code, and is then de-multiplexed at the receiver end. Only the receiver associated with a particular source knows the code, so the signal can be deciphered by that receiver only. Also, there is no need to reserve a small frequency slot as a guard-band to prevent interference, as in FDM, so the bandwidth of the channel is used effectively for transmission. All third-generation mobile networks make use of CDMA, as it provides better voice and data communication capabilities. However, signals received with low power, i.e. signals from users farther from the base station, might get buried under the signals of users nearer to the base station.

5.7.1 Working Principles of CDMA

The entire working of CDMA depends on spreading codes which are orthogonal to one another. Therefore, it is quite important to understand the concept of orthogonality.

Orthogonal simply means perpendicular, or at right angles. In the case of wireless networks, the concept is employed to make sure that the signals do not interfere with one another when being transmitted over a single communication link.

For this purpose, we need a set of codes, in other words sequences of numbers, whose pairwise dot products come out to be zero. One such set of codes is the Walsh-Hadamard codes, generated using the Hadamard matrix.

Consider a small 1×1 matrix. The construction rule says: copy the matrix to the right, copy the same matrix downward, and then negate the matrix and write it in the remaining (bottom-right) diagonal position. Repeating this doubles the size of the matrix at every step.


Refer to Figures 5.5 and 5.6 below:

Figure 5.5: Hadamard Matrix

Figure 5.6: Example of Hadamard Matrix

From the example, we obtain the following 4×4 matrix:

1  1  1  1
1 −1  1 −1
1  1 −1 −1
1 −1 −1  1

Example 5.4.

Given the 2×2 matrix below, write the 4×4 matrix.

1  1
1 −1

Solution: Following the rule of the Hadamard matrix,

1  1  1  1
1 −1  1 −1
1  1 −1 −1
1 −1 −1  1


Example 5.5 below illustrates the entire process for the 8×8 matrix.

Example 5.5.

Given the 4×4 matrix obtained from Example 5.4, write the 8×8 matrix.

1  1  1  1
1 −1  1 −1
1  1 −1 −1
1 −1 −1  1

Solution: Following the rule of the Hadamard matrix,

1  1  1  1  1  1  1  1
1 −1  1 −1  1 −1  1 −1
1  1 −1 −1  1  1 −1 −1
1 −1 −1  1  1 −1 −1  1
1  1  1  1 −1 −1 −1 −1
1 −1  1 −1 −1  1 −1  1
1  1 −1 −1 −1 −1  1  1
1 −1 −1  1 −1  1  1 −1

As stated above, for the condition of orthogonality to be satisfied, the dot product of any two rows (refer to Example 5.5) should be zero. As a test, let us take the codes from row 2 and row 5 (note that rows are numbered starting from 0). Figure 5.7 illustrates this:

Figure 5.7: Satisfying Condition of Orthogonality By Taking Codes from Rows 2 and 5 respectively

These orthogonal spreading codes are needed to separate the transmission of one user from the other. Therefore, for the purpose of transmission, the orthogonal codes are chosen in such a way that the interference between the multiple users remains minimal.
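The construction rule described above (copy right, copy down, negate the bottom-right copy) can be sketched as a small Python function; the helper name `hadamard` is illustrative, not from the text:

```python
# Sketch of the recursive Hadamard construction: at each step the matrix
# is copied to the right and downward, and the bottom-right copy is negated.

def hadamard(n):
    """Return the n x n Walsh-Hadamard matrix (n must be a power of two)."""
    H = [[1]]
    while len(H) < n:
        H = ([row + row for row in H] +                 # [ H  H ]
             [row + [-x for x in row] for row in H])    # [ H -H ]
    return H

H8 = hadamard(8)
dot = sum(a * b for a, b in zip(H8[2], H8[5]))   # rows numbered from 0
print(dot)   # 0 -> rows 2 and 5 are orthogonal, as in Figure 5.7
```

Every pair of distinct rows has a zero dot product, so any row can serve as one user's spreading code.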

5.7.2 Code Spreading and De-Spreading

Now that the concept of orthogonality is clear, let us discuss how code spreading and de-spreading work. We know that CDMA enhances spectral efficiency. It does this by mixing


the user information with the carrier signal, which in turn spreads the user information. The signal from a single user to be transmitted is shown in Figure 5.8 below:

Figure 5.8: User Data to be Transmitted

Also, the 8-bit carrier is shown in Figure 5.9.

Figure 5.9: 8-Bit Carrier

Now we will multiply each bit of the user information with the carrier, as shown in Figure 5.10 (c), (d) and (e):

Figure 5.10: (c) User Information Data, (d) Spreading Code, (e) Spread Information After Multiplying User Data and Spreading Code

As you can see in the figure above, the user information is now spread. For de-spreading, the receiver knows that a particular sequence of the signal represents information from one user and some other sequence represents information from another user.

Thus, we can say that 1 bit of user information was spread into an 8-chip set. It should be noted that after the spreading process, the bits are referred to as chips.
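The spreading and de-spreading described above can be sketched in Python. Data bits are represented as ±1, and the 8-chip code below is an illustrative Walsh row, not the one shown in the figure:

```python
# Sketch of CDMA spreading/de-spreading: each data bit (as +1/-1) is
# multiplied chip-by-chip with an 8-chip spreading code.
code = [1, -1, 1, -1, 1, -1, 1, -1]

def spread(bits, code):
    """Each data bit becomes len(code) chips: bit * chip."""
    return [b * c for b in bits for c in code]

def despread(chips, code):
    """Correlate each chip group with the code to recover the bits."""
    n = len(code)
    return [1 if sum(ch * c for ch, c in zip(chips[i:i + n], code)) > 0 else -1
            for i in range(0, len(chips), n)]

data = [1, -1, 1]
chips = spread(data, code)       # 3 bits -> 24 chips
print(despread(chips, code))     # [1, -1, 1]
```

Because the codes of different users are orthogonal, correlating with the wrong code yields zero instead of ±8, which is how one user's data is kept separate from another's.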

In a nutshell, the key to a proper CDMA system is to choose the right spreading code. The spreading code should not be too long, because a long code may cause interference. Also, for the process to work correctly, time synchronization should be done properly.

Chapter Summary
• Multiplexing is a signal transmission technique which transfers multiple signals over a single communication link or channel.
• Multiplexing combines multiple signals to form a single composite signal which is then transmitted over the communication channel. At the receiver end, the signal is separated into its original components and fed to the appropriate receivers.
• Multiplexing allows various users to share the same communication link and frequency range without overlapping into each other's frequency slots.
• Multiple access techniques define how multiple users will access the channel.
• The signals in frequency division multiplexing are multiplexed together and then shared over a physical communication link of some specified frequency range.
• The signals in FDM have different carrier frequencies. These different carrier frequencies ensure that the signals do not interfere with one another during the simultaneous transfer of data over the communication link.
• A guard-band is a small portion of frequency assigned to separate one frequency slot from the other.
• Time division multiplexing allows signals to be transmitted by assigning a fixed time frame to each signal.
• Each signal in TDM is transmitted for a brief amount of time.
• Synchronous TDM ensures that the multiplexer gives the same amount of time to each of the sources connected to it. The time slot is allocated even if the source has no data to transmit.
• In asynchronous TDM, a time frame is allocated only to those sources which have some data to transmit. If a source has no data to transfer, its time frame is allocated to some other source which does.
• In FDMA, a single bandwidth of some specified frequency range is divided into a number of smaller frequency channels, one for each user.
• In TDMA, the users are allowed to use the same frequency, but for a limited span of time.
• With TDM, a particular time frame is allotted to the user and stays with the user as long as the user is connected to the network; with time division multiple access, the user is allotted a time slot on a need basis.
• CDMA assigns a unique code to each subscriber of the channel. This allows the users to have access to the same channel without dividing it into frequency or time slots.
• The code assigned to each user should be unique so that the transmission of one user does not interfere with that of another.
• In wireless networks, the concept of orthogonality is used to make sure that the signals do not interfere with one another when being transmitted over a single communication link.
• For orthogonality, sequences of numbers are needed whose dot products come out to be zero. These can be generated using the Hadamard matrix.

Multiple Choice Questions

1. Synchronous Time Division Multiplexing is inefficient because:
(a) It has a higher data rate.
(b) Time slots are allocated irrespective of the availability of data.
(c) It offers an infinite number of time slots.
(d) None of them.

2. Which of the following are applications of Frequency Division Multiplexing:
(a) AM and FM radio stations.
(b) Television broadcasting.
(c) Cellular telephones.
(d) All of them.

3. Which of the following techniques makes use of analog signals for transmission?
(a) Time Division Multiplexing.
(b) Frequency Division Multiplexing.
(c) Code Division Multiple Access.
(d) None of them.

4. Simultaneous transmission of data is called:
(a) Modulation.
(b) De-multiplexing.
(c) Multiplexing.
(d) Spreading.

5. Which technique listed below has the variants Synchronous and Asynchronous:
(a) Synchronous Time Division Multiplexing.
(b) Asynchronous Time Division Multiplexing.
(c) Frequency Division Multiplexing.
(d) None of them.

6. Which of the following allocates time frames of variable length:
(a) Synchronous Time Division Multiplexing.
(b) Asynchronous Time Division Multiplexing.
(c) Frequency Division Multiplexing.
(d) None of them.

7. How does Frequency Division Multiplexing avoid adjacent channel interference:
(a) By using codes.
(b) By using guard bands.
(c) By allocating different frequencies.
(d) None of them.

8. How many guard bands are required if a channel of 540 kHz is divided into five sub-channels of 100 kHz each?
(a) Four guard bands.
(b) Three guard bands.
(c) Five guard bands.
(d) The above data is insufficient to determine.

9. A sequence of code elements in Code Division Multiple Access is referred to as:
(a) Bits.
(b) Chips.
(c) Sets.
(d) None of them.

10. Code Division Multiple Access overcomes interference between different users by:
(a) Dividing the channel into different time slots.
(b) Dividing the channel into different frequency slots.
(c) Assigning pseudo-random codes to users.
(d) All of them.

Review Questions

1. Enlist all the reasons that make multiplexing an important technique in the field of communication systems.
2. How does Frequency Division Multiplexing prevent adjacent channel interference?
3. Elaborate the basic principle of Time Division Multiplexing.
4. List the advantages and disadvantages of:
a. Synchronous Time Division Multiplexing.
b. Asynchronous Time Division Multiplexing.
5. Elaborate the working of Frequency Division Multiplexing.
6. List the practical applications of Time Division Multiplexing.
7. Spot and state the differences between Time Division Multiplexing and Frequency Division Multiplexing.
8. How does Multiple Access differ from Multiplexing?
9. How would you explain the difference between Time Division Multiplexing and Time Division Multiple Access?
10. List the practical applications of:
a. Frequency Division Multiple Access.
b. Time Division Multiple Access.
c. Code Division Multiple Access.
11. How does Code Division Multiple Access make efficient use of Spread Spectrum when compared to other techniques of multiplexing and multiple access?
12. What is code spreading and de-spreading?
13. Spot and state the difference between Frequency Division Multiple Access and Code Division Multiple Access.


14. List some pros and cons of Code Division Multiple Access.
15. Enlist some practical applications of Code Division Multiple Access.

Miscellaneous Questions

1. What is meant by Wavelength-Division Multiplexing or WDM?
2. Define Duplexing.
3. Differentiate between Frequency Division Duplexing and Time Division Duplexing.
4. Enlist and briefly elaborate the Spread Spectrum techniques used in CDMA.

Brain Buzzer

1. What is the "Frequency Allocation Concept"?
2. Enlist the main causes of interference in CDMA.

Bibliography

1. J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication. Kluwer Academic Publishers, 2004.
2. L. Frenzel, Principles of Electronic Communication Systems, 3rd ed. McGraw-Hill, New York, NY, USA, 2007.
3. S. Haykin, Communication Systems. Wiley, 2000.
4. J. G. Proakis and M. Salehi, Fundamentals of Communication Systems. Prentice Hall, 2004.
5. S. Benedetto and E. Biglieri, Principles of Digital Transmission: with Wireless Applications. Springer, 1999.
6. B. P. Lathi, Linear Systems and Signals. Oxford University Press, 2004.
7. D. P. Bertsekas and R. G. Gallager, Data Networks. Prentice Hall, 1991.
8. S. K. Mitra, Digital Signal Processing: A Computer-Based Approach. McGraw-Hill, 2010.
9. A. J. Viterbi, Principles of Coherent Communication. McGraw-Hill, 1966.
10. R. D. Yates and D. J. Goodman, Probability and Stochastic Processes: A Friendly Introduction for Electrical and Computer Engineers. Wiley, 2004.
11. A. Leon-Garcia, Probability and Random Processes for Electrical Engineering. Prentice Hall, 1993.
12. H. V. Poor, An Introduction to Signal Detection and Estimation. Springer, 2005.
13. R. E. Blahut, Modem Theory: An Introduction to Telecommunications. Cambridge University Press, 2009.
14. A. V. Oppenheim, A. S. Willsky, and S. H. Nawab, Signals and Systems. Prentice Hall, 1996.
15. A. V. Oppenheim and R. W. Schafer, Discrete-Time Signal Processing. Prentice Hall, 2009.


6. Spread Spectrum Communication

Learning Focus

There are two forms of communication available: wired transmission of data and wireless transmission of data. This chapter's main focus is on the form of wireless communication called Spread Spectrum Communication. The chapter serves the following objectives:

1. To define the term Spread Spectrum, elaborate its concept and state various reasons for its importance.
2. To acquaint the reader with the working of spread spectrum communication.
3. To familiarize the reader with the spreading and de-spreading operations.
4. To acquaint the reader with different kinds of spread spectrum techniques, i.e. Direct Sequence Spread Spectrum, Frequency Hopping Spread Spectrum and Time Hopping Spread Spectrum.
5. To acquaint the reader with the Pseudo-Noise sequence.


6.1 Introduction to Spread Spectrum

Back in Chapter 1, we discussed two forms of communication. One requires a physical path to carry the information; this is called wired or guided transmission. The information sent via wired communication is confined within a wire or cable. The other form is called wireless or unguided transmission. The information in this variant travels freely in free space and does not require any wire or cable to guide it, hence the name wireless. This chapter focuses on the form of wireless communication called spread spectrum communication. Spread spectrum communication is defined as:

A method through which the bandwidth of a narrowband signal is deliberately increased in the frequency domain so that it can be transferred over a relatively much wider bandwidth.

A signal transmitted using spread spectrum communication has a much wider bandwidth than a signal transmitted by conventional modulation schemes. This form of communication serves best in scenarios where high levels of data security and confidentiality are desired. The concept of spread spectrum dates back to 1941. The technique was first put forward by the Hollywood actress Hedy Lamarr and the composer George Antheil, who described a secure link to control torpedoes. The U.S. Army, at that time, did not take the technique very seriously. However, in the 1980s it gained ground and became widely used in applications that employ radio links in hostile environments.

6.2 Why Use Spread Spectrum?

So far, from what we have discussed, one can conclude that spread spectrum communication is used to secure radio links. You might be wondering whether the signal of an FM stereo station also uses spread spectrum communication. It does not: an FM radio signal uses conventional wireless techniques for transmission. Signals transmitted via such methods do not change frequency with time. For example, when you listen to the radio channel at 105.0 MHz, does the channel keep fluctuating to a higher or lower frequency? Certainly not. The carrier frequency of the channel is kept constant, and anyone can locate the signal to obtain the information. Transmitting via conventional means can cause the following two problems:

1. Constant-frequency signals are prone to catastrophic interference, which can occur either accidentally or deliberately.

2. A constant-frequency signal is easy to intercept, making it unfit for applications that demand high levels of security.

Therefore, to address the above-mentioned problems, it is necessary to avoid using a constant-frequency signal. The frequency of the signal should instead be varied according to some specific, yet complicated, mathematical frequency-versus-time function (also called the spread spectrum function) at the transmitter. For the receiver to intercept this signal, it must have the following two pieces of information:

1. It should know the frequency-versus-time function used by the transmitter.
2. It should know the initial time at which the function begins.

The spread spectrum technique serves best when a large number of users need to have access to the same channel without interference. Apart from this, the spread spectrum technique helps to resist jamming and prevents the signal from being intercepted by unauthorized users. Since spread spectrum is a form of wideband communication, many military applications prefer this technique, because wideband signals are more difficult to jam than narrowband signals. The following reasons make spread spectrum communication a better choice for sending and receiving data than conventional modulation techniques:


1. Resistance to Interference and Jamming: Spread spectrum communication helps to keep out interfering signals and the signals sent by jammers. Interference and jamming signals can be either intentional or unintentional. Spread spectrum communication rejects all the signals that do not have the spread spectrum key; only those signals which have the key are allowed to pass. Figure 6.1 shows this:

Figure 6.1: Resistance to Interference and Jamming

2. Resistance to Interception: Spread spectrum communication keeps out unauthorized listeners and interceptors. Since these unauthorized listeners do not have the spread spectrum key, they cannot recover the communication. Only listeners who have the actual spread spectrum key can communicate their data from one place to another. The spread spectrum signal appears as noise to anyone who does not have the right key and hence cannot be decoded. However, scanning methods are available to break the code if it is too short and predictable. With spread spectrum communication, the message can even be made invisible by burying the signal level under the noise floor. In this way, the unintended user will not be able to see the transmission; it will only appear as noise.

Figure 6.2: Resistance to Interception

3. Resistance to Multipath Fading: Ideally there should be only a single path between the transmitter and the receiver. But wireless communication usually results in multipath transmission, because the signal can be reflected from various objects such as buildings. Figure 6.3 below shows this:

Figure 6.3: Resistance to Multipath

The reflected path (R) can interfere with the direct path (D) in a phenomenon called fading. Signal R is rejected even though it has the same key, because the de-spreading process synchronizes to signal D. Methods are also available to use the reflected-path signals by de-spreading them and adding the extracted results to the main one.


6.2.1 Working Principle of Spread Spectrum

In spread spectrum communication, the bandwidth of the signal is spread over a relatively larger bandwidth. This is done to transmit error-free information for a given signal-to-noise ratio in the channel. One way to spread the bandwidth of the signal is to multiply the information with some code. In other words, to spread the signal over a wider bandwidth, a high-rate code signal is injected. This makes the energy used in transmitting the signal extend, or spread out, over a wider bandwidth, so that the energy itself appears as noise. Figure 6.4 below shows a simple block diagram of a spread spectrum communication system:

Figure 6.4: Block Diagram of Spread Spectrum Communication

As shown in the figure above, to spread the user information, a spread spectrum code is inserted at the transmitter along with the user data. Injecting a spread spectrum code is called the spreading operation. The spreading operation amounts to multiplying the user's information with a code known as the spreading code. At the receiver end of the communication system, the spreading code must be removed before the user data is retrieved. It is absolutely necessary that the spreading codes at both ends, transmitter and receiver, be identical and known in advance.

6.3 Classification of Spread Spectrum

The spread spectrum techniques are classified according to where the pseudo-random noise code is inserted in the system. On this basis, the spread spectrum techniques are classified as:

1. Direct Sequence Spread Spectrum
2. Frequency Hopping Spread Spectrum
3. Time Hopping Spread Spectrum

If the pseudo-random noise code or PRN code is inserted along with the data, this is called direct sequence spread spectrum. If the PRN code is injected at the carrier frequency level, this is called frequency hopping spread spectrum; this technique causes the carrier to change its frequency according to the PRN code. If the PRN code acts as an on/off gate to the transmitted signal, this is called time hopping spread spectrum.

6.4 Direct Sequence Spread Spectrum

Direct sequence spread spectrum, or DSSS, also known as Direct Sequence Code Division Multiple Access, is one of the spread spectrum techniques. The operation of direct sequence spread spectrum can be defined as:

The user data to be transmitted is multiplied with a pseudo-random noise code or PRN code, also called the chip code, to achieve the spreading operation.

The pseudo-random noise code, which is injected at the data level, has pulses of short duration, i.e. a larger bandwidth, compared to the pulses of the information signal. Modulating the information or message signal with this code chops the information signal up, yielding a signal whose bandwidth is much larger than that of the message and comparable to that of the pseudo-noise sequence. In other words, the message


signal is combined with the chipping code, which has a much higher data rate. The chipping code or PN code divides the message signal in accordance with the spreading ratio. Direct sequence spread spectrum shifts the phase of the carrier sine wave (modulated by the message signal) in a pseudo-random manner: each bit interval is modulated by the fast chip code. Figure 6.5 below shows the transmitted bit stream obtained by multiplying the user data with the PN code:

Figure 6.5: Transmitted Bit Stream by Multiplying User Data with PN Code

6.4.1 Working Principle of DSSS

The working of direct sequence spread spectrum is best understood by explaining the spreading and de-spreading operations separately. Figure 6.6 below shows the block diagram for direct sequence spreading:

Figure 6.6: Block Diagram of DSSS

Spreading Operation: At the transmitter end, the binary input data dt is multiplied by the pseudo-noise code C(t). This PN code is independent of the binary input data. The product of the input data and the PN code yields the transmitted baseband signal, denoted by Tx(t). Mathematically this is represented as:

Tx(t) = dt ×C(t) (6.1)

De-spreading Operation: At the receiver end, the received baseband signal, denoted by Rx(t), is multiplied by the pseudo-noise code known to the receiver, Cr(t). The de-spreading operation succeeds if and only if the pseudo-noise code at the transmitter, denoted by Ct(t), is identical to the pseudo-noise code at the receiver, Cr(t), i.e.

Ct(t) =Cr(t) (6.2)


If the above condition is satisfied, then the recovered data, denoted by dr(t), is obtained at the receiver. If the pseudo-noise code at the transmitter, Ct(t), does not match the pseudo-noise code at the receiver, Cr(t), i.e. Ct(t) ≠ Cr(t), then the data cannot be recovered at the receiver. The equation at the receiver is:

Rx(t) = Tx(t)×Cr(t) (6.3)

The transmitted base band signal obtained at the transmitter is given by:

Tx(t) = d(t) × Ct(t)   (6.4)

Substituting the value of Tx(t),

Rx(t) = d(t) × Ct(t) × Cr(t)   (6.5)

Since Ct(t) = Cr(t), and the code takes only the values ±1, we have Ct(t) × Cr(t) = +1 for all t. Thus, the above equation reduces to:

Rx(t) = d(t)   (6.6)
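The spread/de-spread round trip of Equations (6.1)–(6.6) can be sketched numerically. This is a minimal illustration with bipolar (±1) data and chips; the bit count and spreading factor are arbitrary choices, not values from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

N_BITS = 8   # number of data bits (illustrative)
SF = 16      # spreading factor: chips per bit (illustrative)

# Bipolar user data d(t) and PN chip code Ct(t), both in {-1, +1}
data = rng.choice([-1, 1], size=N_BITS)
pn = rng.choice([-1, 1], size=N_BITS * SF)

# Spreading (6.1): Tx(t) = d(t) x Ct(t), each bit held over SF chips
tx = np.repeat(data, SF) * pn

# De-spreading (6.5)-(6.6) with Cr(t) = Ct(t), so Ct(t) x Cr(t) = +1
rx = tx * pn

# Integrate over each bit period to read off the recovered bits
recovered = np.sign(rx.reshape(N_BITS, SF).sum(axis=1)).astype(int)

assert np.array_equal(recovered, data)   # d(t) recovered exactly
print("all", N_BITS, "bits recovered")
```

Because each chip satisfies Ct(t) × Ct(t) = +1, summing the de-spread chips over a bit period yields ±SF, whose sign is the original bit.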

6.4.2 Performance of DSSS in the Presence of Interference

Let us now examine the performance of the direct sequence spread spectrum in the presence of interference. If some interference is encountered during transmission, the received signal Rx(t) consists of the transmitted signal Tx(t) along with an interference term i. Interference can be due to:

1. Noise
2. Users
3. Jammers

Mathematically, the above situation is represented as:

Rx(t) = Tx(t)+ i (6.7)

From the transmitted base band equation, we have,

Tx(t) = d(t) × Ct(t)   (6.8)

Thus, by substituting we get,

Rx(t) = d(t) × Ct(t) + i   (6.9)

To recover the original data d(t) in the presence of interference, the received signal Rx(t) is multiplied by a pseudo-noise code. This PN code should be an exact replica of the PN code injected at the transmitter. Let the recovered data be denoted by dr(t). The recovered data is the product of the received signal and the same PN code as that of the transmitter:

dr(t) = Rx(t)×Ct(t) (6.10)

By substituting the value of Rx(t) we get,

dr(t) = d(t) × Ct(t) × Ct(t) + i × Ct(t)   (6.11)

Since the data signal is multiplied twice by the PN code and Ct(t) × Ct(t) = +1, we get:

dr(t) = d(t) + i × Ct(t)

Multiplying the interference by the PN code points to the fact that the spreading now affects the interference instead. After the de-spreading operation, the data is narrowband, denoted by Rs, while the interference component is wideband, denoted by Rc. If the recovered signal dr(t) is applied to a baseband filter with a bandwidth just large enough to accommodate the recovery of the data signal, most of the interference is filtered out.
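The claim that de-spreading collapses the data to narrowband while spreading the interference wideband can be checked with a small simulation. The tone jammer, its amplitude, and the spreading factor below are illustrative assumptions, and the baseband filter is approximated by averaging over each bit period:

```python
import numpy as np

rng = np.random.default_rng(1)
N_BITS, SF = 32, 128                     # bit count and spreading factor (illustrative)

data = rng.choice([-1.0, 1.0], size=N_BITS)
pn = rng.choice([-1.0, 1.0], size=N_BITS * SF)   # PN code Ct(t)

tx = np.repeat(data, SF) * pn            # spread signal Tx(t)

# Narrowband tone jammer i, twice the signal amplitude (illustrative)
t = np.arange(N_BITS * SF)
jam = 2.0 * np.cos(2 * np.pi * 0.013 * t)
rx = tx + jam                            # Rx(t) = Tx(t) + i

# De-spread with the same code: the data collapses to narrowband
# while the jammer is spread over the full chip bandwidth
despread = rx * pn

# Crude baseband filter: average over each bit period
recovered = np.sign(despread.reshape(N_BITS, SF).mean(axis=1))

errors = int(np.sum(recovered != data))
print("bit errors despite a strong jammer:", errors)
```

Averaging the jammer-times-code term over 128 chips shrinks it well below the ±1 data level, so the bits survive a jammer stronger than the signal itself.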


6.5 Frequency Hopping Spread Spectrum

Hedy Lamarr and George Antheil first put forward the concept of frequency hopping spread spectrum (FHSS), also known as Frequency Hopping Code Division Multiple Access (FH-CDMA). They invented a scheme to guide a torpedo using a frequency-hopping remote control. The aim was to prevent the torpedo control signal from being jammed, which could be done by varying the frequencies in some "arbitrary" pattern. The operation of frequency hopping spread spectrum can be defined as:

The data to be transmitted is switched among the different available frequency channels, spreading it out over the entire available bandwidth.

Frequency hopping spread spectrum works by transmitting the radio signal while rapidly switching the carrier among the various available frequency channels using a pseudo-random noise (PRN) code. The data to be transmitted is broken down into portions, which are then communicated on different frequencies at different times. The name frequency hopping comes from the fact that the transmitter switches, or hops, among the available frequency slots using some random or predefined algorithm. For the process to work correctly, both the transmitter and receiver should be in sync, i.e. the receiver should be aware of the frequency slots where the transmitter switches or hops to. In other words, the receiver should be familiar with the hopping pattern of the transmitter.

One way to achieve synchronization is to make sure that the transmitter uses all the available frequency slots within some predefined time period. The receiver can then detect the transmitter's frequency slot by choosing an arbitrary channel and listening to the data on that channel. The transmitter's data is identified by a special sequence of data that is unlikely to occur over the segment of data for this channel, and the segment can carry a checksum for integrity and further identification. The transmitter and receiver can use fixed tables of channel sequences, so that once synchronized they can maintain communication by following the table. On each channel segment, the transmitter can send its current location in the table.

The bandwidth available for transmission is divided into frequency slots, and the transmitter periodically jumps from one frequency channel to another. The data is not transmitted all at once; rather, the transmitter conveys small bursts of data on one frequency slot and then switches to some other frequency channel. In this way the data is spread out over the entire available frequency bandwidth.
The transmitted signals occupy a number of frequencies, each for a certain period of time. The amount of time for which the information remains in one frequency slot is known as the dwell time. If Rh denotes the hop rate, the dwell time is given by:

Th = 1 / Rh   (6.12)

The hopping is done in some predefined manner called the hopping sequence. The hopping sequence is the list of frequencies through which the transmitter will hop.

Figure 6.7: Frequency Hopping Spread Spectrum
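A shared-seed hop scheduler along these lines is one way to realize a predefined hopping sequence. The channel count, base frequency, and channel spacing below are purely illustrative:

```python
import random

N_CHANNELS = 16            # available frequency slots (illustrative)
BASE_HZ = 2_402_000_000    # lowest channel frequency (illustrative)
STEP_HZ = 1_000_000        # channel spacing (illustrative)

def hopping_sequence(seed: int, hops: int) -> list:
    """Pseudo-random hopping sequence. Transmitter and receiver derive
    the identical list from a shared seed, keeping them in sync."""
    prng = random.Random(seed)
    return [prng.randrange(N_CHANNELS) for _ in range(hops)]

tx_hops = hopping_sequence(seed=42, hops=8)
rx_hops = hopping_sequence(seed=42, hops=8)
assert tx_hops == rx_hops        # both ends follow the same pattern

for hop, ch in enumerate(tx_hops):
    print(f"hop {hop}: channel {ch} -> {BASE_HZ + ch * STEP_HZ:,} Hz")
```

Because both ends seed the same generator, no separate channel is needed to communicate the hopping pattern, mirroring the synchronization requirement described above.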


6.6 Introduction to Pseudo-Noise Sequence

We have come across the term pseudo-noise (PN) code in the previous sections. These codes play a vital role in spread spectrum communication. This section discusses pseudo-noise sequences in greater detail, along with their types. We have seen that the pseudo-noise code must be the same at the transmitter and the receiver to ensure proper spreading and de-spreading of the signal. However, you might be wondering: why not use a truly random sequence instead of a pseudo-random sequence? The answer is quite simple: we have to ensure synchronization between the transmitter and the receiver. For this, the receiver needs an exact imitation of the transmitted code to de-spread the signal. If a truly random sequence were used at the transmitter, a separate channel would be needed to transmit the spreading sequence, which is obviously a very cumbersome approach. Thus, pseudo-noise sequences are used so that both the transmitter and receiver can generate the same code independently of one another. A truly random sequence, as the name suggests, is one which cannot be predicted, although one can describe its prospective variations in statistical terms. A pseudo-random code, on the other hand, is not purely random but imitates the statistical behavior of white noise. Pseudo-random noise codes are deterministic in nature, i.e. they consist of a sequence of pulses that repeats itself after a certain period of time. Despite being deterministic, a pseudo-noise code has certain properties of a random sequence. Apart from spread spectrum, pseudo-random noise has applications in cryptography and scrambling. A Linear Feedback Shift Register (LFSR) is used to generate a pseudo-random noise code. An LFSR is a shift register whose input bit is a linear function of its previous state.
Usually, the input bit of an LFSR is determined by the XOR of some bits of the overall shift register value. The initial value of the LFSR is called the seed, and because the operation of the register is deterministic, the stream of values produced by the register is completely determined by its current (or previous) state. Likewise, because the register has a finite number of possible states, it must eventually enter a repeating cycle. However, an LFSR with a well-chosen feedback function can produce a sequence of bits that appears random and has a very long cycle.
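The LFSR just described can be sketched in a few lines. The 4-stage register and tap positions below are an illustrative choice that happens to be maximal, giving a period of 2^4 − 1 = 15:

```python
def lfsr(seed, taps, nbits, length):
    """Fibonacci LFSR: the new input bit is the XOR of the tapped stages.
    Tap p refers to stage p counted from the input end, so the register's
    least-significant bit is the output stage."""
    state = seed
    out = []
    for _ in range(length):
        out.append(state & 1)                        # output bit
        fb = 0
        for t in taps:
            fb ^= (state >> (nbits - t)) & 1         # XOR the tapped stages
        state = (state >> 1) | (fb << (nbits - 1))   # shift, feed back at top
    return out

# 4-stage maximal LFSR (taps 4 and 3): period 2**4 - 1 = 15
seq = lfsr(seed=0b1000, taps=[4, 3], nbits=4, length=30)
print(seq[:15])
assert seq[:15] == seq[15:]                          # repeats with period 15
assert abs(seq[:15].count(1) - seq[:15].count(0)) == 1   # balance property
```

The register cycles through all 15 non-zero states before repeating, which is exactly the maximal-length behavior discussed in Section 6.6.2.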

6.6.1 Pseudo-Random Properties

Spread spectrum communication is employed to accommodate multiple users within a single bandwidth, each user being given a bandwidth large enough to transmit their information. To prevent interference among users, a code is assigned to each user having access to the channel. Apart from isolating users on the same frequency channel, the code is used for the spreading and de-spreading functions. These codes should be random in nature so that the code used by one user cannot be predicted by another. However, as just discussed, employing a truly random code is not a convenient approach. Thus, pseudo-random codes are employed both to prevent interference and to carry out the spreading and de-spreading operations. These codes are not truly random, hence the name pseudo. If a truly random sequence were used, it would be difficult for everyone, including the intended receiver, to predict the code sequence. Pseudo-noise codes appear as noise to all users in the channel except the intended transmitter and receiver, and they possess some statistical properties similar to those of a truly random sequence.

Balance Property: Over the sequence period, the numbers of 0s and 1s differ by at most 1. For example, the sequence 111100010011010 has eight 1s and seven 0s. The sequence is exactly balanced if and only if there are equal numbers of ones and zeros; otherwise the sequence is nearly balanced.

Run Length Distribution: A run is defined as a sequence of a single type of binary digit.
Correlation Property: If a shifted version of the sequence is compared term by term with the original sequence, the number of agreements differs from the number of disagreements by not more than one count.

6.6.2 Maximal Length Sequence

Maximal length sequences (m-sequences) are bit sequences generated using maximal linear feedback shift registers. They are so called because they are periodic and reproduce every binary sequence (except the zero vector) that can be represented by the shift register; for a length-m register they produce a sequence of length 2^m − 1. An m-sequence can be generated with an m-stage shift register and has period N = 2^m − 1. Its key properties are:

Balance property: The occurrences of 0 and 1 in the sequence are approximately the same. More precisely, a maximal length sequence of length 2^m − 1 contains 2^(m−1) ones and 2^(m−1) − 1 zeros.
Run Length Distribution: The runs are balanced, except that there is no run of length zero.
Autocorrelation: The binary-valued autocorrelation function equals 1 when the shift is a multiple of N, and −1/N otherwise.
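The balance and autocorrelation properties can be verified numerically on one period of a length-15 m-sequence. The particular sequence below, generated beforehand by a 4-stage maximal LFSR, is illustrative:

```python
# One period of a length-15 m-sequence from a 4-stage maximal LFSR
m_seq = [0, 0, 0, 1, 0, 0, 1, 1, 0, 1, 0, 1, 1, 1, 1]
N = len(m_seq)

# Balance property: the ones outnumber the zeros by exactly one
assert m_seq.count(1) == 8 and m_seq.count(0) == 7

# Map to bipolar chips for the correlation test: 0 -> +1, 1 -> -1
chips = [1 - 2 * b for b in m_seq]

def autocorr(shift):
    """Periodic autocorrelation of the bipolar sequence."""
    return sum(chips[n] * chips[(n + shift) % N] for n in range(N)) / N

assert autocorr(0) == 1.0
for shift in range(1, N):
    assert abs(autocorr(shift) - (-1 / N)) < 1e-12
print("autocorrelation is 1 at zero shift and -1/15 elsewhere")
```

The sharply peaked autocorrelation is what lets a DSSS receiver lock onto the correct code phase during de-spreading.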

Chapter Summary

• In spread spectrum communication, the bandwidth of a narrowband signal is deliberately varied in the frequency domain so that it can be transferred over a relatively wider bandwidth.
• Signals transmitted using spread spectrum communication have a much wider bandwidth than signals transmitted by conventional modulation schemes.
• Spread spectrum communication is used when many users need to have access to the same channel without interference.
• It also helps to prevent jamming, so that the signal cannot be intercepted by any unauthorized user.
• The following reasons make spread spectrum communication a preferable choice for the transmission of data:
  Resistance to Interference and Jamming: Spread spectrum communication rejects all signals that do not have the spread spectrum key; only signals with the key can pass.
  Resistance to Interception: The message can be made invisible by burying the signal level under the noise floor. The unintended user will not be able to see the transmission; it will appear only as noise.
  Resistance to Multipath Fading.
• To spread the user information, a spread spectrum code is inserted at the transmitter along with the user data.
• Injecting a spread spectrum code is called the spreading operation. The spreading operation amounts to multiplying the user's information with a code known as the spreading code.
• At the receiver end of the communication system, the spreading code should be removed before the user data is retrieved.
• The spreading codes at both ends should be identical and should be known in advance.
• In DSSS, the user data to be transmitted is multiplied by a pseudo-random noise (PRN) code, also called the chip code, to achieve the spreading operation.
• Frequency hopping spread spectrum works by transmitting the radio signal while rapidly switching the carrier among the various available frequency channels using a pseudo-random noise (PRN) code.
• A truly random sequence is one which cannot be predicted.
• Pseudo-random noise codes are deterministic in nature, i.e. they consist of a sequence of pulses that repeats itself after a certain period of time.
• A pseudo-noise code has certain properties of a random sequence.
• Pseudo-noise codes possess statistical properties similar to those of a truly random sequence:
  Balance Property: Over the sequence period, the numbers of 0s and 1s differ by at most 1.
  Run Length Distribution: A run is defined as a sequence of a single type of binary digit.
  Correlation Property: If a shifted version of the sequence is compared term by term with the original sequence, the number of agreements differs from the number of disagreements by not more than one count.
• Maximal length sequences are bit sequences generated using maximal linear feedback shift registers, so called because they are periodic and reproduce every binary sequence (except the zero vector) that can be represented by the shift register.

Review Questions

1. Define and elaborate Spread Spectrum in your own words.
2. Compare and contrast: how is Spread Spectrum a better technique of data transmission when compared to other techniques that serve the same purpose?
3. How does Spread Spectrum function?
4. Explain how the spreading and de-spreading operations take place in Direct Sequence Spread Spectrum.
5. What is Dwell Time?
6. Elaborate, with the help of suitable diagrams, the working of Frequency Hopping Spread Spectrum.
7. What is meant by a truly random sequence? Also, explain why a Pseudo-Random Noise (PRN) code is needed instead of a truly random one.
8. What is a Linear Feedback Shift Register?
9. Elaborate the properties of a pseudo-random sequence.
10. What is meant by anti-jamming?

Miscellaneous Questions

1. List the practical applications of:
   a. Direct Sequence Spread Spectrum
   b. Frequency Hopping Spread Spectrum
2. What are Hybrid Systems?
3. What is meant by multi-path fading?

Bibliography

1. S. Benedetto and E. Biglieri, Principles of Digital Transmission. New York: Kluwer Academic, 1999.
2. L. E. Frenzel Jr., Principles of Electronic Communication Systems, 4th ed. (ISBN-13: 978-0-07-337385-0).
3. S. B. Wicker, Error Control Systems for Digital Communication and Storage. Upper Saddle River, NJ: Prentice-Hall, 1995.
4. S. G. Wilson, Digital Modulation and Coding. Upper Saddle River, NJ: Prentice-Hall, 1996.
5. J. G. Proakis, Digital Communications. New York: McGraw-Hill, 2001.
6. D. Torrieri, "Information-Bit, Information-Symbol, and Decoded-Symbol Error Rates for Linear Block Codes," IEEE Trans. Commun., vol. 36, pp. 613–617, May 1988.
7. J.-J. Chang, D.-J. Hwang, and M.-C. Lin, "Some Extended Results on the Search for Good Convolutional Codes," IEEE Trans. Inform. Theory, vol. 43, pp. 1682–1697, September 1997.
8. C. Berrou and A. Glavieux, "Near Optimum Error-Correcting Coding and Decoding: Turbo Codes," IEEE Trans. Commun., vol. 44, pp. 1261–1271, October 1996.
9. L. Hanzo, T. H. Liew, and B. L. Yeap, Turbo Coding, Turbo Equalisation and Space-Time Coding. Chichester, England: Wiley, 2002.
10. R. Pyndiah, "Near-Optimum Decoding of Product Codes: Block Turbo Codes," IEEE Trans. Commun., vol. 46, pp. 1003–1010, August 1998.
11. P. Robertson and T. Worz, "Bandwidth Efficient Turbo Trellis-Coded Modulation Using Punctured Component Codes," IEEE J. Selected Areas Commun., vol. 16, pp. 206–218, February 1998.
12. D. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication, 3rd ed. Boston: Kluwer Academic, 2004.


7. Orthogonal Frequency Division Multiplexing

Learning Focus

This chapter's main focus is the study of the Orthogonal Frequency Division Multiplexing (OFDM) system. The chapter, however, covers only the basic concepts related to OFDM. It is organized in such a manner that it first acquaints the reader with the following:

1. Fundamentals of the classification of bandwidth, i.e. narrowband communication and wideband communication.
2. Challenges in a typical wireless communication channel.
3. The concept of inter-channel interference.

After this foundation is laid, the chapter proceeds to serve the following objectives:

1. To elaborate the basics of an OFDM system and how it maximizes spectral efficiency.
2. To elaborate the concept of the Cyclic Prefix.
3. To discuss OFDM-based 4G technologies, including Long Term Evolution (LTE) and WiMAX.


7.1 Basics of Wideband and Narrowband Communication

A communication system operates within a limited bandwidth. This bandwidth is nothing but a range of frequencies, which may be defined in terms of kilohertz (kHz), megahertz (MHz) or gigahertz (GHz). These units, along with other properties of the bandwidth, for example the number of users it can support, categorize communication into two classes:

1. Narrowband Communication2. Wideband Communication

7.1.1 Narrowband Communication

Narrowband communication occurs where the gain is constant for all the frequencies in a particular frequency range; in other words, the frequency response of the channel is flat over the signal band. The band of frequencies in narrowband communication should be smaller than the coherence bandwidth (defined later in this section), i.e. smaller than the span of frequencies over which the channel response remains flat. Speaking in terms of data communication or internet connections, narrowband communication is said to occur if the data is transmitted at rates of the order of bits per second; a dial-up connection is one good example. Also, the narrow band in telephony occupies the 300–3400 Hz frequency range.

7.1.2 Wideband Communication

As the name suggests, wideband communication occupies a relatively larger band of frequencies. The band of frequencies exceeds the coherence bandwidth, and thus wideband communication does not show a flat frequency response. Speaking in terms of data communication or internet communication, wideband communication allows streaming of data at speeds higher than 50 Mbps. It provides better sound clarity than conventional voice-band telephone calls by occupying a greater range of the audio spectrum.

7.2 Challenges in Wireless Communication

Ideally, there should exist only a single path between the transmitter and receiver, as in guided media. But wireless media, as the name suggests, are unguided, i.e. there is no physical path (wire) between the transmitter and receiver to guide the transmission of the signals. When the signal from the transmitter reaches the mobile receiver, more than one path exists between the two entities. In a typical urban environment (see Figure 7.1 below), the other, undesirable paths arise from reflections off objects present in the environment, such as buildings and tall trees. So, the receiver not only gets a signal from the direct path, also called the Line of Sight (LOS) path, but also catches signals deflected by objects along the way. These undesirable, deflected signals are called scatterers, or the non-line-of-sight (NLOS) components. Therefore, what arrives at the receiver is not just the pure line-of-sight (direct path) component; the non-line-of-sight components that result from the scatterers also hit the receiver. In a nutshell, the receiver receives multiple copies of the signal.

Signal transmission and reception in such an environment is called a Multipath Propagation Environment. In a multipath propagation environment, the superimposition of the LOS component and the non-line-of-sight components can lead to two types of interference: constructive interference or destructive interference. Constructive interference, as the name suggests, enhances the signal amplitude, whereas destructive interference attenuates, or subsides, the signal amplitude.

7.2.1 Coherence Bandwidth

The term coherence bandwidth is very much evident from the name itself. It defines a span of frequencies over which the channel remains invariant, or consistent, hence the name coherence


Figure 7.1: Signal Reflecting from Different Objects in a Typical Urban Environment

bandwidth. In this case, the channel shows the same gain over some span of frequencies and can be considered static over that particular range. Consider Figure 7.2 below:

Figure 7.2: Coherence Bandwidth

As shown in Figure 7.2, the channel exhibits constant gain over a range of frequencies around a particular point (indicated by the double-sided arrow). This span of frequencies is called the coherence bandwidth. Some channels may exhibit coherence over a larger span of frequencies, and some over a smaller span.

7.3 Delay Spread & Inter Symbol Interference

There are certain parameters that characterize the wireless channel; one of the most important is the delay spread. In the previous section we discussed the multipath propagation environment, which results in the superimposition of multiple radio signals. The channel impulse response is given as:

h(t) = ∑_{i=0}^{L−1} a_i δ(t − τ_i)   (7.1)

where:
a_i = gain of the i-th multipath component
δ = unit impulse (Dirac delta)
τ_i = delay experienced by the i-th multipath component
L = number of multipath components


Figure 7.3: Transmitted Signal (Tx) with Symbols S0,S1,S2,S3,S4, S5, S6 ...

To understand the concept of delay spread, consider a transmitted signal Tx with various symbols such as S0, S1, S2, S3, S4, S5, S6 and so on, as shown in Figure 7.3.

The symbols S0, S1, S2, S3, S4, S5, S6, ... fluctuate between the positive and negative peaks. Each symbol lasts for some brief amount of time T, known as the symbol time. The symbol time is constant for every symbol.

Let us now consider a two-component multipath channel. For this case L = 2 therefore, thechannel response could be written as:

h(t) = a0δ (t− τ0)+a1δ (t− τ1)

For the sake of simplicity, consider the simple case where,

a0 = a1 = 1

The equation then reduces to:

h(t) = δ (t− τ0)+δ (t− τ1)

where δ(t − τ0) is the delay corresponding to the first multipath component and δ(t − τ1) is the delay corresponding to the second multipath component. When the signal arrives at the receiver, it has two components: one delayed by τ0, also called the line-of-sight component, and the other delayed by τ1, also called the non-line-of-sight component. These two components add up at the receiver.

The received signal Rx corresponding to the multipath channel is illustrated in Figure 7.4:

Figure 7.4: Received Signal (Rx)


From Figure 7.4 it is quite evident that there are two replicas of the original signal at the receiver. The signal corresponding to the line-of-sight path is delayed by τ0, while the signal corresponding to the non-line-of-sight path appears after a delay of τ1. These two signals add up at the receiver.

Figure 7.4 also shows the relative delay, given by τ1 − τ0. Here the relative delay is greater than the symbol time T. Thus, when the two signals add up, the symbol S1 delayed by τ0 interferes, or adds up, with the symbol S0 delayed by τ1. Similarly, the symbol S2 delayed by τ0 adds up with the symbol S1 delayed by τ1, and so on. This adding up of signal symbols is called Inter Symbol Interference (ISI), and the delay τ1 − τ0 is called the delay spread.

Therefore, when the delay spread is greater than the symbol time, in other words when τ1 − τ0 > T, the signal symbols interfere with one another, giving rise to the phenomenon of inter symbol interference (ISI). ISI is undesirable because it leads to distortion of the original transmitted signal. This means that the delay spread should be kept smaller than the symbol time. The delay spread in a typical outdoor system is 2 µs.
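The two-path channel h(t) = δ(t − τ0) + δ(t − τ1) with a delay spread larger than the symbol time can be simulated directly. The symbol values, oversampling factor, and delays below are illustrative:

```python
import numpy as np

OS = 8                                           # samples per symbol (illustrative)
symbols = np.array([1, -1, 1, 1, -1, 1, -1])     # S0..S6 (illustrative values)

tx = np.repeat(symbols, OS)   # transmitted waveform, one symbol per symbol time

# Two-path channel h(t) = d(t - tau0) + d(t - tau1), with a0 = a1 = 1
tau0, tau1 = 0, 2 * OS        # tau1 - tau0 = 2 symbol times > T -> ISI expected
rx = np.zeros(len(tx) + tau1)
rx[tau0:tau0 + len(tx)] += tx            # line-of-sight copy
rx[tau1:tau1 + len(tx)] += tx            # delayed (NLOS) copy

# During symbol k's slot the receiver sees S_k from one path plus
# S_{k-2} from the other: inter symbol interference
k = 3
slot = rx[k * OS:(k + 1) * OS]
print("slot", k, "mean:", slot.mean())   # S3 + S1 = 1 + (-1) = 0, not S3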

7.4 History of OFDM

The idea of multicarrier communication was first put forward by Chang in 1966. The idea was to transmit parallel data streams by frequency division multiplexing with overlapping subcarriers. The scheme proposed by Chang proved quite efficient in terms of bandwidth utilization and in overcoming the effects of multipath propagation. However, the implementation of an OFDM system with many subcarriers, requiring arrays of sinusoidal generators and coherent demodulators for parallel operation, seemed cumbersome and complex until a more sophisticated scheme was proposed by Weinstein and Ebert in 1971. Their idea was based on replacing the banks of sinusoidal generators and coherent demodulators by using the IDFT as modulator and the DFT as demodulator. Later on, Peled and Ruiz put forward the concept of the cyclic prefix (CP) to ensure orthogonality between the subcarriers.

OFDM has been the choice of many broadband communication systems such as High-bit-rate Digital Subscriber Line (HDSL), Asymmetric Digital Subscriber Line (ADSL), Very High Speed Digital Subscriber Line (VHDSL), and High Definition Television (HDTV). The first OFDM-based wireless local area network (WLAN) standard, IEEE 802.11, was proposed in 1997. IEEE 802.11 could support data rates up to 2 Mbps, which were later enhanced to 54 Mbps.

7.5 Basics of Orthogonal Frequency Division Multiplexing

Orthogonal Frequency Division Multiplexing works on the principle of multi-carrier modulation. Consider a typical communication system with bandwidth B and a single carrier placed at the center of the bandwidth at some carrier frequency, and let the bandwidth of the system be 10 MHz (see Figure 7.5). The symbol time is:

Ts = 1/B = 1/(10 MHz) = 0.1 µs

From the discussion in Section 7.3, we know that the delay spread of a typical outdoor system is 2 µs. Comparing this with the symbol time of 0.1 µs, we see that the symbol time is much less than the delay spread, which leads to inter symbol interference and hence degradation of the system performance. From this we can observe that as the bandwidth B of the channel increases, the symbol time Ts decreases, becomes smaller than the typical delay spread of the channel, and eventually leads to ISI.

In order to overcome the adverse effect of ISI on the communication channel, we employ multicarrier modulation (MCM). According to the MCM principle, instead


Figure 7.5: Working of OFDM System. System of Bandwidth B=10MHz

of having a single large bandwidth, there should be multiple sub-bands, which can be obtained by dividing the single bandwidth into smaller sub-channels. Each sub-band is centered around some carrier frequency known as the sub-carrier frequency. Figure 7.6 illustrates this:

Figure 7.6: Sub-Bands Obtained by Dividing Single Bandwidth into Smaller Sub-channels

The bandwidth of each sub-band is B/N. Suppose that B = 10 MHz and N = 1000; then the bandwidth of each sub-band is:

B/N = 10 MHz / 1000 = 10 kHz

The symbol time in each sub-band is then:

Ts = 1/(B/N) = 1/(10 kHz) = 0.1 ms = 100 µs

Thus, the symbol time is now much greater than the delay spread of 2 µs. Hence, it can be concluded that by dividing, or splitting, a large bandwidth into a number of small sub-bands, inter symbol interference can be reduced to a great extent. Such a system, in which a large bandwidth is broken down into smaller sub-bands, is called a multicarrier modulation (MCM) system. OFDM also follows the principle of MCM.
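The symbol-time arithmetic above can be captured in a short helper, using the chapter's example values (B = 10 MHz, N = 1000 sub-bands, 2 µs delay spread):

```python
def symbol_time(bandwidth_hz: float, n_subbands: int = 1) -> float:
    """Symbol time Ts = 1 / (B / N) when bandwidth B is split into N sub-bands."""
    return n_subbands / bandwidth_hz

B = 10e6             # total bandwidth: 10 MHz
DELAY_SPREAD = 2e-6  # typical outdoor delay spread: 2 us

single = symbol_time(B)        # single carrier: Ts = 0.1 us, far below the delay spread
multi = symbol_time(B, 1000)   # 1000 sub-bands: Ts = 100 us, far above it

print(f"single-carrier Ts = {single * 1e6:.1f} us -> ISI? {single < DELAY_SPREAD}")
print(f"multicarrier   Ts = {multi * 1e6:.1f} us -> ISI? {multi < DELAY_SPREAD}")
```

Splitting the band stretches the symbol time by the factor N, which is exactly the MCM mechanism that suppresses ISI.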

7.5.1 Description of an OFDM System

OFDM is a subset of frequency division multiplexing in which a single channel utilizes multiple sub-carriers on adjacent frequencies. The sub-carriers in an OFDM system overlap in order to maximize spectral efficiency. Due to this overlap, it may seem that the adjacent sub-carrier frequencies will interfere, but this is not the case: as the name orthogonal frequency suggests, the sub-carrier frequencies are orthogonal, and because of this orthogonality the adjacent signals do not interfere. OFDM systems are thus able to maximize spectral efficiency without causing adjacent channel interference. The frequency domain representation of an OFDM system is shown in Figure 7.7:

Figure 7.7: Frequency Domain Representation of an OFDM System

7.5.2 Orthogonality of Sub-Carriers

The sub-carriers in OFDM are able to partially overlap without interfering with adjacent sub-carriers because the maximum power of each sub-carrier corresponds directly with the minimum power of each adjacent channel.

The frequencies of the subcarriers in OFDM are harmonically related. Let {X_k}, k = 0, 1, ..., N−1, be the set of complex symbols to be transmitted by multicarrier modulation. The continuous-time multicarrier modulated signal is expressed as:

x(t) = (1/√N) ∑_{k=0}^{N−1} X_k e^{j2π f_k t},   0 ≤ t ≤ Ts

where f_k = f_0 + k∆f for k = 0, 1, 2, ..., N−1. The subcarriers become orthogonal if Ts∆f = 1, and such a modulation scheme is called OFDM, where Ts and ∆f are the OFDM symbol duration and the subcarrier frequency spacing, respectively. In the case of orthogonal subcarriers, x(t) denotes a time-domain OFDM signal.

Because of the orthogonality condition we have:

(1/Ts) ∫₀^Ts e^{j2π f_k t} e^{−j2π f_i t} dt = 1 if k = i, and 0 otherwise.
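The orthogonality condition Ts∆f = 1 can be checked numerically: the normalized inner product of two subcarriers over one symbol duration is 1 for the same subcarrier and 0 for different ones. The sample count and subcarrier indices below are illustrative:

```python
import numpy as np

Ts = 1.0           # OFDM symbol duration (normalized)
df = 1.0 / Ts      # subcarrier spacing chosen so that Ts * df = 1
M = 4096           # samples per symbol for the numerical integral
t = np.arange(M) / M * Ts

def inner(k: int, i: int) -> complex:
    """(1/Ts) * integral over one symbol of exp(j2*pi*fk*t) * conj(exp(j2*pi*fi*t))."""
    a = np.exp(2j * np.pi * k * df * t)
    b = np.exp(2j * np.pi * i * df * t)
    return np.mean(a * np.conj(b))

print(abs(inner(3, 3)))   # same subcarrier: ~1
print(abs(inner(3, 5)))   # different subcarriers: ~0, i.e. orthogonal
```

Each subcarrier completes an integer number of cycles within Ts, so the cross terms integrate to zero over the symbol, which is what allows the overlapping spectra of Figure 7.7 to coexist without interference.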

7.6 OFDM Based 4G Technologies

OFDM is a key wireless broadband technology. Broadband technology supports large bandwidths: an orthogonal frequency division multiplexing based system can have a bandwidth of 20 MHz, which is a hundred times more than the 200 kHz of a GSM channel. Due to their large bandwidth, OFDM systems support much higher data rates, which makes OFDM a good choice for modern 4G and 5G systems. The two major OFDM-based 4G cellular systems are Long Term Evolution (LTE) and WiMAX (discussed shortly). Apart from being a popular choice in the implementation of cellular systems, OFDM technology is widely used in wireless LAN (WLAN) systems. The major IEEE WLAN standards based on OFDM are 802.11a, 802.11g, 802.11n, and 802.11ac.


7.6.1 Long Term Evolution (LTE)

LTE, or Long-Term Evolution, is a registered trademark owned by the European Telecommunications Standards Institute (ETSI) for a wireless data communication technology. LTE is a successor to GSM, a 2G technology, and to the 3G technology UMTS. By using OFDM as its base technology, LTE mitigates the multipath-fading problems that affected UMTS. Later releases of LTE are based on MIMO (Multiple Input, Multiple Output) technology; the combination of OFDM and MIMO provides better coverage, especially in dense urban areas. Some of the facilities offered by LTE that make it a better choice than GSM and UMTS are:

1. It provides peak data rates of 300 Mbps on the downlink and 75 Mbps on the uplink.
2. It provides reduced latency.
3. It has scalable bandwidth capacity.
4. LTE is backward compatible with GSM and UMTS systems.

7.6.2 WiMAX

WiMAX, which is based on wireless MAN technology, stands for Worldwide Interoperability for Microwave Access. The physical layer of WiMAX is based on orthogonal frequency division multiplexing, allocating different subsets of the OFDM tones to different users. The working principle of WiMAX is essentially the same as Wi-Fi: it transmits signals that carry data wirelessly using radio waves. But there is one key difference between the two that makes WiMAX stand out. Wi-Fi is effective only over a small area, and the signal becomes weaker as it moves beyond that area. WiMAX, on the other hand, needs a set-up of towers, similar to common mobile phone towers, dispersed across the country. WiMAX requires only a few towers to work effectively; a single tower can serve an area as wide as 7,800 square kilometers. These towers can be connected to the internet either wirelessly, through a microwave link, or in wired fashion, using a high-speed cabled link. The user receives the signal from these towers via a small antenna installed on their device, i.e. a computer or phone. WiMAX, unlike Wi-Fi, is better at penetrating walls and trees, thereby providing non-line-of-sight service to users. It also supports direct line-of-sight operation, which gives the user access to a more stable and stronger signal.

Chapter Summary

• Narrowband communication occurs where the gain is constant for all the frequencies in a particular frequency range.
• In terms of data communication or internet connections, narrowband communication is said to occur when data is transmitted at rates on the order of bits per second.
• Wideband communication occupies a relatively larger band of frequencies; the band of frequencies exceeds the coherence bandwidth.
• Wideband communication allows streaming of data by communicating at speeds higher than 50 Mbps.
• Coherence bandwidth defines a span of frequencies over which the channel remains invariant or consistent.
• When the signal from the transmitter reaches the mobile receiver, there exists more than one path between the two entities.
• In a typical urban environment, the receiver receives multipath components, i.e. it catches not only the signal from the direct path but also signals that arise from reflections off objects in the environment, such as trees and tall buildings. Such an environment is called a multipath propagation environment.


• Constructive interference enhances the signal amplitude, whereas destructive interference attenuates or subsides the signal amplitude.
• Inter-symbol interference (ISI) arises when the delay spread is greater than the symbol time, characterized by the relation T_d > T_s.
• ISI is undesirable because it leads to distortion of the original signal. To avoid ISI, the symbol time should be made greater than the delay spread.
• The delay spread in a typical outdoor system is about 2 µs.
• According to the MCM principle, instead of having a single large bandwidth, there should be multiple sub-bands, obtained by dividing the single bandwidth into smaller sub-channels.
• Each sub-band is centered around some carrier frequency known as a sub-carrier frequency.
• The effect of ISI can be minimized by dividing a single bandwidth into smaller sub-bands.
• OFDM is a subset of frequency division multiplexing in which a single channel utilizes multiple sub-carriers on adjacent frequencies.
• The sub-carriers in an OFDM system overlap to maximize spectral efficiency.
• The OFDM cyclic prefix is formed in such a way that every OFDM symbol is headed, or prefixed, by a copy of the end part of that same symbol.
• In inter-block interference (IBI), the samples in the current block are interfered with by samples from the previous block.
• OFDM has the following two key steps:
  1. IFFT and FFT processing at the transmitter and the receiver, respectively.
  2. Addition of the cyclic prefix (CP).
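The two key steps above can be sketched end to end. The following is a minimal simulation under assumed parameters (N = 64 subcarriers, a 16-sample CP, QPSK data and a 3-tap multipath channel are choices for illustration, not from the text): the transmitter applies an IFFT and prepends a cyclic prefix, the channel convolves the signal, and the receiver removes the CP, applies an FFT, and equalizes each subcarrier with a single complex division.

```python
import numpy as np

rng = np.random.default_rng(0)
N, CP = 64, 16                         # subcarriers and cyclic-prefix length

symbols = rng.choice([1+1j, 1-1j, -1+1j, -1-1j], size=N)   # QPSK data

# Transmitter: IFFT, then prefix the block with its own last CP samples
x = np.fft.ifft(symbols)
tx = np.concatenate([x[-CP:], x])

# Channel: 3-tap multipath whose delay spread is shorter than the CP
h = np.array([1.0, 0.4, 0.2])
rx = np.convolve(tx, h)[:CP + N]

# Receiver: drop the CP, take the FFT, divide by the channel response
Y = np.fft.fft(rx[CP:])
H = np.fft.fft(h, N)
est = Y / H

assert np.allclose(est, symbols, atol=1e-10)   # symbols recovered
```

Because the CP is longer than the channel memory, the linear convolution looks circular after CP removal, so the multipath channel reduces to one multiplication per subcarrier — the reason OFDM sidesteps ISI so cheaply.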

Multiple Choice Questions

1. Cyclic Prefix is required:
   (a) To ensure orthogonality between the subcarriers.
   (b) To overcome inter-block interference and inter-symbol interference.
   (c) To ensure symbol time is an integer number.
   (d) None of them.

2. OFDM provides which of the following:
   (a) Encoding of digital data.
   (b) Wideband communication.
   (c) Multiple carrier frequencies.
   (d) All of the above.

3. OFDM is used because:
   (a) Provides low symbol interval.
   (b) Overcomes inter-symbol interference.
   (c) Allows multiple users at the same frequency.
   (d) All of them.

4. Given the following options: I) IFFT and FFT processing at the transmitter and receiver. II) Addition of cyclic prefix. III) Guard bands. OFDM requires which of the following:
   (a) (I) only.
   (b) (II) and (III) only.
   (c) (I), (II) and (III).
   (d) (I) and (II) only.

5. WiMAX stands for:
   (a) Wireless Microwave Access.
   (b) Worldwide Interoperability for Microwave Access.
   (c) Wireless Interoperability for Microwave Access.
   (d) Worldwide Interconnectivity with Microwave Access.

6. LTE stands for:
   (a) Long Term Evaluation.
   (b) Lower Tracking Error.
   (c) Line Terminal Equipment.
   (d) Long Term Evolution.

7. Which of the following is NOT true for WiMAX:
   (a) The working principle of WiMAX is primarily the same as Wi-Fi.


   (b) It provides only Line of Sight communication services to the users.
   (c) It requires fewer towers to transmit signals.
   (d) Signal can be communicated both wirelessly and via cable.

8. Which of the following is true about LTE?
   (a) Provides peak data rate of 100 Mbps in the downlink.
   (b) Provides reduced latency.
   (c) Has scalable bandwidth capacity.
   (d) All of the above.

9. In Cyclic Prefix:
   (a) Samples are transferred from the end of the OFDM block towards the beginning.
   (b) Samples are transferred from the start of the OFDM block towards the end.
   (c) Samples are transferred from the middle of the OFDM block towards the beginning.
   (d) None of them.

10. Attenuation or subsiding of signal amplitude is called:
   (a) Constructive Interference.
   (b) Destructive Interference.
   (c) Inter-symbol Interference.
   (d) Inter-block Interference.

Review Questions
1. State and explain the difference between narrowband and wideband communication.
2. Why has OFDM been the choice for wireless data transmission in the past few years?
3. What is meant by multi-carrier modulation?
4. What is meant by orthogonality in OFDM?
5. What is meant by frequency selectivity?
6. How does OFDM function?
7. How is inter-block interference overcome in an OFDM system? Explain.
8. What are the factors that led to the need for OFDM?
9. What is coherence bandwidth?
10. Define delay spread.

Miscellaneous Questions
1. List some advantages as well as disadvantages of OFDM.
2. How is OFDM implemented in the real world?
3. List some wireless technologies, other than those mentioned in the text, that make use of OFDM.
4. Write a short note on MIMO antenna technology.

Bibliography
1. S. Benedetto and E. Biglieri, Principles of Digital Transmission. New York: Kluwer Academic, 1999.
2. Louis E. Frenzel Jr., Principles of Electronic Communication Systems, 4th Edition (ISBN-13: 978-0-07-337385-0).
3. Wayne Tomasi, Electronic Communications System: Fundamentals Through Advanced, 5th Edition (ISBN-13: 978-0-13-049492-4).
4. Warren Hioki, Telecommunications, 4th Edition (ISBN-10: 013020031X, ISBN-13: 978-0130200310).
5. B. P. Lathi and Zhi Ding, Modern Digital and Analog Communication Systems, 4th Edition (ISBN-10: 0195331451, ISBN-13: 978-0195331455).


6. Ian A. Glover and Peter M. Grant, Digital Communications, 3rd Edition (ISBN-10: 0273718304, ISBN-13: 978-0273718307).
7. Simon Haykin and Michael Moher, Communication Systems, 5th Edition (ISBN-10: 8126521511, ISBN-13: 978-8126521517).
8. R. E. Ziemer and W. H. Tranter, Principles of Communication: Systems, Modulation and Noise. Wiley, 2001.
9. J. R. Barry, E. A. Lee, and D. G. Messerschmitt, Digital Communication. Kluwer Academic Publishers, 2004.
10. S. Benedetto and E. Biglieri, Principles of Digital Transmission: with Wireless Applications. Springer, 1999.
11. Richard W. Middlestead, "Amplitude Shift Keying Modulation, Demodulation and Performance," in Digital Communications with Emphasis on Data Modems: Theory, Analysis, Design, Simulation, Testing, and Applications. Wiley, 2017.
12. J. D. Gibson, ed., The Mobile Communications Handbook. CRC Press, 2012.
13. G. Foschini, "Layered space–time architecture for wireless communication in a fading environment when using multi-element antennas," Bell Labs Technical Journal, vol. 1, no. 2, pp. 41–59, 1996.
14. K. Sayood, Introduction to Data Compression. Morgan Kaufmann.


8. Introduction to Information Theory

Learning Focus

This chapter deals with one of the very important areas of communication systems, called information theory. Information theory deals with the analysis of communication systems. This chapter, however, covers only the basics of information theory. The chapter serves the following objectives:

1. To acquaint the reader with the in-depth meaning of the word information.
2. To acquaint the reader with the information content present in a message from both perspectives: the common-sense approach and the technical one.
3. To introduce the concept of entropy, which is the degree of uncertainty of the information content in a message.
4. To present the concept of source encoding, i.e. the number of binary digits required per message to encode non-equiprobable messages.
5. To present a detailed discussion of the Huffman coding algorithm.
6. To acquaint the reader with the Shannon-Hartley capacity theorem and to explain the relationship between the channel capacity, bandwidth and signal-to-noise ratio.


8.1 Introduction to Information Theory

One of the key theories that has been instrumental in stimulating the information technology revolution is information theory. It is a branch of science that deals with the analysis of a communication system. It attempts to calculate how much data transfer is possible over a noisy communication channel. Claude Shannon is credited with first demonstrating that reliable communication over a noisy channel is in fact possible.

The field of information theory deals with uncertainty about a situation and how it should be quantified, manipulated and represented. Information theory focuses on the analysis of communication systems. It essentially tries to answer two basic questions:

1. How much data compression is possible (entropy)?
2. What is the maximum channel capacity?

8.1.1 History of Information Theory

The block diagram of a communication system that we discussed back in Chapter 1 is reproduced here (see Fig. 8.1):

Figure 8.1: Block Diagram of Communication System

As shown in the figure, the noise source is an important component of the overall communication system. This means the communication can never be completely error free. Nevertheless, by reducing the probability of error, Pe, the accuracy of digital signals can be enhanced, but in the presence of noise the communication channel can never be made error-free. To reduce the probability of error Pe to some desired level, the energy per bit, Eb, should be increased. The signal power is given by:

Si = Eb Rb

where Rb represents the bit rate. Thus, increasing Eb means that the signal power Si is increased for a given bit rate, or the bit rate Rb is decreased for a given power, or both. However, it is not possible to increase Si beyond a certain level because of various physical limitations. Thus, the only way to drive the probability of error to zero appeared to be to let the transmission rate go to zero as well, i.e. Pe → 0 requires Rb → 0. It therefore seemed that, as long as channel noise exists, it is impossible to achieve an error-free exchange of information at any useful rate.
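As a quick numeric illustration of Si = Eb Rb (the numbers below are assumed for illustration, not taken from the text): an energy per bit of 2 nJ at a bit rate of 1 Mbps gives a signal power of 2 mW, and halving the bit rate at fixed power doubles the energy available per bit.

```python
Eb = 2e-9          # energy per bit, joules (assumed value)
Rb = 1e6           # bit rate, bits per second (assumed value)
Si = Eb * Rb       # signal power, watts

assert abs(Si - 2e-3) < 1e-15          # 2 mW

# For a fixed signal power, lowering the bit rate raises the energy per bit
assert abs(Si / (Rb / 2) - 2 * Eb) < 1e-20
```

This is the trade-off described in the text: with Si capped by physical limits, more energy per bit can only come from a lower bit rate.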

Communication experts searched for every possible way to attain error-free communication, but in vain. Notable efforts were made by Harry Nyquist and Ralph Hartley at Bell Labs in the 1920s. Though these pioneers made great progress, their work was confined to specific applications and limited information-theoretic ideas. However, the publication of Claude E. Shannon's paper titled "A Mathematical Theory of Communication" in 1948 caught everyone's attention. Shannon stated in his paper that it is possible to attain error-free transmission by keeping the rate of information digits per second to be


transmitted below a certain limit for the specified channel. Thus, it is not necessary to have Rb → 0 to make Pe → 0; instead, to make the probability of error arbitrarily small, the relationship Rb < C, where C is the channel capacity, should be maintained.

8.2 Information Content of a Message

A communication system is modeled using basic components such as a transmitter, a communication channel and a receiver. Messages are initiated by the transmitter and travel through the channel. The channel amends the message in some way. The receiver, in turn, attempts to deduce the message that was originally sent by the transmitter. We will explain the information content of a message from two perspectives:

1. Common Sense Approach
2. Technical Approach

8.2.1 Common Sense Approach

Before we go into the details of the engineering approach to the measure of information, we will first discuss it in the context of common sense. Consider that you have come across the following three headlines while reading the newspaper:

1. The earth revolves around the sun
2. Aliens invade world
3. Human beings invade Mars

The first headline won't matter to you, and it won't grab your attention. The second headline, however, might surprise you to some extent, and you might be interested in knowing the details about it. The third headline would definitely leave you astonished and contains a lot of information that might interest you.

If you observe this carefully, you will see that the more surprising the news is, the more information it contains. So, in this context, information is the degree of surprise in a message. In other words, the content of information present in a message is proportional to the uncertainty level of the event or the message. If the message has a relatively low level of uncertainty, then the message carries less information.

If we look at the above scenario in terms of probability, the probability of occurrence of the first event (headline) is unity. This means that the first event, i.e. the earth revolves around the sun, is certain to occur and does not provide us with any new information. The probability of the second event is relatively low, but still finite. The third event, however, has nearly zero probability, which points to the fact that this event is almost impossible to occur, and thus its occurrence would provide us with a great deal of information. From this explanation in terms of probability, we can deduce the following relationship between the information content of a message and its probability of occurrence:

I ∝ log(1/P)

This means that the lower the probability of occurrence of an event, the greater the content of information delivered by that message.

8.2.2 Technical Approach

Now that we are pretty much clear about the common-sense approach to the information content of a message, we will analyze things from an engineering point of view. When looking at a situation from an engineering perspective in general, what is the first thing that comes to your mind? If you are thinking that engineering focuses on delivering efficiency, then you are thinking in just the right direction.


A similar concept applies here. When designing communication systems, the first and foremost priority of the engineered product is to deliver the message in the most efficient way. Ideally, the cost of the product would depend upon the amount of information delivered by the system. But in reality, this is not the case: the cost depends upon the time taken by the communication system to transmit messages from one point to the other.

Thus, the engineering point of view of the information content of a message states that the information contained in a message is directly proportional to the minimum time required to transmit the message. Therefore, a message with higher probability requires less transmission time, and vice versa.

Consider that we have two binary messages m1 and m2, whose probabilities of occurrence are equally likely. Let us encode these messages by 0 and 1, respectively. Next consider the case of 4 messages m1, m2, m3 and m4, which we encode with the binary digit pairs 00, 01, 10, 11. These take twice the transmission time compared to the previous case. Thus it can be seen that we need log2(n) binary digits to encode each of n equiprobable messages. Since all the messages are equally likely, the probability P of the occurrence of any one message is 1/n; hence each message with probability P needs log2(1/P) binary digits for encoding. Thus, the equation becomes:

I = K log2(1/P) = −K log2(P)    (8.1)

where K is a constant to be determined. Thus, the information content of the message is proportional to the logarithm of the inverse of the probability of the message. If the constant of proportionality is taken to be unity, the equation reduces to

I = log2(1/P)    (8.2)

This points to the fact that the content of information I in a message can be interpreted as the minimum number of binary digits required to encode the message.

Example 8.1. An event X randomly generates 1 or 0 with equal probability. What is the amount of information for any single result?

Solution: Since event X generates 1 or 0 with equal probabilities, we can write P(X = 0) = P(X = 1) = 0.5. Therefore, the amount of information for event X is I(X) = −log2(0.5) = 1 bit.

Exercise 8.1 What is the information content of an event for which P(X = 0) or P(X = 1) is always equal to 1 or 0?
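Eq. 8.2 can be checked numerically. The sketch below (the probability values are chosen for illustration) computes I = log2(1/P) for a few cases, including the certain event of Exercise 8.1:

```python
import math

# Information content I = log2(1/P) from Eq. 8.2
def information(p):
    return math.log2(1 / p)

assert information(0.5) == 1.0    # one of two equally likely messages: 1 bit
assert information(0.25) == 2.0   # one of four equally likely messages: 2 bits
assert information(1.0) == 0.0    # a certain event carries no information
```

The last line confirms the common-sense view: a message that is certain to occur delivers zero information.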

8.3 Entropy

We have just discussed that the amount of information present in a message depends on how unlikely the occurrence of an event is. In other words, the amount of information is directly


Figure 8.2: Entropy versus Probability curve

linked with the degree of surprise in a message. The more surprising the message is, the more information it delivers, and vice versa.

There should be some way to measure this uncertainty or unpredictability. Entropy is the measure of unpredictability, or uncertainty, about the information content of a message. Let us elaborate the concept with a simple example of polling in elections. The outcome of polling is usually uncertain; after the results are declared, some new information is delivered. This means the entropy of polling is large. Now suppose the very same poll is carried out again: the results would be predictable and the entropy would be less.

Similarly, when tossing a fair coin, where the occurrence of heads and tails is equally likely, the entropy is high, because one cannot predict whether heads or tails will come up before tossing the coin. In such a case, if we predict heads to come up, the probability of the occurrence of heads is 1/2, which means the coin toss has one bit of entropy. On the other hand, if a coin with heads on both sides is tossed, the entropy of such an event is zero, because heads will surely come up each time the coin is tossed.

Now let us examine entropy in a mathematical way. Consider a memoryless source m emitting messages m1, m2, m3, ..., mn having probabilities P1, P2, P3, ..., Pn. Each message emitted is independent of the previous message. Thus, the information content of a particular message is given by:

Ii = log2(1/Pi)

Since the probability of occurrence of mi is Pi, the mean or average information, i.e. the entropy, is:

H(m) = ∑_{i=1}^{n} I(mi) P(mi) = −∑_{i=1}^{n} P(mi) log2(P(mi))    (8.3)

Example 8.2. Consider an event with two outcomes, 1 and 0, occurring with probability p and 1 − p respectively. Calculate the entropy.

Solution: By Eq. 8.3:

H(X) = p log2(1/p) + (1 − p) log2(1/(1 − p))

For p = 0 and p = 1, H(X) = 0; for p = 0.5, H(X) = 1. The entropy versus probability curve is shown in Fig. 8.2.


8.4 Source Encoding

In order to encode a message, the minimum number of binary digits required equals the source entropy log2(1/P), provided that all the messages emitted by the source are equally probable.

This section discusses the number of binary digits required per message to encode non-equiprobable messages. Consider a source m that emits messages m1, m2, m3, ..., mn having probabilities P1, P2, P3, ..., Pn. Now consider a sequence of N messages such that N → ∞. Also, let Ki be the number of times the message mi appears in the sequence. Then, according to the law of large numbers, we have the relationship:

lim_{N→∞} Ki/N = Pi    (8.4)

Hence, the message mi appears NPi times in a sequence of N messages, given that N → ∞. This means that in a typical sequence of N messages, m1 appears NP1 times, m2 appears NP2 times, m3 appears NP3 times, and mn appears NPn times.

Now consider a source in which each message is emitted independently of the preceding messages, also called a zero-memory source. Assume a sequence SN of N messages from the source. Since the n messages having probabilities P1, P2, P3, ..., Pn appear NP1, NP2, NP3, ..., NPn times, and each message emitted is independent of the others, the probability of occurrence of a typical sequence SN is given by:

P(SN) = (P1)^{NP1} (P2)^{NP2} (P3)^{NP3} ... (Pn)^{NPn}

Since all typical sequences of N messages from this source have the same composition, all these sequences are equiprobable with probability P(SN). We can treat these long sequences as new messages that are now of equal probability. In order to encode one such sequence, LN binary digits are needed:

LN = log2[1/P(SN)]    (8.5)

Substituting the value of P(SN) into the above equation:

LN = N ∑_{i=1}^{n} Pi log2(1/Pi) = N H(m)    (8.6)

LN is the number of binary digits needed to encode the N messages in the sequence. Therefore L, the average number of digits needed per message, is given by

L = LN/N = H(m)    (8.7)

The above equation leads to the conclusion that it is possible to encode the messages emitted by the source using an average of H(m) binary digits per message, where H(m) is the source entropy expressed in bits.


8.5 Huffman Coding

This method was devised by Dr. David A. Huffman in 1952, in his paper titled "A Method for the Construction of Minimum-Redundancy Codes". The result of the Huffman coding procedure is a table of variable-length codes used to encode the symbols. The basic principle of Huffman coding is to use fewer bits to represent frequent symbols and more bits to represent symbols that appear infrequently. In other words, shorter code words are used for the symbols that appear more frequently and longer code words for the symbols that appear less frequently. The algorithm is as follows:

Algorithm:
1. Place each symbol in a leaf; the weight of the leaf is the symbol frequency.
2. Select the two trees L and R with the lowest frequencies.
3. Create a new internal node with LeftChild ⇒ L, RightChild ⇒ R and NewFrequency ⇒ frequency(L) + frequency(R).
4. Repeat until all nodes are merged into one tree.

Huffman coding thus combines the two least-frequent symbols, summing their frequencies to form a new symbol. The procedure keeps repeating until we are left with a single symbol only.

8.6 Shannon-Hartley Capacity Theorem

Two prominent communication scientists, Harry Nyquist and Ralph Hartley, proposed their views on the transmission of information in a telegraph communication system in the late 1920s. Although their ideas led to great discoveries at the time, they still lacked some major elements. Claude Shannon, in the 1940s, introduced the idea of channel capacity. His ideas blended Nyquist's and Hartley's work into a complete theory of information transmission in a communication system. Earlier it was believed that increasing the rate of transmission of information over a communication channel would necessarily increase the probability of error. Suppose that a source transmits r messages per second, and that the entropy of a message is H bits. The information rate is then:

R = rH

Channel Capacity

Channel capacity is the rate of information that can be reliably transmitted over a communication channel, measured in terms of how many bits can be transmitted per second via the channel. Any given communication system has a maximum rate of information, C, known as the channel capacity. If the rate of information transmission, denoted by R, is less than the channel capacity C, then even in the presence of noise the information can be transmitted with an arbitrarily small probability of error by making use of intelligent coding techniques. The encoder must work on longer blocks of data to attain lower error probabilities, which entails longer delays and higher computational requirements.

The capacity of the channel, denoted by C, depends upon:
1. Bandwidth.
2. Signal-to-noise ratio.

Remark: The channel capacity is linearly proportional to the bandwidth but logarithmically proportional to the SNR.

Thus, we can express the above relation as:

C ∝ BW and C ∝ log2(SNR)


The above relation can be written as:

C = BW log2[1 + S/N] bits/sec    (8.8)

In the equation above, C is the maximum capacity of the channel in bits/second, called Shannon's capacity limit for the given channel; B is the bandwidth of the channel in Hertz; S is the signal power in Watts; and N is the noise power, also expressed in Watts. The ratio S/N is called the Signal-to-Noise Ratio (SNR). From Eq. 8.8, it can be deduced that the maximum rate at which we can transmit information without error is limited by the bandwidth, the signal level, and the noise level. The equation indicates how many bits can be transmitted per second without errors over a channel of bandwidth B Hz, when the signal power is limited to S Watts and is exposed to additive white (uncorrelated) Gaussian noise of power N Watts. Because the dependence on SNR is logarithmic, the capacity gain from increasing SNR diminishes: in the power-limited region C increases appreciably with SNR (see Fig. 8.3), but beyond the power-limited region (see Fig. 8.4) increasing SNR hardly affects the capacity of the channel. So, in order to increase the capacity of the channel beyond the power-limited region, the bandwidth should be increased instead. At this point multiplexing and multiple access techniques come into play; since many users occupy the same bandwidth, it is ensured that each bandwidth slot allocated to a user remains orthogonal to the others.

Figure 8.3: C increases as SNR

Figure 8.4: Beyond power region increasing SNR does not affect the capacity of the channel
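Eq. 8.8 and the bandwidth-versus-SNR behavior in the figures can be checked numerically. In this sketch the figures used (a 3 kHz channel at 30 dB SNR) are illustrative textbook-style values, not taken from this chapter:

```python
import math

# Shannon capacity C = B * log2(1 + S/N) from Eq. 8.8
def capacity(bandwidth_hz, snr_linear):
    return bandwidth_hz * math.log2(1 + snr_linear)

snr = 10 ** (30 / 10)                  # 30 dB -> 1000 in linear terms
C = capacity(3000, snr)
assert abs(C - 29901.7) < 1            # roughly 29.9 kbit/s

# Doubling bandwidth doubles capacity (linear dependence) ...
assert capacity(6000, snr) == 2 * capacity(3000, snr)

# ... while doubling SNR adds only about B * log2(2) = 3000 bit/s
assert capacity(3000, 2 * snr) - C < 3001
```

The two final assertions are the diminishing-returns behavior shown in Fig. 8.3 and Fig. 8.4: capacity grows linearly in bandwidth but only logarithmically in SNR.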

Chapter Summary

• Information reduces our uncertainty level about a situation or an event.
• The field of information theory deals with uncertainty about a situation and how it should be quantified, manipulated and represented.
• It deals with the analysis of communication systems.
• The engineering view of the information content of a message states that the information contained in a message is directly proportional to the minimum time required to transmit the message. Therefore, a message with higher probability requires less transmission time, and vice versa.
• Entropy is the measure of unpredictability, or uncertainty, about the information content of a message.
• In Huffman coding, shorter codewords are used for the symbols that appear more frequently and longer codewords for the symbols that appear less frequently.
• Channel capacity is the rate of information that can be reliably transmitted over a communication channel.
• Capacity is measured in terms of how many bits can be transmitted via the channel.
• The capacity of the channel, denoted by C, depends upon bandwidth and signal-to-noise ratio.
• The maximum rate at which we can transmit information without error is limited by the bandwidth, the signal level, and the noise level.

Review Questions
1. How would you define the word "information"?
2. Define and elaborate the concept of entropy.
3. What is meant by the information content of a message?
4. State and explain the Shannon-Hartley capacity theorem.

Bibliography
1. J. Abrahams, "Code and parse trees for lossless source encoding," Comm. Info. and Syst., 1: 113-146, 2001 (http://www.ims.cuhk.edu.hk/cis/).
2. N. Abramson, Information Theory and Coding, McGraw-Hill, New York, 1963.
3. Y. S. Abu-Mostafa, Ed., Complexity in Information Theory, Springer-Verlag, New York, 1988.
4. J. Aczel and Z. Daroczy, On Measures of Information and Their Characterizations, Academic Press, New York, 1975.
5. A. Agarwal and M. Charikar, "On the advantage of network coding for improving network throughput," 2004 IEEE Information Theory Workshop, San Antonio, TX, Oct. 25-29, 2004.
6. R. Ahlswede, B. Balkenhol and L. Khachatrian, "Some properties of fix-free codes," preprint 97-039, Sonderforschungsbereich 343, Universitat Bielefeld, 1997.
7. R. Ahlswede, N. Cai, S.-Y. R. Li, and R. W. Yeung, "Network information flow," IEEE Trans. Info. Theory, IT-46: 1204-1216, 2000.
8. R. Ahlswede and I. Csiszar, "Common randomness in information theory and cryptography - Part I: Secret sharing," IEEE Trans. Info. Theory, IT-39: 1121-1132, 1993.
9. R. Ahlswede and I. Csiszar, "Common randomness in information theory and cryptography - Part II: CR capacity," IEEE Trans. Info. Theory, IT-44: 225-240, 1998.
10. R. Ahlswede and J. Korner, "Source coding with side information and a converse for degraded broadcast channels," IEEE Trans. Info. Theory, IT-21: 629-637, 1975.
11. R. Ahlswede and I. Wegener, Suchprobleme, Teubner Studienbucher, B. G. Teubner, Stuttgart, 1979 (in German). English translation: Search Problems, Wiley, New York, 1987.
12. R. Ahlswede and J. Wolfowitz, "The capacity of a channel with arbitrarily varying channel probability functions and binary output alphabet," Zeitschrift fur Wahrscheinlichkeitstheorie und verwandte Gebiete, 15: 186-194, 1970.
13. P. Algoet and T. M. Cover, "A sandwich proof of the Shannon-McMillan-Breiman theorem," Ann. Prob., 16: 899-909, 1988.


Part II: Telecommunication Systems

9. The Basics of Satellite Communication
10. Introduction to Optical Communication System
11. Introduction to RADARs
Glossary
Index


9. The Basics of Satellite Communication

Learning Focus

This chapter introduces an important means of worldwide communication called satellite communication. The chapter will focus on the following objectives:

1. To acquaint the reader with the fundamental concept of a satellite and the difference between artificial and natural satellites.
2. To acquaint the reader with the early concepts regarding satellites.
3. To introduce the terms that will prove useful during the study of satellite communication.
4. To introduce the different types of satellite orbits: LEO, MEO and GEO.
5. To acquaint the reader with the basics of orbital dynamics.
6. To acquaint the reader with the concepts of longitude and latitude, and of azimuth and elevation.
7. To acquaint the reader with the working of the communication satellite.
8. To familiarize the reader with a wide range of satellite applications.


9.1 Introduction to Satellite Communication

An important component of worldwide communication is satellite communication. A satellite can be thought of as a physical object that orbits a heavenly or celestial body such as a star or planet. A good example is the earth and the other planets, which revolve around the sun. Satellites can either be artificial, intentionally made to orbit another object, or natural. The best example of a natural satellite is the moon, which revolves around the earth. The body placed in orbit usually has less mass than the object around which it orbits. The first artificial satellite, called Sputnik 1, was launched in 1957 by the Soviet Union. Satellites are launched for various purposes, and many kinds exist: weather satellites, military satellites, communication satellites, research satellites, navigation satellites, etc. The main focus of this chapter is communication satellites. As the name suggests, communication satellites are used to enable communication over longer distances, such as distances beyond the line of sight. Examples of long-distance communication include receiving or making telephone calls and communicating with ships and aircraft. Before proceeding to communication satellites, however, we present a brief history of satellites in general. The study of satellites has been carried out by many scientists. The first fictional illustration of a satellite being launched into orbit was given in The Brick Moon, a short story written by Edward Everett Hale. Konstantin Tsiolkovsky calculated the orbital speed required by the minimal orbit around the earth at 8 km/s; these findings were published in Exploring Space Using Jet Propulsion Devices in 1903. Arthur C. Clarke, in 1945, provided a detailed description of the possible use of communication satellites in an article in the magazine Wireless World.

9.2 Terms Related to Satellite Communication

This section presents certain terms that you will come across repeatedly during the discussion of satellite communication. All the important terms, along with their definitions, are compiled in one place for the reader's convenience.

1. Satellite: An artificial body that is placed in orbit around the earth or another planet for the purpose of exchanging information or for long-distance communication.

2. Orbit: The repetitive motion of a heavenly body, such as a moon, around a planet. The body performing the repetitive motion has less mass than the one around which it is orbiting.

3. Orbital Period: The time taken by the orbiting body to complete one round trip, or one full revolution, around another body, which is usually greater in mass.

4. Orbital Speed: The speed at which a natural satellite, a planet, or an artificial satellite revolves around a massive object.

5. Orbital Velocity: The velocity required to keep a natural or an artificial satellite revolving in its orbit.

6. Posigrade: If the satellite revolves in the same direction as the earth's rotation, the motion is said to be posigrade.

7. Retrograde: If the satellite revolves in the direction opposite to the earth's rotation, the motion is said to be retrograde.

8. Geocenter: When a satellite revolves in an orbit around a massive body, its orbit forms a plane passing through the center of gravity of that body. This center of gravity is called the geocenter.

9. Apogee: The point at which the satellite is farthest from the earth is called the apogee of the orbit.

10. Perigee: The exact opposite of the apogee. The point at which the satellite is nearest to the earth is called the perigee of the orbit. Apogee and perigee are shown in Figure 9.1.

11. Angle of Inclination: The angle of inclination defines how the satellite is positioned relative to the equator of the earth. It is the angle formed by the earth's equatorial plane and the satellite's orbital plane. The angle of inclination is zero degrees if the orbit lies directly above the earth's equator and 90 degrees if the satellite orbits from the north to the south pole.

Figure 9.1: Apogee and Perigee

9.3 Satellite Orbits

There are many types of satellite orbits. The choice of orbit depends on the purpose of the application. For example, communication satellites make use of the geostationary orbit (discussed later in this topic), while the Global Positioning System (GPS) makes use of low earth orbits, also discussed shortly. The first satellite, Sputnik 1, was placed in a geocentric orbit; approximately 2,465 artificial satellites orbit the earth in geocentric orbits. Figure 9.2 shows the classification of the orbits.

We will keep our discussion confined to the Altitude Classification of the orbits.

9.3.1 Low Earth Orbit (LEO)

Low earth orbits are the ones that are lowest in altitude, ranging from 200-1200 km above the earth's surface. These are the simplest and cheapest type of orbit. In a low earth orbit, or LEO, high velocities are needed to balance the earth's gravitational pull. Satellites in low earth orbit do not need very powerful transmission capabilities, because they do not transmit over long distances to the earth; LEOs are therefore preferred by applications that require less powerful amplifiers for transmission. Less energy is also needed to place a satellite into low earth orbit. The round-trip time for a low earth orbit depends on the altitude of the orbit and the user's position relative to the satellite.
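The dependence of round-trip time on altitude can be made concrete with a quick calculation. The sketch below is a rough estimate that assumes the satellite is directly overhead and ignores processing delays; the altitudes chosen are illustrative, not values fixed by the text:

```python
# Rough one-way propagation delay to a satellite directly overhead.
# Assumes free-space propagation at the speed of light; real paths are
# longer because the satellite is rarely exactly at the zenith.
C = 299_792_458.0  # speed of light, m/s

def one_way_delay_ms(altitude_km: float) -> float:
    """One-way signal delay in milliseconds for a given altitude."""
    return altitude_km * 1000.0 / C * 1000.0

for name, alt in [("LEO (700 km)", 700),
                  ("MEO (10,000 km)", 10_000),
                  ("GEO (35,800 km)", 35_800)]:
    print(f"{name}: {one_way_delay_ms(alt):.1f} ms one way")
```

The GEO figure is roughly fifty times the LEO figure, which is why interactive services often prefer lower orbits.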


Figure 9.2: Classification of Orbits

Low earth orbits are popular among applications that do not require transmission of data over a very large distance from the earth. Communication satellites make use of LEOs; the Iridium phone system is one example of a satellite system that uses LEO. Earth-monitoring satellites observe and traverse the earth's surface at a very close distance, so LEOs are also best suited for them. Spy satellites likewise use LEO because it lets them see the surface of the earth from a very close distance. Figure 9.3 shows GPS signals that pass through the atmosphere and are detected by satellites in low earth orbit.

9.3.2 Medium Earth Orbit (MEO)

Medium earth orbit, or MEO (see Figure 9.4), as the name suggests, lies between low earth orbit (LEO) and geostationary orbit (GEO). MEO satellites orbit the earth at an altitude below 22,300 miles, the altitude of the geostationary orbit, and above the altitude of low earth orbit. Unlike low earth orbit, which requires more satellites to cover the earth's surface, MEO achieves coverage with fewer satellites; compared with the geostationary orbit, however, it still needs more satellites to cover the earth's surface. The round-trip time (RTT) for MEO is also not as low as for LEO. Medium earth orbit satellites that orbit in perfect circles have constant altitude and therefore travel at constant speed. These satellites are also capable of providing coverage for global wireless communication if several MEO satellites are properly coordinated in their orbits. The number of MEO satellites required to provide global coverage is less than the number of LEO satellites needed to do the same.

9.3.3 Geostationary Earth Orbit (GEO)

A geostationary orbit (see Fig. 9.4), also called a geostationary earth orbit or geosynchronous equatorial orbit, is an orbit at an altitude of 22,300 miles over the equator in which the satellite revolves in the same direction as the earth's rotation. The basic idea of the geostationary earth orbit was given by Konstantin


Figure 9.3: GPS signals in Low Earth Orbits passing through atmosphere

Tsiolkovsky. Another major contribution was made by Arthur C. Clarke, who published his findings in a UK electronics and radio publication called Wireless World, in an article titled Extra-Terrestrial Relays: Can Rocket Stations Give World Coverage? The geostationary earth orbit is so called because the satellite appears stationary to a ground observer. For this reason, communication satellites and weather satellites are placed in geostationary orbits. The time a satellite takes to complete an orbit increases with its height: as the height increases, so does the orbital period. An orbit whose period matches the earth's rotation is called a geosynchronous orbit, and the geostationary orbit is a special form of geosynchronous orbit. It is evident that a single geostationary satellite cannot provide global coverage; to cover the entire planet, three geostationary satellites are needed, separated from one another by 120 degrees of longitude. Although the earth-station antennas for geostationary satellites do not need to be re-oriented, the long path length of 22,300 miles leads to a noticeable delay in communication.
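The relation between orbital height and period described above is Kepler's third law, T = 2π√(r³/GM). As a check, the sketch below inverts it to recover the geostationary altitude from a one-day (sidereal) orbital period; the earth constants are standard reference values, not taken from the text:

```python
import math

GM = 3.986004418e14   # earth's gravitational parameter, m^3/s^2
R_EARTH_KM = 6378.0   # earth's equatorial radius, km

def geo_altitude_km(period_s: float) -> float:
    """Orbital radius from Kepler's third law, minus the earth's radius."""
    r = (GM * period_s ** 2 / (4 * math.pi ** 2)) ** (1 / 3)
    return r / 1000.0 - R_EARTH_KM

# One sidereal day (the earth's true rotation period), in seconds
alt = geo_altitude_km(86164.1)
print(f"Geostationary altitude = {alt:,.0f} km = {alt / 1.609:,.0f} miles")
```

The result is about 35,786 km, which is the "22,300 miles" quoted in this chapter.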

9.3.4 Comparison of LEO, MEO and GEO

Table 9.1 compares the three orbit types we have discussed.

9.4 Position of Satellite Orbits

Orbital dynamics is the study of the physical and mathematical laws that govern launching a satellite and keeping it moving in its orbit. When choosing the type of satellite orbit, two major factors should be considered:

1. The function of the satellite.
2. The coverage area.


Figure 9.4: Satellite Orbits

PARAMETER              LEO              MEO                GEO
Satellite Height       500-1500 km      5,000-12,000 km    35,800 km
Orbital Period         ~1.5-2 hours     2-8 hours          24 hours
Number of Handoffs     High             Low                Least (none)
Satellite Life         Short            Long               Long
Number of Satellites   40-80            8-20               3
Propagation Loss       Least            High               Highest

Table 9.1: Comparison of Satellite Orbits

9.4.1 Gravity and Orbit Positioning

We know that gravity is the force of attraction by which the earth pulls everything towards its center. Keeping this in mind, it is evident that a satellite released in the vertical direction would fall back to the earth's surface due to the earth's gravitational pull. To keep the satellite in orbit, it must be given some forward motion; this forward motion prevents the satellite from falling back onto the earth's surface, and the tendency to maintain it is called inertia. Inertia and the earth's gravitational force together balance each other, making the satellite orbit the earth. The required velocity of the satellite depends on how far it is from the earth's surface. Satellites near the earth's surface experience a stronger gravitational pull, and such satellites must orbit at a faster pace to balance it. The lowest practical height of an earth orbit is 100 miles; to stay in orbit at such a low height, a satellite must travel at a speed of about 17,500 mi/h. At this speed, the satellite takes approximately 90 minutes, or one and a half hours, to orbit the earth. Likewise, a satellite far from the earth's surface experiences a relatively weaker gravitational pull and therefore orbits at a slower pace. At a distance of 22,300 miles, a satellite orbits at a speed of about 6,900 mi/h and circles the earth in a time span of 24 hours. Besides the earth's gravitational pull, a satellite is also affected by the gravitational forces of the sun and the moon.
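The speeds quoted in this section follow from balancing gravity against the required centripetal force, which for a circular orbit gives v = √(GM/r). The sketch below reproduces the book's figures approximately; the constants are standard reference values and the helper names are illustrative:

```python
import math

GM = 3.986004418e14   # earth's gravitational parameter, m^3/s^2
R_EARTH_MI = 3960.0   # earth's radius in miles, as used in the text
MI = 1609.344         # meters per mile

def orbit_speed_mph(altitude_mi: float) -> float:
    """Circular orbital speed in mi/h at a given altitude in miles."""
    r = (R_EARTH_MI + altitude_mi) * MI       # geocentric radius, m
    return math.sqrt(GM / r) * 3600 / MI

def orbit_period_min(altitude_mi: float) -> float:
    """Orbital period in minutes: orbit circumference / speed."""
    r_mi = R_EARTH_MI + altitude_mi
    return 2 * math.pi * r_mi / orbit_speed_mph(altitude_mi) * 60

print(f"100 mi:    {orbit_speed_mph(100):,.0f} mi/h, {orbit_period_min(100):.0f} min")
print(f"22,300 mi: {orbit_speed_mph(22_300):,.0f} mi/h, {orbit_period_min(22_300) / 60:.1f} h")
```

The low orbit comes out near 17,500 mi/h with a period of roughly an hour and a half, and the 22,300-mile orbit near 6,900 mi/h with a 24-hour period, matching the figures in the text.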

9.4.2 Circular and Elliptical Orbits

When a satellite rotates around the earth, it can have one of two possible kinds of motion: circular or elliptical. The definitions of the two are given below.

Circular Orbital Motion: In this kind of motion, the distance of the satellite from the earth's surface remains constant at all points in time.

Elliptical Orbital Motion: In this kind of motion, the distance of the satellite from the earth's surface keeps varying at all points in time.

As shown in Fig. 9.5, the first satellite orbits in a circular path because its distance from the surface of the earth remains the same, while the second figure depicts an elliptical orbit, in which the distance of the satellite from the surface of the earth changes continuously. When talking about circular and elliptical orbits, there are certain other terms that should be studied.

Figure 9.5: Circular and Elliptical Orbits

When a satellite rotates or revolves in an orbit, it forms a plane that passes through the center of the earth, called the geocenter. We have also discussed the terms posigrade and retrograde in Section 9.2: if the satellite revolves in the same direction as the earth's rotation, the orbit is said to be posigrade; otherwise it is retrograde.

In the case of elliptical orbits, the speed of the satellite changes with its height. The height of a satellite (see Fig. 9.6) is defined as its distance above the earth's surface. The earth's radius is taken to be 3,960 miles, so a satellite 5,000 miles above the earth's surface is at a distance of 8,960 miles from the center of the earth.

When the satellite is near the earth's surface, it travels at a high velocity, and vice versa. The same applies to elliptical orbits: as the satellite approaches the surface of the earth its velocity increases, and as it moves farther from the surface its velocity decreases.
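This variation of speed with height in an elliptical orbit is captured by the vis-viva equation, v = √(GM(2/r − 1/a)), where r is the distance from the geocenter and a is the semi-major axis of the ellipse. A sketch with illustrative perigee and apogee heights (the numbers are assumptions, not values from the text):

```python
import math

GM = 3.986004418e14   # earth's gravitational parameter, m^3/s^2
R_EARTH = 6.371e6     # earth's mean radius, m

def speed_kms(r_m: float, a_m: float) -> float:
    """Vis-viva: orbital speed at geocentric distance r for semi-major axis a."""
    return math.sqrt(GM * (2 / r_m - 1 / a_m)) / 1000.0

# Illustrative elliptical orbit: 500 km perigee height, 10,000 km apogee height
r_per = R_EARTH + 500e3
r_apo = R_EARTH + 10_000e3
a = (r_per + r_apo) / 2   # semi-major axis of the ellipse

print(f"Speed at perigee: {speed_kms(r_per, a):.2f} km/s")
print(f"Speed at apogee:  {speed_kms(r_apo, a):.2f} km/s")
```

As the text states, the satellite is fastest at perigee and slowest at apogee.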

Locating the Satellite

For optimal transmission and reception, it is important to know the exact position of the satellite in space. The position of the satellite is planned during design, computed when the satellite is first launched, and updated as its location changes. In the case of geosynchronous satellites, the antenna of the earth station is positioned only once. To locate a satellite, various tracking systems are used. These tracking systems usually consist of an antenna whose pointing is adjusted continually, keeping it aimed at the satellite as the satellite moves, so as to obtain maximum transmission and reception.


Figure 9.6: Height of Satellite

9.4.3 Concept of Longitude and Latitude

We know that to locate a point on the earth's surface, the concepts of longitude and latitude are used. Latitude is the angle between the equatorial plane and the line drawn from a specified point on the earth's surface to the geocenter (the center of the earth). A line drawn between the north and south poles across the earth's surface is called a meridian, and the prime meridian serves as the reference from which longitude is measured. Longitude is the angle, measured at the geocenter in the equatorial plane, between the prime meridian and the meridian passing through the point of interest.

9.4.4 Concept of Elevation and Azimuth

To track the position of a satellite accurately, the tracking-system antenna should be pointed according to the azimuth and elevation angles. Azimuth is measured with respect to north, which is taken as zero degrees; the angle measured clockwise from north is called the azimuth angle. The elevation angle, on the other hand, is the angle formed between the horizontal plane and the pointing direction of the antenna. It is relatively easy to determine the azimuth and elevation angles for geosynchronous satellites, because these satellites have a fixed position over the earth's equator.
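For a geosynchronous satellite, the elevation angle can be computed directly from the ground station's latitude and its longitude difference from the sub-satellite point, using the standard look-angle relation for a circular geostationary orbit. The sketch below is illustrative; the station coordinates used are assumptions, not values from the text:

```python
import math

R_E = 6378.0      # earth's equatorial radius, km
R_GEO = 42164.0   # geostationary orbital radius (from earth's center), km

def elevation_deg(lat_deg: float, dlon_deg: float) -> float:
    """Elevation angle toward a geostationary satellite.

    lat_deg:  ground-station latitude
    dlon_deg: longitude difference between station and sub-satellite point
    """
    # Central angle between station and sub-satellite point
    gamma = math.acos(math.cos(math.radians(lat_deg)) *
                      math.cos(math.radians(dlon_deg)))
    # Standard look-angle relation for a circular geostationary orbit
    el = math.atan2(math.cos(gamma) - R_E / R_GEO, math.sin(gamma))
    return math.degrees(el)

# A station on the equator directly below the satellite sees it overhead
print(f"{elevation_deg(0, 0):.1f} deg")    # prints "90.0 deg"
print(f"{elevation_deg(30, 20):.1f} deg")  # illustrative mid-latitude station
```

The azimuth angle requires a similar but slightly longer spherical-trigonometry calculation and is omitted here for brevity.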

9.5 How Satellite Communication Works

The transmission station is not always close enough to the receiving station. In many cases the transmission station on the earth is unable to communicate directly with the receiving station; in other words, the receiving station is beyond the line of sight of the transmission station. To make communication possible in such cases, satellites are employed. A satellite acts as a relay that receives information from one station and passes it on to the other.

Fig. 9.7 depicts how two stations on earth communicate via a satellite. As shown in the figure, the transmitting station sends information to the satellite at one frequency, called the uplink frequency, and the satellite sends this information to the receiving station on the earth at another frequency, called the downlink frequency, which is usually lower than the uplink frequency. The information sent by the transmitting station on the earth is usually a weak signal, and the satellite acts as a repeater: it amplifies the signal and then sends it to the receiving station. A satellite usually consists of transponders. A transponder is the combination of a transmitter and a receiver and has two functions to perform:

1. Amplification
2. Frequency translation


Figure 9.7: Earth stations communicating via satellite

Frequency translation is performed because a transponder cannot transmit and receive at the same frequency. Transponders have wide bandwidth so that they can carry multiple signals; however, a transponder is always used with one uplink and one downlink signal to ensure communication reliability. A satellite has multiple transponders, each functioning at a different frequency. A satellite with 24 channels may have 12 transponders using horizontal polarization and 12 using vertical polarization.

Many of the modern communication satellites are equipped with 12 transponders.

The transponder functions as an amplifier. By the time a signal at the uplink frequency reaches the satellite, it has become weak, so the transponder strengthens, or amplifies, it before transferring it to the receiving station.

Frequency translation is performed because the transmitter and receiver in the transponder are very close to each other; the high output power of the transmitter could desensitize the receiver input. To avoid this, the transmitter and receiver in the transponder are made to operate at different frequencies, with some spacing between the frequency bands. It is also important to employ channelization in the transponders. In a satellite with 12 channels, twelve band-pass filters are required, one band-pass filter serving each channel. The channelization process separates the received signals. The majority of communication satellites operate in the microwave frequency spectrum, which is divided so that it serves satellite communication as well as other services such as radar. However, some satellites, such as military satellites and the amateur radio OSCAR satellites, operate in the Very High Frequency or Ultra High Frequency spectrum. For satellite communication, the microwave spectrum is divided into frequency bands designated by letters (refer to Table 9.2).

The C band is the most widely used band for satellite communication. It has an uplink frequency of 6 GHz and a downlink frequency of 4 GHz. To make efficient use of the frequency spectrum, the concept of frequency reuse is also employed. To reuse frequencies, the satellite is equipped with two identical sets of 12 transponders; the first channel in one set operates on the same frequency as the first channel in the other set. Although the two operate on the same frequency, they do not interfere, because of the antenna techniques used: for instance, a vertically polarized antenna will not respond to a horizontally polarized signal. Since a satellite has more than one transponder, some form of multiple-access technique is required to allow multiple users to access the same channel. Most satellite transponders use Frequency Division Multiple Access; however, Time Division Multiple Access is also gaining ground.
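The 6/4 GHz translation can be sketched as a fixed downconversion. The 2225 MHz offset and the channel spacing below reflect the conventional C-band plan (5925-6425 MHz up, 3700-4200 MHz down); treat them as typical values rather than a specification taken from this text:

```python
# Sketch of transponder frequency translation for the C band.
# Conventional C-band plan: 5925-6425 MHz uplink, 3700-4200 MHz downlink,
# i.e. a fixed translation of 2225 MHz (treated here as an assumption).
TRANSLATION_MHZ = 2225

def downlink_mhz(uplink_mhz: float) -> float:
    """Translate an uplink carrier to its downlink frequency."""
    return uplink_mhz - TRANSLATION_MHZ

# Twelve 36 MHz-wide channels spaced 40 MHz apart, a common C-band layout
uplinks = [5945 + 40 * n for n in range(12)]
for up in uplinks[:3]:
    print(f"uplink {up} MHz -> downlink {downlink_mhz(up)} MHz")
```

Because the translation is fixed, an earth station that knows a channel's uplink frequency immediately knows where to listen on the downlink.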


BAND    FREQUENCY
P       255-390 MHz
J       350-530 MHz
L       1530-2700 MHz
S       2500-2700 MHz
C       3400-6425 MHz
X       7250-8400 MHz
Ku      10.95-14.5 GHz
Ka      17.7-31 GHz
Q       36-46 GHz
V       46-56 GHz
W       56-100 GHz

Table 9.2: Satellite Frequency Bands

9.6 Applications of Satellites

Satellites serve great purposes in the current era and have proved to be the most convenient way to achieve long-distance communication. A few major applications of satellites are explained below.

9.6.1 Communication Satellite

Communication satellites have taken over the industry in recent years. One basic use of communication satellites is long-distance telephone service, which allows you to make calls not only within your region but beyond it. TV is another major application: TV signals are of very high frequency, making other long-distance transmission methods costly. Nowadays, all the prominent TV network companies use communication satellites for TV signal transmission.

9.6.2 Digital Satellite Radio

Many modern radios, found in houses or embedded in cars and trucks, are digital satellite radios. With digital satellite radio, you can enjoy several channels of news, sports, music, and more. A conventional radio was only capable of covering shorter distances, leaving the listener with a very limited choice of channels, and was also prone to noise. Digital satellite radio solves these problems by providing high-quality audio that is less prone to noise. The two digital satellite radio systems prevalent in the US are XM Satellite Radio and Sirius Satellite Radio. The former makes use of geosynchronous satellites, named Rock, Roll and Rhythm, located to provide coverage for the continental United States, whereas the latter makes use of three elliptical-orbit satellites. Both systems use time division multiplexing to broadcast a variety of radio channels.

9.6.3 Navigation Satellite

These satellites are positioned to serve the purposes of tracking and navigation, and consist of at least one satellite constellation. The Global Positioning System (GPS) is one of the most widely used examples; it consists of a network of 24 satellites, each built to last about 10 years, operated by the United States. Each GPS satellite orbits the earth twice a day and transmits information to earth stations. Solar energy is used to power the GPS satellites.


9.6.4 Surveillance Satellite

As the name suggests, these satellites are used to carry out tasks based on observation. One good example of surveillance satellites is meteorological satellites, also called weather satellites. Weather satellites work by photographing the cloud cover and sending this information to earth stations for weather forecasting. Intelligence satellites are used to track and collect information about potential enemies. Spy satellites orbit the earth while imaging it, and send the information to ground stations for military and political purposes.

Chapter Summary

• A satellite can be thought of as a physical object that orbits around some heavenly or celestial body such as a star or planet.
• Communication satellites are used to enable communication over longer distances, such as distances beyond the line of sight.
• Orbit: the repetitive motion of a heavenly body, such as a moon, around a planet. The body performing the repetitive motion has less mass than the one around which it is orbiting.
• Orbital Period: the time taken by the orbiting body to complete one round trip around another body, which is usually greater in mass.
• Orbital Speed: the speed at which a natural satellite, a planet, or an artificial satellite revolves around a massive object.
• Orbital Velocity: the velocity required to keep a natural or an artificial satellite revolving in its orbit.
• Posigrade: if the satellite revolves in the same direction as the earth's rotation, the motion is said to be posigrade.
• Retrograde: if the satellite revolves in the direction opposite to the earth's rotation, the motion is said to be retrograde.
• Geocenter: when a satellite revolves in an orbit around a massive body, its orbit forms a plane passing through the center of gravity of that body. This center of gravity is called the geocenter.
• Apogee: the point at which the satellite is farthest from the earth is called the apogee of the orbit.
• Perigee: the point at which the satellite is nearest to the earth is called the perigee of the orbit.
• Angle of Inclination: the angle of inclination defines how the satellite is positioned relative to the equator of the earth. It is the angle formed by the earth's equatorial plane and the satellite's orbital plane.
• Low earth orbits are the ones that are low in altitude. In LEO, high velocities are needed to balance the earth's gravitational pull.
• Satellites in low earth orbit do not have very powerful transmission capabilities because they do not transmit over long distances from the earth.
• Medium Earth Orbit (MEO) has an orbiting range between those of Low Earth Orbit (LEO) and Geostationary Orbit (GEO).
• If several MEO satellites are properly coordinated in their orbits, they can provide coverage for global wireless communication.
• The geostationary earth orbit is so called because the satellite appears to be stationary to a ground observer.
• Circular Orbital Motion: the distance of the satellite from the earth's surface remains constant at all points in time.
• Elliptical Orbital Motion: the distance of the satellite from the earth's surface keeps varying at all points in time.
• Azimuth: measured with respect to north, which is taken as zero degrees; the angle measured clockwise from north is called the azimuth angle.
• Elevation: the angle formed between the horizontal plane and the pointing direction of the antenna.

Multiple Choice Questions

1. Which of the following is the first active satellite?
(a) Echo I.
(b) Telstar I.
(c) Early Bird.
(d) Sputnik I.

2. Communication satellites are used to enable:
(a) Communication over longer distances.
(b) Communication over telephone lines.
(c) Communication over longer distances that are beyond the line of sight.
(d) Both (a) and (b).

3. The ——————— is the repetitive motion of any heavenly body such as stars or moon around a planet.
(a) Orbit.
(b) Orbital Period.
(c) Orbital Frequency.
(d) Orbital Speed.

4. What ensures that a natural or an artificial satellite does not escape from its orbit?
(a) Orbital Centripetal Force.
(b) Orbital Velocity.
(c) Orbital Acceleration.
(d) Orbital Centrifugal Force.

5. The center of the earth is called:
(a) Geocenter.
(b) Center of Gravity.
(c) Geosynchronous.

6. What is considered the unsolved problem in satellite systems?
(a) Coverage.
(b) Cost.
(c) Access.
(d) Privacy.

7. The point on the satellite orbit closest to the earth is the:
(a) Apogee.
(b) Perigee.
(c) Prograde.
(d) Zenith.

8. INTELSAT stands for:
(a) Intel Satellite.
(b) International Telephone Satellite.
(c) International Telecommunication Satellite.
(d) International Satellite.

9. Which one of the following does not fall into the range of LEO altitude?
(a) 200-1200 km.
(b) 300-1100 km.
(c) 100-1200 km.
(d) 450-1200 km.

10. Which satellite rotates around the earth in a low-altitude elliptical or circular pattern?
(a) Geosynchronous satellite.
(b) Nonsynchronous satellite.
(c) Prograde satellite.
(d) Retrograde satellite.

11. Spy satellites make use of:
(a) LEO.
(b) MEO.
(c) GEO.
(d) None of the above.

12. Indicate the false statement regarding MEO.
(a) MEO has an orbiting range between those of LEO and GEO.
(b) The RTT for MEO is not as low as that for LEO.
(c) MEOs are capable of providing coverage for global wireless communication.
(d) MEO orbits the earth at an altitude above 22,300 miles.

13. The satellite height for GEO is:
(a) 35,800 km.
(b) 358 km.
(c) 35,800 m.
(d) 3,580 m.

14. A satellite signal transmitted from a satellite transponder to the earth station is called:
(a) Uplink.
(b) Downlink.
(c) Terrestrial.
(d) Earthbound.

15. How many band-pass filters will be required in a satellite consisting of 20 channels?
(a) 10.
(b) 20.
(c) Depends on the quality of the band-pass filters.
(d) The given information is not enough to determine the number of band-pass filters required.

16. What is the frequency range of the C band?
(a) 8.5 to 12.5 GHz.
(b) 3.4 to 6.425 GHz.
(c) 12.95 to 14.95 GHz.
(d) 27.5 to 31 GHz.

Review Questions

1. Define communication satellite in your own words.
2. Why do you think the study of satellite communication is crucial?
3. What are the different kinds of satellites, other than those mentioned in the text?
4. Define the terms: orbit, orbital speed, orbital period, and orbital velocity.
5. State the difference between posigrade and retrograde.
6. What is the name for the center of gravity of the earth?
7. Define apogee and perigee with the help of a diagram.
8. What is the purpose of frequency translation in satellite communication?
9. Why is channelization important in a satellite?
10. Differentiate between the different orbit types.
11. What role does inertia play in the positioning of satellites?
12. State the difference between circular and elliptical orbital motion.
13. How would you define the height of a satellite?
14. Why do you think it is important to know the exact position of a satellite?
15. What is meant by elevation and azimuth?
16. What do you call the signal path from a ground station to a satellite? What is the name of the signal path from the satellite to the ground?

Brain Buzzer

1. Explain the phenomenon and the need for frequency reuse from the perspective of satellite communication.

2. Prepare a case study regarding spy satellites.

3. Name the two digital radio satellite services in the United States and their frequencies of operation.

Bibliography

1. ADC USAF. 1980. Model for Propagation of NORAD Element Sets. Spacetrack Report No. 3. Aerospace Defense Command, U.S. Air Force, December.


2. ADC USAF. 1983. An Analysis of the Use of Empirical Atmospheric Density Models in Orbital Mechanics. Spacetrack Report No. 3. Aerospace Defense Command, U.S. Air Force, February.

3. Arons, A. B. 1965. Development of Concepts of Physics. Addison-Wesley, Reading, MA.

4. Bate, R. R., D. D. Mueller, and J. E. White. 1971. Fundamentals of Astrodynamics. Dover, New York.

5. Celestrak, at http://celestrak.com/NORAD/elements/noaa.txt; Duffett-Smith, P. 1986. Practical Astronomy with Your Calculator. Cambridge University Press, New York.

6. Schwalb, A. 1982a. The TIROS-N/NOAA-G Satellite Series. NOAA Technical Memorandum NESS 95, Washington, DC.

7. Schwalb, A. 1982b. Modified Version of the TIROS-N/NOAA A-G Satellite Series (NOAA E-J): Advanced TIROS-N (ATN). NOAA Technical Memorandum NESS 116, Washington, DC.

8. Thompson, Morris M. (editor-in-chief). 1966. Manual of Photogrammetry, 3rd ed., Vol. 1. American Society of Photogrammetry, New York.

9. Wertz, J. R. (ed.). 1984. Spacecraft Attitude Determination and Control. D. Reidel, Holland.

10. Maral, G., and M. Bousquet. 1998. Satellite Communications Systems. Wiley, New York.


10. Introduction to Optical Communication System

Learning Focus

Fiber optics deals with the transmission of light energy through fibers. This chapter is solely dedicated to the study of fibre-optic communication and attempts to cover all the topics that are vital to building a foundation in optical communication. The chapter serves the following objectives:

1. To familiarize the reader with the roots of optical communication.
2. To revisit the basics of light and related terms such as reflection, refraction and refractive index.
3. To familiarize the reader with the prominent theories, i.e., Ray Theory and Mode Theory, that describe how light is transmitted.
4. To acquaint the reader with the optical fibre communication system and the parts of a fibre-optic cable, such as the core, cladding and buffer coating.
5. To introduce the steps involved in the preparation of a fibre-optic cable.
6. To acquaint the reader with single-mode fibre and multi-mode fibre.
7. To acquaint the reader with the functionality of the LED.
8. To acquaint the reader with the pros and cons of fibre-optic technology.


10.1 Introduction to Optical Communication

The information-carrying capacity of a telecommunication system is proportional to its bandwidth, which in turn is proportional to the frequency of the carrier. Fiber-optic communication systems use light as the carrier, the highest-frequency carrier among all practical communication systems. Copper wire supports a bandwidth of about 1 MHz, and microwave links range from 300 MHz to 3 GHz; moreover, boosters and transceivers are needed to maintain the communication every 30 to 50 km. The bandwidth of an optical fiber system, however, can reach 40 GHz. Fiber optics uses light to send information (data) over long distances. More formally, fiber optics is the branch of optical technology concerned with the transmission of radiant power (light energy) through fibers.

History of Fiber Optics

The history of optical communication dates back to when native North Americans and the Chinese developed smoke signals to achieve long-distance communication. The use of visible light for communication has been practised for many years. Simple systems such as signal fires, reflecting mirrors and signaling lamps have provided successful, if limited, information exchange. A noteworthy advancement in optical communication was the development of the optical telegraph in the late 18th century by the French engineer Claude Chappe. By the mid-19th century this system had been replaced by the electric telegraph, leaving a scattering of "Telegraph Hills" as its most visible legacy. Another major contribution was made in 1870 by John Tyndall. His experiment was based on the principle of Total Internal Reflection (TIR), explained by Jean-Daniel Colladon in 1841. Tyndall carried out a simple experiment using water as the medium and a source of light. In his experiment, water flowed from one container to another, and the source of light was placed in such a way that its reflection could be seen in the water. His experimentation showed that light follows a zig-zag pattern, following the curve of the water as it flows from one container into the other.

Alexander Graham Bell patented an optical telephone system, which he called the Photophone, in 1880; it transmitted voice signals over a beam of light, but his earlier invention, the telephone, proved far more practical. He dreamed of sending signals through the air, but the atmosphere did not transmit light as reliably as wires carried electricity. In the decades that followed, light was used for a few special applications, such as signaling between ships, but otherwise optical communications, like the experimental Photophone Bell donated to the Smithsonian Institution, languished on the shelf.

During the 1950s, Brian O'Brien and Harold Hopkins developed the two-layered fiber consisting of an inner core, in which light propagates, and an outer layer surrounding the core, called the cladding, which confines the light. The fiber was later used by the same scientists to develop the fiberscope, i.e., a device capable of transmitting an image from one end to the opposite end.

Throughout the 1960s and 1970s, major advances were made in the quality and efficiency of optical fibers and semiconductor light sources. By the late 1970s and early 1980s, every major telephone communication company was rapidly installing new and more efficient fiber systems. Today, telephone and cable television companies can cost-justify installing fiber links to remote sites serving tens to a few hundreds of customers. However, terminal equipment remains too expensive to justify installing fibers all the way to homes, at least for present services. Instead, cable and phone companies run twisted wire pairs or coaxial cables from optical network units to individual homes.

10.2 Basics of Light

The concepts concerning the nature of light have undergone several changes. It was earlier believed that light consisted of a stream of minute particles. These particles were assumed to travel in straight lines and could penetrate transparent materials but were reflected from opaque ones. Maxwell in 1864 theorized that light waves must be electromagnetic in nature. Furthermore, observation of polarization effects indicated that light waves are transverse (i.e., the wave motion is perpendicular to the direction in which the wave travels). From our basic knowledge, we know that visible light occupies a range between 0.4 µm and 0.7 µm in the electromagnetic spectrum; refer to Fig. 10.1.

Figure 10.1: Electromagnetic Spectrum

However, there is more to the definition of light when studying it from the perspective of optical communication, because multiple light sources are used to transmit information from one end to the other. The wavelength of light is measured in angstroms; however, in fibre optics the wavelength is typically measured in micrometers and nanometers.

10.2.1 Reflection

We are very familiar with light that is reflected from a flat, smooth surface such as a mirror. Such reflections are described by an incident ray and a reflected ray. Ideally speaking, this kind of reflection can only occur when light strikes a perfectly homogeneous, smooth surface. The angle of incidence θi is then exactly equal to the angle of reflection θr, and the velocities of the incident and reflected waves, vi and vr, also remain the same. The only change observed between the incident and the reflected wave is in its direction, i.e., θi = −θr, vi = vr. The angle of reflection is determined by the angle of incidence.

Reflections in many directions are called diffuse reflection and are the result of light beingreflected by an irregular surface.

Snell’s Law

Snell's law mathematically defines the law of refraction. The angle of incidence θ1 and the angle of refraction θ2 are related by Snell's law, which states:

n1 sin θ1 = n2 sin θ2 (10.1)
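As a quick numerical check on Eq. (10.1), the sketch below (plain Python with illustrative indices, not values from the text) solves Snell's law for the refraction angle and reports when no real refraction angle exists, i.e., when the ray is totally internally reflected:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Apply Snell's law n1*sin(theta1) = n2*sin(theta2).

    Returns the refraction angle in degrees, or None when
    sin(theta2) would exceed 1 (total internal reflection).
    """
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:  # beyond the critical angle: no refracted ray
        return None
    return math.degrees(math.asin(s))

# Light passing from water (n = 1.33) into air (n = 1.00):
print(refraction_angle(1.33, 1.00, 30.0))  # bends away from the normal
print(refraction_angle(1.33, 1.00, 60.0))  # None: totally internally reflected
```

The second call illustrates the total internal reflection discussed later in Section 10.5.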

10.2.2 Refraction

Another property of light is refraction. Refraction occurs when light passes from one medium into another, and it causes a change in the speed of light as it passes through different mediums such as air, water, glass, and other transparent substances. Furthermore, in refraction the angle of incidence differs from the angle of refraction. Therefore, when light experiences refraction it observes a change not only in its angle, θi ≠ θr, but also in its speed, vi ≠ vr.


Refractive Index

The refractive index is the ratio of the velocity of light in a vacuum, c, to the velocity of light in the medium, v. A ray of light travels more slowly in a denser medium, and the refractive index gives a measure of this effect. Mathematically:

n = speed of light in a vacuum / speed of light in the medium = c/v = (1/√(µ0 ε0)) / (1/√(µε)) = √(µε/(µ0 ε0)) (10.2)

Here µ (pronounced "mu") is the permeability and ε (pronounced "epsilon") is the permittivity of the medium; µ0 = 4π × 10−7 H/m (henry per metre) is the magnetic constant and ε0 = 8.854187817... × 10−12 F/m (farads per metre) is the electric constant. Please note that Eq. (10.2) is a very simplistic definition, as the refractive index also depends on the wavelength; however, for the level of this book this definition is more than sufficient.

Since the speed of light in a medium is always less than it is in a vacuum, the refractive index is always greater than one. In air, the value is very close to 1. The refractive index varies with the wavelength of light. In a homogeneous medium, that is, one in which the refractive index is constant, light travels in a straight line. Only when the light meets a variation or a discontinuity in the refractive index will the light rays be bent from their initial direction. Table 10.1 shows the refractive indices of various materials.

Note: Light travelling into an optically denser medium (with higher refractive index) is bent toward the normal, while light entering an optically rarer medium is bent away from the normal.

MEDIUM              REFRACTIVE INDEX    SPEED OF LIGHT
Air                 1.00                300,000 km/s
Water               1.33                230,000 km/s
Glass               1.50                200,000 km/s
Diamond             2.40                125,000 km/s
Gallium Arsenide    3.60

Table 10.1: Refractive indices of different mediums
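The speed column of Table 10.1 follows directly from n = c/v. A minimal sketch, assuming the rounded value c ≈ 300,000 km/s used in the table, recovers the listed speeds up to the table's rounding (the water entry, for example, is a coarse rounding of about 225,600 km/s):

```python
C_VACUUM_KM_S = 300_000  # speed of light in a vacuum, rounded as in Table 10.1

def speed_in_medium(n):
    """Speed of light in a medium of refractive index n, from n = c/v."""
    return C_VACUUM_KM_S / n

for medium, n in [("Air", 1.00), ("Water", 1.33),
                  ("Glass", 1.50), ("Diamond", 2.40)]:
    print(f"{medium:8s} n = {n:4.2f}  v = {speed_in_medium(n):,.0f} km/s")
```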

10.3 Block diagram of Optical Fiber Communication

An optical fiber communication (OFC) system is similar in basic concept to any other type of communication system: its function is to convey a signal from the information source, over the transmission medium, to the destination, as shown in Fig. 10.2.

For OFC, the information source provides an electrical signal to a transmitter comprising an electrical stage that drives an optical source (usually an LED or a laser) to modulate the lightwave carrier. The transmission medium consists of an optical fiber cable, and the receiver consists of an optical detector that drives a further electrical stage and hence provides demodulation of the optical carrier. Photodiodes (p-n, p-i-n, or avalanche) and, in some instances, phototransistors and photoconductors are used as detectors.


Figure 10.2: Block diagram of Optical Fiber Communication System

10.4 Optical Fiber waveguide

10.4.1 Optical Fiber Cable

Optical fibers are long, thin strands of very pure glass, about the diameter of a human hair. They are arranged in bundles called optical cables. If you look closely at a single optical fiber, you will see that it has the following parts (see Fig. 10.3):

Figure 10.3: Parts of Optical Fiber Cable, 3D-view and basic cross section

1. Core: The core is a cylindrical rod of dielectric material; dielectric material conducts no electricity. Light propagates mainly along the core of the fiber. The core is generally made of glass and is described as having a radius a and an index of refraction n1.

2. Cladding: The cladding is the outer optical material surrounding the core that reflects light back into the core. The cladding layer is made of a dielectric material with an index of refraction n2, which is less than that of the core material. The cladding is generally made of glass or plastic and performs the following functions:
• Reduces loss of light from the core into the surrounding air
• Reduces scattering loss at the surface of the core
• Protects the fiber from absorbing surface contaminants
• Adds mechanical strength

3. Buffer coating: For extra protection, the cladding is enclosed in an additional layer called the coating or buffer, a layer of material used to protect the optical fiber from physical damage and moisture. The material used for a buffer is a type of plastic. The buffer is elastic in nature and prevents abrasions. It also protects the optical fiber from scattering losses caused by microbends, which occur when an optical fiber is placed on a rough or distorted surface.

The 3D view of an optical fiber cable is shown in Fig. 10.3. Hundreds or thousands of these optical fibers are arranged in bundles in optical cables. The bundles are protected by the cable's outer covering, called a jacket.

10.4.2 Formation of Fiber Optic Cable

Fibre-optic cable is made from optical glass. It should be noted, however, that this is not the glass used to make window panes. The glass used to manufacture window panes is comparatively thick due to the many impurities present in it, and these impurities make the glass less transparent. The optical glass used in fibre-optic cable has far fewer impurities. There are three major steps involved in the preparation of a fibre-optic cable:

Preparation of the Preform

The preparation of the preform begins with a mix of certain gases, including silicon tetrachloride, germanium tetrachloride and phosphorus oxychloride. The glass for the preform is made through a process called Modified Chemical Vapor Deposition (MCVD). The MCVD process bubbles oxygen through the mix of silicon tetrachloride, germanium tetrachloride and phosphorus oxychloride. The mixture is then conducted inside a highly refined quartz tube, about 4 ft long and 1 inch in diameter, mounted in a lathe (a machine used to shape material). As the lathe turns, a torch is moved up and down the outside of the tube, burning the gases and producing a deposit on the inside of the tube. The heating then melts and collapses the tube into a preform of about 13 mm.

Drawing Fibers from the Preform Obtained

After going through the MCVD process, the quartz rod is positioned vertically in a drawing tower and drawn downwards via a computer-controlled melting and drawing process. This yields a fine, high-quality fiber thread with a diameter of approximately 125 µm and a length of 6.25 km. The core (the optically pure center) is surrounded by the cladding, which is less optically pure quartz.

Testing of Fiber

The last step is to test data such as the bandwidth, refractive index, cladding thickness, and time reflectometer response of the fiber. Practically, a fiber will be tested for the following parameters:

1. Tensile strength - must withstand 100,000 lb/in² or more
2. Refractive index profile - determine the numerical aperture as well as screen for optical defects
3. Fiber geometry - core diameter, cladding dimensions and coating diameter are uniform
4. Attenuation - determine the extent to which light signals of various wavelengths degrade over distance
5. Information-carrying capacity (bandwidth) - number of signals that can be carried at one time (multi-mode fibers)
6. Chromatic dispersion - spread of various wavelengths of light through the core (important for bandwidth)
7. Operating temperature/humidity range
8. Temperature dependence of attenuation
9. Ability to conduct light underwater - important for undersea cables

10.4.3 Characteristics of Fiber Cable

There are many factors that determine the characteristics of the transmission of light through a fiber. The major factors are:

1. Composition of the fiber: The composition of the fiber determines its refractive index. This is controlled by a process called doping, which involves mixing in other materials that alter the refractive index.


2. Size: The size, i.e., the length and diameter of the fiber, determines its mode of operation. The mode of operation refers to the physical and mathematical aspects of the propagation of energy through a medium. The fiber can support either a single mode of operation or multiple modes; the details of single-mode and multimode operation are discussed in the next section.

3. Refractive index profile: This is the relationship between the multiple indices that exist in the core and cladding of a particular fiber. The two major profiles are:

(a) Step index: refers to an abrupt index change from the core to the cladding.
(b) Graded index: the highest index is at the center and decreases gradually until it reaches the index of the cladding near the surface.

10.5 Transmission of Light through Fibre Optics

The transmission of light along optical fibers depends not only on the nature of light, but also onthe structure of the optical fiber. Two theories are used to describe how light is transmitted alongthe optical fiber.

10.5.1 Ray Theory

Ray theory uses the concepts of light reflection and refraction and treats light as a simple ray. The advantage of the ray approach is that you get a clearer picture of the propagation of light along a fiber. Ray theory is used to approximate the light acceptance and guiding properties of optical fibers. Two types of rays can propagate along an optical fiber: meridional rays and skew rays. Meridional rays pass through the axis of the optical fiber and are used to illustrate the basic transmission properties of optical fibers. Skew rays travel through an optical fiber without passing through its axis.

Total Internal Reflection

The light in a fiber-optic cable travels through the core (the hallway) by constantly bouncing off the cladding (the mirror-lined walls), a principle called Total Internal Reflection. When light is incident upon a medium of lesser index of refraction, the ray is bent away from the normal, so the exit angle is greater than the incident angle. Such reflection is commonly called "internal reflection". The exit angle approaches 90° for some critical incident angle θc, and for incident angles greater than the critical angle there is total internal reflection. See Fig. 10.4.

Figure 10.4: Total Internal Reflection


Critical Angle

The critical angle can be calculated from Snell's law by setting the refraction angle equal to 90°:

sin θc = n2/n1 (10.3)

Relative Refractive Index Difference

It defines the difference between the core and cladding refractive indices:

∆ = (n1² − n2²)/(2n1²) ≈ (n1 − n2)/n1 (10.4)

Acceptance Angle

The acceptance angle (θa) is the largest angle of incidence at which an incoming ray can be coupled into a guided ray within the fiber. Fig. 10.5 shows the acceptance cone of an optical fiber.

Figure 10.5: Acceptance Cone

Numerical Aperture (NA)

The acceptance angle (θa) is related to the refractive indices of the core, the cladding, and the medium surrounding the fiber. This relationship is called the numerical aperture of the fiber. The numerical aperture (NA) is a measure of the ability of an optical fiber to capture light. The NA is also used to define the acceptance cone of an optical fiber.

NA = n0 sin θa = √(n1² − n2²) (10.5)

The ability of the fiber to accept optical energy from a light source is related to ∆, which also relates to the numerical aperture by

NA ≈ n1 √(2∆) (10.6)
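Equations (10.3) to (10.6) can be exercised together. The sketch below uses illustrative silica-like indices (n1 = 1.48, n2 = 1.46, chosen for the example and not taken from the text) and checks that the approximation of Eq. (10.6) agrees with the exact NA of Eq. (10.5):

```python
import math

def fiber_parameters(n1, n2, n0=1.0):
    """Critical angle (10.3), relative index difference (10.4),
    numerical aperture (10.5) and acceptance angle for a step-index fiber."""
    theta_c = math.degrees(math.asin(n2 / n1))   # sin(theta_c) = n2/n1
    delta = (n1**2 - n2**2) / (2 * n1**2)        # ~ (n1 - n2)/n1 for small delta
    na = math.sqrt(n1**2 - n2**2)                # NA = n0 * sin(theta_a)
    theta_a = math.degrees(math.asin(na / n0))   # acceptance angle in medium n0
    return theta_c, delta, na, theta_a

theta_c, delta, na, theta_a = fiber_parameters(1.48, 1.46)
print(f"critical angle   = {theta_c:.1f} deg")
print(f"delta            = {delta:.4f}")
print(f"NA               = {na:.3f}  "
      f"(Eq. 10.6 check: {1.48 * math.sqrt(2 * delta):.3f})")
print(f"acceptance angle = {theta_a:.1f} deg")
```

Note how small the index contrast is: a difference of only 0.02 between core and cladding still yields an acceptance half-angle of roughly 14 degrees.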

10.5.2 Limitations of Ray Theory

Ray theory describes only the direction a plane wave takes in a fiber. In reality, plane waves interfere with each other. Therefore, only certain types of rays are able to propagate in an optical fiber. Optical fibers can support only a specific number of guided modes; in small-core fibers, the number of modes supported is one or only a few. Mode theory is used to describe the types of plane waves able to propagate along an optical fiber.


10.5.3 Mode Theory

Mode theory treats light as electromagnetic waves and describes the behavior of light within an optical fiber. The mode theory is useful in describing the optical fiber properties of absorption, attenuation, and dispersion.

The mode theory uses electromagnetic wave behavior to describe the propagation of light along a fiber. A set of guided electromagnetic waves is called the modes of the fiber. Depending on the boundary conditions, two types of modes are to be distinguished:

• If the electromagnetic field is zero at infinity, the mode is guided. This implies that the guided mode spectrum is discrete.
• If not, it is a radiation mode; radiation modes exist in a continuum.

Plane Wave

The mode theory suggests that a light wave can be represented as a plane wave. A plane wave is a wave whose surfaces of constant phase are infinite parallel planes normal to the direction of propagation. A plane wave is described by its direction, amplitude, and wavelength of propagation. The planes having the same phase are called wavefronts. The wavelength, λ, of the plane wave in the medium is given by:

λ = c/(n f)

where c is the speed of light in a vacuum, f is the frequency of the light, and n is the index of refraction of the plane-wave medium.

Normalized Frequency

The normalized frequency (a dimensionless quantity) determines how many modes a fiber can support. The normalized frequency N_f is defined as:

N_f = (2πa/λ) √(n1² − n2²) (10.7)

where n1 is the core index of refraction, n2 is the cladding index of refraction, a is the core radius, and λ is the wavelength of light in air. The number of modes that can exist in a fiber is a function of N_f. As the value of N_f increases, the number of modes supported by the fiber increases.
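As an illustration of Eq. (10.7), the sketch below evaluates N_f (often called the V number) for two common core sizes. The indices and the wavelength are assumptions for the example; the single-mode cutoff N_f < 2.405 and the large-N_f mode-count approximation N_f²/2 are standard step-index results that are not derived in the text:

```python
import math

def normalized_frequency(a_um, wavelength_um, n1, n2):
    """Normalized frequency N_f (the 'V number') from Eq. (10.7)."""
    return (2 * math.pi * a_um / wavelength_um) * math.sqrt(n1**2 - n2**2)

# Illustrative values: core DIAMETERS of 9 um and 62.5 um at 1.31 um,
# with assumed indices n1 = 1.4504, n2 = 1.447 (not from the text).
for core_diameter in (9.0, 62.5):
    v = normalized_frequency(core_diameter / 2, 1.31, 1.4504, 1.447)
    regime = ("single-mode" if v < 2.405
              else f"multimode (~{v**2 / 2:.0f} modes)")
    print(f"2a = {core_diameter:5.1f} um  N_f = {v:6.2f}  -> {regime}")
```

With these numbers, the 9 µm core falls just below the cutoff, while the 62.5 µm core supports on the order of a hundred modes, matching the single-mode/multimode distinction of Section 10.6.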

Phase Velocity

Plane waves form surfaces of constant phase called wavefronts. As a monochromatic light wave propagates along a waveguide in the direction of propagation, these points of constant phase travel at a phase velocity vp given by:

vp = ω/β

where ω is the angular frequency of the wave and β is the propagation constant of the mode.

Group Velocity

Often a group of waves with closely spaced frequencies propagates so that their resultant forms a packet of waves. This wave packet does not travel at the phase velocity of the individual waves but is observed to move at a group velocity given by:

vg = δω/δβ


10.6 Types of Fiber Optic Cable

One way to categorize optical fibers is in terms of how many light signals they can allow to propagate through them, and in these terms fibers come in two types:

1. Single-mode fibers
2. Multi-mode fibers

As the name implies, single-mode fibers allow only one electromagnetic wave or ray to propagate within them. Single-mode fibers (see Fig. 10.6) have small cores (about 3.5 × 10−4 inches, or 9 microns, in diameter). Multi-mode fibers (see Fig. 10.6) allow a set of electromagnetic waves to propagate; they have larger cores (about 2.5 × 10−3 inches, or 62.5 microns, in diameter) and transmit infrared light (wavelength = 850 to 1,300 nm) from light-emitting diodes (LEDs).

Figure 10.6: Types of Fiber cables

While it might appear that multimode fibers have a higher information-carrying capacity, in fact the opposite is true. Single-mode fibers retain the integrity of each light pulse over longer distances, allowing more information to be transmitted. This high bandwidth has made single-mode fiber the ideal transmission medium for many applications. Multimode fiber today is used primarily in premises applications, where transmission distances are less than two kilometers. Some optical fibers can be made from plastic; these fibers have a large core (0.04 inches or 1 mm in diameter) and transmit visible red light (wavelength = 650 nm) from LEDs.

Single-mode and multimode fibers are further classified based on their refractive index profiles.

10.6.1 Single Mode Step Index

A single-mode step-index fiber is a small-core optical fiber through which only one mode will propagate. It has a small core of radius a with a constant refractive index n1, and a cladding of slightly lower refractive index n2. The name reflects the fact that the refractive index profile for this type of fiber makes a step change at the core-cladding interface, as indicated in Fig. 10.7(a).


Figure 10.7: Refractive Index Profiles of Step Index and Graded Index

10.6.2 Multimode Step Index

Multimode step-index fibers allow the propagation of a finite number of guided modes along the channel. A multimode step-index fiber has a core of radius a and a constant refractive index n1. A cladding of slightly lower refractive index n2 surrounds the core. Fig. 10.7(a) shows the refractive index profile n(r) for this type of fiber. Notice the step decrease in the value of the refractive index at the core-cladding interface.

The number of modes that multimode step-index fibers propagate depends on ∆ and on the core radius a of the fiber. The number of propagating modes also depends on the wavelength λ of the transmitted light. Multimode step-index fibers have relatively large core diameters and large numerical apertures, which make it easier to couple light from an LED into the fiber. Unfortunately, multimode step-index fibers have limited bandwidth capabilities: dispersion, mainly modal dispersion, limits the bandwidth or information-carrying capacity of the fiber. Short-haul, limited-bandwidth, low-cost applications typically use multimode step-index fibers.

10.6.3 Multimode Grade Index

A multimode graded-index fiber has a core of radius a. Unlike step-index fibers, the refractive index of the core n1 varies with the radial distance r, i.e., such fibers do not have a constant refractive index in the core. The value of n1 decreases as the distance r from the center of the fiber increases, until it approaches the value of the refractive index of the cladding n2. The value of n1 must remain higher than the value of n2 to allow for proper mode propagation.

The shape of the refractive index profile of the core in a multimode graded-index fiber depends on the profile parameter α, as shown in Fig. 10.7(b), whose value varies between 1 (for a perfect triangular shape) and ∞ (for a perfect square, or step-index, shape). Typically, graded-index fibers are manufactured with values of α between 2 and 10. See Fig. 10.7(c).
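The family of profiles in Fig. 10.7(b) is commonly written as the power law n(r) = n1·√(1 − 2∆(r/a)^α) for r ≤ a; this is a standard form that the text does not state explicitly, so treat it as an assumption. The sketch below evaluates it for a few values of α with illustrative parameters:

```python
import math

def graded_index(r, a, n1, delta, alpha):
    """Power-law refractive index profile n(r) for a graded-index core.

    alpha = 1 gives a triangular profile; alpha -> infinity approaches the
    step-index profile. For r >= a the index is the constant cladding value
    n2 = n1 * sqrt(1 - 2*delta).
    """
    if r >= a:
        return n1 * math.sqrt(1 - 2 * delta)
    return n1 * math.sqrt(1 - 2 * delta * (r / a) ** alpha)

# Illustrative parameters (not from the text):
n1, delta, a = 1.48, 0.01, 25.0   # axial index, index difference, core radius (um)
for alpha in (1, 2, 10):
    profile = [graded_index(r, a, n1, delta, alpha) for r in (0.0, 12.5, 25.0)]
    print(f"alpha = {alpha:2d}: n(0), n(a/2), n(a) = "
          + ", ".join(f"{n:.4f}" for n in profile))
```

Larger α keeps the index flat near the axis before it drops to n2 at the edge, visualizing the trend toward the step-index shape described above.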


10.7 Optical Sources

Optical sources convert electrical energy into light energy suited to the channel and the modulating-signal characteristics. The optical source is considered an active component in the optical communication link. The channel can be an optical fiber, diffuse wireless, or point-to-point wireless in a communication system.

Three main types of optical light source are available, namely:
• Wideband 'continuous spectrum' sources (incandescent lamps)
• Monochromatic incoherent sources (light-emitting diodes, LEDs)
• Monochromatic coherent sources (lasers)

Although several devices can convert electrical signals into light waves, only two sources, namely the light-emitting diode (LED) and the injection laser diode (ILD), satisfy these requirements and are suitable for fiber-optic communication systems; therefore, we will discuss only these two.

10.7.1 Light Emitting Diode

The LED is a forward-biased p-n junction that emits visible light when energized. Electrons injected into it by the forward current through the junction populate the normally empty conduction band of the semiconductor, and light is generated when these electrons recombine with holes in the valence band to emit a photon. The LED can operate at lower current densities than the injection laser, but the emitted photons have random phases, so the device is an incoherent optical source. Also, the energy of the emitted photons is only roughly equal to the bandgap energy of the semiconductor material, which gives a much wider spectral linewidth than the injection laser.

LED Operation

Charge-carrier recombination takes place when electrons from the n-side cross the junction and recombine with holes on the p-side. The electrons are in the higher conduction band on the n-side, whereas the holes are in the lower valence band on the p-side. During recombination, some of the energy difference is given up in the form of heat and light (i.e., a photon). For Si and Ge junctions, a greater percentage of this energy is given up in the form of heat, so that the amount of emitted light is insignificant. But in the case of other semiconductor materials, such as gallium arsenide (GaAs), gallium phosphide (GaP), and gallium arsenide phosphide (GaAsP), a greater percentage of the energy released during recombination is given off as light. If the semiconductor material is translucent, light is emitted and the junction becomes a light source (i.e., a light-emitting diode), as shown schematically in Fig. 10.8.

LED as Optical Source

Although LEDs, because of their linear output power versus drive current characteristics, are attractive for analog modulation, their non-linear distortion properties due to non-radiative recombination at low drive-current levels create problems in the implementation of many intensity-modulation analog applications. The modulation rate of LEDs in digital applications is limited by the lifetime of carriers in the junction region. The lifetime of carriers depends upon:
• Doping levels in junction regions
• Number of injected carriers
• Recombination velocity at defects
• Thickness of the junction region

Typical modulation bandwidths for LEDs, for the reasons above, are limited to 50 MHz. LEDs have a spectral width of 30 to 50 nm in the 850 nm range and 70 to 100 nm in the 1300 nm range, limiting the use of such devices to about 34 Mbps. At present, LEDs have several disadvantages in comparison with injection lasers. These include:
• Usually lower modulation bandwidth


Figure 10.8: Light Emitting Diode (LED)

• Generally lower optical power coupled into a fiber (microwatts)
• Harmonic distortion

Although these problems may initially make the LED appear less attractive, the device has a number of distinct advantages which have given it a prominent place in optical fiber communications, such as:
• Simpler fabrication
• Lower cost
• Reliability
• Linearity
• Simpler drive circuitry

These advantages, combined with the development of high-radiance, relatively high-bandwidth devices, have ensured that the LED remains an extensively used source for non-telecommunications (short-haul) applications. As discussed earlier, light-emitting diodes as sources have their own place in meeting some of the demands of an OFC system. The most commonly used LEDs are the edge-emitting and the superluminescent light-emitting diodes. The characteristics of these devices are discussed briefly.

10.7.2 LASERs

The laser is a light source that exhibits unique properties and a wide variety of applications. Lasers are used in welding, surveying, medicine, communication, national defense, and as tools in many areas of scientific research. Many types of lasers are commercially available today, ranging in size from devices that can rest on a fingertip to those that fill large buildings. All these lasers have certain basic characteristic properties in common.

A laser produces coherent light through a process termed "stimulated emission." The word "LASER" is an acronym for "Light Amplification by Stimulated Emission of Radiation." The light emitted by lasers is different from that produced by more common light sources such as incandescent bulbs and fluorescent lamps.

The following functional elements are necessary in a laser to produce coherent light by stimulated emission of radiation. Fig. 10.9 illustrates these functional elements.


Figure 10.9: Elements of a LASER

Active Medium
The active medium is a collection of atoms or molecules that can be excited to a state of inverted population; that is, where more atoms or molecules are in an excited state than in some lower energy state (or ground state). The active medium may be a gas, a liquid, a solid material, or a junction between two slabs of semiconductor material.

The two states chosen for the lasing transition must possess certain characteristics. First, atoms must remain in the upper lasing level for a relatively long time to provide more emitted photons by stimulated emission than by spontaneous emission. Second, there must be an effective method of "pumping" atoms from the highly populated ground state into the upper lasing state in order to increase the population of the higher energy level over the population of the lower energy level. The active medium of a laser can be thought of as an optical amplifier: a beam of coherent light entering one end of the active medium is amplified through stimulated emission until a coherent beam of increased intensity leaves the other end. Thus, the active medium provides optical gain in the laser.

Excitation Mechanism (Pumping)
The excitation mechanism is a source of energy that excites, or pumps, the atoms in the active medium from a lower to a higher energy state in order to create a population inversion. In gas lasers and semiconductor lasers, the excitation mechanism usually consists of an electrical current flowing through the active medium. Solid and liquid lasers most often employ optical pumps; for example, in a ruby laser, the chromium atoms inside the ruby crystal may be pumped into an excited state by means of a powerful burst of light from a flashlamp containing xenon gas.

Feedback Mechanism
The feedback mechanism returns a portion of the coherent light originally produced in the active medium back to the active medium for further amplification by stimulated emission. The amount of coherent light produced by stimulated emission depends upon both the degree of population inversion and the strength of the stimulating signal. The feedback mechanism usually consists of two mirrors, one at each end of the active medium, aligned in such a manner that they reflect the coherent light back and forth through the active medium.

Lasing Action
When the excitation mechanism of a laser is activated, energy flows into the active medium, causing atoms to move from the ground state to certain excited states. In this way, a population inversion is created. Some of the atoms in the upper lasing level drop to the lower lasing level spontaneously, emitting incoherent photons at the laser wavelength and in random directions. Most of these photons escape from the active medium, but those that travel along the axis of the active medium produce stimulated emission. The beam produced is reflected back through the active medium by


the mirrors. A portion of the light is emitted as the output beam.

10.8 Optical Detectors

The optical detector converts the received optical signal into an electrical signal, which is then amplified for further processing. Detectors for optical communication systems should have high sensitivity and fast response time, permitting the detection of low-level signals at high bit rates and, in turn, permitting longer spans for high-bit-rate systems. Photodiodes and avalanche photodiodes are the most commonly used optical detectors.

To detect optical radiation (photons), both external and internal photoemission of electrons may be utilized. External photoemission devices, typified by photomultiplier tubes and vacuum photodiodes, meet some of the performance criteria but are too bulky and require high voltages. Internal photoemission devices, especially semiconductor photodiodes with or without internal (avalanche) gain, provide good performance and compatibility at relatively low cost. These photodiodes are made from semiconductor materials such as silicon, germanium, and group III-V alloys.

The internal photoemission process may take place in both intrinsic and extrinsic semiconductors. However, for fast response coupled with efficient absorption of photons, the intrinsic absorption process is preferred, and at present all detectors for optical fiber communications use intrinsic photodetection.

10.8.1 PN Photodiode

A PN-photodiode is the most basic junction type of photoconductive device used in optical communication systems; here P represents p-type material and N represents n-type material. Photodiodes are provided with either a window or an optical fibre connection, in order to let light reach the sensitive part of the device. Photodiodes can be used at either zero bias or reverse bias, but never in forward bias, because we want the output of the photodetector to respond only to the incident light. At zero bias, light falling on the diode causes a voltage to develop across the device, leading to a current in the forward-bias direction. This is called the photovoltaic effect, and it is the basis for solar cells; in fact, a solar cell is just a large number of big, cheap photodiodes.

Diodes usually have extremely high resistance when reverse biased. This resistance is reduced when light of an appropriate frequency shines on the junction. Hence, a reverse-biased diode can be used as a detector by monitoring the current running through it. Circuits based on this effect are more sensitive to light than ones based on the photovoltaic effect.

10.8.2 PIN Photodiode

The three letters of PIN indicate p-type, intrinsic, and n-type material: it is a diode with a large intrinsic region sandwiched between p-doped and n-doped semiconducting regions. To allow operation at longer wavelengths, where the light penetrates more deeply into the semiconductor material, a wider depletion region is necessary. To achieve this, the n-type material is doped so lightly that it can be considered intrinsic, and to make a low-resistance contact a highly doped n-type (n+) layer is added; this creates the p-i-n (or PIN) structure.

10.8.3 Avalanche Photodiode

An avalanche photodiode is a semiconductor device containing a pn-junction, consisting of a positively doped p region and a negatively doped n region sandwiching an area of neutral charge termed the depletion region. These diodes provide gain by the generation of electron-hole pairs from an energetic electron that creates an "avalanche" of electrons in the substrate.


It operates with a reverse-bias voltage (50 to 400 V) that causes the primary photocurrent to undergo amplification by cumulative multiplication of charge carriers. More recently, however, devices that operate at much lower bias voltages (15 to 25 V) have become available.

Avalanche photodiodes are made of group-IV materials as well as III-V alloys, e.g., InGaAsP or InP. The structure is more sophisticated than that of the PIN photodiode in order to create an extremely high-field region. As a result of this construction, besides the depletion region where absorption takes place and the primary carrier pairs are generated, there is a high-field region where electrons can acquire sufficient energy to excite new electron-hole pairs. This process is known as impact ionization, and it is the same phenomenon which leads to avalanche breakdown in an ordinary reverse-biased diode.

Avalanche diodes are very similar in design to the silicon p-i-n diode; however, the depletion layer in an avalanche photodiode is relatively thin, resulting in a very steep, localized electric field across the narrow junction.

10.9 Pros and Cons of Optical Fiber Technology

Pros

Communication using optical fibers as guided transmission media has a number of extremely attractive features, several of which were apparent when the technique was originally conceived. Furthermore, advances and developments in the technology over the last three decades have reduced even the most inevitable losses of traditional transmission media, making optical fibers the best choice for guided communications. Compared to conventional metal (copper) wire, optical fibers offer:
• Potentially low cost - presently (August 2003) the cost of fiber is comparable to copper at approximately $0.2 to $0.4 per yard and is expected to drop as it becomes more widely used. Several miles of optical cable can be made cheaper than equivalent lengths of copper wire.
• Thinner and lighter weight - optical fibers have very small diameters, often no greater than that of a human hair. An optical cable weighs less than a comparable copper wire cable, and fiber-optic cables take up less space in the ground.
• Higher carrying capacity (bandwidth) - because optical fibers are thinner than copper wires, more fibers can be bundled into a given-diameter cable than copper wires. This allows more data lines to go over the same cable.
• Electrical isolation - the glass and polymer plastics which form optical fibers are electrical insulators and therefore do not exhibit earth-loop and interfacing problems. Furthermore, this property suits optical fibers for use in electrically hazardous areas, as the fibers create no arcing or spark hazard at abrasions or short circuits.
• Immunity to interference and crosstalk - optical fibers form a dielectric waveguide and are therefore free from electromagnetic interference.
• Less signal degradation - fibers have been fabricated with losses as low as 0.2 dB/km, and this feature has become a major advantage of optical fiber communications. The loss of signal in optical fiber is less than in copper wire. Moreover, it facilitates communication links with extremely wide repeater spacing, reducing both cost and complexity.
• Light signals - unlike electrical signals in copper wires, light signals from one fiber do not interfere with those of other fibers in the same cable.
• Low power - because signals in optical fibers degrade less, lower-power transmitters can be used instead of the high-voltage electrical transmitters needed for copper wires.
• Digital signals - optical fibers are ideally suited for carrying digital information, which is especially useful in computer networks.


• Non-flammable - because no electricity is passed through optical fibers, there is no fire hazard.
• Flexible - because fiber optics are so flexible and can transmit and receive light, they are used in many flexible digital cameras for the following purposes:
– Medical imaging - in bronchoscopes, endoscopes, laparoscopes
– Mechanical imaging - inspecting mechanical welds in pipes and engines (in airplanes, rockets, space shuttles, cars)
– Plumbing - to inspect sewer lines
• Security - since light does not radiate significantly from the fiber, the data is immune to external noise. Moreover, it is nearly impossible to tap the cable without specialized skills.

Because of these advantages, fiber optics is replacing copper in many industries, most notably telecommunications and computer networks.

Cons
Nothing is perfect, and neither are optical fibers. Despite their tremendous advantages, they do have some disadvantages:
• Interfacing cost
• Strength
• Remote powering of devices

Chapter Summary
• Fiber optics is the branch of optical technology concerned with the transmission of radiant power (light energy) through fibers.
• A noteworthy advancement in optical communication was the development of the optical telegraph in the late 18th century by the French engineer Claude Chappe.
• In 1870, John Tyndall carried out an experiment based on the principle of Total Internal Reflection (TIR).
• Light reflected from a flat, smooth surface such as a mirror gives reflections described by an incident ray and a reflected ray; the angle of reflection is determined by the angle of incidence.
• Reflections in many directions are called diffuse reflection and are the result of light being reflected by an irregular surface.
• Refraction is caused by a change in the speed of light as it passes through different mediums such as air, water, glass, and other transparent substances.
• The refractive index can be stated as the ratio of the velocity of light in a vacuum to the velocity of light in the medium.
• Ray theory is used to approximate the light acceptance and guiding properties of optical fibers. Meridional rays pass through the axis of the optical fiber.
• Skew rays are rays that travel through an optical fiber without passing through its axis.
• When light is incident upon a medium of lesser index of refraction, the ray is bent away from the normal, so the exit angle is greater than the incident angle. Such reflection is commonly called "internal reflection".
• Snell's Law: the angle of incidence θ1 and the angle of refraction θ2 are related to each other by Snell's law, which states:

n1 sin θ1 = n2 sin θ2

• Critical Angle: the critical angle can be calculated from Snell's law by setting the refraction angle equal to 90°:

sin θc = n2/n1
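The Snell's-law and critical-angle relations above can be checked numerically. The following is a small sketch; the refractive index values are illustrative, not taken from the text:

```python
import math

def refraction_angle(n1, n2, theta1_deg):
    """Snell's law n1*sin(theta1) = n2*sin(theta2); returns theta2 in degrees,
    or None when no refracted ray exists (total internal reflection)."""
    s = n1 * math.sin(math.radians(theta1_deg)) / n2
    if s > 1.0:
        return None
    return math.degrees(math.asin(s))

def critical_angle(n1, n2):
    """Critical angle (degrees) for light travelling from n1 into n2, n1 > n2."""
    return math.degrees(math.asin(n2 / n1))

# Example: assumed core/cladding indices typical of a silica fiber
n1, n2 = 1.48, 1.46
print(round(critical_angle(n1, n2), 1))   # ≈ 80.6 degrees
print(refraction_angle(n1, n2, 85))       # None: beyond the critical angle, TIR occurs
```

Any ray striking the core-cladding boundary at more than the critical angle is totally internally reflected, which is exactly the guiding mechanism the summary describes.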


• Relative Refractive Index Difference: defines the difference between the core and cladding refractive indices:

∆ = (n1² − n2²) / (2 n1²) ≈ (n1 − n2) / n1

• Acceptance Angle: the acceptance angle θa is the largest angle of incidence at which an entering ray can be coupled into a guided ray within the fiber.
• Numerical Aperture: the numerical aperture (NA) is a measurement of the ability of an optical fiber to capture light:

NA ≈ n1 √(2∆)
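The NA formula above can be compared against the exact expression NA = √(n1² − n2²). A short sketch, again with assumed index values:

```python
import math

def numerical_aperture(n1, n2):
    """Return (exact NA, approximate NA, delta), where the approximation is
    the text's NA ≈ n1*sqrt(2*delta) with delta ≈ (n1 - n2)/n1."""
    exact = math.sqrt(n1**2 - n2**2)
    delta = (n1 - n2) / n1            # relative refractive index difference
    approx = n1 * math.sqrt(2 * delta)
    return exact, approx, delta

# Assumed core/cladding indices typical of a silica fiber
exact, approx, delta = numerical_aperture(1.48, 1.46)
print(round(exact, 3), round(approx, 3))   # 0.242 0.243 -- the two agree closely
theta_a = math.degrees(math.asin(exact))   # acceptance angle in air, ≈ 14 degrees
```

Because ∆ is small for practical fibers, the approximation tracks the exact value to within a fraction of a percent.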

• The mode theory uses electromagnetic wave behavior to describe the propagation of light along a fiber.
• Core: the core is a cylindrical rod of dielectric material. Dielectric material conducts no electricity.
• Cladding: outer optical material surrounding the core that reflects the light back into the core.
• Buffer coating: for extra protection, the cladding is enclosed in an additional layer called the coating or buffer.
• Single-mode fibers allow only one electromagnetic wave or ray to propagate within them.
• Multi-mode fibers allow a set of electromagnetic waves to propagate.
• Multimode step-index fibers allow the propagation of a finite number of guided modes along the channel.

Multiple Choice Questions

1. The copper wire has a frequency range of:
(a) 1 MHz approximately.
(b) 300 MHz - 3 GHz approximately.
(c) 0.5 MHz approximately.
(d) None of them.

2. The bandwidth of an optical fibre system is:
(a) Up to 30 GHz.
(b) Up to 40 GHz.
(c) Up to 20 GHz.
(d) None of them.

3. The optical telegraph was devised by:
(a) Claude Chappe.
(b) John Tyndall.
(c) Jean Daniel Colladon.
(d) Alexander Graham Bell.

4. Visible light occupies the range from:
(a) 0.6 µm to 0.8 µm.
(b) 0.3 µm to 0.6 µm.
(c) 0.4 µm to 0.7 µm.
(d) 0.2 µm to 0.4 µm.

5. Reflection in many directions is called:
(a) Diffuse reflection.
(b) Scatter reflection.
(c) Refraction.
(d) Incident ray.

6. Refractive index n is the ratio given by:
(a) speed of light in medium / speed of light in vacuum.
(b) speed of light in air / speed of light in vacuum.
(c) speed of light in vacuum / speed of light in medium.
(d) None of them.

7. Refractive index is always:
(a) Greater than 1.
(b) Less than 1.
(c) Equal to 1.
(d) Exactly 1.

8. Which of the following is true about fibre optic cable?
(a) It is a long thin strand of pure glass.
(b) It has the diameter of a human hair.
(c) It deals with transmission of light.
(d) All of them.

9. A homogeneous medium is one:
(a) In which the refractive index is constant.


(b) In which the refractive index is greater than 1.
(c) In which the refractive index is less than 1.
(d) In which the refractive index is equal to 1.

10. Which of the following parts of a fibre optic is made of dielectric material?
(a) Cladding.
(b) Core.
(c) Buffer coating.
(d) None of them.

11. Which of the following is NOT true about cladding?
(a) Adds mechanical strength.
(b) Reduces loss of light from the core into the surrounding air.
(c) Made up of dielectric material.
(d) Protects the fibre from absorbing surface contaminants.

12. An abrupt index change from core to cladding is:
(a) Step index.
(b) Graded index.
(c) Refractive index.
(d) None of them.

13. First-generation optical communication sources were designed to operate between:
(a) 0.8 µm - 0.9 µm.
(b) 0.7 µm - 0.8 µm.
(c) 0.6 µm - 0.4 µm.
(d) None of them.

Review Questions

1. Define fibre optics.
2. Briefly describe the main events in the history of the advancement of fibre optics.
3. Explain and illustrate Tyndall's experiment.
4. Define reflection and refraction.
5. What are meridional rays and skew rays?
6. List the advantages of ray theory.
7. Explain 'Total Internal Reflection'.
8. Explain the following:
a) Snell's law.
b) Critical angle.
c) Acceptance angle.
d) Numerical aperture.
9. What are guided modes and radiation modes?
10. What are the main parts of a fibre optic cable? Explain the major characteristics of each with the help of a suitable diagram.
11. List and explain briefly the main steps involved in the preparation of fibre optic cable.
12. List the criteria that optical sources designed for communication purposes should satisfy.
13. Briefly discuss, with the help of a suitable diagram, the LED operation.

Review Questions

1. Research and gather all the necessary information about LASERs.
2. List the disadvantages of fibre optic technology other than those mentioned in the text.
3. List a few practical examples of LEDs.
4. List the disadvantages of LEDs.



11. Introduction to RADARs

Learning Focus

RADARs are used for surveillance and ranging purposes. They have evolved into very sophisticated systems capable of measuring the distance and speed of objects, as well as tracking and identifying them. This chapter gives a basic idea of what RADARs are and how they work. The chapter serves the following objectives:

1. To familiarize the reader with the basics of RADAR.
2. To outline the basic RADAR working principle.
3. To familiarize the reader with Pulse RADAR and Continuous Wave RADAR.
4. To acquaint the reader with the major applications of RADAR.
5. To enable the reader to perform simple calculations of range and velocity.

11.1 Introduction to RADARs

One of the major applications of communication systems is surveillance and ranging. This is achieved by a system or device called RADAR, short for RAdio Detection And Ranging. The term RADAR was first coined by U.S. Navy Lieutenant Commander Samuel M. Tucker and F. R. Furth.

The majority of RADARs operate in the ultra-high frequency (UHF) and microwave ranges. A RADAR sends electromagnetic energy into space in the direction of the intended target and monitors the reflected signal (often called the ECHO) from objects. RADARs are therefore used to locate the position of airplanes, ships, and other objects that are invisible to the naked eye; in other words, they provide electronic ranging and surveillance services.

11.2 Operation of RADAR

Fig. 11.1 shows the basic block diagram of a RADAR, consisting of a transmitter, receiver, antenna, and duplexer. The duplexer is a switching device used to connect the antenna to the transmitter while


sending the electromagnetic energy, and then to the receiver when the RADAR is expecting the echo signal to be received. Obviously, this is only required when one antenna is used for both transmitting and receiving.

Figure 11.1: Block diagram of RADAR system

The transmitter module in the RADAR system sends out electromagnetic energy generated by an oscillator. If that energy is intercepted by the target (for example, an airplane or ship), a part of it is reflected back. This reflected signal is collected by the antenna, fed to the receiver module of the RADAR system, and processed further. The reception of the ECHO signal signifies the detection of an object, but it can also be exploited, using basic physical principles, to extract the location, speed, altitude, and shape of the target, among many other things.

Range Measurement
The range of the object is measured by noting the time taken for the radar signal to radiate from the antenna, be intercepted by the object, and return as an echo. Electromagnetic energy radiates in free space with the velocity c = 3×10⁸ m/s. Let us say it takes τ seconds for a radar signal to complete one round trip, i.e., from generation to the reception of the echo signal. Then the range R of the object from the radar system is:

R = cτ/2    (11.1)

The factor of 2 in the denominator is due to the fact that the signal covers the same distance twice, i.e., from the radar to the object and then from the object back to the radar.
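Eq. (11.1) is easy to apply directly. A minimal sketch (the echo delay used is an illustrative value):

```python
C = 3e8  # speed of light in free space, m/s

def radar_range(round_trip_s):
    """Eq. (11.1): R = c*tau/2. The factor of 2 accounts for the two-way path."""
    return C * round_trip_s / 2

# An echo received 200 microseconds after transmission:
print(radar_range(200e-6))  # 30000.0 m, i.e. a target 30 km away
```

Note that halving the delay rather than the range gives the same answer; the division by 2 simply converts the round-trip time into a one-way time.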

11.2.1 Speed Measurement

The speed of an object is measured by RADAR systems using the Doppler effect. The Doppler effect is a well-known frequency principle observed in nature, whereby the observed frequency increases when the object approaches the observer and decreases when the object moves away. This is commonly observed when we hear the sound of a train: as the train approaches us, the pitch of its sound increases, and as it moves away, the pitch decreases.

11.3 RADAR Frequency Bands

As mentioned earlier, RADAR typically makes use of the radio and microwave portions of the frequency band for its operation. However, these are not the only frequencies used. There are also some


devices which use sound frequencies, known as SONARs (SOund Navigation And Ranging), for ranging and detection purposes. Others utilise light for the same purpose and are called LIDARs (LIght Detection And Ranging).

Following is the list of different frequency bands used for ranging and detection.

ITU Bands
The frequency bands designated by the International Telecommunications Union (ITU) for radar systems are shown in Table 11.1.

BAND | FREQUENCY RANGE
VHF  | 138 - 144 MHz, 216 - 225 MHz
UHF  | 420 - 450 MHz, 890 - 942 MHz
L    | 1.215 - 1.400 GHz
S    | 2.3 - 2.5 GHz, 2.7 - 3.7 GHz
C    | 5.250 - 5.925 GHz
X    | 8.500 - 10.680 GHz
Ku   | 13.4 - 14.0 GHz, 15.7 - 17.7 GHz
K    | 24.05 - 24.25 GHz
Ka   | 33.4 - 36.0 GHz

Table 11.1: Nomenclature of standard radar frequency ranges

Radio Bands
Radio band designations are summarized in Table 11.2.

11.4 Pulse RADAR

The signal transmitted by a radar is either a continuous wave or a series of short-duration, roughly rectangular pulses known as a pulse train. In a pulse radar, such a short pulse is transmitted and a sufficient wait period is allowed for the echo signal to be received before the next pulse is transmitted. Fig. 11.2 shows the block diagram of a pulse radar.

11.4.1 Pulse Repetition Frequency

The rate at which these pulses are transmitted depends on the longest range at which targets are expected and is called the pulse repetition frequency (PRF), as shown in Fig. 11.3. This rate should neither be so high that we miss the echo nor so low that we miss targets. If the pulse repetition frequency is too high, some echo pulses might arrive after the transmission of the next pulse (or several subsequent pulses), yielding false ranges. Echoes which arrive after the transmission of the second (next) pulse are called second-time-around echoes, those which arrive after the transmission of the third pulse are called third-time-around echoes, and so on.

The time between the beginning of one pulse and the start of the next is called the Pulse Repetition Time (PRT) and is equal to the reciprocal of the PRF:

PRT = 1/PRF


BAND | Nomenclature             | Frequency      | Wavelength
ELF  | Extremely Low Frequency  | 3 - 30 Hz      | 100,000 - 10,000 km
SLF  | Super Low Frequency      | 30 - 300 Hz    | 10,000 - 1,000 km
ULF  | Ultra Low Frequency      | 300 - 3000 Hz  | 1,000 - 100 km
VLF  | Very Low Frequency       | 3 - 30 kHz     | 100 - 10 km
LF   | Low Frequency            | 30 - 300 kHz   | 10 - 1 km
MF   | Medium Frequency         | 300 - 3000 kHz | 1 km - 100 m
HF   | High Frequency           | 3 - 30 MHz     | 100 - 10 m
VHF  | Very High Frequency      | 30 - 300 MHz   | 10 - 1 m
UHF  | Ultra High Frequency     | 300 - 3000 MHz | 1 m - 10 cm
SHF  | Super High Frequency     | 3 - 30 GHz     | 10 - 1 cm
EHF  | Extremely High Frequency | 30 - 300 GHz   | 1 cm - 1 mm

Table 11.2: Radio frequency band designations
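Because each band in Table 11.2 starts at ten times the lower edge of the previous one, a frequency can be classified mechanically. A small sketch of such a lookup (the function and its name are illustrative, not from the text):

```python
# Radio bands of Table 11.2 as (designation, lower edge in Hz) pairs
BANDS = [
    ("ELF", 3), ("SLF", 30), ("ULF", 300), ("VLF", 3e3),
    ("LF", 30e3), ("MF", 300e3), ("HF", 3e6), ("VHF", 30e6),
    ("UHF", 300e6), ("SHF", 3e9), ("EHF", 30e9),
]

def radio_band(freq_hz):
    """Return the Table 11.2 band designation containing freq_hz."""
    name = None
    for band, lower in BANDS:
        if freq_hz >= lower:
            name = band     # remember the last band whose lower edge we passed
        else:
            break
    return name

print(radio_band(10e9))   # SHF -- a typical X-band radar frequency
print(radio_band(100e6))  # VHF -- the FM broadcast range
```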

11.4.2 Maximum Unambiguous Range

The range beyond which targets appear as second-time-around echoes is called the maximum unambiguous range and is given by (11.2):

Runamb = c/(2 fr) = c τr/2    (11.2)

where fr is the pulse repetition frequency in Hz and τr = 1/fr is the pulse repetition time.
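Eq. (11.2) can be sketched directly in code; the 1 kHz PRF below is an illustrative value:

```python
C = 3e8  # speed of light, m/s

def max_unambiguous_range(prf_hz):
    """Eq. (11.2): R_unamb = c / (2 * f_r). Echoes from beyond this range
    arrive after the next pulse is transmitted and appear as
    second-time-around echoes, producing false range readings."""
    return C / (2 * prf_hz)

prf = 1000.0                       # 1 kHz pulse repetition frequency
prt = 1 / prf                      # pulse repetition time: 1 ms
print(max_unambiguous_range(prf))  # 150000.0 m, i.e. 150 km
```

This also illustrates the trade-off in the text: raising the PRF shortens the unambiguous range, while lowering it risks missing targets between pulses.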

11.5 Continuous Wave RADAR

As the name implies, CW radars transmit a continuous signal instead of a pulse train; they therefore require two separate antennas, one for the transmitter and one for the receiver. When radio-frequency energy transmitted from a fixed point continuously strikes an object that is moving either toward or away from the source of the energy, the frequency of the reflected energy is changed. This shift in frequency is known as the DOPPLER EFFECT. The difference in frequency between the transmitted and reflected energy indicates both the presence and the speed of a moving target. Fig. 11.4 shows the block diagram of a CW radar.

Speed Measurement


Figure 11.2: Block diagram of Pulse RADAR system

Figure 11.3: Pulse train of a Pulse RADAR system

The target velocity can be calculated from the Doppler shift using Eq. (11.3):

Velocity (mph) = Doppler Shift (Hz) / (3.9 × RF Frequency (Hz))        (11.3)

The continuous-wave, or Doppler, system is used in several ways. In one radar application, the radar set differentiates between the transmitted and reflected wave to determine the speed of the moving object. The Doppler method is the best means of detecting fast-moving objects that do not require range resolution. If an object is moving, its velocity relative to the radar can be detected by comparing the transmitter frequency with the echo frequency (which differs because of the Doppler shift). The DIFFERENCE or BEAT FREQUENCY, sometimes called the DOPPLER FREQUENCY fd, is related to object velocity.
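The relation between Doppler frequency and velocity can be sketched numerically. The snippet below uses the standard two-way Doppler relation fd = 2·v·f/c, of which Eq. (11.3) is a unit-converted approximation; the function names are illustrative, not from the text:

```python
C = 3e8  # speed of light, m/s

def doppler_shift(velocity_ms: float, rf_hz: float) -> float:
    """Two-way Doppler shift (Hz) for a target closing at velocity_ms (m/s)."""
    return 2.0 * velocity_ms * rf_hz / C

def velocity_from_shift(fd_hz: float, rf_hz: float) -> float:
    """Invert the Doppler relation to recover target velocity in m/s."""
    return fd_hz * C / (2.0 * rf_hz)

# A target closing at 30 m/s seen by a 10 GHz radar produces a 2 kHz beat frequency.
fd = doppler_shift(30.0, 10e9)
print(fd)                             # -> 2000.0
print(velocity_from_shift(fd, 10e9))  # -> 30.0
```

Because the beat frequency scales with both target speed and carrier frequency, higher-frequency radars resolve slow targets more easily.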

11.6 Applications of RADAR

RADARs were initially thought of as useful only for governmental and military applications. In 1938, the first commercial unit, built at Bell Labs, was fitted to some United Air Lines aircraft. This is no longer the case: many applications, such as autonomous cars and GPS, make extensive use of RADARs or their principles. RADARs are used in submarines, airplanes, and autonomous and military vehicles as devices that warn of aircraft or other obstacles in or approaching their path. RADARs are also used in meteorological applications, where they display weather information and give accurate altitude readings. Such radar systems are immune to channel effects such as poor lighting, fog, and rain. Under all these adverse conditions, where human observing capabilities are severely hampered, RADARs perform accurately and with ease.

Normal radar functions:

1. Range (from pulse delay)


Figure 11.4: Block diagram of CW RADAR system

2. Angular direction (from antenna pointing)

Signature analysis and inverse scattering:

3. Velocity (from Doppler frequency shift)
4. Target shape and components (return as a function of direction)
5. Target size (from magnitude of return)
6. Moving parts (modulation of the return)
7. Material composition

The complexity (cost, size) of the radar increases with the extent of the functions that the radar performs.



Glossary

1. A-Law: A-law encoding is the method of encoding sampled audio waveforms used in the 2.048 Mbps, 30-channel PCM primary system known as E-carrier.

2. Aliasing: An effect that occurs when an analog signal is sampled digitally at a sampling frequency less than twice the signal frequency; also called spectral folding.

3. Amplitude Modulation (AM): With amplitude modulation, the carrier’s amplitude is variedin accordance with the modulating signal.

4. Analog Transmission: Continuous-wave transmission expressed by bandwidth or range of frequencies. Broadcast and cable TV, and AM/FM radio, are transmitted on analog channels.

5. Analog signal: A signal that changes gradually or continuously during the cycle. A sine wave is an example of an analog signal; a square wave, by contrast, is an example of a digital signal.

6. Analog/Digital: Two opposite kinds of communications signals. Analog is the continuously varying electrical signal in the shape of a wave (such as a radio wave), transmitted electronically in a form analogous to the spoken word. Digital is based on a binary code in which the picture or audio information is sent as a series of "on" and "off" signals; it is more precise and less subject to interference than analog.

7. Angle Modulation: It is modulation in which the angle of a sine-wave carrier is varied by amodulating wave.

8. Asynchronous: A form of concurrent input and output communication transmission with no timing relationship between the two signals. Slower-speed asynchronous transmission requires start and stop bits to avoid a dependency on timing clocks (10 bits to send one 8-bit byte).

9. Band Pass Filter: An active or passive circuit, which allows signals within the desiredfrequency band to pass through but impedes signals outside this pass band from gettingthrough.

10. Bandwidth: In telecommunications, bandwidth is some range of frequencies (that form aband) through which the data can be transferred.

11. Baseband: Any transmission path that transmits only one signal at a time. For example, the connecting cables between networked computers are baseband because they carry only one digital signal at a time. This differs from a broadband cable, such as a TV cable, that transmits many separate signals or channels at the same time.

12. Baud: Older term now replaced by bps (bits per second). In an average data stream, one baud is roughly equivalent to one bit per second on a digital transmission circuit.

13. Bit: The smallest amount of information that can be transmitted. In binary digital transmis-sion, a bit has one of two values: 0 or 1 for "on" or "off". A combination of bits can indicatean alphabetic character, a numeric digit, or perform a signaling, switching or other function.

14. Bit Error Rate: The fraction of a sequence of message bits that are in error. A bit error rate of 10^-6 means that there is an average of one error per million bits.

15. Bps: Bits or bytes per second, depending on lowercase b or uppercase B respectively.
16. Carrier: A high-frequency signal modulated by an information signal.
17. Carson's rule: A rule of thumb for approximating the bandwidth of an FM wave produced by a single-tone modulating function. The transmission bandwidth is BT = 2(fd + fm).

18. Channel: A telecommunications path (pipe) of a specific capacity (speed) between twolocations in a network.

19. Channel Bandwidth: The transmission medium must have a bandwidth wide enough topass the spectral content of the encoded signal.

20. Circuit: A switched or dedicated communications path with a specified bandwidth (trans-mission speed/capacity).

21. Coax, Coaxial Cable: The copper-wire cable that carries audio and video signals and radio frequency (RF) energy, consisting of an outer conductor concentric with an inner conductor, separated from each other by insulating material. It can carry a much higher bandwidth than a wire pair.

22. Companding: Process of compressing a signal prior to transmission and expanding thesignal at the receiver.

23. Data Rate: Analog transmission media are specified in bandwidth (usually in hertz) and signal-to-noise ratio (usually in dB). Since the principles behind digital transmission are so different, digital media are specified in different parameters: rather than how much analog information is passed, a digital user is concerned with how many bits per second can be sent down the channel.

24. Delta Modulation: A technique used to convert an analog signal into bits. It differs from PCM in that a single-digit binary code is used. Delta modulation (DM) is also known as linear delta modulation.

25. Demodulation: The reverse process of modulation, in which the original signal is recovered from the modulated signal.

26. Detector: A means or circuit designed to convert a modulated signal back into the original waveform, which contains the desired intelligence.

27. Digital Channel Banks or D-Banks: A device that performs the analog-to-digital and digital-to-analog conversion process; commonly used with analog switches.

28. Digital Pulse Modulation: The process of analog-to-digital conversion is sometimes referred to as digital pulse modulation. The terminology "pulse modulation" is justified because the first operation in converting an analog signal into digital form involves representing the signal by a sequence of uniformly spaced pulses, the amplitude of which is modulated by the signal.

29. Digital signal: A signal that changes discontinuously or in abrupt steps during the cycle. A square wave is an example of a digital signal. On the other hand, a sine wave is an example of an analog signal, one that changes continuously or gradually during the cycle. Digital signals are less sensitive to noise and interference and permit higher transmission speeds. When analog signals are received and amplified, so too is the noise; a digital signal, by contrast, is detected and regenerated rather than amplified.
30. Digital vs. Analog: An analog electrical signal (sound or light, etc.) conveys information through a continuous change in character; e.g., an AM radio station changes the amplitude of a carrier signal to varying degrees depending on the amplitude of the music it is carrying. A digital signal is always in one of two states (on or off), but varies at a rate fast enough that information encoded into numbers (quantized) can be transferred. Another way to look at the difference is that an analog signal has an infinite number of degrees of change that convey information; a digital signal has only two. One of the largest advantages of digital transmission is that, as long as a receiver can distinguish between the two states in the signal, noise will have no effect on it.

31. Frequency: The number of cycles per second of an electromagnetic transmission, usually described in hertz. Generally, high-frequency transmissions can carry more information at greater speeds than low-frequency transmissions.

32. Frequency conversion: A technique to convert the restricted single-frequency SSB signal to the desired operating frequency.

33. Frequency Domain: The frequency domain shows a waveform's frequency-versus-amplitude display, similar to that of a spectrum analyzer.

34. Frequency Modulation (FM): With frequency modulation, the carrier’s frequency is variedin accordance with the modulating signal.

35. Gbs: Giga-bits per second - Defines a rate multiple at which data/information may betransferred across a communications line. 1 Gbs equals 1,000,000,000 bits per second, orapproximately 125 million characters per second (assuming 8 bits per character).

36. GHz: Gigahertz. Unit of frequency equal to one billion hertz, or one thousand megahertz, or cycles per second.

37. Granular Noise: The difference between the original signal and the reproduced signal iscalled “quantization distortion” or granular noise and is typical of the DM process.

38. Hertz: A unit of frequency equal to one cycle per second (cps). One kilohertz equals 1000cps; one megahertz equals 1 million cps; one gigahertz equals 1 billion cps.

39. HF or hf: 3 MHz to 30 MHz; comprises amateur radio and short-wave broadcasters among a host of others. Largely being superseded by satellite transmissions.

40. Hub: A point or piece of equipment where a branch of a multipoint network is connected. Ina telegraph network, signals appear as dc pulses at the hub. A network may have a number ofgeographically distributed hubs or bridging points.

41. Information Theory: The scientific study of information and of the communication systems designed to handle it.

42. Instantaneous Sampling: Ideal sampling is called instantaneous sampling.
43. K: One thousand; 1,024 in binary (2 to the 10th power).
44. Kbs: Kilobits per second. A rate at which data/information may be transferred across a communications line. 1 Kbs equals 1,000 bits per second, or approximately 125 characters per second (assuming 8 bits per character).

45. LF: Abbrev. for low frequency.
46. Limiter: Any device that automatically sets a boundary value or values upon a signal. The term is usually applied to a device which, for inputs below a specified instantaneous value, gives an output proportional to the input, but for inputs above that value gives a constant peak output.

47. Lower Sideband: A frequency equal to the difference between the carrier and modulating-signal frequencies.


48. Mbs: Megabit per second. A rate at which data/information may be transferred across acommunications line; 1 Mbs equals 1,000,000 bits per second, or approximately 125,000characters per second (assuming 8 bits per character).

49. Modem: Modulator/Demodulator. An electronic device used to allow a computer to sendand receive data, typically over a phone line.

50. Modulating factor: See Percent Modulation.
51. Modulating signal: A signal that varies some characteristic of a sine wave.
52. Modulation: The process of varying some characteristic of a high-frequency sine wave in accordance with an information signal.
53. Modulation Index, m: The ratio of the amplitude of the modulating signal to the amplitude of the carrier signal.
54. Modulator: A circuit that causes modulation, either in its internal circuitry or in another associated circuit.
55. Multiplexer: Hardware that brings together several low-speed communications lines, transforms them into one high-speed channel, and reverses the operation at the other end. For example, an M1-3 MUX combines 28 DS-1s into a DS-3.

56. Multiplexing: The process of combining a number of individual channels into a common frequency band or into a common bit stream for transmission. The converse equipment or process, for separating a multiplexed stream into individual channels, is called a demultiplexer.

57. MUX: Multiplex or Multiplexer. A device that performs multiplexing.
58. Networking: The tying together of multiple sites for the reception and possible transmission of information. Networks can be composed of various transmission media, including copper wire, terrestrial microwave, or coaxial cable.

59. Noise: Any unwanted form of energy tending to interfere with the reception of wanted signals.

60. Nyquist rate: The minimum frequency at which the modulating waveform can be sampled.
61. Oscillator: An electronic circuit in which some of the amplified output is fed back to the input to maintain a flywheel effect, or oscillations.
62. Percent Modulation: Simply the degree of modulation, normally expressed as a percentage from 0 to 100. It is also known as the modulating factor, which varies from 0 to 1.

63. PCM/ADPCM: Pulse Code Modulation. This is the technique used by CD players and otherdevices to "digitize" audio. The codec converts PCM to Adaptive Differential PCM in orderto conserve media bandwidth.

64. Phase Modulation (PM): With phase modulation, the carrier’s phase is varied in accordancewith the modulating signal.

65. Pulse: One of a series of transient disturbances recurring at regular intervals. A pulse consists of a voltage or current that increases from zero (or a constant value) to a maximum and then decreases to zero (or a constant value), both in a comparatively short time.

66. Pulse Modulation: A form of modulation in which pulses are used to modulate the carrier wave or, more commonly, in which a pulse train is used as the carrier.

67. Pulse Amplitude Modulation (PAM): In PAM, the amplitude of the pulse varies in propor-tion to the amplitude of the signal.

68. Pulse-code modulation (PCM): In PCM the message signal is subjected to a great number of operations. The essential operations of a PCM transmitter are sampling, quantizing, and encoding.

69. Pulse Time Modulation (PTM): In pulse time modulation, the samples are used to vary the time of occurrence of some parameter of the pulses; that parameter may be position, duration, or frequency.


70. Pulse Duration Modulation (PDM): This type of PTM is also called Pulse Width or Pulse Length Modulation; however, pulse duration modulation (PDM) is the preferred term. There are three different classes of PDM: symmetrical PDM, leading-edge PDM, and trailing-edge PDM.

71. Pulse Position Modulation (PPM): In this type, the position of the pulse, relative to a reference pulse, is varied in accordance with the modulating signal.

72. Quantizing: The use of a finite set of amplitude levels and the selection of the level nearest to a particular sample value of the message signal as its representation.

73. Real Time: Sending and receiving of messages occurs simultaneously without delay, "live".A transaction which occurs without significant delay from start to finish, i.e., taking a classfrom a "live" instructor and getting immediate feedback as opposed to watching a videotapeof the class sometime after the actual event.

74. Receiver: A collection of circuits designed to receive signals over one or more bands ofinterest and covering one or more modes of operation.

75. RF: Abbrev. for Radio Frequency.
76. Sample: A measure of the modulating signal at a specific time.
77. Sampling: A technique in which only some portions of an electrical signal are measured and used to produce a set of discrete values that is representative of the information contained in the whole.

78. Sampling circuit: A circuit used to produce a set of discrete values representative of the instantaneous values of the input signal; also called a sampler.

79. Sampling Theorem: It states that the sampling frequency in any pulse modulation system must equal or exceed twice the highest signal frequency in order to convey all the information of the original signal.

80. Signal-to-Noise Ratio (SNR): The signal-to-noise ratio (SNR) is defined as the ratio of thesignal power to the noise power.

81. Synchronous: A form of communication transmission with a direct timing relationshipbetween input and output signals. The transmitter and receiver are in sync and signals aresent at a fixed rate. Information is sent in multibyte packets. It is faster than asynchronouscharacter transmission, since start and stop bits are not required. It is used for mainframe-to-mainframe and faster workstation transmission.

82. TDM/TDMA: Time Division Multiplex/Multiple Access Method for combining multipledata circuits into one circuit (or vice versa) by assigning each circuit a fixed unit of time forits data transmission.

83. Throughput: The effective rate at which the user's data bits are delivered.
84. Time Constant: The time required for a unidirectional electrical quantity to decrease to 1/e (approximately 0.368) of its initial value, or to increase to 1 - 1/e (approximately 0.632) of its final value, in response to a change in the electrical conditions.

85. Time Domain: The time domain shows a waveform's time-versus-amplitude display, just as an oscilloscope does.
86. Transceiver: A unit that contains both the transmitter and receiver. It has the advantage that common electronic circuits are shared rather than duplicated, as they would be if the two units were operated separately.

87. Transmission System: The foundation of communication capacity between two points. It isgoverned by the equipment type generating the (optical) signals. The capacity of a single fibercan be increased by installing higher-speed (higher-cost) transmission systems (end-to-end).

88. Twisted Pair: A cable composed of two small insulated conductors twisted together without a common cover. The two conductors are usually substantially insulated, so that the combination is a special case of the cord. Telephone signals are the most common use of twisted-pair technology.
89. Upper Sideband: A frequency equal to the sum of the modulating and carrier signal frequencies.
90. Watt: The fundamental unit of power consumed.
91. Wave: A periodic disturbance, either continuous or transient, that is propagated through a medium or through space, and in which the displacement from a mean value is a function of time, position, or both.

92. Waveform: The instantaneous values of a periodically varying quantity plotted against time give a graphical representation of the wave, known as the waveform.

93. Wavelength: The distance between two displacements of the same phase along the direction of propagation.

94. Word: The series of pulses, which represents a single sample from a single channel, is calleda word.


Index

Symbols

Quadrature Amplitude Modulation (QAM) . . . 50

A

AM . . . 40
AM and FM Comparison . . . 72
AM Equation . . . 43
Amplitude Shift Keying . . . 99
Analog-to-Digital (Pulse) Modulation . . . 89
Analog-to-Digital Conversion . . . 85
Applications . . . 115
Asymmetric Data Subscriber Lines (ADSL) . . . 141
Asynchronous TDM . . . 117
Avalanche Photodiode . . . 189

B

bipolar signaling . . . . . . . . . . . . . . . . . . . . . . . . 95

C

Channel Capacity . . . 155
Cladding . . . 27
Classification of Spread Spectrum . . . 128
Co-axial Cable . . . 26
Code Division Multiple Access . . . 118
Code Spreading and De-Spreading . . . 120
Coherence Bandwidth . . . 138
Communication Terminology . . . 19
Companding . . . 94
Comparison Between Line Coding Schemes . . . 98
Core . . . 26
Critical Angle . . . 182
cyclic prefix (CP) . . . 141

D

Data Transmission Mediums . . . 24
Delay Spread . . . 141
Demultiplexer . . . 110
Differential Manchester Coding . . . 97
Digital Communication System . . . 80
Digital Modulation . . . 80
Digital Transmission . . . 80
Direct & Indirect FM Generation . . . 65
Direct FM . . . 65
Direct Sequence Spread Spectrum . . . 128
DM . . . 93
Doppler Effect . . . 198, 200
DPM . . . 93
DSP . . . 84
DSSS Performance in presence of Interference . . . 130

E

Electromagnetic Spectrum . . . 20
Encoding . . . 87


Entropy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 152

F

Features of TDM . . . 118
Fibre Optic Cable . . . 26
FM . . . 60
FM Working . . . 61
Frequency Division Multiplexing (FDM) . . . 112
FSK . . . 99

G

Geostationary Earth Orbit (GEO) . . . 164
Group Velocity . . . 183
Guard band . . . 113
Guided Media . . . 25

H

HDSL . . . 141
History of Communication . . . 18
Huffman Coding . . . 155

I

Indirect FM . . . 65, 66
Information Theory . . . 150
Inter Symbol Interference (ISI) . . . 141
Introduction to Spread Spectrum . . . 126

K

Key Components of DSP . . . 84
Keying Techniques . . . 98

L

LASERs . . . 187
Least Significant Bit (LSB) . . . 88
Level codes . . . 95
Light Emitting Diode . . . 186
Line Codings . . . 94
Line of Sight (LOS) . . . 138
Low Earth Orbit (LEO) . . . 163
lower-sideband . . . 45
LTE . . . 144

M

Manchester Coding . . . 97
Medium Earth Orbit (MEO) . . . 164
Meridional . . . 181
Message . . . 151
mode . . . 183
Mode of Communication . . . 23
Modulation Index . . . 68
Modulation Index and Percentage of Modulation . . . 42
Modulation Index in Angle Modulation . . . 68
Modulation Schemes . . . 39
Most Significant Bit (MSB) . . . 88
Multi-Carrier Modulation . . . 141
Multimode Grade Index . . . 185
Multimode Step Index . . . 185
Multiplexer . . . 110
Multiplexing . . . 110

N

Narrow-band FM modulator . . . 66
Narrowband Communication . . . 138
Need of Modulation . . . 38
Need of Multiplexing . . . 110
Non Return to Zero . . . 95
non return to zero (NRZ) . . . 95
Normalized Frequency . . . 183
NRZ-I . . . 96
Numerical Aperture (NA) . . . 182
Nyquist Criteria . . . 86

O

Optical Detectors . . . 189
Optical Feedback . . . 188
Optical Sources . . . 186
Orbits . . . 163
Orthogonal Frequency Division Multiplexing (OFDM) . . . 137
Orthogonality . . . 143

P

PAM . . . 91
Permeability . . . 178
Phase . . . 67
Phase Deviation . . . 68
Phase Locked Loop (PLL) Demodulator . . . 72
Phase Velocity . . . 183
photovoltaic effect . . . 189
Plane Wave . . . 183
PM . . . 66


PN Photodiode . . . 189
Polar Line Coding . . . 97
Power in FM Signal . . . 65
PSK . . . 100
Pulse Code Modulation PCM . . . 92
Pulse Duration Modulation PDM . . . 92
Pulse Position Modulation PPM . . . 92
Pulse Repetition Time (PRT) . . . 199
Pumping . . . 188

Q

Quantization . . . . . . . . . . . . . . . . . . . . . . . . . . . .86

R

RADARs . . . 197
Ratio Detector . . . 72
Reflection . . . 177
Refraction . . . 177
Relationship Between PM and FM . . . 70
Relative Refractive Index Difference . . . 182
Return to Zero . . . 96
return to zero (RZ) . . . 95
RZ . . . 97

S

Satellite Communication . . . 162, 176
Serial Transmission of Data . . . 88
Shannon-Hartley Capacity Theorem . . . 155
Shannon's capacity . . . 156
sidebands . . . 45
Sidebands in Angle Modulation . . . 69
Signal to Noise Ratio (SNR) . . . 156
Single Mode Step Index . . . 184
Single Sideband Modulation . . . 49
Skew rays . . . 181
Slope Detector . . . 71
Snell's Law . . . 177
Spectral efficiency . . . 137
Synchronous TDM . . . 117

T

Telecommunication . . . 18
Time Division Multiplexing . . . 115
Transition codes . . . 95
Types of Multiplexing . . . 112

U

Unguided Media . . . 27
Unipolar Coding . . . 95
unipolar signaling . . . 95
upper-sideband (USB) . . . 45

V

Vestigial Sideband Modulation (VSB) . . . . . 49

W

Waves in EM Spectrum . . . 22
What is Modulation . . . 38
Why Use Spread Spectrum? . . . 126
Wideband Communication . . . 138
WiMAX . . . 137, 144
WLAN . . . 141
Working of CDMA . . . 118
Working Operation of PM . . . 67


About the Authors

Dr. Fahim Aziz Umrani is currently an Associate Professor in the Department of Telecommunication Engineering, Mehran UET, Pakistan. He received his B.E. in Electronics from the Faculty of Electrical, Electronics & Computer Engineering at Mehran UET, Pakistan, and his PhD from the Faculty of Advanced Technologies at the University of South Wales (previously known as the University of Glamorgan), in 2004 and 2009 respectively. His research interests include optical CDMA, spectral amplitude coding, software defined radio, multiple access techniques, and body area networks.

Engr. Saima Mehar received her bachelor's degree in Computer Systems Engineering in 2016 from the Computer Systems Engineering Department of Mehran University of Engineering and Technology, Jamshoro, Pakistan. She is currently pursuing her master's degree at the National University of Sciences and Technology, Islamabad.

Abdul Waheed Umrani has been associated with academia since 1996. He received his bachelor's degree in Electronic Engineering in 1996 from Mehran University, and his master's and PhD degrees from NTU Singapore. He is currently a Full Professor in the Department of Telecommunication Engineering, Mehran University of Engineering and Technology (MUET), Jamshoro, Pakistan, where he has also been serving as Registrar of the university. His previous assignment was Dean of the Faculty of Engineering, Dawood University of Engineering & Technology (DUET), Karachi, where he also served as the founding Director of CIRCLE (Center for Innovation, Research, Creative Learning and Entrepreneurship), a cross-disciplinary research center. His current research interests include digital communications over fading channels, multiple antenna systems, cognitive radios, channel modeling and spectrum sensing, and the application of ICTs in social sciences.