Telecommunications and Networks


By the same Author:

Network Analysis, Prentice Hall, 1969
Development of Information Systems for Education, Prentice-Hall, 1973
Information Processing Systems for Management, Richard Irwin, 1981 and 1985
Information Resource Management, Richard Irwin, 1984 and 1988
The Computer Challenge: Technology, Applications and Social Implications, Macmillan, 1986
Information Systems for Business, Prentice Hall, 1991 and 1995
Management of Information, Prentice Hall, 1992
Artificial Intelligence and Business Management, Ablex, 1992
Knowledge-Based Information Systems, McGraw-Hill, 1995
Managing Information Technology, Butterworth-Heinemann, 1997


Telecommunications and Networks

K.M. Hussain and D.S. Hussain


Butterworth-Heinemann
Linacre House, Jordan Hill, Oxford OX2 8DP
A division of Reed Educational and Professional Publishing Ltd

A member of the Reed Elsevier plc group

OXFORD BOSTON JOHANNESBURG MELBOURNE NEW DELHI SINGAPORE

First published 1997

© K. M. and Donna S. Hussain 1997

All rights reserved. No part of this publication may be reproduced in any material form (including photocopying or storing in any medium by electronic means and whether or not transiently or incidentally to some other use of this publication) without the written permission of the copyright holders except in accordance with the provisions of the Copyright, Designs and Patents Act 1988 or under the terms of a licence issued by the Copyright Licensing Agency Ltd, 90 Tottenham Court Road, London, England W1P 9HE. Applications for the copyright holder’s written permission to reproduce any part of this publication should be addressed to the publishers.

British Library Cataloguing in Publication Data
A catalogue record for this book is available from the British Library.

ISBN 0 7506 2339X

Library of Congress Cataloguing in Publication Data
A catalogue record for this book is available from the Library of Congress.

Typeset by Laser Words, Madras, India
Printed in Great Britain by Martius of Berwick


CONTENTS

Acknowledgements xii

1 Introduction 1
Changes in technology 1
Management of telecommunications 3
Applications of telecommunications 6
Case 1.1: Network disaster at Kobe, Japan 10
Case 1.2: Networking at the space centre 10
Supplement 1.1: Milestones for network development 10
Bibliography 11

PART 1: TECHNOLOGY 13

2 Teleprocessing and networks 15
The rise of distributed data processing 15
Transmission channels 17
Interconnectivity 20
Networks in the 1990s 21
Issues facing corporate management 22
Summary and conclusions 23
Case 2.1: Delays at the Denver Airport 24
Supplement 2.1: Top telecommunications companies in 1994 24
Bibliography 25

3 Transmission technologies 26
Introduction 26
Wiring 26
Microwave 28
Satellite 28
Wire-less/cordless systems 29
Comparison of transmission systems 32
Summary and conclusions 33
Case 3.1: The Iridium project 35
Case 3.2: CT-2 and PCs in the UK 35
Case 3.3: Transmission at a trillion bits per second 36
Bibliography 36

4 Switching and related technologies 38
Introduction 38
Router, bridge, repeater and gateway 39
Compression 40
Addressing 41
Modems 42
Smart and intelligent 43
Protocols 45
Hubs 45
Summary and conclusions 46
Case 4.1: Networking at the space centre 47
Bibliography 47

5 LANs: local area networks 48
Introduction 48
Interconnectivity 48
Characteristics of networks 50
Networking as a computing paradigm 50
Topologies and switches 52
Access methods 55
Ethernet 55
The token ring 55
Circuit switching 55
FDDI 56
Frame relay 56
SONET 56
Wire-less networks 56
Summary and conclusions 57
Case 5.1: A network in the UK 57
Case 5.2: The Stentor network in Canada 57
Case 5.3: A network planned for China 58
Bibliography 58

6 MAN/WAN 59
Introduction 59
MAN and WAN 59
Planning for a WAN 60
Performance of a MAN/WAN 61
Bandwidth management 61
Switching management 62
The ATM 62
Summary and conclusions 65
Case 6.1: ATM at Sandia 66
Case 6.2: Navigating LANs/WANs in the UK 67
Supplement 6.1: WAN technologies 67
Supplement 6.2: Survey on WANs 67
Supplement 6.3: Projected pricing of ATM 68
Bibliography 68

7 ISDN 69
Introduction 69
The computing environment 69
The resource environment 72
What is ISDN? 72
Implementation of ISDN 73
Summary and conclusions 75
Case 7.1: ISDN at West Virginia University (WVU) 76
Case 7.2: ISDN in France 76
Case 7.3: ISDN for competitive bridge across the Atlantic 77
Bibliography 77


8 Network systems architecture 78
Introduction 78
Systems network architecture (SNA) 78
The OSI model 80
The APPN 82
TCP/IP 83
Multiple protocols 84
Summary and conclusions 85
Case 8.1: APPN in HFC Bank 87
Case 8.2: Hidden costs of APPN 87
Case 8.3: Networking at SKF, Sweden 87
Bibliography 88

PART 2: ORGANIZATION FOR TELECOMMUNICATIONS AND NETWORKS 89

9 Organization for networking 91
Introduction 91
Location and organization of network management 91
Structure of network administration 93
Planning for networking 95
Planning variables 95
Dynamics of network planning 96
Planning process 97
Implementing a plan 98
Summary and conclusions 99
Case 9.1: Headaches for network management 99
Bibliography 99

10 The client server paradigm 100
Introduction 100
Components and functions of a client server system 100
Organizational impact 106
Advantages of the client server system 108
Obstacles for a client server system 108
Summary and conclusions 109
Case 10.1: Client server at the 1994 Winter Olympics 110
Case 10.2: Citibank’s overseas operations in Europe 110
Case 10.3: Applications of client server systems 111
Bibliography 111

11 Standards 112
Introduction 112
What are standards? 112
The development of OSI 116
The ISO 117
European standards organizations 118
TTC in Japan 120
Summary and conclusions 120
Case 11.1: Development of international standards for the B-ISDN in the US 121
Case 11.2: Networking standards in Europe 122
Bibliography 122

12 Security for telecommunication 123
Introduction 123
Security 124
Terminal use controls 124
Authorization controls 126
Communications security 128
Security for advanced technology 129
Computer viruses 130
Policies for security 132
Administration of authorization 133
How much security? 134
Summary and conclusions 135
Case 12.1: Examples of hacking 137
Case 12.2: Examples of malicious damage 137
Case 12.3: German hacker invades US defence files 137
Case 12.4: Buying the silence of computer criminals 138
Case 12.5: The computer ‘bad boy’ nabbed by the FBI 138
Case 12.6: Miscellaneous cases using telecommunications 138
Supplement 12.1: Popular viruses 139
Bibliography 139

13 Network management 140
Introduction 140
Management of networks 141
Software for network management 144
User management 144
Development of networks 145
Resources for network management 147
Summary and conclusions 151
Case 13.1: Networking in the British parliament 151
Case 13.2: Analyser at Honda auto plant 151
Supplement 13.1: Prices of LAN management software in 1995 152
Bibliography 152

14 Resources for teleprocessing 153
Introduction 153
Parallel processing 153
Software for telecommunications 155
Summary and conclusions 158
Case 14.1: Replacement of mainframes at EEI 159
Supplement 14.1: Top world telecommunications equipment manufacturers in 1994 160
Bibliography 160

15 National information infrastructure 161
Introduction 161
NIIs around the world 162
NII in the US 163
Issues for NII 167
Summary and conclusions 167
Case 15.1: Alliances and mergers between carriers 168
Case 15.2: Share of the European VAN market 168
Case 15.3: Telecommunications law in the US 168
Supplement 15.1: Milestones towards the development of an NII 170
Bibliography 170

16 Global networks 171
Introduction 171
Global networks 172
Consequences of global networks 173
Telecommunications and developing countries 174
Global outsourcing 175
Transborder flow 176
Protection of intellectual property 178
Global network and business 179
Summary and conclusions 180
Case 16.1: Global outsourcing at Amadeus 181
Case 16.2: Telstra in Australia 181
Case 16.3: Telecom leap-frogging in developing countries 182
Case 16.4: Slouching towards a global network 182
Case 16.5: Alliance between French, German and US companies 183
Supplement 16.1: World-wide software piracy in 1994 183
Supplement 16.2: Index of global competitiveness 183
Supplement 16.3: Telecommunications media for selected countries in 1994 183
Supplement 16.4: Telecommunications end-user service available in regions of the world 184
Bibliography 184

PART 3: IMPLICATIONS OF NETWORKS 185

17 Messaging and related applications 187
Introduction 187
Teleconferencing 188
Electronic data interchange (EDI) 188
Standardization 189
Electronic transfer of funds 190
EFT spin-offs 192
Cooperative processing 194
Message handling systems (MHS) 195
Summary and conclusions 196
Case 17.1: GE bases global network on teleconferencing 197
Case 17.2: Electronic data interchange (EDI) in the UK 198
Supplement 17.1: Costs of message handling and related processing 198
Bibliography 198

18 Multimedia with telecommunications 199
Introduction 199
Multimedia and distributed multimedia 199
Requirements of multimedia 200
Resources needed for multimedia processing 201
Servers 202
Networks 203
Clients 203
Standards 204
Applications of distributed multimedia 204
Video-conferencing 204
Video/film-on-demand 206
Telemedicine 206
Digital library 207
Distance learning 208
Multimedia electronic publishing 209
Organizational implications of distributed multimedia 210
Summary and conclusions 211
Case 18.1: MedNet 213
Case 18.2: Electronic publishing at Britannica 213
Case 18.3: The access projects at the British Library 214
Case 18.4: Video-conferencing in telemedicine at Berlin 214
Case 18.5: Telemedicine in Kansas 214
Bibliography 215

19 Telecommuters, e-mail and information services 216
Introduction 216
Telecommuting 216
Implementation of teleworking 217
The benefits and limitations of telecommuting 219
When telecommuting? 220
Future of teleworkers 220
E-mail 221
Resources for e-mail 223
Information service providers 225
Summary and conclusions 225
Case 19.1: Examples of teleworkers 227
Case 19.2: Comments from teleworkers 227
Case 19.3: Telecommuting at American Express 228
Case 19.4: Advice from teleworkers 228
Case 19.5: Teleworking at AT&T 228
Case 19.6: Teleworking in Europe 228
Case 19.7: Telecommuting in the US (in 1994) 229
Case 19.8: Holiday cheer by electronic mail 229
Bibliography 229

20 Internet and cyberspace 231
Introduction 231
Cyberspace 232
Internet 232
Connecting to the Internet 232
Surfing on the Internet 234
Internet and businesses 236
Security on the Internet 236
Organization of the Internet 240
The Internet and information services 241
Summary and conclusions 241
Case 20.1: Intrusions into Internet 242
Case 20.2: Bits and bytes from cyberspace 243
Case 20.3: Business on the Internet 244
Case 20.4: Home-page for Hersheys 245
Case 20.5: Court in France defied in cyberspace 245
Case 20.6: Internet in Singapore 245
Case 20.7: English as a lingua franca for computing? 246
Supplement 20.1: Growth of the Internet 246
Supplement 20.2: Growth in Internet hosts around the world 246
Supplement 20.3: Computers connected to the Internet in 1994 246
Supplement 20.4: Users of the Internet in 1994 246
Supplement 20.5: Build or rent a Web site? 247
Supplement 20.6: Users of the Web for business 247
Supplement 20.7: Milestones in the life of Internet 247
Bibliography 247

21 What lies ahead? 249
The future in the context of the past 250
Trends in telecommunications technology 251
Network management and the future 255
Management of the Internet 255
Standards 257
A telematique society 257
Advanced applications 259
Predictions often go wrong! 261
Summary and conclusions 263
Case 21.1: Amsterdam’s digital city 265
Case 21.2: Spamming on the Internet 265
Case 21.3: Minitel: its past and its future 266
Supplement 21.1: Percentage growth of phone lines connected to digital exchanges in the 1990s in Europe 267
Supplement 21.2: World-wide predictions for 2010 compared to 1994 267
Bibliography 267

Glossary of acronyms and terms in telecommunications and networking 269
Acronyms in telecommunications and networking 270
Glossary 272
Index 277


ACKNOWLEDGEMENTS

The authors wish to thank colleagues for their helpful comments and corrections to the manuscript. These include Linda Johnson, Frank Leonard, Chetan Shankar and Derek Partridge. Thanks to Tahira Hussain for her help, especially in the preparation of the diagrams using the PowerPoint program. Any errors that still remain are the responsibility of the authors.


1

INTRODUCTION

The agricultural age was based on ploughs and the animals that pulled them; the industrial age, on engines and fuel that fed them. The information age we are now creating will be based on computers and networks that interconnect them.

Michael L. Dertouzos, 1991

Telecommunications is an old and stable technology if you think only of telephones and telegraph. But then in the 1960s came computers and the processing of data. Soon after, we needed data communications to transmit data to remote points; the connection of remote points by telecommunication is referred to as a network. Later, these points of communication increased in number, with the transmission no longer being limited to data but including text, voice, and even images and video. This extended use of telecommunications is the subject of this book. We shall examine the technology in the first part of the book, the management of telecommunications in the second part, and in the third and final part the many applications that are now possible because of telecommunications.

Changes in technology

We start with an overview of the technology in Chapter 2. This will provide us with a framework in which we can then place the many components of the technology of telecommunications and networks. The first of these technologies to be examined is transmission. The earliest transmissions were by telephone for voice and telegraph for the written word. Telephones and telegraph were complemented by post and organized as a utility better known as the PT&T (Post Telephone and Telegraph). In the USA and UK, these services have been privatized and other countries may follow the path away from monopoly towards privatization and free competition. But this is a controversial question of politics and government policy-making, a ‘soft’ subject that we chose to avoid here. Instead, we will confine ourselves to the more ‘hard’ and stable topics of technology: the management and applications of the technology.

Back to transmission. Early transmission was by wire, copper wires to be more precise. But copper is both expensive (and sometimes scarce) and bulky. It has been replaced by fibre optics, which uses thin glass fibres that are both cheaper and less scarce than copper. Fibre is also less bulky than copper and much lighter. One strand of fibre thinner than a human hair can carry more messages than a thick copper cable. A typical fibre optic cable can carry up to 32 000 long distance telephone calls at once, the equivalent of 2.5 billion bits of data per second. Recently, Bell Labs developed the rainbow technology that sent three billion bits of information down one fibre optic thread in one second, the equivalent of 18.75 million pages of double spaced text.
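As a rough consistency check on these figures (the per-call arithmetic below is ours, not the book's), dividing the quoted cable capacity by the number of simultaneous calls gives the bit rate of a single digitized voice channel:

\[
\frac{2.5\times 10^{9}\ \text{bit/s}}{32\,000\ \text{calls}} \approx 78\,000\ \text{bit/s per call},
\]

which is consistent with the 64 000 bit/s normally allotted to a digital telephone channel once signalling and framing overhead are allowed for.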

Fibre optics is less expensive than stringing wire across telephone poles and even less expensive in capital cost than cable. Its advantages are, however, restricted to the distance transmitted. For long distances, radio broadcasts and satellite are superior. But from the broadcasting and satellite station, the connection to the home or the office must still be made by wire or by fibre. With the increased volume and complexity of messages now being sent, the need for fibre is great and no longer in dispute. In the USA, the use of fibre for data communications has risen 500% during the period 1985-90. Many countries are turning to fibre, with Germany and Japan in the lead and the UK and USA not far behind. Fibre optics will be used for short distance transmission and will complement radio broadcasting and satellite.


Figure 1.1 Evolution of transmission media (copper wire/cable, radio, satellite, fibre, wireless)

However, emerging strongly is the demand for wireless or cellular phones. They make transmission so much more portable that one can now transmit while driving a car, waiting at the airport, or even while walking the dog. This evolution of transmission is the subject of Chapter 3 and is summarized in the spiral of change shown in Figure 1.1.

Transmission is just one of the technologies enabling telecommunications. Other technologies include the many devices that make telecommunications possible by contributing to the transport of messages over networks. One set of such devices includes the bridge, which connects homogeneous (similar) networks, and the gateway, which connects non-homogeneous (dissimilar) networks.

One device that determines the route (path) that a message takes across switches, bridges and/or gateways is the router. This device contributes to the effectiveness of the transportation of the message, but the efficiency of the transmission depends largely on the message being transported. Many messages tend to have redundancies and even blanks (as in the sentences of this book). Eliminating these redundancies and compressing the message into a smaller sized message without losing any content is called compression. This makes the transmission efficient by taking less space (and time) to transmit the message. These devices are the subject of Chapter 4.
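As a concrete illustration of the idea (a toy example of our own, not a scheme described in this book), the short Python sketch below removes one simple kind of redundancy, runs of repeated characters, without losing any content; the original message can always be reconstructed exactly:

# Toy lossless compression: run-length encoding of repeated characters.
# Real compression schemes are far more sophisticated, but the principle
# is the same: encode redundancy more compactly, then restore it exactly.

def compress(message: str) -> list[tuple[str, int]]:
    """Collapse each run of identical characters into a (character, count) pair."""
    runs: list[tuple[str, int]] = []
    for ch in message:
        if runs and runs[-1][0] == ch:
            runs[-1] = (ch, runs[-1][1] + 1)
        else:
            runs.append((ch, 1))
    return runs

def decompress(runs: list[tuple[str, int]]) -> str:
    """Expand the (character, count) pairs back into the original message."""
    return "".join(ch * count for ch, count in runs)

if __name__ == "__main__":
    text = "AAAABBBCCD    D"          # blanks are redundancy too
    packed = compress(text)
    assert decompress(packed) == text  # nothing is lost
    print(packed)

For a message with long runs, the list of pairs takes far less space (and therefore less time) to transmit than the message itself, which is the whole point of compression.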

The technologies described above are fairly stable and have well established standards that are universally accepted. One technology that does not have universal acceptance and is very controversial is the international standard for an architecture and protocol for networks that is open to varying designs of hardware and software. This is the OSI (Open Systems Interconnection) model that was designed as a framework for the structure of telecommunications and networking. The OSI and its competitors in the US (the SNA and TCP/IP) are examined in Chapter 8. An international standard that is accepted globally is the ISDN (Integrated Services Digital Network). ISDN will enable the transmission of analogue signals, which are now carried on telephone lines, as digital signals, like those used by the common desktop computer. This enables us to have just one signal, digital, instead of the two signals (analogue and digital) that we now carry, requiring equipment for interfacing and resulting in both inefficiencies and high cost. Integrated digital transmission is faster, and is easier and cheaper to maintain and operate; analogue transmission, by contrast, needs a modem, a device that translates the analogue signal of, say, a telephone into the digital signal of a computer and vice versa.

Conversion to ISDN is expensive and slow but steady in the US, as reflected in the expenditures on ISDN, which have doubled in the three years since 1994. This conversion of the analogue world to the digital world has already resulted in the infrastructure becoming overloaded and overwhelmed. The demand for services to be transmitted has resulted in plans to extend ISDN to B-ISDN (Broad-band ISDN), which is now in the stages of getting international standards. Both ISDN and B-ISDN are the subject of Chapter 7. The evolution of these enabling technologies is shown in Figure 1.2.

Figure 1.2 Spiral of enabling technologies: modems, compression, bridges/gateways, smart devices, OSI/SNA/TCP/IP, ISDN and B-ISDN

We have mentioned networks as being interconnected points of communication. The earliest network was implemented by the US Department of Defense to facilitate the communication between their researchers and academics working on defence projects. These individuals were technical and inquisitive and became interested in developing a more reliable and efficient way of communicating not just their research projects but everything else including their daily mail. They unknowingly sowed the seeds of e-mail (electronic mail) and many other applications of telecommunications. These researchers (and later others in private industry) were also interested in developing a worldwide network of communications, and so ARPANET and the many technologies that it developed eventually led to the Internet. The Internet is a network of other networks. Despite no formal initiation or structure, it has become a very effective and popular means of communication. In 1984, the Internet had 1074 interconnected computers. In ten years, the number grew to 3.8 million and is still growing. The Internet is now being used not just by researchers but by individuals and also by businesses. The Internet is discussed as an application of telecommunications in Part 3, more specifically in Chapter 20.
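The implied growth rate can be worked out from these two figures (the calculation is ours, not the book's):

\[
\left(\frac{3.8\times 10^{6}}{1074}\right)^{1/10} \approx 2.3,
\]

that is, the number of interconnected computers multiplied by roughly 2.3 every year, more than doubling annually for a decade.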

Along the way to the Internet, the ARPANET contributed to the evolution of formal networking, first in small local areas better known as the LAN (Local Area Network). This is the subject of Chapter 5. Networking in a broader geographic area is referred to as MAN (Metropolitan Area Network), and in a yet wider network the WAN (Wide Area Network). With large volumes of data, systems like the SMDS (Switched Multimegabit Data Service) will become more common in the future. The MAN and the WAN are the subject of Chapter 6. Their evolution is shown in Figure 1.3. These networks use architectures and protocols discussed in Chapter 8 and may or may not use the ISDN examined in Chapter 7.

Figure 1.3 Spiral of networks: ARPANET (Ch. 5), LAN (Ch. 5), MAN (Ch. 6), WAN (Ch. 6), Internet (Ch. 20), GAN (Global Area Network, Ch. 6) and SMDS (Switched Multimegabit Data Service)

Management of telecommunications

The technologies mentioned above (and defined operationally) are all examined in detail in Part 1 of this book. Part 2 is concerned with the management of these technologies. We start in Chapter 9 with the location and organization of telecommunications as part of IT (Information Technology) and as part of a corporate organization structure. The earliest organization structure was to centralize the large computer processors and mainframes that served all the local and remote users. This facilitated the economic use of the scarce resource of computer personnel as well as the expensive equipment. But then came the PC, the Personal Computer, and a parallel increase in the ability and desire for the centralized power to be decentralized to the remote nodes where the computing needs resided. This led to DDP, Distributed Data Processing. (At that time, most computer processing was for data and only later did it extend to text, voice, video and images.) PCs made distribution economically feasible, and telecommunications, which interconnected these nodes, made it practical. These organizational approaches are examined in Chapter 9.

Parallel to the growth of PCs was the dissatisfaction of the end-user with the centralized approach, which was slow and unresponsive. The end-user (the ultimate user of computer output) was becoming computer literate, and no longer cowed by the computer specialist at the centralized and remote location. The end-users now had the desire (and sometimes a passion) for the control of local operations. The end-users were willing to accept many of the responsibilities of maintaining and even selecting resources and developing systems needed at the remote nodes. They wanted the centre to do the planning of commonly needed resources (equipment, databases and even technical human resources) and the development of mission critical applications whilst leaving the computing at the nodes to the end-user. Thus evolved the client server system, where the computer at a remote node is a client and the common computing resources (like data, knowledge and application programs) reside on computers called servers. Such a system requires special computing resources and the resolution of many organizational and managerial issues. These are identified and discussed in Chapter 10.
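To make the division of labour concrete, here is a minimal sketch (our own illustration, using Python's standard socket library, not code from the book) in which a server holds a commonly needed resource, a small 'database' of prices, while a client at a remote node sends requests and receives replies over the network:

import socket
import threading

# A tiny shared resource held on the server node: item -> price.
PRICES = {"cable": 12.50, "modem": 89.00, "router": 150.00}

def handle(server_sock: socket.socket) -> None:
    """Server side: answer one price query from a client node."""
    conn, _addr = server_sock.accept()
    with conn:
        item = conn.recv(1024).decode().strip()
        reply = str(PRICES.get(item, "unknown item"))
        conn.sendall(reply.encode())

def ask(item: str, address: tuple[str, int]) -> str:
    """Client side: send a request over the network and return the reply."""
    with socket.create_connection(address) as conn:
        conn.sendall(item.encode())
        return conn.recv(1024).decode()

if __name__ == "__main__":
    # In reality the server and client run on different machines; here a
    # thread stands in for the remote server node.
    with socket.create_server(("127.0.0.1", 0)) as srv:      # port 0: pick any free port
        threading.Thread(target=handle, args=(srv,), daemon=True).start()
        print(ask("modem", srv.getsockname()))                # prints 89.0

The client keeps its local autonomy while the commonly needed resource stays in one place; scaling this idea up to many clients and many servers (for data, applications and eventually video) is what Chapter 10 takes up.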

Figure 1.4 Spiral of organization for telecommunications: centralized (Ch. 9), decentralized (Ch. 9), distributed (Ch. 9), client-server system (Ch. 10), NII (National Information Infrastructure, Ch. 15) and GII (Global Information Infrastructure, Ch. 16)

The client server approach is appropriate for a corporation or institution. But at a national and regional level an infrastructure for telecommunications is desirable that will not only meet the high demands of volume but the diverse demands of not just data but also voice and image. The carrying capacity has to increase. For example, one needs 64 000 bits per second capacity to transmit voice, 1.2 million bits per second to transmit high fidelity music, and 45 million bits per second to transmit video. Just as the infrastructure for road transportation changes from a city and local transportation to a motorway (freeway or autobahn) with all its interconnections, so also we need an entirely different set of transmission capacities and enabling technology for interconnectivity to connect and handle transmission. Such an infrastructure for national telecommunication (NII) is discussed in Chapter 15 and for a Global Information Infrastructure (GII) is discussed in Chapter 16. This evolution in the organization of telecommunications is shown in Figure 1.4.
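The 64 000 bit/s figure for voice follows from standard digital telephony (pulse code modulation; the derivation is a textbook fact rather than something given here): speech is sampled 8000 times a second and each sample is coded in 8 bits, so

\[
8000\ \text{samples/s} \times 8\ \text{bits/sample} = 64\,000\ \text{bit/s}.
\]

By the same token, video at 45 million bit/s needs roughly 700 times the capacity of a single voice channel.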

The managerial issues of telecommunications are the subject of the following four chapters (Chapters 11-14). The management of standards is the subject of Chapter 11; and of security in Chapter 12. Chapter 13 is an overview of the management and administration of all of telecommunications and networking. The acquisition and organization of telecommunication resources is covered in Chapter 14.

Standards are one of the issues faced by the management of telecommunications. Standards, that is agreed upon conventions and rules of behaviour, are part of our daily life and certainly not new to IT, where we have standards for hardware and software and even standards for analysis and design. We have all these types of standards for telecommunications plus a few more. Standards are especially important for telecommunications if you consider the fact that telecommunications involves remotely located parties; in the case of global telecommunications these may be a continent away or across the oceans. If all of us were to pursue our own preferences in design and conventions for operations we would never be able to communicate with each other and there would be no compatibility and interoperability of devices and protocols (procedures). We do have agreement on many standards, including international standards, but certainly not enough. One can experience that by going to another country and trying to plug in a computer. It is likely that your plug may not fit into the socket in the wall. We do not have all the international standards we need. They take a lot of effort and time. The international standard on network architecture mentioned earlier, the OSI, took ten years. But the timing was wrong. It came too late and had to face the entrenched vested interests of manufacturers and suppliers of telecommunications equipment. But standards in high tech industries like telecommunications must not come too early, before the technology is stabilized, for that will ‘freeze’ and discourage newer approaches and innovations. Thus the task of telecommunications management is to correctly select the best technology and to assess the timing: adopt now and run the risk of being outdated, or wait and not benefit from existing advances in technology. The need for standards and the process of agreeing on standards by balancing the often conflicting interests is the subject of Chapter 11.

Another concern of telecommunications management is security. Again, as with standards, this is not new to IT management. But in telecommunications there are additional dimensions. The potential population of those who can penetrate the system is larger since there are now more people who have computers and know how to use them. Also, the temptation is larger. There is more data (and computer programs) that can be accessed and there is also more money that can be transacted across the lines of telecommunication. We thus need to control the access to networks by building fire-walls to protect our assets; encryption and other approaches are also needed to protect selected messages that are transmitted. The question for management (corporate and telecommunications) is not whether we need security but how much and where. Management must assess the cost of security and compare it with the risk of exposure. These subjects are discussed in Chapter 12.
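One common way of putting this comparison on paper (a standard risk-analysis rule of thumb, not a method prescribed in this chapter, and the figures below are purely illustrative) is to weigh the annual cost of a control against the expected annual loss it prevents:

\[
\text{expected annual loss} = \text{loss per incident} \times \text{expected incidents per year},
\]

so a fire-wall costing, say, 20 000 a year is easy to justify against an exposure of one incident a year with a likely loss of 100 000, but hard to justify for assets whose total exposure is only a few thousand.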

Acquisition of telecommunications resources is the subject of Chapter 14. The process of acquisition is not new to either IT or to any corporation. What is new is the nature of the resources that have to be acquired. At the corporate level the decision is that of selecting a LAN or MAN or WAN and not of selecting the devices for connectivity or the media of transmission, which is part of the infrastructure. But there is the need to select the processors needed for accessing the network.

Figure 1.5 Management of telecommunications: standards management (Ch. 11), security management (Ch. 12), resource acquisition management (Ch. 14), and the planning, development and operations covered by the administration and management of telecommunications (Ch. 13)

In a client server environment, the client may be a PC or a workstation. The server is a computer that could vary from a powerful PC to a mini or mainframe. But the servers for tomorrow have to be capable of handling not just data but multimedia. And so we need to consider not just file servers and application servers but also video servers. We have evolved from the stand-alone computer system to a system with a variety of computers that serve as clients or servers or both and are interconnected by telecommunications.

The chapter on management of telecommunications (Chapter 13) is more of a summary of all the related chapters. It is concerned with the planning, acquisition and maintenance of all the resources needed for telecommunications. A summary of these activities is shown in Figure 1.5. This includes personnel resources discussed in Chapter 9 on the organization at the corporate level.

The importance of telecommunications management can be gauged by the statistic that corporate spending on telecommunications in the US has more than doubled in the three years since 1994.

The last two chapters on management of telecommunications (Chapters 15 and 16) go beyond the corporate level. Chapter 15 is concerned with integration at the national level by providing an infrastructure for telecommunications much like we have an infrastructure for communications by road or plane. This infrastructure, often called the information highway, provides the interconnections for exchange of information and enables the integration of all the sources and destinations of information whether this be the home, office, business, school, library, medical facility or government agency. We need more than standards for such integration and many of the issues that arise are not just technological but economic and political. These are examined in Chapter 15. We compound all these problems when we consider global communications and have additional issues of transborder flow, global outsourcing and the protection of intellectual property. These issues are examined in Chapter 16.

Applications of telecommunications

The next and final part of the book is concerned with applications of telecommunications and networks. Message handling is Chapter 17, multimedia is Chapter 18, teleworking is Chapter 19, the Internet is Chapter 20, and integrated applications and a look into the future are Chapter 21. A graphic summary of this flow of topics is shown in Figure 1.6.

Our first discussion of applications will be on message handling applications in Chapter 17. Some of these applications have been around for a long time. An example of such late maturing applications is e-mail (electronic mail), which has suddenly ‘taken off’ with high rates of growth and become a ‘killer’ application. It is so important an application that it will be discussed at great length later in Chapter 19.

Other applications of message handling are not so conspicuous but just as important. One is EDI, Electronic Data Interchange, which is used extensively for transfer of documents and files by businesses.

Figure 1.6 Applications of telecommunications: message handling and related applications (Ch. 17), distributed multimedia applications (Ch. 18), teleworkers, e-mail and information services (Ch. 19), Internet and cyberspace (Ch. 20), and what lies ahead (Ch. 21)

Another application, also in business but restricted to financial institutions like banks, is the transfer of money by EFT, Electronic Funds Transfer. As James Martin once put it, ‘Money is merely information, and as such can reside in computer storage, with payments consisting of data transfers between one machine and another.’ Such money transfers are for billions of dollars a day all across the world. We take such electronic transactions for granted, little realizing that if it were not for telecommunications our bank deposits and withdrawals would not be as easy or as fast as they now are. Of course, transactions may not be as safe either. These and related problems as well as their solutions are the subject of Chapter 17.

Another message handling application is that of teleconferencing, but with the coming of multimedia this may well evolve into video-conferencing. Other applications, like home shopping, distance learning and electronic publishing, are also becoming multimedia, thereby greatly improving the quality of what is transmitted.

Applications still in the development stages include the delivery of video-on-demand, films, etc., delivered to the home at any time of the day or night. These applications along with interactive games will change the way we entertain ourselves, though we may want the ability for more self-control over the content.

Other exciting applications include the digital library that will enable you to read any article or book without having to go to the library, or browse through the contents of the Tate Gallery in London without having to physically visit the place. This may well affect our learning as well as our patterns of how we spend our leisure time.

One final application of multimedia to be discussed here briefly is the use of telecommunications in medicine. It allows our entire medical record (in archives or observations taken in real time), including X-ray or CAT-scan pictures, to be transferred to an expert anywhere in the world for a second opinion. Telemedicine could also be valuable as a first opinion for those who may be located remotely (permanently or temporarily as when travelling). Again, as in many teleprocessing applications, there are problems of security, privacy and economics. These issues are examined in Chapter 18.

In Chapter 17 we mention e-mail. It is currently used extensively for correspondence (private and business) as well as for copying (downloading) computer programs residing at other computer server sites. It is much faster and more reliable than traditional mail, even air-mail. E-mail, including foreign mail through the Internet, is often available through local information service providers accessed by the telephone. These providers also offer many services that include entertainment, news, weather forecasts and education. Some of the services are interactive, such as chat sessions where one can exchange views and information with someone that you may not know and someone who may be across the oceans. Information services could be customized so that you select what you want from the diverse options and do not have to take what is edited and passed down, as is the case with the 12 000 newspapers and magazines and the many TV stations.

One service provider is CompuServe. It started by renting computer time from an insurance company that had purchased a computer and had unexpected excess capacity. In 1995, CompuServe was one of the three largest on-line service providers with around two million subscribers.

Information services may well be at their take-off stage, approaching a killer application. In 1995 there were over eight million subscribers, with over two million subscribers joining just one information provider (AOL), in just one country (the US). Information services may well become as ubiquitous as the telephone or TV. Their usage will increase as the usage of computers in the home increases. In 1995, 30% of all homes in the US owned computers and computer sales surpassed TV sales for the first time ever.


Will computers and information services become as ubiquitous as the telephone and the TV? Will they be as end-user friendly and accessible as telephones and TVs are today? Will it take two to three decades for them to be accepted in the mainstream, as it did for the telephone and TV? Must information services be regulated? Will all the information around us threaten our privacy? Some insights into the answers to such questions will be found in Chapter 19, or in Chapter 20, which is on the Internet and cyberspace.

Before we get to Chapter 20, we discuss one other application that depends on computers and telecommunications. This is telecommuting, which is working at home using a computer and being connected to the corporate database through telecommunications. Telecommuters are also big users of information services, especially of e-mail.

With the boundaries of the workplace getting ‘fuzzy’, teleworking is a viable and attractive alternative to the crowded downtown office that must often be reached after fighting traffic jams and traffic lights. Telecommuting requires special resources and raises many issues, especially of productivity and evaluation. These issues are the subject of Chapter 19.

The Internet is the subject of Chapter 20 and has been mentioned earlier as an outgrowth of ARPANET and LANs as well as in the context of information providers. If you cannot afford the monthly subscription of an information provider and do not have access to a LAN (through your employer or university) then you can always go to a cafe like the Cafe Cyberia in London where for an hourly payment you can surf the Internet.

Discussing the Internet will allow us to enter some of the space of cyberspace, where it is used not only by individuals but increasingly by businesses. Currently businesses do a lot of their communications and some of their advertising on the Internet but not much business in the sense of sales. This is because there is not yet any safe way to transact money on the Internet. There is much talk about cybercash, cybermoney, digicash and digimoney, but you are advised not to trust your credit card to cyberspace, at least not yet. The problems of security and privacy of information are among the issues to be examined in Chapter 20.

Chapter 21 is on integration towards a global system through telecommunications. Without telecommunications we have problems (and solutions) of logical integration of files.

Figure 1.7 Trends from past-present to present-future: from LAN/MAN/WAN and ARPANET/Internet, proprietary systems/platforms/protocols/objects, the analogue world, narrow bandwidth, user-unfriendly systems and functional applications, towards LAN/MAN/WAN/GAN, the Internet and the information superhighway, open systems/platforms/protocols/objects, the digital world, broad bandwidth (gigabits), video communications, multimedia, end-user friendly systems, wired cities and the telematique society

With telecommunications we have problems of interconnectivity plus integration that can eventually lead to computer applications across space and distance. We can then integrate applications not only in a corporation but in a city, and not just a city but a region and eventually anywhere in the world that we so desire. Of course this requires an international infrastructure and international standards and a few other prerequisites that will enable us to reach for a wired world or, as the French authors Nora and Minc called it, the telematique society.

Our final chapter also looks at the future from a historical perspective. One view is that there may not be any dramatic breakthrough in the technology of telecommunications in the immediate future but that we will continue to evolve on a steady growth curve, consolidating much of what we have. Thus we will convert the analogue world into the digital world, enlarge the narrow band and single media to broad-band and multimedia, enhance LANs, WANs and the Internet, transform proprietary systems to open systems through standardization, and foster the growth from functional applications to the wired and telematique society through integration and the interconnectivity of telecommunications. These trends and this evolution are summarized in Figure 1.7. We shall revisit this figure at the end of the book in Chapter 21 and then identify the technologies that led to each of the transformations.

Much of the future of telecommunications will depend on the response of the end-user and consumer as well as on the computing and telecommunications industry. This is difficult to predict because it requires predicting not only the technologies related to telecommunication and networking but also the environment where telecommunications and networks will be used. There will also be changes in related industries such as those of hardware, software, cable, telephone, and even the publishing and entertainment industries. Each of these industries is a multibillion dollar industry in the US alone. Some companies have the cash to buy; some have the technology and experience to integrate; others have the connections into homes and offices. Possible combinations of firms in these industries are many and are sometimes referred to as the metamedia industry. One entry into this new industry was announced in the US in early 1994 between the cable company TCI and the telephone company Bell Atlantic. They were to have over $30 billion in assets and promised 500 channels of two-way interactive video-on-demand entertainment. But then the regulated rates for TV were reduced and the telephone partner wanted a reduced price to buy. The cable company did not budge, and so before the year was out the proposed merger was dissolved. There may well be more failures, but meanwhile there is much experimentation going on in the US and in Europe with different media and a varied mix of services to test the public on what they want and what they are willing to pay.

All around the world new industrial structures are rising out of the deregulation of traditional PT&Ts (Post Telephone and Telegraph). In about 100 countries, various value-added information related services are opening up to competition. There is a growth in the demand for information related services, leading to an information-intensive global society that is propelled by consumer markets. Competition and ease of entry will accelerate the creation of new services and new markets.

. . . The new technology creates new demand.

. . . While the telecommunications industry will provide the network for the information intensive society and the computer industry will provide the advanced processing capability, the information service industry will continue to evolve with the passage of time. Although many of the information service providers are likely to emerge from the telecommunications and computer industries, new providers will create their own niches in the market in the future . . . We are entering an era when global information networks and services will become reality. (Nazem, 1993: p. 19)

The fierce competition ahead may well result in a better integration of delivery and services offered to the consumer, but the shakedown may take a long time and will most likely be painful and costly. Eventually there will be a winner, and many losers, before the new industry stabilizes. Whatever the final product offered and whatever the delivery mode, the process will be exciting.

In the next few years we will see improved technologies and infrastructures, new and enhanced servers with marked differentiation and specialization for different applications, and increasing competition in the telecommunications and network industries. This will enable us to communicate and cooperate with each other in ways that were not hitherto possible and could well result in the redefinition of the old paradigms of communication and work.

The problems still facing us are those of standards, which are thwarting competition. The high level of continuous technical innovation in the telecommunications and computing industry has resulted in the industry refusing to settle down. This provides the end-users with more choice, but for the network manager it is a greater challenge.

The next chapter is another summary and overview, but only of the technologies to be examined in Chapters 3-8.

Case 1.1: Network disaster at Kobe, Japan

In 1995, an earthquake struck Japan at Kobe, killing more than 5000 people and causing damage of over $100 billion. This damage included $300 million to the physical plant and infrastructure, disrupting service to about 285 000 of the 1.44 million circuits in the region and knocking out over 50% of the overall services offered by the national telecommunications utility NTT, the Nippon Telegraph and Telephone Company. Many businesses were severely disrupted and even the recovery operations were greatly hampered because of the lack of telecommunications. Some businesses, however, survived because they had good network management and a plan for disaster recovery. One of them was an information provider that had planned for an earthquake in Kobe even though the odds of an earthquake there were very small. This firm had leased lines from NTT serving its main offices in four cities in Japan in addition to leasing domestic satellite services from a VSAT (Very Small Aperture Terminal) satellite installation to bypass the domestic network. It also had a back-up generator to ensure that the system could be up and running even if all the local power lines had snapped. In addition, it had a back-up centre, not in Japan, but in Singapore. When the earthquake hit, the firm started up its back-up generator and was in full operation within the day.

Source: Data Communications, July 1995, pp. 47-8.

Case 1.2: Networking at the space centre

The space centre at Houston, Texas, has controlled the flights of all the early spacecraft. In 1995, a new command and control centre was partly operational and entered its beta phase of testing to replace the old centre and to prepare for the space shuttle into the twenty-first century. The new centre has over 19 kilometres of fibre optic cables connecting its hundreds of PCs and workstations with the larger computer systems owned by NASA. These computers and workstations are all interconnected in addition to being connected to the tracking stations all around the world.

One design specification of this complex and important networking system was that almost all the equipment must be ‘off the shelf’. This was specified in order to keep maintenance easy and not as costly as in the previous centre.

The design specification is a commentary on the state-of-the-art of telecommunications and networking. Even a large and in some ways very important real-time system can be constructed from products that are commonly available and are no longer ‘high tech’.

Supplement 1.1: Milestones for network development

1969 The US Department of Defense commissions ARPANET for networking among its research and academic advisers.

1972 SNA by IBM offers the first systems network architecture for a commercial network.

1974 Robert Metcalfe’s Harvard Ph.D. thesis outlines the Ethernet.

1974 Vinton Cerf and Bob Kahn detail the TCP for packet network intercommunications.

1976 X.25 is the first public networking service.

1978 Xerox Company, Intel and DEC give the first Ethernet specification.

1980 The FCC in the US deregulates telecom equipment at customer premises and allows AT&T to offer tariffed data services and computer companies to offer non-tariffed communications services.

1981 IBM introduces the personal computer, PC.

1982 Equatorial Communications Services buys two transponders and the Weststar IV satellites, giving birth to the first very small aperture terminal (VSAT) industry.

1984 AT&T divests ownership in local telecoms.

1984 The UK’s Telecommunications Act authorizes the privatization of British Telecommunications Ltd. It is licensed and a regulatory authority is established.

1985 The Japanese government enacts the Telecommunications Business Law, which abolishes the monopolies of the country’s domestic and international carriers.

1987 The Commission of the European Community publishes the Green Paper which calls for open competition in the supply of equipment and the provision of data and value-added services.

1990 The ARPANET is officially phased out and the Internet is born. (For milestones in the life of the Internet, see Chapter 20.)

1993 British Telecom buys 20% of MCI and this marks the beginning of a truly global market.

Bibliography

Budway, J. and Salameh, A. (1992). From LANs to GANs. Telecommunications, 26(7), 23-27.

Campbell-Smith, D. (1991). The new boys: a survey of telecommunications. Economist, 5 Oct., 1-52.

Doll, D.R. (1992). The spirit of networking: past, present, and future. Data Communications, 21(9), 25-28.

Financial Times, 19 July, 1989, pp. 1ff. Special issue on Survey of International Telecommunications.

Malone, T.W. and Rockart, J.F. (1991). Computers, networks and the corporation. Scientific American, 265(3), 128-136.

Nazem, S. (1993). Telecommunications and the information society: a futuristic view. Information Management Bulletin, 6(1 & 2), 3-19.

Sankar, C.S., Carr, H. and Dent, W.D. (1994). ISDN may be here to stay . . . But it’s not plug-and-play. Telecommunications, 28(10), 27-33.

Sproull, L. and Kiesler, S. (1991). Computers, networks and work. Scientific American, 265(3), 116-123.

Tillman, M.A. and Yen, D. (Chi-Chung) (1990). SNA and OSI: three stages of interconnection. Communications of the ACM, 33(2), 214-224.

Weinstein, S.B. (1987). Telecommunications in the coming decades. IEEE Spectrum, 23(11), 62-67.


Part 1

TECHNOLOGY


2

TELEPROCESSING AND NETWORKS

In 1899, the director of the US Patent Office urged President William McKinley to abolish his department. According to the director, everything that could be invented had been invented.

Although the processing speed of a CPU is measured in micro-, nano- or picoseconds, users will not get the full benefit of this speed if tapes and disks on which input is recorded are physically transported to the computer for data entry. Likewise, the delivery of reams of paper output to the user can be time consuming, particularly when users are not located in the same building as the CPU, for example in a distant sales office, branch office or warehouse.

With teleprocessing (the processing of data received from or sent to remote locations by way of a telecommunications line, such as coaxial cable or telephone wires), input and output are instantaneous. This is the mode of processing for multiuser systems where people located in dispersed locations share a computer but need to input data and access up-to-date information at all times. You can see why the term ‘teleprocessing’ is often used as a synonym for telecommunications, data communications and information communications.

The technology of telecommunications, which links input/output terminals to distant CPUs, advanced in the 1970s to allow the linkage of workstations, peripherals and computers into networks. Networks are valued by organizations because they promote the exchange of information among computer users (many business activities require the skills of many people), the collection of data from many sources and the sharing of expensive computer resources. Networks may be:

1. Local area networks (LANs) which permit users in a single building (or complex of buildings) to communicate between terminals (often microcomputers), interact with a computer host (normally a mini or mainframe) or share peripherals.

2. Linked LANs within a small geographic area.

3. National networks such as ARPANET to link computer users in locations across the country. Database services also fit into this category.

4. International (wide area) networks, the most expensive networks because of long distances between nodes; the most difficult to implement because standards and regulations governing telecommunications vary from country to country.

5. A combination of the above.

This chapter surveys the technology of telecommunications, discusses the importance of telecommunications to business and looks at the problems of connectivity that corporate managers must resolve.

The rise of distributed data processing

When computers were first introduced, most organizations established small data processing centres in divisions needing information. These centres were physically dispersed and had no centralized authority coordinating their activities. Data processed in this manner were often slow to reach middle and senior management and frequently failed to provide the information needed for decision-making. Because of the scarcity of qualified computer specialists, the centres were often poorly run. In addition, they were unnecessarily expensive. By failing to consolidate computer resources, organizations did not take advantage of Grosch’s law (applicable to early computers) which states that the increase in computational power of a computer is the square of the increase in costs; that is, doubling computer costs quadruples computational power.
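Grosch’s law can be written compactly (the notation is ours): if C is the cost of a computer and P its computational power, then

\[
P = kC^{2}
\]

for some constant k, so doubling the cost gives P = k(2C)^2 = 4kC^2, four times the power, which is why pooling budgets into one large machine looked attractive for early computers.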


The need for centralized computing facilities was soon recognized. Firms hoped that centralization would result in lower costs, faster delivery of output, elimination of redundancy in processing and files, tighter control over data processing, increased security of resources and greater responsiveness to the information needs of users. While consolidation of computing was taking place, computer technology was advancing. By the time third-generation computers were installed in computer centres, users no longer had physically to enter the centre to access the computer but could do so from a distant terminal connected by telecommunications. Time-sharing had also been developed whereby several users could share simultaneously the resources of a single, large, centralized computer. With centralized processing, teleprocessing (also called remote processing) became the norm.

However, not all of the expectations for improved service were realized when centralization took place. Complaints about slow information delivery and the unresponsiveness of the centres to user information needs were received by corporate management. Users resented the red tape that computer centres required to justify and document requests for information services. In turn, computer specialists at the centres chafed at criticism, feeling overworked, underpaid and unfairly reproached by those with no understanding of the problems of systems development and the management of computing resources. This general dissatisfaction with operations led to a reorganization of processing once again, to distributed data processing (DDP): the removal of computing power from one large centralized computer to dispersed sites where processing demand was generated.

Although DDP sounds like a return to the decentralization of the past, it was not. By the time DDP was initiated, minicomputers with capabilities exceeding many former large computers were on the market at low cost. Computers were much easier to operate and maintain. Chip technology had increased CPU and memory capacity while reducing computer size. Desktop microcomputers were for sale. Strides in telecommunications meant that no processing centre had to be isolated, but could be linked to headquarters or to other processing centres (nodes) in a network. Furthermore, experience with data processing had given users confidence that they could manage and operate their own processing systems without the aid (or intervention) of computer specialists.

Distributed data processing includes both the installation of stand-alone minis or mainframes under divisional or departmental jurisdiction and the placement of stand-alone microcomputers for personal use on the desktops of end-users. But it is generally associated with the linkage of two or more processing nodes within a single organization, each centre with facilities for program execution and data storage. (These nodes may be computers of all sizes, from microcomputers to mainframes.) Figure 2.1 shows sample DDP configurations. A host computer may provide centralized control over processing, as in the star network, or the nodes may be coequals. (In the star configuration, failure of the host computer impairs processing for the entire system. The ring structure overcomes this problem because rerouting can take place should one processing centre or its link fail.) The hardware at each node is sometimes purchased from the same vendor, which facilitates linkage. But generally networks contain a mix of equipment from different manufacturers, which complicates information exchange. This is discussed further later in this chapter.

We now look at equipment configurations and technology to support teleprocessing and networks.

Figure 2.1 Examples of DDP configurations (star, ring, bus, star-star and ring-star layouts; the legend distinguishes host computers from node computers)

Page 30: Telecommunications and Networks

Transmission channels

Data or information may be transmitted a few feet within a single office building or over thousands of miles. When planning for telecommunications, corporate management must consider what type of transmission channel is most appropriate for organizational needs and whether to use private or public carriers.

Types of channel

A simplex communications line or channel enables communication of information in one direction only. No interchange is possible. There is neither any indication of readiness to accept transmission nor any acknowledgement of transmission received. A half-duplex system allows sequential transmission of data in both directions, but this involves a delay when the direction is reversed. The ability to transmit simultaneously in both directions requires a duplex or full-duplex channel, a more costly system. An advantage in computer processing is that output can be displayed on a terminal while input is still being sent. Figure 2.2 illustrates channel differences and lists applications for their use. Some channels carry voice transmissions, some data. Current technology allows voice and data messages to be carried long distances over the same line at the same time and at an affordable cost.

Transmission speed or signalling speed is measured in bits per second. In most communications lines, 1 baud is 1 bit (binary digit 0 or 1) per second. The capacity of the channel is measured in bandwidths or bands. These give a measure of the amount of data that can be transmitted in a unit of time.
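To give a feel for what such figures mean in practice, the short sketch below works out how long a transfer takes at a few illustrative line speeds. The speeds and the document size are assumptions chosen for the example, not values given in the text, and protocol overhead is ignored.

# Back-of-the-envelope transfer times at different line speeds.
# The line speeds and message size below are illustrative values only.

SPEEDS_BPS = {
    "telegraph-grade line": 300,
    "voice-grade line with modem": 9_600,
    "1.54 Mbit per second link": 1_540_000,
}

def transfer_seconds(message_bytes, bits_per_second):
    """Seconds to move `message_bytes` over a channel of the given speed,
    ignoring protocol overhead, so real transfers take somewhat longer."""
    return (message_bytes * 8) / bits_per_second

if __name__ == "__main__":
    document = 100_000  # a 100 kilobyte document
    for name, speed in SPEEDS_BPS.items():
        print(f"{name}: {transfer_seconds(document, speed):.1f} s")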

A range of transmission options exists by combining different types of channel, transmission speeds and bandwidths: the cheapest and most limited being a simplex telegraphic-grade channel, the most expensive and versatile a full-duplex broadband system. Wire, cable, radio, satellite, telephone, television, telegraph, facsimile and telephoto are sample communications channels which vary in the types of data or information they transmit and in their transmission features.

Public or private carrier?

In the USA, telecommunications lines that serve the public are licensed by the Federal Communications Commission (FCC). There are over 2000 telecommunications carriers available to the public (called common carriers), such as AT&T for telephone and Western Union for wire

Type                   Transmission direction                                  Example
Simplex                One direction only                                      Radio, television
Half-duplex            One direction only at any one time; can be in           Walkie-talkie, intercom
                       both directions in sequence
Duplex or full duplex  In both directions simultaneously                       Picture-telephone; dedicated separate
                                                                               transmission lines (such as a
                                                                               presidential 'hot-line')

Figure 2.2 Types of channel in telecommunications

Page 31: Telecommunications and Networks

and microwave radio communications. Some provide point-to-point service on a dedicated line; others, switched services, routing data through exchanges and switching facilities, sometimes by a roundabout route to reach a final destination. A variation of the latter is a packet-switching service, which breaks a data transmission into packets, each containing a portion of the original data stream, and transmits the packets over available open lines. Upon arrival, the data in the packets are reassembled in their original continuous format. Packet-switching networks can support simultaneous data transmission from thousands of users and computers.

A shared rather than dedicated point-to-point communications line reduces the outlay of a company for long-distance communications circuits. Most packet-switching services have another advantage as well: they support a standard protocol (rules governing how two pieces of equipment communicate with one another) such as X.25, which is vendor independent. That is, they will transmit data to and from equipment sold by many different manufacturers. As a result, a user at a single terminal can access non-homogeneous hardware connected in the network.
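The packet-switching idea can be shown in miniature: a message is split into numbered packets, the packets may arrive out of order after travelling different routes, and the receiver reassembles the original stream. The sketch below is only a toy model of the concept, not an implementation of X.25; the payload size and message are arbitrary.

# A minimal sketch of packet switching: break a message into numbered
# packets, deliver them out of order, and reassemble them by sequence number.

import random

def to_packets(message, payload_size=8):
    """Split a message into (sequence_number, payload) packets."""
    return [(seq, message[i:i + payload_size])
            for seq, i in enumerate(range(0, len(message), payload_size))]

def reassemble(packets):
    """Put packets back into their original order and rebuild the message."""
    return "".join(payload for _, payload in sorted(packets))

if __name__ == "__main__":
    original = "DATA TRANSMISSION BROKEN INTO PACKETS"
    packets = to_packets(original)
    random.shuffle(packets)          # simulate packets taking different routes
    assert reassemble(packets) == original
    print(f"{len(packets)} packets reassembled correctly")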

An alternative to a common carrier transmission facility is a private data network. Such networks are economically feasible over short distances, which explains why they are called local area networks (LANs). Some LANs are vendor specific: that is, they support connectivity only between hardware made by one manufacturer or by manufacturers of compatible equipment. Examples of such networks include IBM's token ring network, Wang's Wangnet and Xerox's Ethernet. (See Figure 2.3 for an illustration of Ethernet use.) Some LANs are all-purpose networks: connectivity and protocol support is provided for the equipment of many vendors. Private branch exchanges (PBXs), primarily telephone systems that connect hardware, are a third option.

Choosing between these options is a difficult task that involves many technical issues, including speed, capacity, cabling and multivendor support. The network must fit into the existing environment and meet the organization's functional needs. Furthermore, no company wants to invest in a system that will require the replacement of existing hardware or the addition of costly interfacing equipment; nor does any corporate manager want a system that will quickly

Figure 2.3 Ethernet (an Ethernet cable links an office workshop, an information processing centre, a production workshop and a typing system, with terminals, printers, electronic filing, a multiplexer and a gateway computer to other Ethernets)

Page 32: Telecommunications and Networks

become obsolete or outgrown. The variety of LAN products adds to the dilemma, and the intense competition among vendors to sell LAN systems puts pressure on corporate managers at the time a network decision is being made.

Interface equipment

To transfer information by telecommunications, many computer systems must add interface components: that is, hardware and software to coordinate the receipt and delivery of messages.

To illustrate, most terminals produce digital signals (pulses representing binary digits), whereas many telecommunications lines transmit only analogue signals (transmission in a continuous waveform). As a result, equipment is required to convert digital data to analogue signals (a process called modulation) when a message is sent and to reconvert the waveform back to pulses (demodulation) at the receiving end (see Figure 2.4). A peripheral called a modem performs this conversion, a name derived from modulation and demodulation.
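The modulation and demodulation just described can be mimicked in a few lines. The sketch below is a toy frequency-shift-keying modem: a '0' becomes one tone, a '1' another, and the receiving end recovers the bits by counting zero crossings. The frequencies, sample rate and detection method are simplifying assumptions for illustration; real modems are far more elaborate.

# A toy frequency-shift-keying modem: bits are modulated onto one of two
# tones and demodulated by counting zero crossings per bit period.
# All numeric choices here are arbitrary illustrative values.

import math

SAMPLE_RATE = 8000                 # samples per second
SYMBOL_SAMPLES = 80                # samples per bit, i.e. 100 bits per second
FREQ_ZERO, FREQ_ONE = 1000, 2000   # tone frequencies in hertz

def modulate(bits):
    """Turn a bit string such as '0101' into a list of analogue samples."""
    samples = []
    for bit in bits:
        freq = FREQ_ONE if bit == "1" else FREQ_ZERO
        for n in range(SYMBOL_SAMPLES):
            # A small phase offset keeps samples away from exact zeros.
            samples.append(math.sin(2 * math.pi * freq * n / SAMPLE_RATE + 0.1))
    return samples

def demodulate(samples):
    """Recover the bits by counting zero crossings in each bit period."""
    # Midway between the expected crossing counts of the two tones.
    threshold = (FREQ_ZERO + FREQ_ONE) * SYMBOL_SAMPLES / SAMPLE_RATE
    bits = []
    for i in range(0, len(samples), SYMBOL_SAMPLES):
        chunk = samples[i:i + SYMBOL_SAMPLES]
        crossings = sum(1 for a, b in zip(chunk, chunk[1:]) if a * b < 0)
        bits.append("1" if crossings > threshold else "0")
    return "".join(bits)

if __name__ == "__main__":
    message = "0101100111"
    assert demodulate(modulate(message)) == message
    print("bits survived the analogue round trip:", message)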

In addition, a multiplexer may be added to combine lines from terminals that have slow transmission speeds into one high-capacity line (see Figure 2.5). Sometimes a number of terminals share a channel (or channels). A concentrator is equipment that regulates channel usage, engaging terminals ready to transmit or receive data when channels are free or sending a busy signal. For long-distance networks, a repeater acts like an amplifier and retransmits signals down the line. A bridge has a similar interface function but retransmits between two different LANs of homogeneous equipment. A router not only retransmits but determines where messages should be forwarded. A gateway connects networks that use different equipment and protocols. (Although managers should be familiar with these terms, they rely on the

Figure 2.4 Digital and analogue signals (a modem modulates the digital signal 0 1 0 1 1 0 into an analogue waveform for transmission; a second modem demodulates it back to digital form at the receiving end)

Figure 2.5 Multiplexer (several low-speed half-duplex terminal lines are combined onto one high-speed full-duplex line)

Page 33: Telecommunications and Networks

expertise of telecommunications specialists for network design.)
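The multiplexer of Figure 2.5 can be pictured as a simple time-division scheme: each slow terminal line is given a slot in turn on the fast shared line. The sketch below uses made-up terminal names and messages and ignores framing and error control; it only illustrates the interleave-and-separate idea.

# A minimal sketch of time-division multiplexing: several slow lines share
# one fast line by taking turns in fixed time slots. Names and messages
# are invented for the example.

from itertools import zip_longest

def multiplex(lines, idle="~"):
    """Interleave one character per line per time slot onto the shared link."""
    frames = zip_longest(*lines.values(), fillvalue=idle)
    return "".join("".join(frame) for frame in frames)

def demultiplex(stream, line_names, idle="~"):
    """Split the shared stream back into one sub-stream per terminal line."""
    n = len(line_names)
    return {name: stream[k::n].replace(idle, "")
            for k, name in enumerate(line_names)}

if __name__ == "__main__":
    terminals = {"T1": "ORDER 42", "T2": "STOCK OK", "T3": "HELLO"}
    shared = multiplex(terminals)
    recovered = demultiplex(shared, list(terminals))
    assert recovered == terminals
    print("shared line carries:", shared)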

A LAN of microcomputers, peripherals and interconnections with other networks may have a component that caters to all the requests of the networked computers. For example, a disk server is a component that acts like an extra disk drive: it is usually partitioned so that each computer can access a particular private storage area. A file server is more sophisticated, allowing access to stored data by file name.

Large mainframe computer systems generally include a front-end processor programmed to relieve the CPU of communications tasks. For example, a front-end processor may receive messages, store transmitted information and route input to the CPU according to pre-established priorities. It may validate data and preprocess the data as well. Another major function of front-end processors is to compensate for the relatively slow speed of transmission compared with the processing speed of the CPU. Front-end processors may also:

1. Perform message switching between terminals.

2. Process data when the teleprocessing load is low or absent.

3. Act as multiplexers and concentrators.

4. Provide access to external storage and other peripherals.

5. Check security authorizations.

6. Keep teleprocessing statistics.

7. Accept messages from local lines with mixed modes of communication.

8. Facilitate the use of the CPU by several users in a time-sharing system.

Figure 2.6 illustrates a sample teleprocessing system that incorporates some of the equipment described in this section.

Interconnectivity

Each computer system may have a unique configuration of computing resources, such as computer speed, file capacity and peripherals that include fast printers and optical scanners. As the management of each computer system may not be able to afford all the resources that they need, it is desirable to be able to share resources when they are not being fully utilized. This can be achieved through interconnectivity, the linking of computer systems by telecommunications and networks.

It is telecommunications that provides the link and connectivity between computers that enables the sharing of resources and communication between users of different systems. When

Figure 2.6 Example of a teleprocessing system (keyboard terminals, CRTs, touch-tone telephones and other remote terminals reach a multiplexer over voice-grade lines; a wideband channel links the multiplexer to a front-end programmable processor attached to the host computer)

Page 34: Telecommunications and Networks

interconnectivity creates a network at the local level we call it a LAN, local area network. This is discussed in great detail in Chapter 5. When the interconnectivity is within a metropolitan area, we have a MAN, metropolitan area network; when extended to a wide area it is known as a WAN, wide area network. And when the interconnectivity is global, we have a GAN, global area network. The MAN, WAN and GAN are compared in Chapter 6.

A GAN providing international connectivity is also known as the Internet and is discussed in Chapter 20.

Networks in the 1990s

The 1980s was a decade in which a large number of LANs were installed. In the 1990s, many of these LANs will be joined into national and international networks. Already the rewiring of Europe and the USA is under way to create a coast-to-coast network to carry voice, images and data messages simultaneously over the same line at low cost.

How will these integrated networks affect business communications? Information transmission will be faster. For example, phone companies are installing digital computer switches and supplementing low-capacity copper transmission lines with microwave and high-capacity fibre-optic cables that will transmit information more than seven times faster than current rates.

A single network will often suffice. Many companies are currently part of several communications networks, each serving a different purpose (e.g. LANs, telephone traffic, facsimile machines).

New services will become available, such as cellular and mobile phones. An engineer at a construction site will be able to look at electronic blueprints simultaneously with the architect at the office who drew up the plans. A reporter covering the earthquake in Japan may send photos to London headquarters for distribution in an electronic newspaper delivered to the computer screens of subscribers. Salespeople may be able to sell and deliver their products without ever having to make personal calls to customers.

Telecommunications will become more reliable. When equipment breaks down at a location, transmission will be routed to avoid the bottleneck.

The price of telecommunications services will drop. Although the development and installation costs of integrated networks are staggering, revenues generated by telecommunications are high. The market is growing and competition in the telecommunications industry is fierce, factors that traditionally favour customers by leading to lower costs.

The ‘bottom-line’ measure of the worth of telecommunications and networks in the 1990s will be in their applications. Some applications like EFT (Electronic Fund Transfer), EDI (Electronic Data Interchange) and e-mail (electronic mail) will not change conceptually, but they will be used more creatively and for a wider range of purposes. For example, in 1996, for the first time, US troops abroad (in Bosnia) were communicating daily with their families at home by e-mail. The US government provided the equipment and training facilities to do this. These communication services are part of message handling, a subject discussed in detail in Chapter 17.

E-mail and many applications of telecommunications in the early 1990s were textual. Soon the stream of text will be integrated with other media such as voice and pictures, giving us multimedia applications that could be very useful in education, medical services, entertainment and many a business where communications will no longer be by letter or even e-mail but by teleconferencing and video-conferencing. These applications are discussed in Chapter 18. Multimedia will also be part of the discussion of information services and telecommuting, subjects discussed in Chapter 19, and of the discussion of the Internet, a subject examined in Chapter 20.

The Internet is perhaps the area where growth has been much larger and faster than anyone had predicted. Controlling content and privacy on the Internet, and improving global communications, not just for the transfer of data but for selling and buying products with secure international transfer of funds, will be high priorities in the late 1990s.

The most popular design for a global network is the Integrated Services Digital Network (ISDN), initiated in 1984 by the International Telecommunications Union, an organization of the United Nations comprising telephone companies around the world. However, this group has not yet agreed on the hardware and software standards that are required if computing power, information and telecommunications are to be integrated in a

Page 35: Telecommunications and Networks

single transportation system. It may take years (possibly decades, according to some detractors) before differences can be resolved.

Nevertheless, technical and market tests for integrated digital networks have been made by the state-owned telephone companies in Germany and Japan. Numerous ISDN trials conducted by American telephone companies are also under way. The success of ISDN will affect all companies with a vested interest in telecommunications. For instance, a private data network run by IBM permits the exchange of information between companies with compatible IBM machines. If this network were meshed with ISDN, the network would be able to expand its services. For computer manufacturers, the success of ISDN may accelerate sales. Companies already in telecommunications and computer companies wanting a share of the telecommunications market are likewise following ISDN projects with interest, looking for ways to attract telephone defectors to network services of their own.

Issues facing corporate management

The quality of decision-making by managers should improve with integrated data networks because more information, and more timely information, will become available on which to base decisions. However, telecommunications adds to the responsibilities of corporate management, as explained next.

Organization of information resources

The duty of corporate management is to plan for data access, cost-effective usage of computing resources, and the sharing and distribution of information within and between departments. A computer network does facilitate data collection, processing and information exchange at low cost, but it is not the only option. In fact, the array of options in the organization of computing resources is what makes the manager's role so difficult.

To illustrate, a multiuser system that uses time-sharing to link terminals, called a shared-logic system, may be preferable to the installation of a local area network. (A shared-logic system utilizes terminals connected to a centralized computer in which all processing occurs. Local area networks tie otherwise independent computers, usually microcomputers, together.) A manager must be familiar with the strengths and weaknesses of both shared-logic systems and LANs in order to evaluate their relative benefits and trade-offs. In choosing an appropriate system, the following questions should be asked (a rough score sheet for tallying the answers is sketched after the list):

1. How are computing resources used in the company? If the system is primarily for high-volume transaction applications, shared-logic technology should be favoured. If for general use, such as word processing and spreadsheets, then a LAN is appropriate. (If files do not require constant updating by many different people, perhaps a multiuser system is not necessary after all.)

2. Are concurrent requests for information from databases likely? Shared-logic systems are better able to respond to such requests. In addition, most provide file and record locking and offer transaction logging and recovery facilities.

3. Is peripheral sharing a primary requirement? Sharing is convenient and cheap with a LAN. In addition, the incremental cost of adding resources to a LAN is low, whereas expanding a shared-logic system may require a complete change of CPU.

4. Is ease of applications development important? Many users who write their own programs find development tools for personal computers easier to use than shared-logic applications development tools.

5. Is growth expected? LANs can be upgraded with ease since each added workstation brings its own CPU resources.

6. How dispersed are users? LANs are not designed for wide area access. Most serve a single building.

7. Is data security an issue? A LAN allows decentralized data under user jurisdiction. With a shared-logic system, a security officer can impose strict control over data use and storage.

8. Are users willing and able to take responsibility for systems operations? If not, a shared-logic system would be advisable.

9. Are gateways to other computer networks required? Although much work is currently being done on gateway technology, applications that must be integrated with

Page 36: Telecommunications and Networks

other large systems are better served at present by shared-logic systems.

10. Will the network contain products of different manufacturers? Many LANs enable such connectivity, whereas connectivity between products of different makes is minimal with most shared-logic systems.
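The ten questions can be reduced to a rough score sheet, as sketched below. The mapping of each question to 'LAN' or 'shared logic' follows the discussion above, but the scoring itself (one vote per applicable question, with example answers) is an illustrative assumption; a real evaluation would weight the factors according to the organization's own priorities.

# A rough score sheet for the ten questions above. Each applicable answer
# nudges the tally towards a LAN or towards a shared-logic system.

QUESTIONS = {
    "high_volume_transactions": "shared",   # Q1: mostly transaction processing
    "concurrent_db_requests":   "shared",   # Q2: heavy concurrent file access
    "peripheral_sharing":       "lan",      # Q3: sharing printers and disks
    "easy_development":         "lan",      # Q4: users write their own programs
    "growth_expected":          "lan",      # Q5: easy incremental expansion
    "widely_dispersed_users":   "shared",   # Q6: LANs usually serve one building
    "strict_central_security":  "shared",   # Q7: one security officer in control
    "users_run_operations":     "lan",      # Q8: users willing to operate systems
    "gateways_to_big_systems":  "shared",   # Q9: integration with large systems
    "mixed_vendor_equipment":   "lan",      # Q10: multivendor connectivity
}

def recommend(answers):
    """Given {question: True/False}, count which option the answers favour."""
    votes = {"lan": 0, "shared": 0}
    for question, applies in answers.items():
        if applies:
            votes[QUESTIONS[question]] += 1
    return votes

if __name__ == "__main__":
    # Hypothetical organization: peripheral sharing, growth, easy development
    # and user-run operations matter; the other factors do not apply.
    example = {q: q in {"peripheral_sharing", "growth_expected",
                        "easy_development", "users_run_operations"}
               for q in QUESTIONS}
    print(recommend(example))   # here the answers lean towards a LAN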

In general, dissatisfaction with current operations is the driving force towards the establishment of LANs. In organizations with a proliferation of personal computers, a LAN is considered when users need to share data, management wants better processing control, new systems need to be integrated, and input/output inefficiencies are a concern. In a shared-logic minicomputer environment, a LAN might prove the answer to poor response time, excessive downtime, high costs, user pressure for personal computers, and the lack of application availability when needed.

There are still other resource configurations to consider. For example, a microcomputer can be hooked up to a mini or mainframe with excess capacity and used to create data, upload data to mainframe storage or download data for microcomputer processing. Or employees with compatible hardware might pass around disks holding files to be shared.

These organizational structures are not mutually exclusive. A single firm may have one or more LANs to supplement a shared-logic system. In addition, stand-alone microcomputers may be on the desktops of some workers. In-house computers may also be linked to external computer resources. Thus the organization of computing resources can be tailored to the unique operating environment of each firm. It is management's responsibility to decide how telecommunications can best serve the company's long-term interests.

The organization of telecommunications and networks is crucial to the orderly operation and growth of most computing. Its organizational structure is examined in Chapter 9, with a popular configuration, the client-server approach, examined in Chapter 10. Whatever the organization structure, there are tasks and issues that face network managers. One of these tasks is the acquisition of the resources needed for telecommunications and networking. In today's market, the network manager has a spectrum of choices in both hardware and software and must decide how best they can be used for working as individuals or in groups. Selecting most (if not all) of the resources from one vendor is tempting, for it will eliminate problems of interfacing with vendors and the incompatibility of resources. But in the real world, computer products are put together as and when the budget allows. Many corporations have developed systems one at a time, so they often represent different generations of technology, with the problems of connectivity and compatibility not having been addressed. The result is a mix of systems with dissimilar architectures and operating systems, unable to exchange information without 'patch-work' and inefficient interfaces.

One solution for interconnectivity, at least on the hardware side, is to have industry standards for hardware manufacture such as the ISDN mentioned elsewhere. However, reaching agreement on standards is a slow and difficult process. The problem is further complicated in networks and telecommunications because, for meaningful communication in a worldwide market, telecommunications have to be global and standards have to be agreed not just nationally but internationally. This subject is examined in Chapter 11.

Security is also an issue with computer systems, especially when they use telecommunications and networks, for they are then exposed to many sources of infiltration and systems violation. Data and knowledge must now be protected from unauthorized modification, capture, destruction or disclosure. This problem is addressed in Chapter 12.

Network management is the subject of Chapter 13. The resources managed are examined in Chapter 14.

Thus far the discussion has concerned network management at the corporate level. However, corporations must communicate with other corporations and individuals within the country and need a national infrastructure. This is the subject of Chapter 15. Communicating across national borders is becoming increasingly important in our worldwide economy, and is the subject of Chapter 16.

Summary and conclusions

In the 1980s, many business organizations installed local area networks to supplement their computer systems by having interconnectivity

Page 37: Telecommunications and Networks

and the capability of sharing resources. In the 1990s, more and more businesses (and non-profit organizations, including government agencies) will participate in regional and national networks of linked LANs. The future trend is towards integrated digital networks extending nationally and internationally. The result will be faster, more reliable telecommunication services for the business community, far beyond the present-day e-mail (electronic mail), EFT (Electronic Fund Transfer), teleconferencing and access to on-line remote database services used in offices today. These applications are the subject of Chapters 17-20.

Telecommunications provides managers with more information, and more timely information, than in the past, which should improve decision-making. But telecommunications also adds to management's responsibilities in areas such as the organization of information resources and making them operational and secure. Such management of computing resources is examined in Chapters 9-16.

To enable us to discuss applications of telecommunications and the management of telecommunications and networks needed for these applications, we need to know more about the technology of telecommunications. In this chapter we took an overview of some of the basic telecommunication technologies. The details will be the subject of Chapters 3-8.

This chapter is in a sense an overview of this book, with an introduction to some of the basic technology of telecommunications and networks. In this chapter we looked at the front-end and the back-end of a telecommunications system, which are discussed in detail in Chapter 4. In between are the transmission channels such as telephone, radio, cable, and the wireless and cordless channels. These are the subject of our next chapter.

Case 2.1: Delays at the Denver Airport

In 1995, the new airport at Denver in the US opened after long delays and at a cost of $5 billion. It was designed to be a state-of-the-art structure for air transportation well into the next century. The airport had a sophisticated network that automated many subsystems at the airport

in addition to maintaining telecommunications not only within the airport but with pilots in the air, travel agents in town and airports around the world.

One subsystem was designed to deliver baggage from the plane to the airport building even before the passengers were ready to claim it. The $300 million subsystem used ATM technology (to be discussed later in this book) along with 55 computers and was designed to handle 30 000 items of luggage daily. This subsystem delayed the opening of the airport, and the contractor for the subsystem claimed that they were rushed and had needed more time to install the system to start with but were not given that time. How much longer had they wanted? Around 16 months, which happens to be about the length of time by which the opening was delayed.

One lesson that has been drawn from this sad story is that tomorrow's technology should not be installed today without adequate preparation and good risk assessment. It has also been argued that in this situation the risk was worth taking. If there were not some managers who took calculated risks in computing and telecommunications, then we would not have many of the applications that we have today.

Source: Data Communications, July 1994, p. 30, and International Herald Tribune, Feb. 28, 1996, p. 4B.

Supplement 2.1: Top telecommunications companies in 1994

Company                      Revenue (Billion US$)   Mainlines (Millions)
NTT (Japan)                  68.9                    59.8
AT&T (USA)                   43.7
Deutsche Telekom (Germany)   37.7                    39.2
France Telecom               23.3                    31.6
BT (UK)                      21.3                    27.1
Telecom Italia (Italy)       18.0                    24.5
GTE (USA)                    17.4                    17.4
Bell South (USA)             16.8                    20.2
Bell Atlantic (USA)          13.8                    19.2
MCI (USA)                    13.3

Page 38: Telecommunications and Networks

Source: International Herald Tribune, Oct. 11, 1995, p. 12.

Bibliography

Cerf, V.G. (1991). Networks. Scientific American, 265 (3), 72-81.

Derfler, F.J. Jr. (1991). PC Magazine Guide to Connectivity. Ziff-Davis Press.

Derfler, F.J. Jr. and Freed, L. (1993). How Networks Work. Ziff-Davis Press.

Dertouzos, M.L. (1991). Communications, computers and networks. Scientific American, 265 (3), 62-71.

Doll, D.R. (1992). The spirit of networking: past, present, and future. Data Communications, 21 (9), 25-28.

Financial Times, 19 July, 1989, pp. 1ff. Special issue on 'Survey of International Telecommunications'.

Flanagan, P. (1995). The ten hottest technologies in telecom: a market research perspective. Telecommunications, 29 (5), 31-41.

Interfaces, 23 (2), 2-48. Special issue on 'Telecommunications'.

International Herald Tribune, 4-11 October, 1995. Special series on 'Telecommunications in Europe'.

Soon, D.M. (1994). Remote access: major developments in 1995. Telecommunications, 28 (1), 57-58.

Page 39: Telecommunications and Networks

3

TRANSMISSION TECHNOLOGIES

The new mobile workforce doesn't so much need computer devices that communicate as they need communication devices that compute.

Samuel E. Bleeker

Introduction

In this chapter we will look at the transmission media needed for communication. The most common (and oldest) is copper wire and its variations. Such wiring is best for short distances and small capacities. However, for longer distances and for a variety of traffic such as voice and video, we need glass fibre optic cables. Even glass fibre has limitations of distance, and then we need microwave or satellite capability. Each of these media will be discussed in turn for their advantages, limitations and applications.

A more recent transmission medium is cordless and wire-less person-to-person communications. It is aptly described by Arnbak as a '(r)evolution'. We will examine the evolution of this technology and its revolutionary implications for the way we may communicate in the future.

Wiring

The oldest and still most commonly used transmission medium is copper wire. It comes in one of many forms: solid or stranded; unshielded, shielded or coaxial. The shielding is required to protect the conductor from outside electrical signals and reduces the radiation of interior signals. The conductor carrying the electrical pulse that represents a message can itself be solid, stranded or twisted; the stranding and/or twisting of a pair of wires provides a shielding that reduces the absorption and radiation of electrical energy. Shielded twisted wires are relatively expensive and difficult to work with. They are also difficult to install.

Whether single or stranded, a shield can be woven of copper braid or metallic foil, which

has the same axis as the central conductor and hence is referred to as a coaxial cable.

It is easy to install connectors to a coaxial cable, but the connectors must be good since a bad connection can adversely affect the entire transmission system. Such connectors are often made of tin or silver; the latter is more expensive but more reliable.

The main problems with copper wiring are threefold: it has a low capacity, it is slow and it is adequate for only a short distance. As distances increase, as larger demands are made on capacity (by volume as well as by the nature of the traffic, with video, for example, demanding greater capacity) and as the need for greater speed becomes relevant, copper cables become inadequate. Another cable, made of glass fibre, is more appropriate. A glass fibre is thinner than a human hair, stronger than steel, and 80 times lighter than a copper wire of the same transmission capacity. The capacity of a fibre is one billion times the capacity (in bits/second) of a copper wire (for the same cross-section).

Glass fibre is made of silicon, a substance as common as sand. Fibre transmits pulses of information in the form of laser-emitted light waves. It is not only fast in transporting data but is also effective over greater distances than copper, and fibre is more reliable. The distance for fibre links is more than 11 times the maximum distance for coaxial cable and 15 times the distance for some twisted wire systems. Even for short distances, fibre is used because it can carry a mix of multimedia traffic that includes data, text, images and voice. Thus even for short distances, as for internal wiring in an aircraft, fibre is used to carry voice and music.

Fibre is also reliable because it does not pick up extraneous electrical impulses and signals.

Page 40: Telecommunications and Networks

Table 3.1 Comparison of wiring approaches

                              Unshielded        Shielded          Coaxial              Fibre
                              twisted wire      twisted wire      cable                optics
Speed and throughput          Fast enough       Very fast         Very fast            Fastest
Average cost/node             Least expensive   Expensive         Inexpensive          Most expensive
Media size                    Small             Large             Medium               Tiny
Maximum length                Short             Short             Medium               Long
Difficulty in installation    Difficult         Difficult         Moderate difficulty  Relatively easy
Protection from electrical
  interference                No protection     Some protection   Good protection      Very good protection

These signals are picked up by copper, which becomes an antenna and absorbs energy from radio transmitters, power lines and electrical devices. Copper also develops voltage potentials relative to the electrical ground, resulting in interference. In contrast, glass fibre cables are immune to electrical fields, so they carry clean signals that never spark or arc, which adds to the reliability of fibre as a conductor.

The light waves in glass fibre can be precisely controlled and are less vulnerable to unauthorized access than the electrical pulses in a copper cable. This adds to the security of the system and is very important when confidential messages are being transported.

Glass fibre is much lighter and smaller than copper wire but it is much more expensive. The average cost in the US of wiring a home with fibre is roughly $1500, compared with $1000 for copper wire. Note that the increment is only $500 per home, but the total cost of installing copper and then replacing it with fibre is $2500. Two observations are worth making. First, replacing copper wire represents a loss of investment to the carrier owning the copper wire, who can therefore be expected to oppose fibre in order to protect that investment. Second, laying fibre in new homes represents a saving of $1000 per home over replacing copper wire (including the sunk cost). This explains why it is cheaper (in total cost terms) to install an advanced technology starting from scratch (without an infrastructure) than to replace an old infrastructure, and it explains the advantage that developing countries without any infrastructure have. The same advantage can also apply to developed countries like France, which had an outdated telephone system in Paris. Fibre and advanced switches were installed, and a free computer terminal was given to every household with a telephone instead of a telephone directory (which at the time of the initial planning cost just as much as a terminal). As a consequence, Paris today has one of the most advanced telephone systems and an infrastructure basic to a wired city. More on infrastructure and the wired city later; we must get back to transmission technologies.
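The arithmetic behind this argument is simple enough to set out explicitly. The sketch below uses the rough per-home figures quoted above ($1000 for copper, $1500 for fibre) and compares three build strategies; it is only an illustration of the total-cost reasoning, not a cost model from the text.

# The wiring-cost arithmetic, using the text's rough US per-home figures.

COPPER_PER_HOME = 1000   # US$, figure quoted in the text
FIBRE_PER_HOME = 1500    # US$, figure quoted in the text

def cost_paths(homes):
    """Total cost of three build strategies for a given number of homes."""
    return {
        "copper only": homes * COPPER_PER_HOME,
        "fibre from scratch": homes * FIBRE_PER_HOME,
        "copper first, then replace with fibre": homes * (COPPER_PER_HOME + FIBRE_PER_HOME),
    }

if __name__ == "__main__":
    for strategy, total in cost_paths(homes=1).items():
        print(f"{strategy}: ${total}")
    # A city or country with no existing plant can go straight to fibre and
    # avoid the sunk copper cost entirely, which is the point made about
    # developing countries and about the rewiring of Paris.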

A comparison of wired technologies for transmission is summarized in Table 3.1.

From the above discussion, one can conclude that the different media of transmission do have their distinct advantages and limitations. Their use will depend on the carriers responsible for the transmission and will vary between countries depending on their applications, be they telephone, cable TV or PCs (personal computers). The density of these applications varies between countries. Statistics for a sample of geographic areas are shown in Table 3.2.

A recent application of telecommunications is the transmission of video. The characteristics of video compared with those of telephone, cable and PCs are shown in Table 3.3.

Because the different modes of wiring have different applications, all or most of these forms of transmission coexist in many a telecommunications environment. One such environment is shown in Figure 3.1, where the transmissions for long distances requiring large capacities and carrying

Table 3.2 Services in selected parts of the world

                                   US      Japan   Europe
Telephone lines/1000 people        48.9    42.2    42.2
Cellular phones/100 people         2.6     1.2     1.2
TV households with cable (%)       55.4    13.3    14.4
Personal computers/100 people      28.1    7.8     9.6

Source: Stix (1993: p. 104)

Page 41: Telecommunications and Networks

Table 3.3 Comparison of services in the US

                                 Telephone      Cable          PC          Video
Cost as % income per capita
  (drop over the years)          14 to 2        4 to 1         11 to 2     3.5 to 1
Usage                            Communication  Entertainment  Computing   Entertainment
Transmission media               Copper wire    Broadcasting   Fibre       Co-axial cable
Period for adoption in home      1876-1950      1950-1991      1975-1995   1971-1988

Figure 3.1 Use of different transmission media (fibre carries the long-distance, large-capacity links from the router to offices, schools and businesses; coaxial cable, via a signal converter, carries local cable TV; coaxial or copper cable carries telephone and PC connections in homes)

a mix of traffic will use fibre, say for the connections to offices, schools and businesses. For local connections for local TV, coaxial cable is used; and for connections to PCs and telephones in homes, we use coaxial cable or copper cable.

Microwave

A problem with wiring is that it is limited in the distances over which it can transmit effectively. Beyond that distance, microwave transmission is necessary. Earthbound or terrestrial microwave systems use high-frequency electromagnetic waves to transmit through air or space to microwave receiving stations. These stations, however, must be within 'line of sight' of each other. They are placed close enough to each other, roughly 30 miles apart (on land), and the transmission towers are tall enough so that the curve of the Earth, hills or other tall obstructions do not interrupt this line of sight. Weather conditions like humidity and dust can also affect transmission. Thus the microwave alternative is not ideal, though it was a great innovation when first invented. It is now a reasonably economical option only for distances up to around 20 miles, offering signalling speeds of 1.54 megabits/second.
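The 'line of sight' constraint can be made concrete with a little geometry. The sketch below estimates the longest hop between two towers from the distance each can see to the horizon; the simple formula ignores atmospheric refraction (which stretches the range somewhat in practice) and the tower heights are illustrative assumptions, not figures from the text.

# Geometric line-of-sight range between two microwave towers over smooth
# ground: each tower sees sqrt(2 * R * h) to the horizon. Refraction and
# terrain are ignored; tower heights are example values.

import math

EARTH_RADIUS_MILES = 3959

def horizon_miles(tower_height_feet):
    """Distance to the geometric horizon from a tower of the given height."""
    height_miles = tower_height_feet / 5280
    return math.sqrt(2 * EARTH_RADIUS_MILES * height_miles)

def max_hop_miles(height_a_feet, height_b_feet):
    """Longest line-of-sight path between two towers over smooth ground."""
    return horizon_miles(height_a_feet) + horizon_miles(height_b_feet)

if __name__ == "__main__":
    for height in (100, 200, 400):
        print(f"two {height} ft towers: about {max_hop_miles(height, height):.0f} miles")
    # Towers in the 100-200 ft range give spacings in the region of the
    # 30 miles mentioned above.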

Satellite

A satellite in space can overcome the 'line of sight' problem, besides being less sensitive to the weather. When a satellite is placed in geosynchronous orbit in space, its speed allows it to match exactly the speed of the Earth's rotation, so that it is stationary in relation to any point on

Page 42: Telecommunications and Networks

Earth. Microwaves are then sent to the satellite some 22 500 miles away and then sent back to Earth stations anywhere on Earth. Each satellite can transmit and receive signals to slightly less than half the Earth's surface; therefore at least three satellites are required to effectively cover all the Earth.

The return journey by satellite could take up to half a second, which may not be desirable for real-time applications but is good enough for many applications, especially for day-to-day private and business data communications and even voice communications by telephone. One configuration of satellite transmission using Earth stations between an office and a remote factory is shown in Figure 3.2.
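The half-second figure follows directly from the distances involved, as the short calculation below shows. It counts only propagation time at the speed of light, using the 22 500 mile altitude quoted above, and reads the 'return journey' as a question-and-answer exchange of four Earth-satellite hops.

# Propagation delay to and from a geosynchronous satellite.

SATELLITE_ALTITUDE_MILES = 22_500
SPEED_OF_LIGHT_MILES_PER_S = 186_282

def propagation_delay(hops):
    """Seconds in transit for the given number of Earth-satellite hops."""
    return hops * SATELLITE_ALTITUDE_MILES / SPEED_OF_LIGHT_MILES_PER_S

if __name__ == "__main__":
    print(f"one way (up and down):       {propagation_delay(2):.2f} s")
    print(f"round trip with a reply:     {propagation_delay(4):.2f} s")
    # About a quarter of a second one way, roughly half a second for a reply.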

Wire-less/cordless systems

Whether by microwave or satellite, or by other means of transmission, there is always a telephone wire at some end. Eliminating this wire gives us the cordless system. It is considered a new technology because it is recent in time, but it is an old technology since it is really still basically microwave broadcasting. Even the name 'wireless' is as old as Marconi's Wireless Telegraph system of 1900. To avoid confusion, the new system is sometimes referred to as wire-less or cordless.

The great advantage of the wire-less system is that it is no longer fixed and restricted to a physical space. You can now 'roam' around (roamers are subscribers in locations remote from their home-service area) anywhere you wish and still communicate with a phone that has a transmitter. Of course, the longer the distance, the more the technology employed differs. Thus, within a building or home, laptop and palmtop computers may be networked; they are connected through a wall-mounted transceiver (which joins thin and thick cables such as fibre and coaxial) to a wired LAN and establish radio contact with other portable networked devices within the broadcast range. For wider ranges, small antennas on the back of a PC or within PCs communicate with radio towers in the surrounding area and are referred to as cellular phones. For a wider range of distances,

Figure 3.2 One configuration of satellite communication (an office and a remote factory are each linked through a local station to an Earth station; the two Earth stations communicate via the satellite over channels of 50-64 million bits/second per channel)

Page 43: Telecommunications and Networks

even around the world, satellites are used to pick up low-powered signals from mobile networked devices.

Besides distance, the cordless system has a wide spectrum of other combinations: local or long distance; portable and personal (PCS); digital and analogue; and two-way or one-way. Some have choices of their own. Thus the digital computer could be a palm-top, a lap-top or a PDA (Personal Digital Assistant). The one-way (unidirectional) may be either radio or a message service like paging or the downloading of data from either a local or a wide area network. The two-way (bidirectional) allows for a dialogue, and its manifestations are the cordless telephone, the portable computer, the wireless LAN, the cellular radio PCS (Personal Communications System), and CDPD (Cellular Digital Packet Data), which transmits 'packets' of data through sparsely used radio channels or during gaps in conversation. All these types are shown in a taxonomy of cordless systems in Figure 3.3. This classification is not mutually exclusive. We shall not discuss each in detail; instead we shall examine three popular systems: cellular radio, packet radio and the PCS.

Cellular radio

A cellular radio transmits radio signals by broadcast to a base-station and on to a cellular switch, which connects the wire-less network with the fixed network, the PSTN (Public Switched Telephone Network). The PSTN is connected by fibre or coaxial cable to devices like telephones or other cellular phones, or to modems in host computers, which may be minis, mainframes or other PCs. The base-station can serve tens of wire-less terminals, whilst the cellular switch may serve up to 100 base-stations. One configuration of the cellular phone is shown in Figure 3.4.

Cellular technology had its birth in the 1970s in the labs of Bell in the US. In 1993, a European standard, GSM (Groupe Special Mobile), was adopted by 32 countries. There are other standards in the US and Japan, as well as the British standard, CT-2, the Cordless Telephone Version 2.

With developments leading to the second generation cordless technology, we see a change from stand-alone consumer items to elements of a geographically dispersed network. Network control remains distributed, with wireless terminals competing with little or no central coordination for available channels . . . The second generation networks, all using digital speech transmission, will employ a variety of access technologies including narrowband time division with eight channels per carrier (GSM), narrowband time division with three channels per carrier (IS-54), frequency division (CT-2), and time division with twelve channels per carrier (DECT). (Goodman, 1991: pp. 33-4)

Packet radio

A packet radio is an approach that uses switching and a PAD (Packet Assembler and Disassembler), where the message is assembled

Figure 3.3 Taxonomy of wire-less/cordless transmission (the taxonomy branches by distance, local (cellular) versus long distance (satellite); by network, private networks versus public switching and packet switching; by signal, digital versus analogue; by device, palm-top, lap-top or PDA; and by direction, two-way portable services such as cellular radio (PCS), portable computers and cordless telephones, versus one-way services such as packet radio data transfer and messaging by paging or down-loading)

Page 44: Telecommunications and Networks

Figure 3.4 Cellular phone connection to corporate computer (a mobile computer with an internal modem and a cellular data interface/adaptor reaches a cellular switch over an airwave link; the switch connects by fibre to the PSTN, which links by fibre or coaxial cable and copper wire to a host modem and the corporate computer)

(or disassembled) into packets, which are self-contained sets of information. The packets are then relayed by the PAD through a satellite to the possible destination by broadcasting to different base-stations. The base-stations not being addressed ignore the broadcast, while the base-station that is being addressed accepts the message and passes it on to the ultimate destination, which may be a device or a host computer. Such a configuration is shown in Figure 3.5.

PCS (Personal Communications Systems)

A PCS is a LEOS, a Low Orbit Satellite System. It is based on a digital architecture. A defining technical characteristic of PCS is its

high capacity and spectral efficiency. Assigned spectrum is divided into discrete channels, which are utilized by grids of low-power base-stations with relatively small cell contours.

Because PCS cell contours are relatively small, PCS handsets can operate at low power and still be small, light and inexpensive. PCS also will be useful for private in-building or campus-based wireless PBX systems because of the potential to assign frequencies to relatively discrete areas . . . Data ports can be inexpensively built into PCS handsets to allow direct transmission by bypassing the handset's voice coder, thus permitting PCS handsets to act as wireless modems for portable computers and facsimile machines. (Wimmer and Jones, 1993: p. 22)

In the mid-1990s, the PCS was still considered experimental in the US, though it was well advanced in Europe, especially the UK. In the US, there was much debate and argument about the assignment of the frequency spectrum, which hampered the development of the PCS, but the infrastructure for a PCS was well planned. It was projected that

Page 45: Telecommunications and Networks

Figure 3.5 Packet radio satellite broadcast (a message-switching PAD relays packets by satellite broadcast; only the target base-stations accept the packets)

the cellular industry will spend $20 billion on PCS infrastructure alone. Because of the high frequencies, 2 gigahertz versus 800 megahertz, PCS requires five times more towers to provide the same service as existing cellular phones . . . Once the new spectrum becomes operational, PCS will deliver a technological advance that will draw multiple converts. The lure: lower mobile phone bills. PCS is not a separate or competing network with cellular, but a way for large cellular operators to expand their networks: new spectrum to play with and insurance that they will have bandwidth to expand. (Flanagan, 1995: p. 32)

Comparison of transmission systems

A comparison of the two cordless systems (packet radio and the cellular phone) with two other popular transmission systems (microwave and the mobile satellite system) is shown in Table 3.4.

Table 3.4 Comparison of transmission approaches

                     Applications               Advantages                  Disadvantages
Microwave            'Long' distance            Easy installation           Poor penetration (e.g. of
                     transmission               Wide band                   walls and floors)
                                                Good compatibility          Poor mobility

Mobile satellite     Mobile and long            Saturation coverage         Expensive
                     distance transmission      Reliable                    Poor data rates
                                                                            Limited capability

Cellular telephone   Voice/data supplier        Mobility                    Expensive
                     Circuit switching          Matured technology          Poor penetration
                     Roaming modem              Existing infrastructure     Not universally available

Packet radio         Packet switching           Mobility                    No voice support
                     Roaming location           User pays only for          Coverage limited to large
                                                packets transmitted         metropolitan areas

Page 46: Telecommunications and Networks

These four systems may survive the shakedown of the industry as it matures. Meanwhile, and even thereafter, these technologies may well coexist, for they all have distinct functions to perform.

There are a number of applications of wire-less computing that include POS (Point-Of-Sale), the dispatching of repairmen or taxis, the delivery of services varying from pizzas to medical help, and so on. However, to some extent, cordless and wire-less transmission is in fashion and even a status symbol. Two things are needed to bring it all together into a useful and productive technology. As identified by the analyst Jay Batson, they are:

Number one is a critical mass in a specific area so that the entire metropolitan area is covered. Number two, there have to be PC applications that know how to better use wireless data capabilities. There hasn't been a kind of consciousness raised all the way up to the software level that people use on PCs, or pagers. The secret is for a compelling market use to evolve. (Flanagan, 1995: p. 38)

Wire-less technology does increase productivity by making productive use of the time we idle away, whether in a waiting room, in a car (as a passenger, of course), or even while walking. It enables us to perform a task when we think of it, or to make a transaction when we need to, rather than having to wait until we get to our office. It can be a tool for professionals like the salesperson or the medical practitioner. And it can be life-saving for a rural resident or in an emergency such as a car breakdown or a medical emergency while 'roaming'.

Despite the many applications of cordless and wire-less devices, they may never replace transmission from a fixed site such as a work-site, whether at home or at the office. The workplace is ingrained in many cultures and their work ethic, partly because it brings all our work tools together and partly because it creates an atmosphere conducive to work. Transmission and telecommunications capability is fast becoming a tool for the office and part of our office culture. Soon it may be difficult to distinguish between computers with telecommunications capability and telecommunications devices with computing capability. To complement office computing, there may well be handsets that deliver not just data but also voice and even video. Mobility and portability may create a new class of applications that combine

computing and personal electronics. The symbiosis of computing and communications has applications and implications that we have yet to discover and appreciate.

Summary and conclusions

The main media of transmission are the wires that string our telephone poles; the cables that connect long distances, including below the oceans; the microwaves that connect longer distances; the satellites that enable much longer distances to be traversed quickly and efficiently; and the cordless phone. These transmission technologies are shown in Figure 3.6. Each of the main transmission technologies has variations.

Figure 3.6 The main transmission technologies (telephone or telegraph line, cable, microwave transmission, satellite, and cordless/wire-less)

Page 47: Telecommunications and Networks

A taxonomy of the most important variants is shown in Figure 3.7, a different view of Figure 3.3. It is the permutation and combination of these variants that must be matched to any one telecommunications environment. Selecting the right permutation and combination is the function and responsibility of the manager, sometimes a technical person but increasingly the corporate manager. To make such decisions it is necessary to understand some of the technical

aspects of these technologies, which is the purpose of this chapter.

Looking to the future, it is quite possible that we may see personal cordless communicators which will enable us to communicate with each other in an office building (no matter how tall or large it may be) and with people outside the building (wherever they may be in the world). One such scenario is shown in Figure 3.8.

Figure 3.7 A taxonomy of transmission media (wired, site-based media comprise copper, in unshielded twisted, shielded twisted and coaxial forms, and fibre; broadcast-based media comprise microwave and satellite, with broadcast services split into one-way message switching and packet radio data transfer, and two-way cellular radio (PCS), cordless telephone and portable computer (PDA))

Figure 3.8 A view of transmission in the future (broadcast links connect a 'roamer', a person anywhere in the world with an antenna)

Page 48: Telecommunications and Networks

Figure 3.9 Varying penetration by transmission media (cloth limits radio at 800-900 MHz and infrared below 10 000 MHz; concrete limits microwave at or above 10 000 MHz; reinforced concrete and steel walls or floors limit spread spectrum, which uses varying frequencies)

The constraint on current cordless communication seems to be the limit of penetration through opaque objects like walls and floors in a building. Cloth limits the penetration of radio and infrared; concrete, but not sheetrock, limits microwaves; and reinforced concrete and steel (walls or floors in a building) limit spread spectrum, which is the use of multiple frequencies to achieve reliable communication with relatively weak signals. These penetration limits are shown in Figure 3.9. The current restrictions are not a problem for most of us and the current technology seems quite adequate. What is needed is greater access to such equipment and its integration with other applications.

In this chapter we have examined transmission technologies. But other related and enabling technologies are needed for telecommunications. One of these we have mentioned frequently: switches. In the next chapter we will examine the switching and enabling technologies of telecommunications.

Case 3.1: The Iridium project

The Iridium project is a global satellite communications system designed by Motorola (US) that allows customers to call and be called at any time and from any place using hand-held telephones, directly through a constellation of 66 LEO (Low Earth Orbit) satellites in six orbital planes at an altitude of 780 kilometres. It has two bands: the L-Band, from the satellite to Earth, and the Ka-Band, an intersatellite link with gateways and feeder link connections.

The Iridium satellites are designed for Low Earth Orbit (LEO) operation using TDMA (Time-Division Multiple Access) technology. The Iridium project is expected to be operational in 1998. It was financed in 1993 by a first round of equity offering of $800 million, of which Motorola purchased a minority interest. The total cost is expected to be around $3.4 billion. Other owners in this private enterprise venture are: Bell Canada Enterprises; Krunichev Enterprise, the Russian manufacturer of the Proton rocket; China Great Wall, the Chinese manufacturer of the Long March rocket; Nippon Iridium, a consortium of 18 Japanese companies; STET, the Italian cellular and radio carrier; Mawarid Group, a Saudi investment group; Muiridi Investments of Venezuela; United Communications Industry Co., a Thai cellular and paging operator; Lockheed, a US launch provider; Raytheon, a US telecom equipment manufacturer; and Pacific Electric and Cable, Inc., a Taiwan equipment manufacturer.

Case 3.2: CT-2 and PCs in the UK

The UK has been a world leader in the adoption of CT-2 (Cordless Telephone Version 2) and the allocation of spectrum to PCS (Personal Communications Systems).


The initial decision of assigning market share to individual CT-2 licensees was too small to provide wide coverage with high quality and low cost . . . United Kingdom licensees implemented a telepoint service, in which subscribers had to look for signs to determine where they could place a call. Subscribers typically find this inconvenient and demand more ubiquitous coverage . . . The advent of a new telecommunications service in the United Kingdom encouraged greater price competition and the creation of valued new services by existing market participants. (Wimmer and Jones, 1992: p. 22)

The British Department of Trade and Industry made a study of the spectrum from 470 MHz to 3.4 GHz, and found that

personal and mobile communications will increasingly displace fixed radio relay services in the 1–3 GHz region as the latter move to less-congested, higher frequencies, or are replaced by alternate means of communication. (Wimmer and Jones, 1992: p. 24)

Source: Wimmer and Jones (1992: pp. 22–27).

Case 3.3: Transmission at a trillion bits per second

Through the first four decades of computing, systems were input/output bound because the CPU computed much faster than data could be fed in as input and results could come out as output. With telecommunication the bottleneck remained, because transmission speeds were also much slower than CPU speeds. The bottleneck was likely to get worse, since CPU speeds were still increasing. And so the industry set an informal goal of increasing transmission speeds through an optical fibre to a rate of a trillion bits per second, a breakthrough that seemed feasible around the end of this century.

In early 1996, almost five years ahead of that deadline, three research teams announced at a conference on Optical Fibre Communications that they had independently achieved the Holy Grail of high-speed transmission: the terabit threshold of a trillion bits per second. This was the equivalent of transmitting 'the contents of 30 years' worth of daily newspapers in a single second, or conveying 12 million telephone conversations simultaneously'. In relative terms, the speeds achieved are some 400 times faster than the commercial systems in use today, which carry 2.4 billion bps, or about 2.5 gigabits a second. Trillion-bps capacities are needed to carry the video and multimedia signals for applications such as films and video-on-demand. The time to commercialize these high speeds is estimated at five years.

The high-speed transmissions were achieved by Fujitsu Ltd, the Nippon Telegraph and Telephone Company and a team from the AT&T Research Labs. The three teams used different approaches, but all had one thing in common: 'instead of sending one stream of light through the fibre, as is done today, they sent multiple streams of light, each at a slightly different wavelength, thereby multiplying the amount of information that can be transmitted.'

One potential drawback of all three approaches is that transmission will most likely be limited to relatively short distances of around 50 miles. This is because light signals degrade with distance and have to be amplified and reconstituted en route, which is very expensive for multiple light streams.

Source: International Herald Tribune, 2/3 March 1996, pp. 1 and 4.

Bibliography

Amedesi, P. (1995). Satellite communications in the year 2000: the view from Europe. Telecommunications, 29(4), 47–48.

Arnbak, J.C. (1993). The European (r)evolution of wireless digital networks. IEEE Communications, 31(9), 74–82.

Cox, D.C. (1995). Wireless communications: what is it? IEEE Personal Communications, 2(2), 20–35.

Da Silva, J.S. and Fernandes, B.E. (1995). The European research program for advanced mobile systems. IEEE Personal Communications, 2(1), 12–17.

Derfler, F.J. (1997). Sky links. PC Magazine, 16(1), NE1–NE12.

Dunphy, P. (1992). Myths and facts about optical fibre technology. Telecommunications, 26(3), 33–40.

Flanagan, P. (1996). Personal communications services: the long road ahead. Telecommunications, 30(2), 23–28.


Goodman, D.J. (1991). Trends in cellular and cordless communications. IEEE Communications Magazine, 29(6), 31–40.

Gunn, A. (1993). Connecting over the airways. PC Magazine, 12(14), 359–382.

Imielinski, T. and Badrinath, B.R. (1994). Wireless computing. Communications of the ACM, 37(10), 18–28.

Johnson, J.T. (1994). Wireless data: welcome to the enterprise. Data Communications, 23(3), 42–58.

Jutlia, J.M. (1996). Wireless laser networking. Telecommunications, 30(2), 37–39.

Laurent, B. (1992). An intersatellite laser communications system. Telecommunications, 26(7), 90–92.

Lauriston, R. (1992). Wireless LANs. PC World, 10(9), 225–244.

Saunders, S. (1993). The cabling cost curve turns toward fibre. Data Communications, 22(11), 55–59.

Steinke, S. (1997). Rehab for copper wire. LAN, 12(2), 57–62.

Stix, G. (1993). Domesticating cyberspace. Scientific American, 269(2), 101–110.

Wimmer, K.A. and Jones, B. (1992). Global development of PCS. IEEE Communications Magazine, 29(6), 22–27.


4

SWITCHING AND RELATED TECHNOLOGIES

If every instrument could accomplish its own work, obeying or anticipating the will of others . . . if the shuttle could weave, and the pick touch the lyre, without a hand to guide them, chief workmen would not need servants, nor masters slaves.

Aristotle

Introduction

In the previous chapter we examined various media for transmission. But having a transmission medium (and a computer) is not sufficient, any more than merely having a car and a road will get you to where you want to go. Much more information and enabling technology is needed. For transporting information, you need the address of the destination and detailed instructions on how to get from the origin to the destination. If you are travelling a distance that involves many changes and switches that you must make on, say, a motorway (or autobahn or freeway), you must know about all the ramps to get on and off, and know how to interpret the many signs and instructions that you will meet. And, of course, you must follow the rules of the road. Can you imagine what would happen if an American driving on the right and an Englishman driving on the left were to share the same lane of a road and go in opposite directions? There would certainly be a loss of life. (Many Swedes and Finns lost their lives before they agreed to drive on the same side of the road.) In networks and telecommunications there will be no loss of life, but probably a loss, or at least a distortion, of the messages being transmitted. The messages must of course have an address. This address is interpreted not by humans, as in road transportation, but by a computer, and so it must be in machine-readable form and be complete as well as specific, without any danger of misunderstanding or ambiguity. The address is interpreted by a router, which includes software that determines the best path (or the only feasible path) between origin and destination, and may involve many switches that direct traffic between networks and LANs. Also, there are rules of telecommunication, called protocols, that must be followed meticulously.

There is one more important concept involved. Let us go back to the car transport analogy. Suppose that you had to pay a toll according to volume, and you had a compact car's worth of passengers and luggage. Would you take a truck or even a large car and pay for the empty space? Of course not. You would pack your car tightly and get rid of all the redundant space of a truck or large car. The same choice occurs in telecommunications. You pay by the volume (kilobytes) of message transmitted, and so it behoves you to pack your messages tightly. This is known as compression, and it is important not only for reducing costs but also for reducing transmission time. Compression is often done by a router, which along with the bridge and the gateway is a facility that connects networks.

It is these components and elements of a telecommunications system which we will examine in this chapter. The router, the bridge and the gateway are the first topics to be examined, followed by addressing and by protocols such as TCP/IP, the protocol commonly used by the ubiquitous Internet (over 25 million users worldwide). ATM, which is used not only for data and text but also for images and video, will be mentioned but deferred to a later chapter. We conclude with a short discussion of what it means for these enabling technologies to be smart and intelligent. This chapter is somewhat of an overview, introducing many basic concepts that will be examined later in the context of their interrelationships.

Router, bridge, repeater and gateway

A router is primarily an interconnection to a network. It is a portal device, where a portal is the meeting point between local and long distance services. (The equivalent in business is a mail-room or a shipping dock.)

If there is a message that is initiated at the router, and there are many possible paths for that message, then the router will find a 'best' path for the message. What is best will be a function of variables like cost, transit delay, undetected errors, mishaps in transmission, size and priority assignment.

Consider the simple case shown in Figure 4.1. The path from origin A to destination B has two intermediary nodes, D and E, on route 1; is direct on route 2; and has one intermediary node, C, on route 3. Is route 2 the best because it is the most direct? Maybe not, because it may be slow, costly, error prone, etc. Besides, it may be busy and congested and hence not a feasible path anyway. So all possible routes have to be examined and the 'best' for the assigned objective function chosen, given constraints such as cost and availability. There are many routing algorithms. Tanenbaum discusses eight, and that was long before the ubiquitous Internet (Tanenbaum, 1989: pp. 197–203).
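As a rough sketch (not the authors' algorithm, and with link costs, delays and weights that are purely invented), the following Python fragment shows how a router might score each possible path with an objective function and pick a 'best' one:

    import heapq

    # Hypothetical network: each link carries an invented (cost, delay) pair.
    LINKS = {
        'A': {'D': (2, 5), 'B': (9, 20), 'C': (3, 3)},
        'D': {'E': (2, 4)},
        'E': {'B': (2, 4)},
        'C': {'B': (4, 6)},
        'B': {},
    }

    def objective(cost, delay, w_cost=1.0, w_delay=0.5):
        # One possible objective function: a weighted sum of cost and delay.
        return w_cost * cost + w_delay * delay

    def best_route(origin, destination):
        # Dijkstra-style search that minimizes the combined objective.
        queue = [(0.0, origin, [origin])]
        seen = set()
        while queue:
            score, node, path = heapq.heappop(queue)
            if node == destination:
                return score, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, (cost, delay) in LINKS[node].items():
                if nxt not in seen:
                    heapq.heappush(queue, (score + objective(cost, delay), nxt, path + [nxt]))
        return None

    print(best_route('A', 'B'))   # (11.5, ['A', 'C', 'B']): the direct link loses on delay

Changing the weights in the objective function (or adding terms for error rates or congestion) changes which route comes out 'best', which is exactly the trade-off described above.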

The message may be composed of frames, which are logical entities; or, if the message is too large, it may be broken up into packets. In that event each packet can be sent on a different route and reassembled at the other end. This means that each packet must be labelled to identify its identity and destination. Sometimes a packet has to be fragmented, so that a packet sized for one network is not too large for another. The router should also be able to assign priority to one message over another (perhaps a message to a hospital for a dying patient!).

One important function of the router is to read (and follow) the instructions in the header (at the front of the packet or frame) that concern the router, to provide addressing for the next destination, to strip the data that is no longer necessary, and then to repackage and retransmit the message.

For performing its many functions a router has a high overhead. For a lower overhead, one may consider the simplest portal device: the repeater, a small box that you can hold in your hand and that connects two segments of a network cable. The repeater retimes and regenerates digital signals before forwarding them. However, even a string of repeaters cannot extend a LAN beyond a few thousand yards. You then need another network portal connection, called the bridge.

Figure 4.1 Routing: a message from origin A to destination B may travel via intermediary nodes D and E (route 1), directly (route 2), or via intermediary node C (route 3).

A bridge has a higher throughput than a repeater, and even higher 'thruput' (no, that is not misspelled; it is merely a shorthand used in the trade) than a router, but it has less efficient routing and flow control. The bridge exercises great discrimination over passing traffic, refusing passage to the other side of the bridge if the destination is not on that side. This reduces non-essential traffic. However, a bridge may not guarantee the delivery of frames, nor report their mishaps. For example, if frames arrive at the bridge faster than its processing rate, then congestion of the links and of the bridge may occur, with frames being lost unintentionally. Also, if a frame exists beyond the maximum transit time allowed, the frame may have to be discarded. However, like the router, the bridge does check the frame size allowed by the network to be used, and it allows for priority. Unlike a router, a bridge is protocol independent.

But there are multiple protocol routers, called MPRs: an MPR is 'simply a router that supports more than one protocol. With the appearance of advanced hardware technology like high speed microprocessors, it is now possible to provide software support for routing a number of high-level protocols simultaneously and within a single system.' The MPR can be combined with a bridge to give a bridge router system, which 'is a device that simultaneously combines the functions of bridges and routers to give the "best of both". Bridge/routers work by routing the routable protocols and bridging the non-routable ones.' (Grimshaw, 1991: p. 32)

Like the router (and the repeater), the bridge is concerned only with networks that share the same protocols. If the protocols are different, then we need a gateway.

A gateway is the most sophisticated and complex of the facilities that connect networks, and it allows for different protocols at one or all of the layers of the systems architecture (Chapter 8 is devoted to systems architecture).

Because of the different protocols, a gateway must perform three distinct functions:

1. It must translate addresses for protocols using different address structures (addressing is discussed later in this chapter).

2. It must convert the format of the message, because messages may differ in character codes, structure of format and maximum message size.

3. It must convert the protocol, replacing the control information from one network with the control information required to perform the equivalent functions in the other network. This can be a complex problem if the protocols at each level of the systems architecture have to be converted.

Whatever network linking facility is used, it is desirable that the message be as short as possible, and it may need compression. But what is compression?

Compression

We discussed compression implicitly when we discussed the stripping of redundant data by the router. Compression by stripping is one approach to increasing throughput through a transmission medium. The other approach would be to increase the capacity of the transmission channel, which would involve increasing the bandwidth. This is important in wide-area networks and will be discussed later, in Chapter 6. But there are limitations to increasing bandwidth, and so instead we reduce the traffic flow. We do this in road transportation in a city where we cannot broaden the road, and so we couple two parallel roads and make each go one way only. In telecommunications we do not change the direction of flow; instead we reduce the redundancies in the message. This is what compression is about.

Compression is not new to data processing. In transactional processing we do not always process an entire file with all its fixed information, like the name of a corporation, the date of its foundation, etc. Instead, we process only the variable data, in addition to an identification which appears as a header to the set of records to be processed.

Compression is also used in DSSs (Decision Support Systems), where we have a matrix to process. Sometimes this matrix has an ordered set of cells that are empty. In the case of Figure 4.2, which shows the cells of an m × n matrix, the cells with values are those in the left-hand triangle, while the right-hand triangle is empty. The compression solution for this matrix is to eliminate the empty portion and thereby greatly reduce the message to be transmitted.

Figure 4.2 Matrix for compression: half the matrix is empty and is therefore a candidate for compression.


For a matrix in, say, a linear programming or input–output analysis problem with m = 100 and n = 150 (which is not too large for such problems) and values of four significant digits, the savings would be around 60 000 bytes of data.
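A minimal sketch of this kind of structural compression (the matrix contents here are invented): only the populated lower triangle is transmitted, and the receiver rebuilds the full layout.

    def pack_lower_triangle(matrix):
        # Keep only cells on or below the diagonal; the rest are known to be empty.
        return [row[:i + 1] for i, row in enumerate(matrix)]

    def unpack_lower_triangle(packed, n_cols, empty=0):
        # Rebuild the full matrix, refilling the empty upper triangle.
        return [row + [empty] * (n_cols - len(row)) for row in packed]

    full = [[1, 0, 0],
            [2, 3, 0],
            [4, 5, 6]]
    packed = pack_lower_triangle(full)            # [[1], [2, 3], [4, 5, 6]]
    assert unpack_lower_triangle(packed, 3) == full

Roughly half the cells (and therefore roughly half the bytes) never leave the sender, which is where the savings quoted above come from.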

Compression is also important in the transmission of images and voice, where whole blocks of the image or voice being recorded can be empty. This compression problem is of great interest to the movie industry, where standards have been developed. The Motion Picture Experts Group (MPEG) has agreed on two standards, MPEG1 and MPEG2, which have reduced the full video signal from 250 million bits per second to 1.5 million for VCR quality. The problem with the MPEG standards is that the encoding is very expensive and time consuming. There are, however, approaches being researched which include algorithms based on fractals and wavelets. Another solution may lie in combining compression techniques with display technologies. Researchers suspect that a video display with a web of thousands of tiny computers, one for each picture element of a video screen, might be able to generate video images with far less data than conventional video currently requires.

Whatever is transmitted (video, voice, data or text), the process of compression is the same, as illustrated in Figure 4.3. The message is compressed before it is sent and decompressed at the other end. Sometimes an error checking character is attached to ensure that no error creeps into the compression process. This error checking could be part of the header in the address, a subject that we will now examine.
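A small sketch of that compress-check-decompress cycle, using Python's standard zlib library (the book does not prescribe any particular tool, and the message is invented):

    import zlib

    message = b'ACCOUNT 00042  BALANCE 000000  BALANCE 000000  BALANCE 000000'

    # Sender: compress the message and attach an error-checking value (a CRC-32 here).
    compressed = zlib.compress(message)
    check = zlib.crc32(message)

    # Receiver: decompress, then recompute the check to be sure no error crept in.
    received = zlib.decompress(compressed)
    assert zlib.crc32(received) == check
    print(len(message), '->', len(compressed), 'bytes')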

Addressing

For a message to be transmitted, it is necessary that it has all the information required to find the destination, along with control information. If the message is a long one, it is divided into packets, and each packet has its own header, so that the router interprets the header and routes the packets on one of the different paths available, with the message being assembled back into its original form at the other end.

Figure 4.3 Routers and networks: the router compresses the message, which passes through a network interface, across the network and through the network interface at the far end, where it is decompressed and passed on by the receiving router.

Figure 4.4 Fields in a packet: flag; frame control information (e.g. priority level, type of frame); destination address; source address; message; and control check (CRC, cyclic redundancy check).

An example of a header for a frame (a unit of information transmitted as a whole) with fixed-size fields is shown in Figure 4.4. Headers vary in format (fixed or variable fields), in the number of fields and in the content of the fields, depending on the transmission medium and the communications technique used. A header could be more complex than in Figure 4.4 but, lest a potential e-mail user be intimidated by the header format, it should be said that an e-mail address is much simpler. For example, the e-mail address of the author once was [email protected]. Note that the fields are not fixed in size, but that the end of a field is identified by a separator such as @ or a dot. Note also that the name is not complete, with an initial missing: there is a limit to the size of the name field. Note also that lower case is used for the address; sometimes the whole name is also required to be in lower case. We shall discuss the anatomy of an e-mail address in the chapter on e-mail, but suffice it to say that it is fairly end-user friendly. It is translated into a machine-readable equivalent for interpretation by the router.
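Returning to the fixed-size header fields of Figure 4.4, here is a small illustrative sketch; the field widths and values are invented and do not correspond to any real frame format:

    import struct

    # Invented layout: 1-byte flag, 1-byte control, 6-byte destination address,
    # 6-byte source address and a 2-byte payload length; the payload follows
    # (a CRC would normally be appended as well).
    HEADER = struct.Struct('!BB6s6sH')

    def build_frame(flag, control, dest, src, payload):
        return HEADER.pack(flag, control, dest, src, len(payload)) + payload

    frame = build_frame(0x7E, 0x01, b'\x00\x1A\x2B\x3C\x4D\x5E',
                        b'\x00\x10\x20\x30\x40\x50', b'hello')
    flag, control, dest, src, length = HEADER.unpack(frame[:HEADER.size])
    print(length)   # 5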

There are standards for addressing; one standard for global addressing is X.500 by CCITT, an early international organization for standardization.

One crucial device for transmission and telecommunications is the modem, which we now revisit.

Modems

We discussed modems briefly in Chapter 2 and saw how they enable us to move freely between the digital universe of the computer and the analogue world of the telephone. On the computer side this traverse is made through the RS-232C, a serial connection consisting of several independent circuits sharing the same cable and connector. The modem is shown in Figure 4.5 as a box on top of a computer; it can be 'added on' to a computer when telecommunication capability is needed. Newer PCs come with a built-in modem, so that it is transparent (invisible) to the user.

Figure 4.5 Modem converts signals: the modem converts between the digital signal and the analogue signal, provides error control and compression control, and includes the RS-232C, which contains several independent circuits sharing the same cable and connector.

So what else is new with the modem? Why all the interest in an old device that has been around since the 1950s and 1960s? The answer is that the modem has greatly evolved, not just in speed but in other capabilities. In speed, the modem with a microprocessor, first introduced in 1981, now reaches 57.6 kbps (kilobits per second), 192 times the speed of the original Bell 103 modem. The faster speed may cut phone bills by nearly half and, with the price of modems dropping, the payoff period may be just a few months. The computing power in a modern modem, with computers replacing the microprocessor, is awesome; some have more power than early mainframe computer systems. In fact, they are so fast that the other equipment in the system cannot keep up with them, including many of the computers that they connect. Also, the infrastructure may not support the fast speeds of new modems such as the 28.8 kbps models. Many public phone circuits (even in advanced countries) have limited bandwidth and lose signal quality over long loops (even local loops). Furthermore, the switching is often poor, with companies overloading their circuits.

The advances that came in 1984 did not come from industry but from the nudging of the ITU (International Telecommunication Union), the successor to CCITT. The new standard was designated V.22bis and is still in wide use today, though it is being steadily replaced by its successor V.32 (at 9.6 kbps). The V.34 standard probes the line to diagnose its qualities and weaknesses. The V.42 standard is concerned with error control, and V.42bis with data compression. (The V symbol stands for modem standards, while the number is part of a block code for modems.)

Modern modems have improved file-transfer performance, better diagnostics and better documentation. They are more reliable and have good technical support for detecting and solving operational problems. There is also greater interoperability between the different models of modems from different manufacturers.

Another advance in the modem world is the fax modem, which came into international prominence when the government of the USSR had unexpected trouble with its rebels because of the effective communications among the rebels through the fax. Faxes are strings of 0 and 1 bits that represent a graphical image of the transmitted page, and they are processed at both the sending and receiving ends by computers. The sending end may also have a mechanism that compresses the data before transmission. The processing at both the sending and receiving ends must be very computation intensive, in addition to consuming storage in vast quantities, if you want a reasonable resolution of the image.

Fax images sent by computer (even by PCs) are sharper than those from a regular fax machine, but the downside is that the computer system must be turned 'on' and must have the fax program resident in memory at all times. Still, the fax in a computer is very easy to operate: you select on the screen what you want to fax, enter the address to which it is to be sent, and off it goes. Of course, if what you want to send is not in computer memory, then you must have an optical scanning device for input. Some computers coming out in early 1996 came with built-in scanners, which will greatly increase the use of fax by computer.

Smart and intelligent

A smart device is one that has a microprocessor or microcomputer to do its computations. An example would be a router, which needs a computer to do the many computations for determining the optimal or near-optimal (or even just a better) route for messages. This is not a trivial problem, as any practitioner in Operations Research and Management Science will tell you, for they have long been concerned with the Travelling Salesman problem: determining an optimal route for a salesman who must make a specified set of stops on a sales trip. To solve the routing problem, computing capability is embedded in the device, which makes it smart.

In contrast, an intelligent device also needs a computer but makes computations that involve making inferences. Consider the router and the problem posed in Figure 4.1. (You perhaps thought that it was a trivial diagram, but there is 'a method in this madness'.) Further, let us assume that comparisons showed that route 2 was better (by a given set of objectives) than route 1, and that route 1 was better than route 3. What can we say about route 2 compared to route 3? If we apply the decision rule of transitivity (if APB (A preferred to B) and BPC, then APC), then, since route 2 is better than route 1 and route 1 is better than route 3, one can safely infer that route 2 is better than route 3. This ability to make inferences, given data (of preferences, in our problem) and decision rules or heuristics (rules of thumb), reflects intelligence, and hence the device can be called intelligent.
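A toy sketch of that transitive inference (the route labels are invented, and this is not any vendor's routing logic):

    def infer_preferences(known):
        # Repeatedly apply the transitivity rule: if A P B and B P C, then A P C.
        prefs = set(known)
        changed = True
        while changed:
            changed = False
            for a, b in list(prefs):
                for c, d in list(prefs):
                    if b == c and (a, d) not in prefs:
                        prefs.add((a, d))
                        changed = True
        return prefs

    known = {('route 2', 'route 1'), ('route 1', 'route 3')}
    print(('route 2', 'route 3') in infer_preferences(known))   # True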

You may consider this a trivial problem, so let us consider a modern modem. One is shown in Figure 4.6 with a computer chip, which makes the modem smart. If there were no computer capability, merely the RS-232C, then the modem could be called a dumb modem. Now suppose that the modem is a fax modem with all the capability of discriminating voices and images. This capability requires pattern recognition, which uses heuristics developed by practitioners of AI (Artificial Intelligence). It also needs a computer that performs the many necessary calculations. But it is the capability of making inferences about sounds and images that makes the device intelligent. In our example, the fax modem is both smart and intelligent.

Figure 4.6 Layout of a modem: a modem chip that converts between digital and analogue signals, ROM, a power supply, a CPU that performs error correction and data compression, the analogue side, and the RS-232C interface.

This exercise in smartness and intelligence is designed to make the reader appreciate the possible future of the world of telecommunications. It is going to be much smarter and more intelligent as we learn more about making intelligent choices even when the variables are 'fuzzy' rather than discrete (like the words 'good', 'better', 'high', 'cheaper'). We are learning much about fuzzy systems, and this is being incorporated into many everyday household and industrial goods. As these intelligent and fuzzy systems are adapted to telecommunications, we can expect more effective and efficient devices. We can also expect optimal models for minimum-cost routing as part of the work being done on optimization in telecommunications (Luss, 1989). There is even research being done on an AIN system, the Advanced Intelligent Network (Brim, 1994), which will allow the system to perform logical operations after a call and determine the credit status of a caller, all in a fraction of a second. This has important relevance to on-line, real-time reservation applications, be it for airlines, car rental or the theatre. BT in the UK, Deutsche Bundespost Telekom in Germany and American Express in the US are all working on AIN applications.

Protocols

Whether the devices are smart, intelligent or dumb, once you have the necessary devices to transmit and the transmission media, it is time to have protocols, which are agreed-upon rules of behaviour. These are the rules for sharing the telecommunications media and the devices on them. For example, we cannot have two messages going in opposite directions on the same channel! Protocols are conventions for representing data in digital form and procedures for coordinating communication paths. We need protocols as ground rules for interaction, sharing, routing and distribution in the relaying of information.

We have had protocols for telephony from the early days of Morse and Bell, but in telecommunications, protocols for information exchange have only emerged since the 1970s. There were de facto protocols from IBM and the Xerox Corporation, and more formal ones from the DOD, the Department of Defense in the US. The DOD had the problem of a large inventory of computers of different models and different manufacturers that did not interoperate, and it wanted software protocols that would enable transmission across its many configurations of computing. The DOD established TCP/IP, the Transmission Control Protocol/Internet Protocol. Soon companies in the computer and telecommunications industries started following TCP/IP, or else they could not do business with perhaps the largest user of computing, the DOD.

Basically, the TCP is responsible for assembling packets for transmission and for properly ordering and reassembling them on receipt. The IP part of the protocol addresses, routes and delivers packets to the destination network and host computer. We shall have more to say about TCP/IP later, in Chapter 8.
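As a small, hedged illustration of how an application simply hands a message to TCP/IP and lets the protocol stack do the segmenting, addressing and delivery (the host name and port below are placeholders):

    import socket

    def send_message(host, port, payload):
        # Open a TCP connection; IP addressing and routing happen underneath.
        with socket.create_connection((host, port), timeout=5) as conn:
            conn.sendall(payload)      # TCP segments, orders and retransmits as needed
            return conn.recv(1024)     # read up to 1 kilobyte of any reply

    # Example call (placeholder address): send_message('example.com', 7, b'hello')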

TCP/IP should not be confused with IP/TCP. Both are concerned with the control of transmission, but IP/TCP was designed for a specific type of network, the Ethernet. It has options for five protocols: routing control, error control, echo control (largely for diagnostics), packet exchange control and sequence packet control.

Besides TCP/IP and IP/TCP, there are other network protocols such as IPX, DECnet, NetBIOS, AppleTalk and XNS. It is this proliferation of protocols that led manufacturers and designers of protocols to recognize in the 1980s that there was a need for open protocols, that is, protocols that do not favour any single manufacturer. What was needed was the ability for users to mix and match networking hardware and software so as to create customized networks. This would enable one to plug a cable into a wall socket and achieve interoperability of networking; this is also known as plug-and-play capability.

TCP/IP is an attempt towards an open protocol and a telecommunications lingua franca. It is also the favourite choice for an enterprise-wide backbone protocol, where a backbone is the highest hierarchical level; once connected to a backbone, there is a guarantee of interconnection. We will discuss backbones in later chapters. In its early versions, TCP/IP was most appropriate for data transmission and primarily for local connections. For media other than data, such as voice or audio, and for long distance transmission, other approaches are desirable, including ATM, a subject that is best discussed along with long distance and wide area networks. But before we get to wide area communications we need to discuss local communications and local area networks, the subject of our next chapter. Before we do that, we need to discuss one other important facility: the hub.

Hubs

An important facility relating to switching is the hub. We have hubs in daily life, such as the hubs at some airports. Some airlines have a set of points that they want to serve but, instead of serving each pair of points directly, they connect each point to at least one hub and then switch the traffic to the desired point through one or more hubs. The airlines keep their main maintenance and administrative facilities at the hubs, with minimum facilities at the other points of service, thereby greatly decreasing overhead costs and increasing efficiency. This arrangement may inconvenience the passenger somewhat, but it is tolerable if you are not pressed for time and the money savings are worth the delay. If you are pressed for time and have enough direct traffic, then you have a special line dedicated to your service.


Much the same happens in telecommunications, where a hub is the point at which messages are switched to their desired destination. It has heavy-duty LAN switches that can handle heavy traffic with high bandwidth demands, and it must manage the sharing of bandwidth among varying traffic (data, e-mail, multimedia and real-time traffic such as video-conferencing) with varying demands for high bandwidth and low latency.

Ideally, all traffic must be routed seamlessly and efficiently. The hub must also be able to identify and respect priority traffic, such as a CAT scan for a dying patient. Traffic must also be analysed and monitored on all segments of the network to identify potential bottlenecks, congestion and delays.

There are many ways of handling traffic. One may be store-and-forward, but then the hub must decide whether to store all the traffic on a segment or only part of the traffic, and if so what part. Some facilities are dedicated to certain traffic, while other facilities are shared. Some hubs are modular and others are not. Some hubs have a standard backbone; others do not. Some hubs allow FDDI links, some allow ATM links, while others do not care which link you use. Some hubs are fixed-port; others handle multiple nodes per port. Some hubs handle all topologies, while other hubs restrict the type of topology. Some hubs use optimal algorithms, such as for routing, while others use satisficing algorithms and simulation to find a route. Some hubs are intelligent, in that they have the ability to make intelligent decisions and handle 'fuzzy' variables. Most hubs are hybrid and have a varying combination of the many features available. Such a hub can have multiple switching and network connections in one box, and so can interconnect conventional network modules on one floor, connect stackable and modular hubs on another floor, and switch traffic directly at high speed. All this can be done from one central management point, monitored by a single console, sometimes even remotely.
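A toy sketch of the store-and-forward idea with priority handling (the traffic classes and priority values are invented):

    import heapq
    from itertools import count

    class Hub:
        # Store frames while the outgoing segment is busy; forward highest priority first.
        def __init__(self):
            self.queue = []
            self.order = count()   # tie-breaker preserves arrival order within a priority

        def store(self, priority, frame):
            heapq.heappush(self.queue, (priority, next(self.order), frame))

        def forward(self):
            while self.queue:
                _, _, frame = heapq.heappop(self.queue)
                yield frame

    hub = Hub()
    hub.store(2, 'e-mail batch')
    hub.store(0, 'CAT scan image')          # 0 = most urgent
    hub.store(1, 'video-conference frame')
    print(list(hub.forward()))
    # ['CAT scan image', 'video-conference frame', 'e-mail batch']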

Summary and conclusions

In this chapter we have examined the protocols and devices necessary for telecommunications. An elegant way to summarize the discussion of bridges, repeaters, routers and gateways is to compare them against the different layers of the network architecture, as is done in Figure 4.7. In doing so, the author has violated a cardinal rule: not to use concepts or terms (the layers of network architecture) without having defined them. There may be no solution to this problem when discussing telecommunications and networks, but we shall handle it by minimizing the terms not yet defined, and by scrupulously defining and explaining them later. Hopefully, by the end of the journey in this book, all these terms will come together and make sense.

Figure 4.7 Repeater, bridge, router and gateway set against the layers of the network architecture (application, presentation, session, transport, network, data link, physical): the repeater operates at the physical layer, the bridge at the data link layer, the router at the network layer, and the gateway at the layers above.

There is a similar problem with the term 'network'. We have come across the concept many times, but have not dared use it too often because it has not yet been well defined or explained. We now liberate ourselves from this constraint and discuss the concept in detail as the topic of our next chapter.

Case 4.1: Networking at the space centre

The space centre at Houston, Texas, has controlled the flights of all the early spacecraft. In 1995, a new command and control centre was partly operational and in its beta phase of testing, to replace the old centre and to prepare for space shuttle flights into the twenty-first century. The new centre has over 19 kilometres of fibre optic cable connecting its hundreds of PCs and workstations with the larger computer systems owned by NASA. These computers and workstations are all interconnected, in addition to being connected to the tracking stations all around the world.

One design specification of this complex and important networking system was that almost all the equipment must be 'off the shelf'. This was specified in order to keep maintenance easy and not as costly as in the previous centre.

The design specification is a commentary on the state of the art of telecommunications and networking: even a large and, in some ways, very important real-time system can be constructed from products that are commonly available and are no longer 'high tech'.

Bibliography

Brim, P. (1994). The advanced intelligent network: an overview of markets and applications. Telecommunications, 28(12), 51–54.

Bryan, J. (1994). LANs make the switch. Byte, 19(9), 113–120.

Govert, J. (1992). New techniques for testing switched 56 services. Telecommunications, 26(1), 19–20.

Grimshaw, M. (1991). LAN interconnections technology. Telecommunications, 25(2), 25–32.

Halfhill, T.R. (1994). How safe is data compression? Byte, 19(2), 57–74.

Johnson, J.T. (1992). Coping with public frame relay: a delicate balance. Data Communications, 21(2), 33–35.

Luss, H. (1989). Optimizing in methodology, algorithms, and applications. AT&T Technical Journal, 68(3), 3–6.

Robinson, E. (1995). V.34: off the starting blocks. PC Magazine, 13(5), 241–247.

Stallings, W. (1991). Streamlining high speed internetworking protocols. Telecommunications, 25(2), 43–47.

Saunders, S. (1995). Next generation routing: making sense of the marketectures. Data Communications, 24(12), 52–65.

Stone, D. (1994). When is a fax not a fax? PC Magazine, 13(15), 229–241.

Tanenbaum, A.S. (1989). Computer Networks. Prentice-Hall.

Tolly, K. (1994). Testing TCP/IP software. Data Communications, 23(2), 71–86.


5

LANS: LOCAL AREA NETWORKS

Just because FDDI is faster than Ethernet does not spell the end of Ethernet; nor does the even faster HIPPI spell the end of FDDI. The lesson learned from the Token Ring/Ethernet war was that we can have multiple types of data links working together to provide a coherent network.

Carl Malamud, Stacks

Networks connect people to people and people to data.

Thomas A. Stewart

Introduction

In our society we have many networks, such as roads, railways and even grids for electrical power. In each case, a network is an interconnection of nodes. In telecommunications a network is more than an interconnection of nodes: it is a system, with software and protocols, that enables the exchange of information and computing resources. In the early 1980s this was localized, as in a building or a set of buildings such as a university campus, a company headquarters or a manufacturing plant. This is referred to as a Local Area Network, or LAN: an interconnection of computing resources within a limited geographic area. But there are many situations where the interconnectivity must extend beyond a local environment. We then have a MAN, a Metropolitan Area Network, or a WAN, a Wide Area Network. These need special software, protocols and even hardware and switches for directing the transmission along the desired route. MANs and WANs deserve a separate chapter and will be the subject of our next chapter, Chapter 6.

In this chapter we will examine the nature and desirability of the interconnectivity of computing, the characteristics and objectives of networking, networking as a paradigm for computing, and the protocols and switching methods necessary for a LAN. Also discussed are the many approaches to access in a LAN, including Ethernet, the token ring, circuit switching, FDDI, frame relay, SONET and the wire-less network.

Interconnectivity

Computers these days are both powerful and cost-effective. They are everywhere, doing anything and everything that involves computing. There are supercomputers manipulating billions of commands per second, working on chemical reactions, forecasting weather and analysing complex images in medicine and industry. Computers are handling billions of messages in offices and businesses and helping with design and production in manufacturing plants. There are over 50 million personal computers with thousands of software packages, along with wire-less devices based on satellite and cellular systems which enable us to work at home, in the car, or even while walking or jogging. Interconnecting this computing power will dissolve temporal and geographical barriers and enable us to work and play with partners who are distant, even in other countries and continents. It will enable video-conferencing and the conveying of video images by businesses across national borders, remote medical diagnostics, the exchange of research results, and the sharing of information and resources that are otherwise isolated and disconnected. It may even affect our social lives and bring the world closer together, if the increase in demand for e-mail (electronic mail) is any indication. The impact on business and society of such interconnections, and the problems of global interconnectivity, are subjects for later chapters; suffice it to say that interconnectivity can have profound implications for the way we work and live in the years to come.

Whatever technology emerges for the future, there is great certainty that systems will not be merely stand-alone systems. In the increasingly complex information systems environment, there will be a need for the interconnection of processors, not only to gain access to a more powerful processor, but also to gain access to another data/knowledge-base or software package, or even another expensive peripheral. One such schema for the interlinking of processors is shown in Figure 5.1. In the real world, processors of the same type may also be interconnected to each other, in addition to being connected with other types of processors within an organization as well as with processors connected to a LAN. All these possible connections are not shown in Figure 5.1, in order to keep the diagram simple.

Figure 5.1 shows the interrelationship between computers and telecommunications. It is telecommunications that integrates computer systems (often disparate in terms of capacity, capability and even design) into a network that enables the sharing of computing resources by all those connected to the network.

The configuration for networking used in Figure 5.1 is a bus, and of course there are other configurations of networking. Networking and telecommunications are one of the fastest growing sectors of the computing industry. One factor that is holding back a greater acceptance and use of networking is a lack of standards, standards that range from architecture to the format of a message. Once standards, and preferably international standards, are adopted, networking will offer the access necessary for the sharing of processors and other scarce computing resources.

In addition to sharing the resources of other computers, it is equally desirable to share peripherals. Many peripherals are very expensive, and a department within an organization, or even the organization itself, cannot afford to have one of its own and must share. This is often true of image processors, optical scanners, voice processors, or even a fast printer. Another such peripheral may well be a storage device, such as an electronic file cabinet or an archival storage device. Many of these devices can be accessed directly once they are attached to a network, or through a multiplexer or another computer. If not available on one network, another network can be accessed through a bridge or a gateway. One such configuration is shown in Figure 5.2.

One approach to networking is to have services (including documents, files or a peripheral such as a printer) resident on dedicated computers called 'servers'; the end-user, called a 'client', then accesses the server that has the required service. This configuration is called the 'client server' approach and is discussed in Chapter 10 as an alternative organizational option for computing.

Figure 5.1 Interconnectivity of a PC/workstation: your PC/workstation connects over a LAN to other PCs/workstations, specialized workstations, hand-held computers, mainframes, specialized computers (e.g. a LISP machine), supercomputers and parallel processors, and through the LAN to other LANs, MANs and WANs.

Figure 5.2 Peripherals served on a LAN: an Ethernet cable connects office terminals and a typing system, printers and a fast printer, an information processing centre (with its processor, computer and electronic file cabinet), a production workstation and production machine, a micrographic cell, terminals on an Ethernet multiplexer, and a gateway to other Ethernets.

Physical linking is, of course, not sufficient for accessing other computing resources. There is also a need for software and protocols that facilitate and enable access. Once these are available, a network is imbued with certain characteristics that are associated with all networks, and with others that are desirable and depend on the application. We will now examine such characteristics.

Characteristics of networks

The characteristics and performance objectives of networks, and more specifically of LANs, are as follows:

• fast response time
• low delays, which should be bounded
• notification of estimated delays when they occur
• high throughput (thruput)
• high channel capacity
• fairness of the protocol in assigning access (within a priority scheme)
• ability to add or remove a station easily
• low and fast maintenance
• interoperability

The terms used above are common to many information systems and will not be discussed further, except the last one. Interoperability requires agreements, such as the one governing operations between routers. There is not yet a good router-to-bridge agreement, unless your router can pretend to be a bridge. Many issues of interoperability in networking are still to be resolved, but progress is beginning to be made. 'With full device interoperability, national and international standards, and a simplified way to order digital lines, plug-and-play networking may yet be possible.' (Fritz, 1994: p. 130)

Networking as a computing paradigm

Given these inherent characteristics and some desired performance objectives, one can safely say that networking is now a new and different computing paradigm. It is compared with the traditional paradigms of computing in Table 5.1.

Table 5.1 Comparison of computing paradigms

Main target audience. Batch processing: end-user of the output; time-sharing: access for the end-user; desktop (stand-alone): the 'owner'; networking: anyone bona fide.
Connection. Batch processing: none; time-sharing: telephone; desktop: none; networking: telephone and network.
User status. Batch processing: subservient; time-sharing: dependent; desktop: independent; networking: unrestricted.
Objective. Batch processing: computation; time-sharing: access; desktop: computation; networking: communication.
Operations. Batch processing: processing in batches; time-sharing: access to varied processing; desktop: varied; networking: remote.
Applications. Batch processing: customized reports; time-sharing: shared computing resources; desktop: varied, but often localized; networking: varied, from e-mail to downloading programs.

In comparing networking with batch processing, one finds that there is little duplication in characteristics or in application appropriateness. It seems logical, then, that these two modes of processing will coexist. However, this is not true of time-sharing: here networking has broader scope and extension for sharing computing resources, and so time-sharing will be subsumed by networking. As for stand-alone computer systems, there is a commonality in equipment access: both use a desktop. But while the desktop provides stand-alone computing with perhaps a local database, a network offers much wider access, not just to other computers and peripherals but also to other databases. Thus desktop computing may well adopt the networking connection. Networking therefore emerges as a new and important paradigm of computing. It has many applications that would otherwise not be possible; these are discussed in three later chapters. Networking also has many inputs, made possible because of its remote connections. These inputs are shown in Figure 5.3.

Figure 5.3 Inputs to a communications network: client computers or terminals, mobile computers, private and public databases, satellite information (e.g. weather), on-line input devices, the electronic office, fax machines, home computers, teletypesetting, newspapers, journals and other publications, telemetry, telecontrol, and other computers.

Important to all networking, including a LAN, is the access approach for navigating the network. There are many access approaches, and they depend on two things: first, the topology of the nodes in the network and, second, the switching mechanism chosen.


Topologies and switches

There are three basic topologies, the configurations formed by connections between devices in a LAN: the bus, the star and the ring. (There are many combinations of the three basic topologies, not shown here in order to focus on the basics.) It is generally accepted that the bus is simple to comprehend, is easy to change (add or subtract nodes), and has the ability to bypass a faulty node. The ring also has the bypassing capability but, unlike the bus, if one side breaks down it has access from the other side. It is therefore more reliable, especially when compared to the star, where if the centre breaks down the entire system collapses. However, the star is the easiest to control, test, maintain and manage (since it is so centralized), but it is difficult to change.

A comparison of the three basic topologies is summarized in Table 5.2 and in Figure 5.4.

Table 5.2 Topologies compared

Routing. Bus: requires a full duplex modem, no routing; star: centralized; ring: bidirectional path possible.
Control. Bus: by convention; star: centralized; ring: by convention.
Nodes. Bus: minimum distance between nodes; star: no restriction; ring: restricted.
Robustness. Bus: failed nodes bypassed; star: if the centre fails, all fails; ring: has an alternate path if one fails.
Modifications. Bus: easily done; star: easily done; ring: not easily done.

Figure 5.4 Basic topologies in networking: the bus, the star and the ring.

There are two basic types of network switching method: circuit switching and packet switching. Circuit switching is what is used in our daily telephones: you dial, you converse, you hang up. Once you try for a connection (if you get it), you have the exclusive use of the circuit and can stay as long as you want. When you are finished, you hang up and break the circuit connection, and that is the end of it. Because of the importance of the connection, this approach is also called connection switching.

Circuit switching is shown in Figure 5.5. Note that a message from origin 1 to destination 6 does not go directly through A and E but takes a diversion through A, D and E. Thus the connection need not be direct, but whatever is feasible at the time. This indirect path may be the only available path, which is what often happens in our daily telephone conversations. Your author once had to make a phone call from Paris to Marseilles in the south of France; the connection he got was routed through New York.

Circuit switching is appropriate for voice messages and where human interaction is required, but it is not appropriate for the transmission of data as needed for most computer processing. Here the traffic is bursty, with bursts or surges of data sent at high speeds for short periods of time. Bursty traffic has a high variation and unpredictability in transmission rates (varying from 100 bps for a terminal to a million bps for many a processing job).

For such traffic, packet switching is most appropriate. Here a message is broken up into packets, which are units of information travelling as a whole between devices, conceptually similar to a bus. The packets share the transmission channel and may not all travel together. That does not matter much, because each packet is uniquely identified and the message can be reassembled at the other end. Each packet also has a destination address, which is read and used for routing by a packet switch, which could be a programmable computer.
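A toy sketch of packetizing and reassembly (the packet size and header fields are invented for illustration):

    def packetize(message, size=4):
        # Split the message and label each packet with a sequence number.
        chunks = [message[i:i + size] for i in range(0, len(message), size)]
        return [{'seq': n, 'total': len(chunks), 'data': chunk}
                for n, chunk in enumerate(chunks)]

    def reassemble(packets):
        # Packets may arrive in any order; the sequence numbers restore it.
        ordered = sorted(packets, key=lambda p: p['seq'])
        assert len(ordered) == ordered[0]['total']
        return b''.join(p['data'] for p in ordered)

    packets = packetize(b'HELLO WORLD')
    packets.reverse()                 # simulate out-of-order arrival
    print(reassemble(packets))        # b'HELLO WORLD'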

Circuit switching is illustrated in Figure 5.5 using a very simple segment; a segment of another net, for packet switching, is shown in Figure 5.6.

The packet switching process is analogous to a motorway, with cars representing packets. Cars for different destinations share the same road until their exit comes, and then they leave on another road towards their destination. Knowing where to get off, being in the correct lane for the desired exit, and getting off precisely when necessary requires considerable human cognition and attention, especially if the traffic is heavy and the possible exits are many. Fortunately, in telecommunications the switching is all automatic, with all the decision-making and choices being made by the network software. If the traffic is very busy, the message is delayed and the system will store and forward it when passage is possible. This does not happen in circuit switching, where a message can be lost in the face of heavy traffic and congestion.

Figure 5.5 Circuit switching: switching nodes A, B, C, D and E connect subscribers 1, 2, 4, 5 and 6; a call from 1 to 6 is connected via A, D and E, and a call from 2 to 5 via A, B, C and D.

Figure 5.6 Packet switching: packet switches (programmable computers) route packets destined for hosts 1 to 4 over shared links; clients connect to a host (shown only for host 4 for the sake of simplicity).

Table 5.3 Summary of circuit switching and packet switching

Circuit switching: requires a connection; each circuit call is one or more messages; logic is required at the switching centres; the circuit is for exclusive use; interactive; data can be a continuous stream; no 'store-and-carry'.
Packet switching: connectionless; each message is one or more packets; logic for control at each node; the channel is shared and more than one message is moved simultaneously; not interactive; data is bursty; can be 'store-and-carry'.

The protocols needed for packet switching are contained in X.25, which is approved by CCITT. The X.25 protocol was so important and dominant that packet switching was often referred to as the X.25 network. Being connectionless, packet switching may take different paths from origin to destination, depending on what links are available at the time.

Packet switching and circuit switching are summarized in Table 5.3. Error control is summarized in Figure 5.7.


Figure 5.7 Error control in frame relay and packet switching: in X.25 packet switching, checking for errors is done between each pair of nodes (1 to 5) along the way from origin to destination before the message proceeds further; in a frame relay system, checking is done only at the end-station (the destination, node 5).

Access methods

There are many approaches to accessing a destination in a network. Each is a combination of a topology and a switching method, selected for their appropriateness to a particular set of applications. We shall discuss these approaches, starting with the oldest: Ethernet.

Ethernet

Ethernet is a combination of packet switching and a bus topology. Packet switching was first used by ARPANET, the network implemented by the Department of Defense in the US to facilitate communications among its many researchers in government, business and academia. In parallel with ARPANET, similar projects were started in Europe, especially by the National Physical Laboratory in the UK and the Institut d'Informatique et d'Automatique in France. In the 1970s the Xerox research centre announced the Ethernet standard, which has since been adopted by many private and public networks all around the world. Thus it is not only the oldest LAN system, but perhaps the most used network system.

In an Ethernet system, a station broadcasts its signal over a coaxial cable over a distance of more than a mile. If another transmission is detected, the transmission ceases and is retried after a random interval of time. This approach is called CSMA/CD, the Carrier Sense Multiple Access with Collision Detection method.
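A toy simulation of that listen-then-back-off behaviour (the timings, probabilities and station name are invented; in real Ethernet this happens in the network adapter):

    import random

    def try_to_send(station, channel_busy, max_attempts=5):
        # Carrier sense with random back-off: transmit only when the channel is
        # sensed to be free; after each failed attempt, wait a growing random time.
        waited = 0.0
        for attempt in range(max_attempts):
            if not channel_busy():
                return f'{station}: sent on attempt {attempt + 1} after {waited:.2f} slot times'
            waited += random.uniform(0, 2 ** attempt)
        return f'{station}: gave up after {max_attempts} attempts'

    print(try_to_send('station A', channel_busy=lambda: random.random() < 0.5))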

The token ring

Another approach is the token ring, which combines the ring topology with the packet switching method. Before a message is transmitted, a token (a short series of bits) is passed. If it is accepted, then the computer accepting the token is free to transmit one or more packets, and the other computers wait until they get the token. This avoids clashes and establishes a mechanism for sharing the transmission medium of the cable.
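A toy sketch of token passing around a ring (the station names and queued packets are invented):

    from collections import deque

    def token_ring_round(stations):
        # The token visits each station in ring order; only the token holder may transmit.
        ring = deque(stations)
        sent = []
        for _ in range(len(ring)):
            holder = ring[0]
            if holder['outbox']:
                sent.append((holder['name'], holder['outbox'].pop(0)))
            ring.rotate(-1)           # pass the token to the next station
        return sent

    stations = [{'name': 'A', 'outbox': ['packet-1']},
                {'name': 'B', 'outbox': []},
                {'name': 'C', 'outbox': ['packet-2']}]
    print(token_ring_round(stations))   # [('A', 'packet-1'), ('C', 'packet-2')]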

There are many variations of the token ring approach, including the Cambridge Ring developed by the University of Cambridge in the UK.

Circuit switching

The circuit switching approach creates an end-to-end path before invoking the flow of data between two nodes. It is a simple technique and is used extensively by the PT&Ts and other telephone carriers. However, it is not always the most efficient approach, since setting up a connection may take

Page 69: Telecommunications and Networks

longer than the message itself, to which many of us making a phone call can attest.

FDDI

FDDI, the Fibre Distributed Data Interface, combines the token ring approach with the high capacity of fibre optics, achieving high rates of up to 400 million bps. The system is organized with two parallel rings, so that if one fails, recovery is made on the other ring. This approach is also called the dual ring approach.

Frame relay

A frame relay uses packets in a circuit switching environment. Furthermore, it is similar to a virtual circuit, which delivers packets in order but with variable delays, except that virtual circuits are determined at the time subscribers are connected to the system.

Frames are 64 to 1500 bytes in length. The start and end of a message are identified by a flag. Frames can also handle video and voice traffic. They are popular partly because they are attractively priced.
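As a concrete illustration of flag-delimited framing, the sketch below wraps a payload between flag bytes and checks its length against the limits quoted above. The 0x7E flag value is an assumption borrowed from HDLC-style framing; real frame relay equipment also performs bit stuffing and adds an address header and a frame check sequence, which are omitted here.

    FLAG = 0x7E          # assumed HDLC-style flag byte marking the start and end of a frame
    MIN_LEN, MAX_LEN = 64, 1500

    def build_frame(payload: bytes) -> bytes:
        """Wrap a payload between flags after checking its length."""
        if not MIN_LEN <= len(payload) <= MAX_LEN:
            raise ValueError(f"payload must be {MIN_LEN}-{MAX_LEN} bytes, got {len(payload)}")
        return bytes([FLAG]) + payload + bytes([FLAG])

    frame = build_frame(b"x" * 128)
    print(len(frame), hex(frame[0]), hex(frame[-1]))   # 130 0x7e 0x7e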

Both the frame relay and the X.25 for packet switching, when of variable length, had to constantly adjust the flow and timing of messages. However, with reliable digital circuits in the late 1980s, designers stripped off many of the X.25 functions, reduced the overhead, and subsumed the X.25 into the frame relay. Firms that are new (relatively speaking) to telecommunications, like MCI and Data Communications, do not even offer the X.25; instead they rely on the frame relay.

Frame relay provides faster service than X.25 and at the same time provides much of the same communications facilities, including flag recognition, address translation, recognition of individual frames, and filling in the interframe times. Some of the benefits of the frame relay are:

• higher network productivity;
• reduced network delay;
• savings in bandwidth;
• better and more economical hardware implementation.

The frame relay can be used as a high capacity backbone for the X.25 access networks and for LAN interconnectivity. Its advantages are:

• high speed interconnectivity at lower costs;
• minimum delays, since the network does not terminate protocols;
• no significant new software required by bridges, routers and gateways;
• an evolutionary path for high-speed LAN interconnection services (Bushman, 1994: pp. 42-3).

Frame relays differ from packet switching, as is best seen in the comparison of Figure 5.7.

SONET

SONET is the acronym for Synchronous Optical Network. It supports a multiplexed hierarchy of transmission at speeds ranging from 51 to 2400 Mbps. The SONET system 'allows data streams of varying transmission speeds to be combined and extracted without first having to break down each stream into its individual components' (Cerf, 1991: p. 78).

SONET is used by virtually all important carriers except perhaps the largest, AT&T, partly because SONET offers its own optical algorithm. It offers superior performance and monitoring and can route around network failure points in 40 to 60 milliseconds.

SONET may well be the gigabit backbone for networks, especially long distance wide area networks, the subject of our next chapter.

Wire-less networks

Wire-less LANs are capable of transmitting 10 Mbps within a room or building. They are similar to cellular technology for data transmission. Cellular phones rely heavily on analogue broadcast techniques, so using them to move digital data will be inherently difficult. One approach is the CDPD, Cellular Digital Packet Data. The CDPD is similar to and yet different from the circuit switched network. A comparison of the two is shown in Table 5.4.

Page 70: Telecommunications and Networks

Figure 5.8 LAN as a subset of other networks (the LAN is nested within a MAN, which in turn is a subset of a WAN/NII and of the WAN/global network)

Table 5.4 Circuit switched vs. cellular digital packet radio (CDPD)

Characteristics             Circuit switching        CDPD
Store & forward             No                       Yes
Large data files and fax    Yes                      No
Host log on                 Each session             Once a day
Cellular channel            Dedicated                Shared
Application profile         Large transactions,      Small transactions,
                            file transfer, e-mail    dispatching, e-mail messaging

Summary and conclusions

In this chapter we examined the need for connectivity and how connectivity leads to networking as a new and important paradigm in computing.

We also noted that there are three basic topologies: the bus, the star and the ring. Also, there are two basic types of switching: circuit switching and packet switching. Each has its own advantages and limitations. An application environment will make it desirable to combine one topology with one switching method. This gives us a variety of methods of navigating the LAN: the Ethernet, the token ring and its variation the Cambridge Ring, FDDI, frame relay, wire-less, and SONET. Each has been briefly described and a comparison made.

An important approach is the ISDN, which is discussed in a later chapter, Chapter 7.

With the proliferation of computers and the increase of computer literacy and experience among the end-users, we have seen a great demand for LANs. However, this demand will soon require that the boundary limitations of the LAN be relaxed and extended to a MAN and a WAN to provide remote services. The LAN then becomes a subset of a MAN/NII (National Information Infrastructure)/WAN. This relationship is shown in Figure 5.8. We shall explore these possibilities in later chapters. First, however, we need to discuss the MAN and the WAN, the subject of our next chapter.

Case 5.1: A network in the UK

Racal Electronics agreed to buy British Rail Telecommunications for £132.8 million, 'giving the company a data link passing through most major British cities along British Rail tracks'. British Rail accounts for about 80% of the optical fibre network's customers. But Racal said 'usage at present was only 20% of the network's capacity'.

The acquisition gives Racal strength in the field of voice and data communications, its core activity that accounts for more than half of the company's revenue. Racal now is, in the words of its Chief Executive, 'a carrier's carrier'.

Source: International Herald Tribune, Dec. 6, 1995, p. 17.

Case 5.2: The Stentor network in Canada

Stentor is an alliance of Canada's major telephone companies with integrated local and long

Page 71: Telecommunications and Networks

distance services. It also offers seamless services with North America through its alliance with MCI.

Stentor maintains the world's longest, high density and fully digital fibre optic network, stretching approximately 4800 miles across Canada. Its services include digital switching, intelligent network services, high reliability (second to Japan) and a 58% telephone penetration.

'In 1994, Stentor announced the Beacon Initiative, a project that will bring the information highway to 80 to 90 percent of Canadians by 2005.' The activities of the Beacon Initiative will include an $8 billion upgrade of the telephone networks over the next 10 years; a $500 million enhancement programme over six years to provide national interconnectivity; the creation of a new multimedia company that will provide and distribute multimedia services; and venture capital for the development of multimedia applications and products for the information highway.

Source: Stentor, Dec. 1995, p. 3.

Case 5.3: A network planned for China

The People's Republic of China is planning a network infrastructure that will have mail-hub servers in twelve of its largest cities by the end of 1996. The country-wide network will connect on-line many corporate and governmental groups with one another and the rest of the world with e-mail.

The servers are to be provided by Control Data Systems Inc. whilst the network service provider is Rayes Technology Co.

Source: Computerworld, Nov. 6, 1995, p. 76.

Bibliography

Abeysundara, B.W. and Kamal, A.E. (1991). High-speed local area networks and their performance: a survey. ACM Computing Surveys, 23(2), 221-261.

Boyl, P. (1996). Wireless LANs: free to roam. PC Magazine, 15(4), 175-202.

Bushman, B. (1994). A user's guide to frame relay. Telecommunications, 28(7), 42-46.

Brueggen, D.C. and Yen, D. (Chi-Chung) (1990). Local area network connectivity. Computer Standards & Interfaces, 11, 103-114.

Cerf, V.G. (1991). Networks. Scientific American, 265(3), 72-81.

Derfer, F.J. Jr. (1992). LAN Fundamentals, Part 2. PC Magazine, 11(7), 229-250.

Francis, B. (1991). Linking LANs with laptops. Datamation, 37(10), 61-63.

Gareiss, R. (1993). Tomorrow's networks today. Data Communications, 24(13), 55-65.

Gifford, J. (1995). Wireless local loop applications in the global environment. Telecommunications, 29(9), 35-37.

Lane, J.L. and Upp, D. (1991). SONET: the next premises interface. Telecommunications, 25(2), 49-52.

Miller, A. (1994). From here to ATM. IEEE Spectrum, 31(6), 20-24.

Page 72: Telecommunications and Networks

6

MAN/WAN

No technological imperative determines how to link LANs or even whether to do so.

Ben Smith and Jon Udell, 1993

If you want to connect Ethernets to ATM, you may need a Ph.D. to figure out the long term implications.

Paul Strauss, 1994

Introduction

LANs in the 1980s were very successful, but their inherent constraint of being confined to a local area is their greatest limitation for the 1990s. While in the 1980s industry matched resources to applications, in the future applications may well have to be matched to networks. Future applications are moving away from the legacy applications of processing data to the processing of real time data, text, voice and images. Processing needs are shifting from desktop networking to enterprise-wide strategic computing; from store-and-forward computing to strategic computing; from off-line computing to real-time computing; and from local processing to remote processing.

Networking is being transformed from the early environment of military systems tied to private networks that were switched mechanically, to the analogue and digital devices of today, with broadband, fibre-optically transported messages that are electronically switched by sophisticated software and intelligent network systems.

Businesses are becoming less centralized and more distributed. Businesses are no longer operating exclusively within their national boundaries but are going global, where physical boundaries are no longer an issue. Traffic in telecommunications is shifting from a LAN to a metropolitan MAN and to a wide area WAN.

We must avoid a communications gridlock if we are to benefit from open and worldwide markets. To fully benefit from the information age we must have networks that interconnect the millions of computers and thousands of computer installations world-wide to enable useful and meaningful information exchange. A step towards this goal is a step beyond the LAN and towards the MAN and WAN, the subject of this chapter.

In this chapter we will examine the nature of processing in a MAN/WAN and compare it with the LAN. We will also examine the planning and performance management of a WAN, which includes bandwidth management and switching management. The discussion of the management of switches will lead to a detailed discussion of the ATM.

MAN and WAN

MAN stands for Metropolitan Area Network and is a computer network that typically covers part or all of a metropolitan city. It usually encompasses a compact area that ranges from one to a few dozen miles in radius. In contrast, a WAN (Wide Area Network) is for a much larger area, much larger than either a LAN or a MAN. Typically, the distance may be 100 to a few thousand miles in radius. In practice, a WAN may cover a nation or a continent, and be international and world-wide.

A MAN is mostly copper and fibre cables, like the digital circuits based on the local phone company, connected to LANs with cable or with microwave links to a nearby microwave system.

The MAN is not heard of much these days, perhaps because it mostly uses a stable and proven technology developed by the telephone

Page 73: Telecommunications and Networks

companies. It is, however, an important basis of the so-called intelligent building, intelligent not in the AI (Artificial Intelligence) sense, but in the sense that it is (potentially at least) equipped with computers. Each room is wired for plug-and-play equipment. You acquire your hardware and software, plug the computer into the wall socket, and are connected to the world through networking. This may not be a common scenario today, but many professionals in computing see it as a viable scenario.

Planning for a WAN

In planning any network, one must consider some of the facts of the changing environment:

• an ever increasing number of computers and workstations, with regular upgrading of technology, and all of them quite powerful;

• an increasing number of end-users and organizations using networks;

• an increasing awareness and increase in computer literacy of the end-user, which includes the manager, the worker and the home-owner;

• the end-user certainly able and desirous of end-to-end interconnections and integration of applications;

• increasing complexity of businesses that demand remote processing for their input or output or both;

• an increasing need for multimedia processing, including data, voice, images and video, with possibilities of films-on-demand in the near future;

• the extent of remoteness of processing continuously enlarging to include the entire world.

One can safely say that in our world the need for computing and telecommunications is demand driven. The demand drives the computer industry, which in turn drives the telecommunications industry. The cycle is complete in that the telecommunications industry drives the computing industry, which in turn influences the demand. This cycle is shown in Figure 6.1.

We can see the demand driven technology in the area of multimedia, where the end-user is no longer satisfied with printed reports in batch as in the 1960s and 1970s but demands graphic and image processing, especially in the factory with CADD (Computer Aided Drafting and Design). But soon that was inadequate and there was a

Figure 6.1 Interaction between demand and technology (demand for applications by the end-user: voice and video processing, teleconferencing, cinema-on-demand; demand on the computing industry: multimedia workstations; demand on the telecom industry: interconnectivity, broadband capacity, multimedia transmission)

demand for voice processing, both analysis and synthesis. And then that too was not sufficient and there is now a demand for audio processing. To add to all this, there is a demand that this be done in real time, with animation and in colour. So the level of aspiration is steadily nudging up, with the traffic mix becoming very much multimedia and in real time, as shown in Figure 6.2. All this must be done quickly and seamlessly (smoothly) across remote distances. The end-user cannot tolerate a waiting time of even a fraction of a second. Thus, the demand on the computing industry and on the telecommunications industry to transport all this information is steadily increasing. With each advance there is demand for more advances. With each reduction of response time there is demand for more reductions and faster processing. With each inclusion of media there is a demand for more and better service and for these applications to be integrated. Output in printed form is no longer sufficient; output should be as voice, with a possible input response also as voice, and all in real time.

Voice is shown as only one of four media in Figure 6.2 but it is more important in relative terms. It is important as a stand-alone technology and has many applications in itself; in addition, it is part of video processing (and voice processing) and sometimes part of real-time processing. So it is important. It is of special interest to the computing professional because it is an analogue signal whilst computers are digital. That is why an integrated digital system is of such interest and is the subject of a separate chapter, Chapter 7.

Page 74: Telecommunications and Networks

Figure 6.2 Traffic mix in telecommunications (data, text, images and voice, overlapping with a real-time component)

Performance of a MAN/WAN

Whether we discuss voice processing or not, and whether we talk about a MAN or a WAN, we must design and implement a system that has good if not high performance. But what is performance for telecommunications and networks? It depends on the perspective, but both end-users and technicians will agree that delays are important. One approach may be to have a secure system with low losses and high reliability. This is a subject that we will discuss in Chapter 12, on the security of networks. One important performance measure is to reduce delays. This can be done by increasing bandwidth, by adding additional bandwidth, by reducing the load on the bandwidth without reducing traffic (by compression perhaps), or by increasing the speed of the links. These strategies are all part of what is called bandwidth management, which we shall discuss below. We shall also discuss another reason for delay and even loss of reliability, and that is switching management. This includes the strategy of switching, such as the hub and spoke method, and of course the reliability and robustness of switches that can operate reliably under heavy load as well as in a variety of traffic conditions.

Bandwidth management

There is considerable empirical evidence which supports the conventional wisdom that delays increase as the system gets loaded and its utilization increases. This is shown in Figure 6.3.

Figure 6.3 Link utilization and delays (normalized delay of roughly 2, 5 and 10 at utilizations of 50%, 80% and 100%; adapted from Weiss, 1990: p. 58). Not to scale
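The shape of the curve in Figure 6.3 can be approximated with simple queueing arithmetic. The sketch below uses the standard single-queue (M/M/1) result, in which the normalized delay grows as 1/(1 - utilization); this model is an assumption used here for illustration and is not the exact formula behind Weiss's figure.

    def normalized_delay(utilization: float) -> float:
        """Approximate normalized delay for a single queue with utilization in [0, 1)."""
        if not 0 <= utilization < 1:
            raise ValueError("utilization must be in [0, 1)")
        return 1.0 / (1.0 - utilization)

    for u in (0.50, 0.80, 0.90, 0.95):
        print(f"utilization {u:.0%}: normalized delay {normalized_delay(u):.1f}x")
    # delays roughly double at 50% load and grow very steeply beyond 80%, matching the figure's shape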

Page 75: Telecommunications and Networks

Table 6.1 Link speed and time required for a 640 kbyte load application, assuming 100% efficiency and no contention

Link speed (kbps)    Transmission time (s)
9.6                  5460
56                   960
112                  47
1500                 3.5

Source: Adapted from Weiss (1990: p. 58)

In Figure 6.3 we can see that beyond a 50% utilization delays increase rapidly, and increase much more rapidly after an 80% utilization. A quick answer to this problem would be to add another channel. Multilink and bonding (sometimes also called bandwidth-on-demand) are ways to add channels to a network. Multilink is software-based while bonding is hardware-based. Multilink does pose a problem: it may deliver packets out of sequence. TCP/IP accepts packets out of order and rearranges them in the proper order. The designers of Multilink did not anticipate this problem arising and have no good solution for it.
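The sketch below illustrates the kind of resequencing that TCP/IP performs when a multilink arrangement delivers packets out of order. The packet format (a sequence number paired with a payload) is an assumption for illustration; real TCP reassembly also handles retransmission, windowing and byte-oriented sequence numbers, none of which is shown here.

    # packets arrive out of order over two parallel links
    arrived = [(2, "seg-2"), (1, "seg-1"), (4, "seg-4"), (3, "seg-3")]

    def resequence(packets):
        """Return payloads in sequence-number order, as a receiver must before delivery."""
        return [payload for _, payload in sorted(packets)]

    print(resequence(arrived))   # ['seg-1', 'seg-2', 'seg-3', 'seg-4']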

Another set of approaches is to reduce the load on the system by compression and by the elimination and control of unnecessary data transmitted. Such filtering also eliminates unwanted network traffic. For an enterprise network operating with a wide variety of protocols, filtering can add up to 25-30% more effective bandwidth.

Yet another approach would be to increase the speed of the transmission link. This can produce a dramatic effect, as shown in Table 6.1, where the transmission time can drop from 5460 seconds to 3.5 seconds with an increase in the link speed from 9.6 kbps (which is the speed of many modems on a LAN) to 1.5 Mbps, which is the speed on some WANs. The moral is clear: keep the local traffic on slow and cheaper channels, but transfer the long-haul loads onto the faster channels; even though they may be more expensive, they are more cost-effective.
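The arithmetic behind Table 6.1 is simply the size of the load divided by the link speed. The sketch below works that out for the link speeds in the table; depending on assumptions about payload size and overhead, the results differ somewhat from the printed figures, so treat it as an illustration of the calculation rather than a reproduction of the table.

    PAYLOAD_BITS = 640 * 1024 * 8          # 640 kbytes expressed in bits

    def transmission_time(link_speed_kbps: float) -> float:
        """Seconds to move the payload at 100% efficiency with no contention."""
        return PAYLOAD_BITS / (link_speed_kbps * 1000)

    for speed in (9.6, 56, 112, 1500):
        print(f"{speed:>7} kbps: {transmission_time(speed):8.1f} s")
    # a jump from 9.6 kbps to 1.5 Mbps cuts the transfer from roughly nine minutes to a few seconds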

Switching management

One problem is the large switching decision faced by a WAN: whether to have many point-to-point connections with little switching in between, or to have all the local traffic funnelled into a hub and then transmit the load over long distances on a high speed WAN. We have a similar problem in air traffic. Some airlines, as in the US, funnel traffic on small planes to large hubs and then transfer passengers onto non-stop long hops across the ocean, sometimes taking up to 12 hours. Besides the unhappiness of some passengers, the technical problem is that such switching at hubs gets quite complex, with disastrous results when timings are not just right. This complexity increases not just with the increase in traffic volume but also with the change in traffic mix.

In earlier chapters we have drawn an analogy between telecommunications and highway (motorway or autobahn) traffic, with ramps and instructions for making changes at intersections. In packet switching we can see the parallel of packets (passengers) sharing the same channel (lane of road) and then being shunted off when they must make a connection. The problem is relatively easy when one considers just one type of traffic: data traffic. But now consider a mix of traffic: data, voice, images and video, with some in real time. This is analogous to having not just auto and car traffic but also having train and waterway traffic all at the same place. Complexity increases very quickly. This actually happens at Slussen in Stockholm, where roads, trains, a canal and pedestrians come together at the same time and yet all the traffic flows seamlessly. The problem in telecommunications must also be solved, not just for a mix of traffic but for the heavy traffic that can well be expected in the days to come. Part of the solution lies in a robust switching technique, and this is where we see the arrival of the ATM.

The ATM

ATM stands for Asynchronous Transfer Mode, where asynchronous means that the signal is not derived from the same clock and therefore does not have the same fixed timing relationship. ATM is supported internationally, and in 1988 it was chosen as the switching and multiplexing technique for B-ISDN (Broadband ISDN, to be discussed in a later chapter). It offers a high data rate (Gbps) as well as low latency (latency is the time between access time and transfer time).

It was partly in response to the need for large capacity and partly in response to the need to handle voice and video in telecommunications that the ATM was developed. And why is the handling of voice and video different from data?

Page 76: Telecommunications and Networks

For one thing, the processing is different. The ATM technology provides a common format for bursts of high speed data and the ebb and flow of the typical voice phone. With voice, periods of silence can be edited out without any loss in the message. And with video, only changes in the image are sent; the rest is already available and can be used for reconstruction of the message. Also, voice and video must be done in real time in order to avoid any losses in synchronization.

ATM is a connection-oriented technology, so that each cell is specified before the connection is made. ATM uses a cell structure which has a fixed message field of 48 bytes and a header of 5 bytes. The header in the cell contains all the information a network needs to relay the cell from one node to the next over an established route. The header also contains control bytes, as shown in Figure 6.4. This ATM structure is compared with the frame relay structure in Figure 6.5.

Figure 6.4 Structure of an ATM cell (header = 5 bytes, payload (message data/information) = 48 bytes; the header contains a Generic Flow Control field (4 bits) used to control the amount of traffic entering the network, a VCI/VPI field (24 bits) used for channel identification and simplification of the multiplexing process, a Payload Type Indicator (2 bits) used to distinguish between user cells and control cells, a Cell Loss Priority field (2 bits) used to indicate whether a cell may be discarded during periods of network congestion, and a Header Check Sum (8 bits) used for control purposes)

Figure 6.5 Comparison of ATM and frame structure (ATM structure: a header of 5 bytes followed by a payload (message) of 48 bytes, fixed length; frame structure: a flag, a header of 2-4 bytes, and a payload (message) of variable length)

Page 77: Telecommunications and Networks

The frame structure has a smaller header and a variable length message field. (The flag in a frame is a series of bits that indicates the start and the end of a payload message.) In Ethernet, the packet for switching is 64 bytes long. The frame header is larger because of the complexity of the variable length message that follows the header, but also because it provides data that the software needs to recognize where the message starts and where it ends. Fixed length packets, in contrast, can be implemented completely in hardware, which is faster.
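As a concrete companion to Figure 6.4, the sketch below packs a 5-byte header using the field widths quoted in the figure (GFC 4 bits, VCI/VPI 24 bits, PTI 2 bits, CLP 2 bits, header check sum 8 bits) and prepends it to a 48-byte payload. It follows the figure's field widths rather than any particular ATM interface specification, and the check sum here is a placeholder value rather than a real header checksum.

    def build_cell(gfc: int, vci_vpi: int, pti: int, clp: int, payload: bytes) -> bytes:
        """Pack a 53-byte ATM cell: a 40-bit header (per Figure 6.4) plus a 48-byte payload."""
        if len(payload) != 48:
            raise ValueError("ATM payload must be exactly 48 bytes")
        hec = 0  # placeholder; a real header check sum is computed over the other header bytes
        header_bits = (gfc << 36) | (vci_vpi << 12) | (pti << 10) | (clp << 8) | hec
        return header_bits.to_bytes(5, "big") + payload

    cell = build_cell(gfc=0, vci_vpi=0x000123, pti=0, clp=1, payload=bytes(48))
    print(len(cell), cell[:5].hex())   # 53 bytes; the first five bytes are the header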

ATM 'users send bursts of as many or as few cells as necessary to transfer their data; they also pay only for the cells they send, not for the speed of a dedicated facility they may use only part of the time' (Lane, 1994: p. 43). The bursts are sent through a centralized ATM switch that has dedicated connections to its end-users, who may have a PC or a workstation, in addition to a connection to a computer that may be a server. This is illustrated in Figure 6.6. It is the ATM switch that routes messages and controls access in the event of contention, the conflict when more than one user wants to use the same resource simultaneously. This arrangement is analogous to the PBX (Private Branch Exchange) used for voice calls.

ATM establishes virtual connections between each pair of ATM switches needed to connect a source with a destination. Up to 65 536 virtual channels can be multiplexed into a virtual path. These connections are termed 'virtual' to distinguish them from the dedicated circuits used in the STM, Synchronous Transfer Mode. 'It is the virtual nature of ATM services that will provide greater efficiencies in the future. Today, most communications capacity is idle. A voice circuit is only 30-40 percent efficient; most of its time is spent listening.' (Lane, 1994: p. 43).

Connections can also be virtual as in the PVC (Permanent Virtual Connection), where parameters (but not necessarily routes) are established in advance, in contrast to the SVC (Switched Virtual Circuit), which provides resources as required.

We have characterized ATM as a connection-oriented technology, but that does not preclude it from handling connectionless traffic as well. The traffic on an ATM has often been identified as heavy because of the nature of the traffic, which includes images, voice and video. To give the reader an appreciation of the magnitudes involved, we shall examine some applications for their data intensity. We start with text: a recent book written by your author, of roughly the same size as this book, occupies over 12 million bits. In contrast, a full diagram like many in this book would take around 1 million bits each. Images are data intensive. An example from the real world would be a medical imaging application: the rendering and transmission of a diagnostic X-ray would involve 2-10 billion bits of information (Vetter, 1995: p. 31). More such applications can be found in scientific research, with high definition three-dimensional images in real time. In industry, examples of such images would be in CADD, computer aided design and

Figure 6.6 An ATM switch (PCs or workstations and a server computer have dedicated connections to the ATM switch, which in turn connects to the ATM network)

Page 78: Telecommunications and Networks

drafting. In business, such as in oil prospecting, large amounts of data on simulation have to be moved quickly from graphical workstations to the field offices. Then there are other video applications in the entertainment industry and in education. In the office there is teleconferencing, image archiving, and group work that mixes voice, images and text, all in a time-sensitive environment. Also, the traffic load on networks is heavy not just because of the type of application but also because of high volume. Even ordinary applications like file transfer and e-mail have risen exponentially. All must now be handled by LANs and WANs.

In summarizing the discussion of ATM, one can compare it with circuit switching as in Table 6.2. Also, one can safely say that ATM is a proven technology but that its applications lie more in the future (like teleconferencing and films-on-demand) than in current reality. There are many commentators and practitioners who are enthusiastic about ATM and there are some who are more cautious. We shall quote both sides below:

ATM is often described as the technology that will allow total flexibility and efficiency to be achieved in tomorrow's high speed, multiservice, multimedia networks . . . its promise of high speed, integrated services and universal connectivity . . . the technology that finally enables high-bandwidth, time critical applications to reach the desktop. (Vetter and Du, 1995: p. 29).

ATM cannot deliver ubiquitously across the network in a failure-proof mode as yet . . . Many of the current network management platforms lack the performance features and sophistication required to manage ATM's high speed, highly complex connection-oriented networks . . . an enterprise network entirely composed of

Table 6.2 Comparison of circuit switching and ATM

               Circuit switching       ATM
Traffic type:  Data                    Data and voice
Structure:     Variable length         Fixed length
Delays:        Variable, can be long   Fixed, very low
Bandwidth:     Wasted                  Efficient
Switching:     Done in software        Done in hardware
Orientation:   Connection              Connection and connectionless

ATM will not be possible for some time yet . . . most enterprise users will implement hybrid ATM/frame relay and connect to a device that is terminated in the enterprise network on an ATM port . . . Interoperability with existing technologies such as frame relay and the X.25 is imperative to a smooth and cost-effective evolution of ATM. (Federline, 1995: pp. 69-70).

The problem that many of us, especially laypersons, have with ATM is not with the structure or its functions but with its name. We tend to confuse ATM in teleprocessing with ATM in banks, Automated Teller Machines.

ATM is one approach to the demands of transmitting data and voice. Another approach is the ISDN, the topic of our next chapter.

Summary and conclusions

A MAN and a WAN are extensions of a LAN. There is more interest in the two extremes: the LAN for short haul and the WAN for long haul telecommunications. They are compared in Table 6.3.

Another way of looking at a LAN and a WAN is that the LAN is an access layer which interfaces other networks at low speed, and the WAN is the backbone layer of networks providing high performance connectivity. This relationship is shown in Figure 6.7, where the two layers are represented by 'clouds' as boundaries.

Table 6.3 Comparison of a LAN and a WAN

                       LAN                    WAN
Distance:              For a few km           For over 1000 km
Speed:                 Low speeds (in Mbps)   High speeds (in Gbps)
Error rate:            Low                    Can be high
Ownership:             At firm's level        Higher than firm's level
Administration costs:  Less than WAN          More than LAN
Maintenance:           Less complex           More complex than LAN
Routing algorithm:     Simple                 Complex
Switching:             Frame, FDDI            Frame, FDDI, ATM

Page 79: Telecommunications and Networks

Figure 6.7 Access and backbone layers (the two layers shown as 'clouds': an access layer and a backbone layer)

Smith cautions us about the issues to be addressed by the WAN backbone:

address capacity planning issues as well as fault reporting and route planning functions. Router management systems focus very heavily on the management of routes/paths and are typically weak in wide area performance reporting and management . . . unless users fully appreciate and are prepared for the consequences of some of the interconnectivity systems that they implement, their WAN resources could become expensive, overloaded, and poorly managed. LAN internetworking needs an efficient WAN behind it for users to achieve productivity improvements. (Smith, 1994: p. 55).

To meet the high load demands and variety of traffic we need an ATM. But this does not mean that it cannot be used in a LAN. The LAN of the future will not only have hosts, internetworking devices and interfaces to public networks, but also ATM switches, as predicted by Vetter and Wang:

The bandwidth of traditional LANs is usually on the order of tens of megabits per second, while ATM will support Gbps speeds. Today's LAN also lacks scalability. Tomorrow's LANs must operate in an environment in which computing devices are so inexpensive and readily available that there are hundreds or even thousands in a typical office. With such large numbers of devices any attempt to interconnect them with traditional shared-media LANs would be impossible. The limitations of existing bus and ring LANs, the demand for higher bandwidths, and larger user populations are major reasons for the growing interest in ATM LANs. (Vetter, 1995: p. 35).

Another factor supporting the use of ATM in a LAN is that LANs are increasingly managed in a centralized fashion by hubs. Increasing use of twisted-pair and optical fibre media fosters centralized hub connections as well, and a switched LAN interconnection is seen as an extension of (or a replacement for) existing hubs. As computing power increases, the direct connection of high-end workstations to a centralized switch is seen as attractive, especially for video and other high-end applications. In particular, the expected growth of video-related applications makes the connection-oriented ATM technique a suitable choice for this usage. (Wang, 1995: p. 40).

Budwey and Salameh look at the role of ATM as occupying a niche in networking:

In the global networks, SMDS and frame relay will prevail in the next five years as the driving technologies. ATM will provide for high performance networks, frame relay for lower end applications, and SMDS will be the LECs' switching interface of choice. If priced correctly, SMDS may provide the infrastructure of the future global network. (Budwey and Salameh, 1992: p. 26).

Case 6.1: ATM at Sandia

Sandia Labs in Albuquerque, New Mexico, is organizationally related to Livermore Labs, in

Page 80: Telecommunications and Networks

California, some 1800 km apart. They were among the first to have a supercomputer and to be on the ARPANET. Both are involved in research and development for the US government defence and energy departments. They serve thousands of scientists and engineers and have some of the most powerful computing power in the world. Sandia has 200 LANs and there are 25 in Livermore. These facilities are to be consolidated for purposes of implementation, security mechanisms, reliability figures, meeting performance requirements, and adherence to networking and communication standards.

Supercomputing at Sandia and Livermore has two computing environments: 'secure' for classified work and 'restricted' for unclassified work. Both environments provide TCP/IP (Transmission Control Protocol/Internet Protocol) access for remote FDDI (Fibre Distributed Data Interface) (also used in the UK Parliament in London) or Ethernet LANs to centralized network resources such as graphic servers and mass data storage. Access to supercomputers is provided at both sites by Ethernet/FDDI and other routers.

ATM switches are used in the consolidation link, based on Cisco routers that provide the connection from the LAN technology to the SMDS/ATM (Switched Multimegabit Data Service/Asynchronous Transfer Mode) MAN (Metropolitan Area Network), with the ATM switches connecting the MAN to the WAN (Wide Area Network). The measured round trip delay time of the end equipment is 7.1 seconds.

Future plans include 'the possibility of migrating from SMDS to the emerging standards for transporting IP and other protocols directly over ATM switches'.

Source: Neagle et al. (1994). Developing an ATM network at Sandia National Laboratories. Datamation, 28(2), 21-3.

Case 6.2: Navigating LANs/WANs in the UK

LANs, MANs and WANs are being used in the UK for resource sharing and as a mechanism for improving the personal productivity of office workers, as well as for promoting collaboration and work group computing.

Britain's PA Consulting group have found that businesses have increased their sales by 25%; reduced their administrative staff by 15%; and improved their customer care index by 10%.

In addition to increasing productivity and reducing costs, there is also often an improvement in security and quality levels, as well as a reduction in delays. The main factors that influence the LAN/MAN/WAN connectivity solutions are the size of the MAN/WAN, the location of the access points, the number of users, patterns of traffic, the applications portfolio, inter-LAN software, and hardware compatibility.

The LAN/MAN/WAN connectivity can lead to an enterprise solution that, according to Chris Gahan of the British carrier BT, '. . . is a blend of services, using individual technologies to their best advantage and having the flexibility to change the blend as business changes'.

Source: International Herald Tribune, Oct. 4, 1995, p. 14.

Supplement 6.1: WAN technologies

               Transmission mode             Usage                    Speed
Frame relay:   Variable-length packets       Data, some voice         56 kbps to 1.5 Mbps
ATM:           Fixed-length, 53 byte cells   Data, voice and video    1.5 to 622 Mbps
ISDN:          48-bit packets                Data, voice and video    144 kbps
SMDS:          Fixed-length, 53 byte cells   Data                     56 kbps to 34 Mbps

Source: Computerworld, Oct. 16, 1995, p. 69.

Supplement 6.2: Survey on WANs

Focus Data, an independent market research firm, conducted a survey of users of Network World on usage and selection criteria of WANs. The results are shown below:

Analogue dial-up     53.1%
ISDN                 33.7%
Switched digital     15.3%
Other                32.7%
Don't know            9.5%

Page 81: Telecommunications and Networks

Based on a possible score of 5.0, the scores for the top five selection criteria used are as follows:

Ease of use for remote users            4.56
Throughput performance                  4.45
Support for a specific LAN protocol     4.37
Ease of support                         4.12
Management tools                        3.76

Source: Network World, Oct. 30, 1995, p. 62.

Supplement 6.3: Projected pricing of ATM

The price of ATM access is predicted to drop consistently, at least for the next three years. The predicted drop is as follows:

1995    $5400
1996    $4000
1997    $3000
1998    $2000

Sources: CIMI Corp., Voorhees, N.J., US; printed in Computerworld, Nov. 20, 1995, p. 2.

Bibliography

Alexander, P. (1995). Network management: the road to ATM deployment. Telecommunications, 29(9), 47-50.

Basi, J.S. (1990). Networks of the future. Telecommunications, 24(7), 33-36.

Bryan, J. (1993). LANs make the switch. Byte, 18(6), 113-132.

Budwey, J.N. and Salameh, A. (1992). From LANs to GANs. Telecommunications, 26(7), 23-26.

Fritz, J. (1994). Digital random access. Byte, 19(9), 128-132.

Hurwicz, M. (1997). In search of the ideal WAN. LAN, 12(1), 99-102.

Kim, B.G. and Wang, P. (1995). ATM networks: goals and challenges. Communications of the ACM, 38(2), 39-44.

Lane, J. (1994). ATM knits voice, data on any net. IEEE Spectrum, 31(2), 42-45.

Miller, A. (1994). From here to ATM. IEEE Spectrum, 31(6), 203-204.

Pugh, W. and Boyer, G. (1995). Broadband access: comparing alternatives. IEEE Communications Magazine, 33(7), 34-46.

Richardson, R. (1997). VPNs: Just between us. LAN, 12(2), 9-103.

Smith, B. and Udell, J. (1993). Linking LANs. Byte, 18(12), 66-84.

Smith, P. (1994). Reconciling the LAN vs. WAN bandwidth management mindset. Telecommunications, 28(3), 51-55.

Vetter, R.J. (1995). ATM networks: goals, architectures and protocols. Communications of the ACM, 38(2), 39-44.

Vetter, R.J. and Du, D.H.C. (1995). Issues and challenges in ATM networks. Communications of the ACM, 38(2), 28-29.

Weiss, J. (1990). LAN/WAN internetworking. Telecommunications, 24(7), 57-59.

Page 82: Telecommunications and Networks

7

ISDN

. . . the intrinsic value of a telecommunications system grows combinatorially with the number of subscribers it interconnects.

David Rand Irvin

Introduction

ISDN is the acronym for Integrated Services Digital Network. It is a digital version of the switched circuit analogue telephone system. 'But,' say the critics, 'we have had switched networks and telephones for a long time. So why the excitement?' 'Well,' reply the proponents of ISDN, 'it is multimedia and handles data, voice, images and video.' 'But,' add the sceptics, 'we have had voice and fax through the modem all these years, and so what is new?' The proponents of ISDN then point out that it is not only a technology for multimedia communication, but an enabling technology that will allow computers with applications of integrated multimedia to be as ubiquitous as the telephone is today. The sceptics then say, 'I have been hearing of ISDN for over a decade and we have seen no results. Maybe we should not upset what we already have. We may not need something that is so complex and difficult to implement.' 'Yes,' counter the proponents of ISDN, 'it has taken a long time and will take longer because we are dealing with a system that is not only integrated but internationally so. And getting international agreement between the carriers, the suppliers of telecommunication components, the computer industry, and many governments does take a long time. It also takes a long time to test a product carefully and then to gain acceptance, especially for an advanced concept. Remember, only 25% of households in the US had telephones in 1920 and it took 60 years for this percentage to increase to 96%. These things take a long time.' So the argument rages.

In this chapter we will examine the myths and realities of ISDN. We will describe ISDN as it is today, look at its objectives for tomorrow and the day after, discuss the implementation of ISDN, and examine the obstacles and the future for ISDN.

We start with image processing, followed by voice processing. In both cases, we examine the nature of the application and its uses in business and daily life. These applications are constrained by a lack of the resources necessary for implementation of the digital technology. These constraints are then examined. For the reader who wishes to read on, there is more on the nature of ISDN and its evolution.

The computing environment

ISDN was designed for an environment where all the needs of computing could be integrated. This includes applications of data and voice, as well as images and video. Data and images can be easily digitized, while voice and video are basically analogue signals, more appropriate for telephony than for a digital computer. The problem is one of integrating the two types of signals, or else paying for the inefficiencies resulting from the interfacing of the two. Either all should be digitized or all should be analogue. ISDN takes the approach of all being digitized. However, understanding the nature and magnitude of the digitized and analogue applications is necessary to appreciate ISDN.

All the early processing was computations and transaction processing, which were numerical and digital. Later on we added textual processing, but this was digitized by giving each alpha character a unique digital equivalent. Then came graphics, which in many cases could be digitized if a curve is viewed as a set of lines, which they are because many discontinuous lines can look continuous.

Page 83: Telecommunications and Networks

Likewise an image (like a drawing or even a photo) can be viewed as a set of dots, where each dot is digitized. Thus, an image can be represented by an array of numbers. A number code could also represent the intensity (and perhaps colour, if we are not just dealing with black-and-white images). These primitive picture elements are called pixels. Thus a computer image is a two dimensional array of numbers, the individual pixel values. For example, we might have a 100 × 100 array of intensity measurements, each selected from a range of 0 to 100, where 100 represents white and 0 represents black, and the 99 intermediate values represent various shades of grey.

The initial stage of image processing is pixel processing and may involve a 'clean-up' process, i.e. removal of noise (e.g. black pixels that should be white) that is often introduced by the hardware that generates the pixel image. A 'smoothing' operation is then performed, in which a small cluster of adjacent pixels is compared and a single odd-valued one is adjusted to a value that is closer to the 'average' of its neighbours. This smoothing operation is computationally trivial but is done repeatedly, something like 10 000 times on a 100 × 100 array of pixels. One can quickly see that the computational needs of image processing can multiply rapidly and become very large very quickly. Digital computers soon became indispensable.
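A minimal sketch of the smoothing pass described above follows. The 3 × 3 neighbourhood, the small example image and the rule of replacing a pixel with the average of its neighbourhood are assumptions chosen to keep the illustration short; production image processing would use larger images and more careful edge handling.

    def smooth(image):
        """Replace each interior pixel with the average of its 3 x 3 neighbourhood."""
        rows, cols = len(image), len(image[0])
        out = [row[:] for row in image]
        for r in range(1, rows - 1):
            for c in range(1, cols - 1):
                neighbourhood = [image[r + dr][c + dc]
                                 for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
                out[r][c] = sum(neighbourhood) // len(neighbourhood)
        return out

    # a 5 x 5 greyscale patch (0 = black, 100 = white) with one noisy black pixel in the middle
    patch = [[90] * 5 for _ in range(5)]
    patch[2][2] = 0
    for row in smooth(patch):
        print(row)
    # the odd-valued pixel is pulled back towards the value of its neighbours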

The large set of computations for image processing is worth the price because, as the saying goes, 'a picture is worth a thousand words'. Also, there are many applications in business that range from simple charting for a report to complicated drawings, blueprints, and CAD/CADD (computer aided design/computer aided drafting and design). In medicine, magnetic resonance imaging (MRI) and computerized axial tomography (CAT) scans of parts of the body, including the brain, may well avoid dangerous surgery and save many lives. In a less dangerous and more entertaining way, images are used in cinema and film-making. In 1982, the film Tron was released, credited as the first feature film that used computer generated imagery as background for live actors. Well, within a decade this digital technology became so commonplace that it is now used throughout the entire film industry. We now have digital cameras that take moving and animated pictures and store them for future manipulation by computer. This technology has applications in business for training, for advertising products and for simulations in decision-making.

Image processing is also important in any office. To give you some idea of the magnitudes involved, consider a study done by Arthur D. Little in 1980 in the US. The study found that an average office worker handles per day: 1 page from files, 5 pages from mail, 4 from catalogues, 11 photocopies, 32 pages of computer printout as input, 14 pages passed along, 5 pages mailed and 8 pages to be filed. Now project this into the future and you can soon see that there is a great potential demand for image processing if office work is to be rationalized and made efficient, especially when office processing shifts from data and text only to their being embedded in graphics and pictures as images.

Even the need for processing data is increasing rapidly, as shown in Figure 7.1, where the files of the 1970s have grown into the very large files of the 1990s and may now occupy 10^12 bytes. In addition to data, text and graphics, there is the need to process voice, which is often indispensable today in the office, in the factory, in the home and in all walks of life. This need is largely met by the telephone and partly by voice processing. The differences between the processing of voice and that of data may be well known, but for the record they are summarized in Table 7.1.

Many applications related to voice are processed by the telephone exchange. An example is video processing; one configuration is shown in Figure 7.2. This can be done through a central facility like a PBX (Private Branch Exchange) carrier with a BRI (basic rate interface) for low volume, or through the LEC (Local Exchange Carrier), paying for a PRI (primary rate interface) for high volume. The BRI and the PRI are two classes of service to customers of baseband ISDN. The BRI provides up to 144 kbps (two 64 kbps 'B' channels plus one 16 kbps 'D' channel for control information). The PRI provides up to 1.54 Mbps, which includes twenty-three 64 kbps 'B' channels and one 64 kbps 'D' channel.

The many applications of voice and image processing mentioned above are stand-alone applications. Many can be integrated, providing a raison d'etre for ISDN. Some of these applications are listed in Table 7.2. They vary in bandwidth demand, as shown for a sample of applications in Figure 7.3.

Page 84: Telecommunications and Networks

Figure 7.1 Growth of corporate databases (bytes of data plotted against time from 1950 to 1990: from small and large records, blocks of records, and small and large files of around 10^2 to 10^6 bytes, through small, medium, large and integrated databases, to very large and very, very large databases approaching 10^12 bytes)

Figure 7.2 Networking with video-telephones (PBXs connect over BRI links to LECs, which connect over PRI links to an inter-exchange carrier; BRI = basic rate interface, PRI = primary rate interface, LEC = local exchange carrier)

Page 85: Telecommunications and Networks

Figure 7.3 Bandwidth requirements (applications such as telex, text, the analogue phone, FAX, transactions, file transfer, CAD/CAM, desktop tele-conferencing, the digital videophone and visualization arranged along a bandwidth axis from 10 bps through 64 kbps to 1 Gbps). Not to scale; only approximate

Table 7.1 Computer network vs. voice network

               Computer network   Voice network
Usage:         Digital            Voice
Distance:      Limited            Long
Performance:   High               Limited
Ownership:     Private            Usually public
Charges:       Low, if any        Service charge

The resource environment

The question that could be asked is why these applications cannot be processed by our conventional digital equipment, along with modems to do the conversion between digital and analogue signals. The answer is that modems are appropriate for PCs at around 9.6 kbps, despite their steady increase in bandwidth from around 100 bps to almost 50 kbps. Up to 100 kbps, the baseband ISDN may well be adequate for connecting LANs as well as for transmitting data faster than before. For example, a facsimile page that took 30 seconds in the pre-ISDN era would now take around 4 seconds. However, for applications beyond 100 kbps, one needs the B-ISDN (Broadband ISDN), as shown in Figure 7.4. The B-ISDN is designed for voice and video, as well as large volumes of data, to be transmitted over long distances. With fibre optics, the B-ISDN can reach data rates between 10 and 600 Mbps. Bandwidth-hungry applications, such as high speed workstations and large data repositories interconnected for the purpose of processing medical images, molecular models, distributed CAD/CAM (computer aided design/computer aided manufacturing), and the like, are awaiting adoption. Applications of ISDN are listed in Table 7.2. The most spectacular application of ISDN (not even listed in Table 7.2 because it is still a proposition) may well be NASA's Earth Observing System for global-change research, which is expected to transmit more than one trillion bytes of data per day (or, equivalently, 92 million bits per second) for the duration of a 15 year period, beginning in the late 1990s (Irvin, 1993: p. 43). However, we are talking about B-ISDN without really explaining the baseband ISDN, or narrowband ISDN. It is time to do so.

What is ISDN?

Voice and data are inputs to interface hardware equipment which is connected to an ISDN interface. This is connected to an ISDN switch through three channels: two 'B' channels (bearer channels) of 64 kbps each, appropriate for either data or voice transmission, and one 'D' channel of 16 kbps, designed to control

Page 86: Telecommunications and Networks

Figure 7.4 Rise in data rates (bits per second plotted against time from 1960 to 2010: from around 100 bps for modems, through 10K and 100K for ISDN, to 10M and beyond for B-ISDN; adapted from Griffiths, 1990: p. 158)

Table 7.2 Applications made feasible by ISDN

Customer sharing with a salesperson the same screen on products and conditions of sale
Teacher sharing a screen with a student
Telecommuter sharing screens of a multimedia desktop with a customer, supervisor or co-worker
Video-conferencing with all parties looking at data, text, graphs, pictures and even the results of a simulation in progress
Access to and dialogue with a librarian
Medical records and imaging access
Remote medical diagnoses, such as from an airport or farm
Security (and identification) and surveillance systems
Teleshopping
Telebanking
Telereservations
Telenews
High speed bulk multimedia transfer, including books from a library
Multidocument image storage and retrieval

Note: All the above applications involve on-line remote processing. Many of the above (e.g. video-conferencing and those involving dialogues with up-to-the-minute updated data) are isochronous, that is, they are time dependent and in real time.

transmission in the 'B' channels (it is used to signal the switching system to generate calls, reset calls and receive information on the incoming calls, including the identity of the caller). These three channels are sometimes referred to as the 2B+D system. Two of these three channels are of 64 kbps each. They can be multiplexed to form one 128 kbps channel, or four 'B' channels can be multiplexed to form one 256 kbps channel. Similarly, one 64 kbps channel can be submultiplexed into two 32 kbps or eight 8 kbps channels for eight terminals connected in parallel. These channels connect to the ISDN switch, which is connected at the other end to networks that may be signalling, non-switched, switched or packet switched networks. This is shown in Figure 7.5.
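The channel arithmetic described above is simple enough to check directly. The sketch below is a small illustration, assuming the basic rate figures quoted in the text (two 64 kbps B channels and one 16 kbps D channel).

    B_CHANNEL_KBPS = 64
    D_CHANNEL_KBPS = 16

    # basic rate interface: 2B + D
    bri = 2 * B_CHANNEL_KBPS + D_CHANNEL_KBPS
    print(f"BRI (2B+D): {bri} kbps")                                 # 144 kbps

    # multiplexing B channels upwards
    print(f"2 x B multiplexed: {2 * B_CHANNEL_KBPS} kbps")           # 128 kbps
    print(f"4 x B multiplexed: {4 * B_CHANNEL_KBPS} kbps")           # 256 kbps

    # submultiplexing one B channel downwards
    print(f"one B as two channels of {B_CHANNEL_KBPS // 2} kbps")    # 32 kbps each
    print(f"one B as eight channels of {B_CHANNEL_KBPS // 8} kbps")  # 8 kbps each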

Implementation of ISDN

ISDN has no system to compete with or to match and copy. ISDN was implemented following all the rules of good development: the system specifications were stated by the users, and the system was designed, implemented and tested. Since the system was to have world-wide relevance, the development had to be global, and this takes a long time.

In 1984, the first specifications of ISDN appeared as the 'Red Book' specs, followed by the 'Blue Book' specs, based on experiences with the Red Book. These were made by the CCITT, an international telecommunications organization based in Europe.

Suppliers for components were selected: Siemens Stromberg-Carlson in Europe, and AT&T and Northern Telecom in the US. The implementation was ahead of schedule, and more so in Europe than in the US. This may be because in Europe we have nationalized PT&T

Page 87: Telecommunications and Networks

Figure 7.5 Basic components of an ISDN (a PC or workstation connects through an ISDN interface to the ISDN switch over two 'B' channels, which could carry data or voice, and a 'D' channel, which could carry control information; the switch connects in turn to switched and other networks)

(Post, Telephone and Telegraph) organizations, while in the US the telecommunications industry is decentralized. The private carriers had billions of dollars invested in analogue equipment, and so there was hesitation and caution, but no resistance, to a conversion, painful and expensive though it was bound to be. But carriers in the US did anticipate the coming of ISDN and have rapidly added WAN access capabilities to everything, ranging from computers to internetworking equipment to video-conferencing equipment. US companies are developing alliances and partnerships to redirect their resources to the burgeoning potential market of ISDN. The computer industry is also producing the equipment necessary for an ISDN environment, such as ISDN modems, adapters, bridges and bridge/routers (Garris, 1995).

The implementation of ISDN was started in the UK in 1985 by British Telecom, followed by Illinois and California in the US in 1986; by Mannheim and Stuttgart, Germany, in 1987; and by Brittany, France, also in 1987. By 1988, there were over forty different trials of services of the 'Red Book' standards in progress. In many countries, the infrastructure required by ISDN was being laid. In the US, the use of fibre grew from 456 000 miles per year to 3 811 000 miles a year by the end of 1991. Japan has enjoyed a fibre optic backbone network since 1989. They have announced fibre-to-the-home trials, and a schedule for providing such service to all subscribers by the year 2015. In Europe, the Euro-ISDN (European ISDN standard) was introduced in 1988, and compliance with the standard was targeted at 95% in 1995. In Australia there is also a 95% availability of ISDN. In the UK, the availability of ISDN has been 100% for several years. In the US, services are expected to be around 90% by 1996 (Galvin and Hauf, 1994: p. 36).

Despite its extensive testing and its international certification, ISDN has not been widely adopted. In a study done in the US (Lai et al., 1993: p. 49), 17% of the companies surveyed rejected ISDN. The main reasons cited (in order of significance) were:

• other networks can serve the same communications needs equally well;
• nation-wide ISDN not available;
• not able to justify costs;
• not an established technology;
• not available in our area;
• international ISDN is not commonly available;
• not compatible with the organization's computing environment.

The same study identified the principal obstacles to the adoption of ISDN (Lai et al., 1993: p. 50). These are:

national ISDN not available;
unattractive tariff structure;
lack of user awareness;
world-wide ISDN not available;
expense of ISDN equipment;
available only in metropolitan areas;
incompatible equipment;
ISDN services not attractive;
lack of standards for ISDN.

Page 88: Telecommunications and Networks

Summary and conclusions

ISDN is a viable technology but has not yet received wide acceptance. Meanwhile it will coexist with other technologies that are often older and better entrenched, like analogue, Switched 56 and SMDS. Another competing technology is frame relay. This is not older but younger than ISDN; in fact, it is a spin-off of ISDN (Bhushan, 1990). These technologies are compared in Table 7.3.

The 'I' in ISDN stands for the integration of the media of data, images, voice and video into one single digital end-to-end system of seamless transmission and communication. The integration of the two basic media of digital data (from the computer) and analogue signals (from a telephone) is shown in Figure 7.6. Such integration will not only enhance the many applications of integrated digital data and voice listed in Table 7.2, but will encourage the user and the industry to identify and develop applications that have only been dreamed of. For example, it may well be possible soon to pick up the phone and call someone across the continent, or even across the world, and download a database or images or video in one window of the screen while seeing the other party in another window simultaneously. This may well come without any grand opening and fuss, but will just creep upon us like a long overdue progression of computing applications.

Table 7.3 Comparison of ISDN with other technologies

Technology     Advantages                                    Disadvantages
Analogue       Ubiquitous; cheap                             Slow; long set-up times
Switched 56    Faster than analogue; provides digital        High monthly service charge
               service
SMDS           Highest bandwidth                             Confined to local carrier's region
Frame relay    Best for full-time connections such as        Too expensive for low-frequency users
               large and very busy branch offices
ISDN           Fast; flexible configuration; can be cheap    Not universally available or compatible;
                                                             unfamiliar to many professionals;
                                                             expensive if packaged wrongly

Figure 7.6 Integration of digital and analogue (PCs, a telephone, a printer, a host computer, an office computer and peripherals connect through a multipoint ISDN bridge or router and an ISDN LAN bridge to the national ISDN network)

Page 89: Telecommunications and Networks

ISDN is not yet a plug-and-play technology, but it is a big leap from the analogue-only communication of the telephone. ISDN is an access technology that allows us access to all data, whether in the form of digits or analogue signals, simultaneously. It is a vehicle by which businesses as well as households will cruise on the international information highway.

ISDN is a foundation technology and, like the foundation of a house, it is fundamental and even invisible. But the infrastructure for the house of ISDN has been carefully planned and well implemented. It is not commonly accepted, not even commonly known. There is still confusion and lack of awareness about availability schedules, rates, standards, and even the benefits and enhancements of ISDN. There are still problems of national standards conforming with international standards, of cheap and stable rate structures, and of interoperability and portability between components and devices using ISDN. We have not yet reached the level of applications that provides the assurance of economies of scale needed for the industry to adopt ISDN. Also, ISDN is constrained by bandwidth. A broadband ISDN, the B-ISDN, is now being implemented (a case on the development of its standards is discussed in the chapter on standards). All this will come in due time; as with the telephone, it may take many decades, but it will be ubiquitous and indispensable for a worthy style of living that awaits us.

ISDN is consistent with many of the well-accepted architectures of networking, which is the subject that we will examine in some detail in our next chapter.

Case 7.1: ISDN at West Virginia University (WVU)
WVU had an FDDI backbone linking its ten buildings, but had 90 other buildings that could not be interconnected. Economic analysis indicated low bandwidth connectivity for these satellite buildings: dial-up routers that use modems to link LANs over ordinary phone lines gave only the most casual support of LAN interconnection, despite compression being used.

In 1990, WVU decided on a flat-rate ISDN service that connected the satellite buildings to the FDDI backbone using Ethernet bridges. This cost $40 per month for each line and $2100 for a bridge that lashed the two 64 kbps 'B' channels of ISDN together to create a 128 kbps pipe. With compression, throughput exceeded 200 kbps. That is an order of magnitude slower than Ethernet but an order of magnitude faster than an analogue modem. Also, being synchronous, it carries a bigger load than does asynchronous traffic. The alternative was to get a T1 link for $700 per month plus a bridge and a CSU/DSU (channel service unit/data service unit) for $12 000.

Source: Byte, Dec. 1993, p. 75.
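The trade-off described in Case 7.1 can be made concrete with a little arithmetic. The Python sketch below uses only the figures given in the case; the 10 Mbps Ethernet and 28.8 kbps modem rates, and the 36-month amortization of the one-off equipment costs, are assumptions added purely for the comparison.

    # Rough comparison of the WVU options (figures from Case 7.1; Ethernet and
    # modem rates and the amortization period are assumptions for illustration).
    isdn_line_kbps = 2 * 64            # two bonded 64 kbps B channels: a 128 kbps pipe
    isdn_compressed_kbps = 200         # throughput observed with compression (approx.)
    ethernet_kbps = 10_000             # assumed 10 Mbps Ethernet
    modem_kbps = 28.8                  # assumed analogue modem rate

    print(f"Bonded B channels: {isdn_line_kbps} kbps")
    print(f"~{ethernet_kbps / isdn_compressed_kbps:.0f}x slower than Ethernet, "
          f"~{isdn_compressed_kbps / modem_kbps:.0f}x faster than an analogue modem")

    months = 36                        # assumed period over which equipment is written off
    isdn_monthly = 40 + 2100 / months          # line charge plus the ISDN bridge
    t1_monthly = 700 + 12_000 / months         # T1 line charge plus bridge and CSU/DSU
    print(f"ISDN: ${isdn_monthly:,.0f} per month vs T1: ${t1_monthly:,.0f} per month")

Under these assumptions the ISDN option comes out at roughly a tenth of the monthly cost of the T1 alternative, which is the economic argument the case reports.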

Case 7.2: ISDN in France
The Numeris ISDN in France has many applications that can be classified into four categories, as shown below.

DATA: remote processing; LAN interconnection; software loading.
MULTIMEDIA: radio commentaries; teleconferencing; audiogram service; illustrated videotext.
IMAGE PROCESSING: image server; medical imagery; telesurveillance; remote teaching; video telephony; local image stations.
DOCUMENTATION: high speed facsimile; electronic mail; document database; document exchange.

The ISDN implementation was done in three stages. The first stage was to use ISDN for a specific application, including image videotext; the second stage was voice and data integration; and the third stage was full integration with the architecture of the corporate network. 'The various needs expressed by a corporate network are split amongst the three main bearer networks, which are leased lines, the packet switched network, and the ISDN. The planning and definition of this architecture requires two to three years before becoming available.'

Source: Jean-Pierre Guenin, Therese Morin, Francois Lecrec, Roger Trulent and Pierre Deffin, ISDN in France 1987-1990: from the first commercial offering to the national coverage of Numeris ISDN. IEEE Communications Magazine, January 1991, pp. 30-35.

Page 90: Telecommunications and Networks

Case 7.3: ISDN for competitive bridge across the Atlantic
In December 1996, the author was playing competitive bridge in real time on the Internet. Three of the players were from America and one, with the screen name Atle, was from Norway. Atle was consistently slow in his responses.

In between the plays, the players are able to communicate with each other, and the Americans nudged the Norwegian to speed up. Finally Atle responded: 'I apologise for being slow, but not for long. Next month I am getting an ISDN connection.' 'Oh,' I said, 'how much will that cost you?' '$US500. And what does it cost in America?' asked Atle. The American from California responded: 'I do not need an ISDN. I get a fast enough response and unlimited access for less than $20 per month.' 'Wow,' said Atle, but soon got immersed in playing bridge and the conversation on ISDN ended.

Bibliography

Bhushan, B. (1990). A user's guide to frame relay. Telecommunications, 24(7), 39-42.

Crouch, P.E., Hicks, J.A. and Jetzt, J.J. (1993). ISDN personal video. AT&T Technical Journal, 72(3), 33-38.

Derfler, F.J. Jr. (1994). Betting on the dream. PC Magazine, 13(18), 167-187.

Galvin, M. and Hauf, A. (1994). Expanding the market for ISDN access. Telecommunications, 28(10), 35-38.

Garris, J. (1994). ISDN sleight of hand. PC Magazine, 14(5), NE1-6.

Griffiths, J.M. (1990). ISDN Explained. Wiley.

Irvin, D.R. (1993). Making broadband ISDN successful. IEEE Network, 7(1), 40-45.

Lai, V.S., Guynes, J.L. and Bordoloi, B. (1994). ISDN: adoption and diffusion issues. Information Systems Management, 10(4), 46-52.

Sankar, C.S., Carr, H. and Dent, W.D. (1994). ISDN may be here to stay . . . but it's not plug-and-play. Telecommunications, 28(10), 27-33.

Viola, A.J. (1995). ISDN solutions: ready for prime time. Telecommunications, 29(6), 55-57.

Walters, S.M. (1991). A new direction for broadband ISDN. IEEE Communications Magazine, 39-42.

Page 91: Telecommunications and Networks

8

NETWORK SYSTEMS ARCHITECTURE
OSI is a lovely dream; SNA has a lot of clout; TCP/IP is current reality; but using multiple protocols is the trend

Anon, 1995

Introduction

Architecture is concerned with structure, style and design. The architecture of network systems is the style and design of the structure of networks that enables electronic communication. It provides a framework for all the components and protocols discussed in earlier chapters and a frame of reference for much of the discussion to follow. It is an overview of the structure of networking. Typically, an overview is done early in a text in order to place the components in their relative positions. We have done the opposite. It is like travelling around much of the world and then looking at the world map. But, in the case of networking, we make the exception on pedagogical grounds. An early overview of network systems architecture could be very intimidating. Networks and telecommunications are very rich in the names and acronyms of the many components, devices and protocols involved. Using them in an overview without explaining them first could be difficult for both the reader and the author. So we discussed the important components and protocols first, and now we are ready to see how they all fit together and provide a basis for the remaining discussion.

The earliest architecture of networks was SNA by IBM. There were many others developed in America, including DNA, DCA, OSA and TCP/IP. Across the Atlantic, there were IPA and XBM developed by ICL in the UK, as well as the OSI model developed by the international organization CCITT. It was expected that all these models would fold up or merge into one internationally accepted standard. There was a shakedown, as expected, but three models seem to have emerged: SNA, TCP/IP and OSI. In this chapter we will examine each. First, we will examine SNA and OSI and compare them. We will then look at APPN, the updated version of SNA, and compare it with its competitor in the US, TCP/IP. Finally, we will compare the OSI model with APPN and TCP/IP. We conclude with observations on how this rivalry between the three models will affect the end-user as well as the computer industry and telecommunications in general.

Systems network architecture (SNA)
SNA was developed by IBM beginning in 1972. It is a layered architecture where each layer is a group of services that is complete from a conceptual point of view. Each layer has one or more entities, where an entity is an active element within a layer. Each entity provides services to entities in the layer above it, and in turn receives services from the layer below it (except for layer 1). In addition, each layer provides a set of functions so that communication can take place not only for file transfer and transactional processing, but also for the management of on-line and real-time dialogues that may take place between two parties. The traffic expected was not only data but also voice and video.

The SNA architecture consists of sets of layers within the groups of the physical unit (PU) and the logical unit (LU). Each layer is numbered sequentially from 1 to 7, starting from the bottom. Thus, layer 1 is the physical layer, which provides the mechanical and electrical level interconnections for the two stations (sending and receiving). Layer 2 is the datalink layer, which provides rules for transmission on the physical medium such as packet formats, access rights, and error detection and correction. It is responsible for the transmission between two nodes over a physical link and is serviced through bridges. Layer 3 is the network layer, which provides routing of messages between two transport entities. This layer is sometimes called the path control layer because it is responsible for the path that a message takes through the network, which could include more than one node. This network layer 3 is serviced through routers.

Page 92: Telecommunications and Networks

The physical layer in SNA is addressed in the architecture, but SNA does not actually define specifications for protocols in this layer. Instead, SNA assumes the use of different approaches, including national and international standards already in place.

In the logical unit, there is layer 4, which is for transmission and provides functions for error-free delivery of messages such as flow control, error recovery and acknowledgment. This layer also provides an optional data encryption/decryption facility (to be discussed in a later chapter). Layer 5 is for data flow control. It initiates and establishes connections and keeps track of the status of sessions and connections. This layer controls the pacing of data within a session; arbitrates users' rights and services when there is a conflict; synchronizes data transfers; and controls the mode of sending, receiving and response. Layer 6 is the presentation layer, which provides the necessary services for formatting the different data formats used in each session and manages the session's dialogues. Finally, there is the uppermost layer, layer 7, the application layer. It provides the application service elements for the end-user. This includes the control of exchanged information, operator control over sessions, resource sharing, file transfer, database management, and document distribution and interchange.

The LU (logical unit) is between the PU (physical unit) and the applications of the end-user, who enters into sessions for communication. The session medium will vary in mode, and hence there are seven LUs that correspond to each type of session mode. For example, LU2 supports sessions with a single display terminal of the 3270 type, while LU6 supports peer-to-peer connection with the application subsystem or application program. The PU, LU, end-user and the seven layers of the network architecture are portrayed in Figure 8.1.

For communicating in the SNA schema, the message must first go to the top-most layer (the one closest to the application and end-user), layer 7, as shown in Figure 8.2. The message then goes down to the physical unit (path OA, from the origin O to the point A), across the transmission path AB, and finally up the seven layers to the destination D through path BD. Both paths OA and BD are controlled by network software, which is additional to the OS (operating system) software and the applications software. The transmission is through the network. The communication path is shown in Figure 8.2.

Figure 8.1 Seven layers of SNA (layer 7 applications, layer 6 presentation, layer 5 data flow control, layer 4 transmission control, layer 3 path control, layer 2 data link control, layer 1 physical control; the upper layers are grouped as the logical unit and the lower layers as the physical unit, with the end-user and applications above layer 7)

Page 93: Telecommunications and Networks

Figure 8.2 Communication through the layers (a message travels from the origin O at the application down through layers 7 to 1, across the network transmission path A-B carrying data, voice or video, and up through layers 1 to 7 to the destination D at the receiving application)
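In programming terms, the down-across-up path of Figure 8.2 is successive encapsulation and decapsulation: each layer wraps what it receives from the layer above in its own header before passing it down, and strips that header again on the way up. The minimal Python sketch below illustrates the idea using the SNA layer names from the text; the bracketed text headers are purely illustrative and are not any real SNA or OSI frame format.

    # Minimal illustration of layered encapsulation (illustrative headers only).
    LAYERS = ["application", "presentation", "data flow", "transmission",
              "path", "data link", "physical"]        # layer 7 down to layer 1

    def send(message: str) -> str:
        """Wrap the message in one header per layer, top layer first (path O to A)."""
        frame = message
        for layer in LAYERS:
            frame = f"[{layer}]{frame}"
        return frame                                   # what travels across path A-B

    def receive(frame: str) -> str:
        """Strip the headers, outermost first, on the way up (path B to D)."""
        for layer in reversed(LAYERS):
            prefix = f"[{layer}]"
            assert frame.startswith(prefix), f"unexpected frame at the {layer} layer"
            frame = frame[len(prefix):]
        return frame

    wire = send("data/voice/video payload")
    print(wire)              # headers accumulate from layer 7 down to layer 1
    print(receive(wire))     # the destination recovers the original payload

The same wrapping idea is what later allows 'multiprotocol encapsulation' in routers, discussed towards the end of this chapter.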

Corresponding to the use and development of the SNA layers, protocols were being developed that addressed individual layers or groups of layers, as shown in Figure 8.3. Thus, the IEEE 802 standards of the Institute of Electrical and Electronics Engineers in the USA provided protocols for the physical layer, as well as for medium access control in the datalink layer and for logical control over most of layer 3. The IEEE 802 included standards and definitions that became de facto standards for the industry for Ethernet and token rings, as well as providing definitions for concepts like connection and connectionless service. Other protocols were developed for other sets of layers, such as the IP for layer 3 and the TCP for layers 4 and 5. These protocols were developed by the DOD (Department of Defense) in the US. We mentioned them in an earlier chapter and will come back to them later in this chapter. Other early protocols were developed by the international organizations: by the CCITT for layers 1-3 and by the ISO for layers 4 and 5. The protocols by the CCITT and the ISO corresponded directly to the OSI model, a subject that we will now address.

Figure 8.3 Early sources of protocols (the IEEE 802 standards covered the physical layer, medium access control and logic control of layers 1-3; the DOD's IP covered layer 3 and its TCP layers 4-5; the CCITT's X-21, LAP-B and X-25 covered layers 1-3; and the ISO's TP and session protocols covered layers 4-5)

The OSI model
OSI stands for the Open Systems Interconnection reference model by the ISO, the International Standards Organization. 'Open' refers to a specification made openly in the public domain in order to encourage third-party vendors to develop add-on products for it. 'Interconnection' refers to procedures for the exchange of information between computers, terminal devices, people, processes and networks.

OSI was announced in 1977 as a response to the need for international standards for communications and networking. The main objectives of the OSI model were:

1. to provide an architectural reference point for network design;
2. to serve as a common framework for protocols and services consistent with the OSI model; and
3. to facilitate the offering of interoperable multivendor services and products.

Page 94: Telecommunications and Networks

In design, the OSI model was not very different from the SNA model, as is clear from the conceptual comparison in Figure 8.4, except at the lowest level, the physical layer. While there is an almost one-to-one mapping for most of the upper layers, there is no direct counterpart in the SNA model for the lowest layer of the OSI model. The SNA model left the definition of the communications devices outside its model; the OSI had no such reservations.

For a summary comparison of SNA and OSI in general terms, see Table 8.1.

Figure 8.4 SNA versus OSI (layer by layer: 7 applications/applications, 6 presentation/presentation, 5 data flow control/sessions, 4 transmission control/transport, 3 path control/network, 2 data link control/data link, 1 physical control/physical level; the physical level is externally defined in SNA)

Table 8.1 A comparison of SNA and OSI

Source: SNA, IBM; OSI, the ISO (International Standards Organization).
When initiated: SNA, 1972-74; OSI, 1977.
Architectural design: SNA, a de facto standard in the US and among IBM mainframe users; OSI, largely like SNA but with more for the physical layer 1.
Levels implemented in 1985: SNA, all seven layers; OSI, the bottom three layers.
Acceptance: SNA, welcomed by IBM mainframe users and many in the US; OSI, largely in Europe, with SNA/OSI interfaces negotiated with IBM.
Future: SNA, updated by APPN and other products being steadily introduced; OSI, work being done at all seven levels in addition to the network management level (see Deirtsbacher et al., 1995).

Page 95: Telecommunications and Networks

The OSI model never did catch on in the USA, despite the fact that it was developed by an international organization. OSI, however, was (and is) very popular in Europe. The European Common Market Commission had made OSI and ISO its standards for connecting systems, products and networks of different manufacturers. The Europeans used their leverage as a large customer of IBM equipment to persuade IBM to agree to the OSI model. In the USA, the DOD (sponsor of TCP/IP) had executive directives to use international protocols. It seemed that the model was bound to become the international standard. But IBM did not give in to the OSI model. It did, however, agree to a gateway between the SNA and the OSI models. There are three ways to integrate the two models: one is directly at each of the seven levels; the second is indirectly through the three bottom levels of the physical unit; and the third is to have an SNA/OSI interface in between (Tillman and Yen, 1990: pp. 219-220). There are many interconnection devices, including repeaters, bridges, routers, bridge/routers, gateways, device emulation, Internet transmission, and interstation transmission (Tillman and Yen, 1990: pp. 216-218).

There are many reasons for IBM's hesitation to give up SNA for international harmony. One was that at the time of the greatest pressure, 1985, IBM had all its seven layers implemented while OSI had only its three bottom layers implemented. Also, IBM was no small firm to be pushed around. IBM not only had dominance in all segments of the computing market but also a sales presence all over the world, including Europe. In 1985, IBM had a revenue of $48.554 billion, more than all of its next 12 world-wide competitors combined. (This revenue was more than the GNP of all the countries in Africa outside South Africa and two of the oil-rich countries of North Africa.) In 1985, IBM employed 405 535 people (Datamation, June 15, 1986, p. 56). IBM continued with its proprietary network architecture and had a 'captured' market of all the users of IBM mainframe computers.

The APPN
The problem facing IBM was that its networking architecture in SNA was designed for a mainframe host serving a lot of dumb terminals in a hierarchical master-slave configuration. This was appropriate for the 1970s, when SNA was first implemented, but the world of computing changed in the 1980s. The dumb terminals were replaced by PCs and workstations; centralized management was replaced by decentralization of computing; allocation of resources by the host was replaced by the sharing of information and computing resources without host intervention; the mainframe host was being replaced by computers, more powerful than many mainframes, acting as servers of computing resources including databases, programs and peripheral services; and the single host was being replaced by multiple hosts and servers. This new computing paradigm was for peer-to-peer computing and client server computing. We shall discuss these configurations, and the downloading and downsizing of the mainframe to the end-user client and the server, in a later chapter. In this chapter we will examine how IBM responded to the changing environment with its APPN and the competition it faced from TCP/IP.

APPN is the acronym for Advanced Peer-to-Peer Networking. A peer is a functional unit on the same protocol level as another. In peer-to-peer communication, both sides have equal responsibility for initiating a session. This is in contrast to the master-slave relationship, where only the master unit initiates and the slave responds.

The hierarchical host-to-terminal configuration of SNA is now replaced by networking that handles peer communications among hosts, departmental computers and desktop PCs, and is compatible with the hierarchical SNA structure while maintaining connectivity with dumb terminals. Because APPN is an update of SNA, the new traffic can still be handled without much conversion or encapsulation (in telecommunications, encapsulation signifies the addition of headers for transfer from a high protocol level to a lower protocol level).

APPN was originally targeted in 1986 at midrange systems. Since then there have been many network announcements to enhance APPN. An important enhancement was SAA, the Systems Application Architecture, which had a sophisticated and fundamental routing technology. Routing was now possible without host intervention. APPN now kept track of the network topology, making it considerably easier to connect and reconfigure nodes. Such enhancements added more functionality and made APPN more robust (robust describing a program that works properly under all normal, though not all abnormal, conditions).

In addition, performance increased: an exchange that required ten messages under SNA now requires only two. But despite all the enhancements to APPN, there is still strong competition from TCP/IP.

Page 96: Telecommunications and Networks

TCP/IP

TCP/IP stands for Transmission Control Protocol over an Internet Protocol. We visited TCP/IP earlier as a set of protocols. They were developed by the DOD in the mid-1980s and were concerned with the middle layers of the seven-layer cake architecture. TCP/IP was expected to retire, unable to resist the pressure from IBM's SNA and from OSI, supported by the international community. But TCP/IP took on a life of its own. Since it was based on the UNIX operating system, it had the support of many UNIX users who liked the multitasking and multithreading features, which are ideal for multiplatform networks. UNIX was becoming the system of choice for many mission-critical applications in IS (information systems). In the early 1990s, it was generally accepted that TCP/IP had made many inroads into the SNA market, perhaps close to 20% of the over 50 000 SNA networks across the world serving some 300 000 nodes (Kerr, 1992: p. 28).
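What applications actually program against in a TCP/IP network is the sockets interface (which appears again in Figure 8.9). The fragment below is a minimal, self-contained sketch of one TCP exchange using Python's standard socket and threading modules; the loopback address, port number and message are arbitrary choices for the illustration.

    # Minimal TCP/IP exchange over the sockets interface (loopback, arbitrary port).
    import socket
    import threading

    HOST, PORT = "127.0.0.1", 50007

    # The listening socket is created first so the client cannot race ahead of it.
    srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    srv.bind((HOST, PORT))
    srv.listen(1)

    def serve_once():
        conn, _ = srv.accept()                    # TCP connection set-up completes here
        with conn:
            conn.sendall(conn.recv(1024).upper()) # echo the bytes back, upper-cased
        srv.close()

    threading.Thread(target=serve_once).start()

    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"hello over tcp/ip")
        print(cli.recv(1024))                     # b'HELLO OVER TCP/IP'

The point of the sketch is simply that TCP presents the application with a reliable byte stream over IP, regardless of what mixture of LANs and WANs carries the packets underneath.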

The success of the TCP/IP approach was that it relied for its critical backbone on multiple protocol routers and OSPF, the 'open shortest path first' approach, which was a new routing algorithm. The routing was adaptive, giving the ability to route around failed circuits. 'These routers evolved out of the TCP/IP and Ethernet environments; as LANs expanded, the routers were enhanced to accommodate more protocols and to provide critical segmentation capabilities. This enables IP routers to become the key to controlling broadcast storms and ensure reliability in large meshed internet backbones connecting local and remote LAN users' (Rosen and Fromme, 1993: p. 79).
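The 'shortest path first' idea, and the ability to route around a failed circuit, can be illustrated in a few lines of code: the router computes least-cost paths over the topology it currently knows, and repeats the computation when a link disappears. The sketch below uses a toy network and Dijkstra's shortest-path algorithm; the node names and link costs are invented for illustration and this is not an implementation of the OSPF protocol itself.

    # Toy illustration of shortest-path routing and re-routing around a failure.
    import heapq

    def shortest_path(links, src, dst):
        """Dijkstra over an undirected graph given as {(a, b): cost}."""
        graph = {}
        for (a, b), cost in links.items():
            graph.setdefault(a, []).append((b, cost))
            graph.setdefault(b, []).append((a, cost))
        queue, seen = [(0, src, [src])], set()
        while queue:
            dist, node, path = heapq.heappop(queue)
            if node == dst:
                return dist, path
            if node in seen:
                continue
            seen.add(node)
            for nxt, cost in graph.get(node, []):
                if nxt not in seen:
                    heapq.heappush(queue, (dist + cost, nxt, path + [nxt]))
        return float("inf"), []

    links = {("A", "B"): 1, ("B", "D"): 1, ("A", "C"): 2, ("C", "D"): 2}
    print(shortest_path(links, "A", "D"))   # (2, ['A', 'B', 'D'])

    del links[("A", "B")]                   # circuit A-B fails
    print(shortest_path(links, "A", "D"))   # (4, ['A', 'C', 'D']): routed around the failure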

It is very tempting to marry SNA and TCP/IP. One way is to run TCP/IP on an IBM mainframe using SNA. The other approach is to implement SNA gateways on UNIX machines. What is emerging, though, is the use of TCP/IP to create what are called SNA/IP networks, which compete directly with APPN, sharing responsibility for the different layers in the network architecture with SNA still specializing in the upper layers. Incidentally, even IBM approved of the SNA/IP alternative for internetworking, for IBM offered the system as an alternative to its pure APPN. The two are independent enterprise architectures, but they can coexist on the same physical network. There are several vendors that offer routers handling both IP and APPN simultaneously. This allows enterprises to deploy pure APPN or IP backbones, or both. The backbone is connected to access networks through access devices, as shown in Figure 8.5. The access network is within the sphere of influence and control of the end-user and could include PCs, workstations, terminals (dumb or intelligent), communication controllers, and fileservers. The access devices are routers or communications processors. In APPN networks, the access devices can be routers, FEPs (Front End Processors), or LAN-attached communication controllers. These access devices channel traffic to the backbone routers, where the data fields for the backbone include the addresses of source and destination, start and end delimiters, and a frame check. The backbone may be a MAN or a WAN with a wide-area infrastructure for communications across the region or country, or even globally across the world, linking all points of the enterprise network.

Figure 8.5 Access and backbone networks (PCs, workstations, dumb and intelligent terminals, fileservers and communications processors connect through access devices, such as access routers, to the backbone devices)

Page 97: Telecommunications and Networks

One way to link SNA terminal traffic and a LAN is to put a token ring LAN adapter in the SNA device. As defined by the IEEE 802.5 standard, the token ring frame contains fields for the addresses of source and destination, start and end delimiters, and a frame check sequence.
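As a concrete picture of those fields, the sketch below packs a toy frame containing a start delimiter, destination and source addresses, a payload, a frame check value and an end delimiter. The field widths, the delimiter bytes and the use of a simple CRC-32 are illustrative assumptions, not the exact IEEE 802.5 encoding.

    # Toy frame with the fields named in the text; sizes and the CRC are illustrative.
    import struct
    import zlib

    START, END = b"\x10", b"\x40"          # illustrative start and end delimiters

    def build_frame(dst: bytes, src: bytes, payload: bytes) -> bytes:
        body = struct.pack("!6s6s", dst, src) + payload
        fcs = struct.pack("!I", zlib.crc32(body))   # frame check over addresses + data
        return START + body + fcs + END

    frame = build_frame(b"\x02\x00\x00\x00\x00\x01",   # destination address
                        b"\x02\x00\x00\x00\x00\x02",   # source address
                        b"SNA traffic over a token ring LAN")
    print(len(frame), frame.hex())

The receiver reverses the process: it checks the delimiters, recomputes the frame check over the body, and discards the frame if the two values disagree.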

One or more token rings could be used with an IP backbone. This is shown in Figure 8.6, where there are two token rings on the sender's side with a source routing bridge in between, and one token ring on the receiver's side. There are two routers accessing the backbone.

Multiple protocols

There are many protocols under almost continuous development, like the X.400. The X designation tells us that it has been developed by the CCITT. The set of recommendations from X.400 to X.430 defines standards for a general purpose system and recommendations that may solve many of the interconnectivity problems, like those of e-mail and EDI (Electronic Data Interchange), so important for financial institutions.

Achieving a homogeneous system may not be as easy as one would like. In the absence of an internationally agreed architectural model and protocols, we must mix protocols within the seven-layer architectural model. This is shown in Figure 8.7, where there is often a choice in each layer. Another view, that from the other side of the Atlantic, is the use of the OSI model and all its protocols. This view is shown in Figure 8.8.
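One way to picture this 'mix and match' is as a choice of one entry per layer from a menu like that of Figure 8.7. The sketch below simply records such a menu as a data structure; the protocol names are those listed in the figure, and the 'chosen' stack is an arbitrary selection for illustration, not a recommendation.

    # A per-layer menu of protocols, using names from Figure 8.7 (simplified).
    MENU = {
        "application": ["SNMP", "FTP", "X.400", "SNA applications"],
        "transport":   ["SNA", "APPN", "OSI", "TCP/IP"],
        "data link":   ["IEEE 802.3 (Ethernet)", "IEEE 802.5 (Token Ring)",
                        "Frame relay", "FDDI"],
    }

    # One possible mixed stack; any combination from the menu would be a valid choice.
    chosen = {layer: options[-1] for layer, options in MENU.items()}
    for layer, protocol in chosen.items():
        print(f"{layer:<12} -> {protocol}")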

Multiple protocols give flexibility to the network manager, but the dream of the network manager may well be to have the ability to mix and match not just topologies and protocols but also network management strategies. The alternative strategies available to the network manager are the subject of the next part of this book.

Figure 8.6 Token rings and backbones (a PC or workstation on two token rings joined by a source routing bridge connects through an access router to the backbone, and through a second access router to a token ring serving a PC or workstation on the far side)

Page 98: Telecommunications and Networks

Figure 8.7 Protocols at different layers (applications such as transaction, message, database and other processing sit on an application support and interface layer offering OSI and TCP/IP protocols, e.g. SNMP (Simple Network Management Protocol), FTP (File Transfer Protocol), X.400, SNA and others; below these are common transport semantics and a transport interface over SNA, APPN, OSI or TCP/IP; and at the data link layer IEEE 802.3 (Ethernet), IEEE 802.5 (Token Ring), frame relay, FDDI or cell/packet services)

Figure 8.8 International protocols (layer 7 application: ISO 8650, 9072, CCITT X.22x, TCAP CCITT Q.771-776, TUP CCITT Q.721-725, ISUP CCITT Q.761-766; layer 6 presentation: ISO 8823, CCITT X.226; layer 5 sessions: ISO 8327, CCITT X.225; layer 4 transport: ISO 8073, CCITT X.24; layer 3 network: CCITT X.25, Q.701-707, SCCP CCITT Q.711-714; layer 2 data link: LAP-B, CCITT X.25, Q.701-707; layer 1 physical: CCITT Q.701-707, X.21, X.21 bis, X.25)

Summary and conclusions

Network architecture is the design of a communication system, including hardware, software, access methods and protocols. We discussed access methods in a previous chapter; in this chapter, we have discussed protocols. They are the basis of hardware and software products that are interoperable, a topic which will be discussed in a later chapter.

Network architecture is a complex subject, with many books written on each of the main models (see Meijer and Peeters, 1982; Martin, 1987; Meijer, 1988; and Tang, 1992 for but a small sample). In this chapter we have taken an overview of some of the models. We examined SNA along with its update APPN, the international OSI model, and the TCP/IP protocols. A summary comparison of APPN, TCP/IP and OSI is shown in Figure 8.9. The different architectural models being accessed through a gateway network are shown in Figure 8.10.

Page 99: Telecommunications and Networks

Figure 8.9 APPN vs. TCP/IP vs. OSI (adapted from Friedman, 1993: p. 90) (APPN stack: application, common programming interface, APPN, path control, data link; TCP/IP stack: applications and application services, sockets interface, TCP, IP, data link; OSI stack: application, presentation, sessions, transport, network, data link)

Figure 8.10 Many architectures being supported (APPN, TCP/IP, SNA/APPN, hierarchical SNA and OSI traffic are all supported through a routing gateway network)

There is no general acceptance of a single model for network architecture in the way there is one international standard for ISDN. The main reason is that ISDN had no competition. In contrast, the international model OSI had competitors that were well established, with some 3-5 years' experience. Also, the IS (information systems) managers were in no hurry for the ISDN. They were preoccupied with expanding their applications portfolio; logically integrating their applications; integrating vertically with their DSSs, EISs and ESs (expert systems); and integrating their databases with their knowledge bases and making their systems intelligent. In contrast, in the mid-1980s, when network managers had to make decisions on their architectural designs, they were actively looking for network performance and a network architecture that worked. They had no ideological loyalty to international standards. Those that had IBM hardware stuck with their SNAs and the upgraded APPN, while those with UNIX systems preferred TCP/IP, partly because these protocols had a built-in integration with their operating systems. The alternative would have been to wait for the international standards to stabilize and be fully implemented. This would also have required the network systems and components manufacturers to guarantee the offering of OSI-compatible products. The manufacturers (at least in the US) showed no inclination to offer such guarantees, whilst the network managers did not want to wait in a shroud of uncertainty about which network architecture was to be universally accepted. Fortunately, the network managers did not have to wait for a universally accepted architecture. The competition in the world market gave the consumer and network manager greater choice and pushed the industry into offering a menu of (albeit only three) cost-effective and high-performance architectures, the OSI model being just one of them.

Page 100: Telecommunications and Networks

The fact that the international marketplace did not accept the OSI model is a source of great sorrow, not just from the point of view of network architecture but also from the viewpoint of the development of standards. Robert Amy, who has observed the rise and fall of OSI for over a dozen years, has this to say on the failure and success of OSI:

. . . the market-place has not accepted OSI . . . because the world moved out from under OSI before OSI could complete its tortuous way through the standards process . . . a further deterrent to the acceptance of OSI was the industry's lethargy in developing new services and products that were visible to end-users . . . The acceptance of the layered model served to focus attention on the need to disentangle applications from communications protocols. As a result, data communications equipment has become in effect multilingual, and multiprotocol encapsulation has emerged in today's router products . . . the original goals of OSI have been met, but without reducing the number of protocol options available to each network user. (Amy, 1994: pp. 52-3)

One lesson that comes out of this rivalry of architectural models is that SNA prospered and survived because IBM managers adapted to the changing environment of peer-to-peer and client server paradigms. Could the OSI model have done just as well if it had had a head start like SNA? Do international standards have an inherent sluggishness built into them that prevents them from adapting quickly to changing environments? This is a subject that we shall return to in the chapter on standards. It is in the part of the book that deals with strategies of network management: the set of chapters that is the next subject of our discussion.

Case 8.1: APPN in HFC Bank
HFC is a retail bank in Windsor, UK. It has over 200 AS/400 terminals and is perhaps the largest user of APPN in the world. John Hogan, director of Information Technology at HFC, has this to say about APPN: 'It has limited design options for disaster and recovery for us. We had to put in extra switches in the event of a failure, so it would be better if APPN had a way to deal with it'. Hogan thinks that the basic advantage of APPN is not speed; it is that it takes a fundamentally different approach from hierarchical SNA to making applications run across networks. 'More traditional mainframe sites will say "Why do I need it?" . . . But a paradigm like APPN will be important for client server applications, and that's the way the world is going. Even the most hardened glass house has got to recognize it'.

Source: Datamation, Oct. 1, 1992, p. 31.

Case 8.2: Hidden costs of APPN
Kathryn Korostoff, principal consultant at Strategic Networks Consulting Inc. of Rockland, Mass., says that 'the cost of adding APPN to a front-end processor (FEP) is really huge. Usually, you have to increase the hardware capacities: memory and processing power'. Korostoff says that upgrading a base-configured 3745 front-end processor to support APPN may cost as much as $26 800 per FEP. She says that it will cost an additional $11 800 per FEP in hardware and somewhere around $11 500 in software.

Source: Datamation, Oct. 1, 1992, p. 31.

Case 8.3: Networking at SKF, Sweden
SKF is the world's largest manufacturer of bearings. SKF has a workforce of about 41 000 working in 150 companies in 40 countries. For its 3000 end-users it had an SNA backbone in 1978 with five centres in Europe. In 1992, SKF went from an SNA backbone to a router backbone with 26 routers. Its backbone traffic is still 90% SNA, 9% DECnet and 1% LAN traffic. The backbone also links Ethernets that support CAD/CAM file transfers, over 100 VAX minicomputers and a link to the company's data centre (King of Prussia) in the USA. In addition, SKF supports over 11 000 dumb terminals, more than 4000 PCs, and leases more than 300 private circuits. SKF uses frame relay to link small sites, largely because of its ability to handle bursts.

Page 101: Telecommunications and Networks

Routers, and more specifically the IP backbone, had a distinct advantage 'because they divert traffic back onto its original path once a line is restored'. But the router backbone cost more than the earlier SDLC (Synchronous Data Link Control) network. 'That's partly because it employs higher bandwidth lines to move more traffic and hold down delays. Redundant links, used to ensure reliability, also contribute to the steeper charge'. Skr 6.5 million a year was budgeted 'for the leased lines in the backbone's meshed core' and another Skr 250 000 a year for the routers.

SKF consolidated the five computing centres into two, one in Sweden and one in Germany, both using IBM 3090 mainframes. It closed down the mainframe operations in Clamart (France), Turin (Italy) and Luton (UK). The consolidation saved SKF around 30 million Swedish kronor (approximately £3 million) per year while still maintaining reliability and performance on its corporate network. Most of the savings came from staff reductions and the 'elimination of IBM's hefty license fees for mainframe software'.

Source: Peter Heywood (1994). Shifting SNA onto a global router backbone: SKF shows how it's done. Data Communications, 23(5), 58-72.

Bibliography

Amy, R.M. (1994). Standards by consensus. IEEE Network, 8(1), 51-55.

Coover, E.R. (1992). Systems Network Architecture. IEEE Computer Society Press.

Deirtsbacher, K.-H., Gremmeimaier, U., Kippe, J., Marabini, R., Rossler, G. and Waeselynck, F. (1995). CNMA: a European initiative for OSI network management. IEEE Network, 9(1), 44-52.

Friedman, N. (1993). APPN rises to the enterprise architectural challenge. Data Communications, 22(2), 87-98.

Kerr, S. (1991). How IBM is rebuilding SNA. Datamation, 37(20), 28-31.

Martin, J. (1987). SNA: IBM's Networking Solution. Prentice-Hall.

Meijer, A. (1988). Systems Network Architecture. Wiley.

Meijer, A. and Peeters, P. (1982). Computer Network Architectures. Pitman.

Rosen, B. and Fromme, B. (1993). Toppling the SNA internetworking language barrier. Data Communications, 22(6), 79-86.

Tang, A. and Scoggins, S. (1992). Open Networking with OSI. Prentice-Hall.

Tillman, M.A. and Yen, D. (Chi-Chung) (1990). SNA and OSI: three strategies of interconnection. Communications of the ACM, 33(2), 214-224.

Page 102: Telecommunications and Networks

Part 2

ORGANIZATION FOR TELECOMMUNICATIONS AND NETWORKS

Page 103: Telecommunications and Networks


Page 104: Telecommunications and Networks

9

ORGANIZATION FOR NETWORKING
The illusion of autonomy causes us to ignore the connection with other systems . . . The illusion of control causes us to deny the nature of the system in which we are a part.

Peter Stein
When you fail to plan, you are planning to fail.

Robert Schuller

Introduction
Let us suppose that, on the basis of the knowledge you have gained in the previous chapters and with your experience in computing, you have been offered a job as Director of Networking. The firm has just made mergers and acquisitions of manufacturing facilities both at home and abroad that anticipate a heavy use of networking, including applications in multimedia, video-conferencing and e-mail, as well as the global transfer of data and information for an employee base of 3000 employees spread around the country and abroad. You will be responsible for offering network services that are of high quality and performance, reliable, robust, and accessible to end-users. You may make three assumptions. One is that there are no budgetary constraints, provided that you can justify the costs. Two, you have a free hand in organizing your department. And three, you have been made an offer that you (and your family) cannot afford to turn down. So what do you do?

Well, first you read the rest of this book quickly and carefully. Its last part will discuss the applications of networking. In this part you will learn about the management of computing. The many functions of managing a network system will be delegated to other staff members; their functions will be identified and discussed in later chapters of the book, with forward references to those chapters. The one function that you may not wish to delegate (at least at this point) is planning for networking. But before one does that, one must have an organization. This can be discussed at two levels: the corporate level and the departmental level. Here one must identify the different positions needed for network management and the functions that they must perform. We shall do this first, followed by a discussion of the important function of planning for telecommunications and networking.

Location and organization of network management
In the early days, network management was part of IT (Information Technology) and was considered part of support services, much like the librarian, information centre, security officer, standards officer, training officer, etc. They all reported to a Director of IT, also known as the CIO, the Chief Information Officer. This is shown in Figure 9.1. The location of network management, and of IT, depends largely on the culture of the organization and also on the importance and complexity of networking to the organization. However, as the need for telecommunications and networking increased and was recognized, it became part of communications and was lumped together with other communications-related departments like mail, telephone and telegraph; voice and facsimile; teleconferencing and text processing. This configuration is shown in Figure 9.2. Network personnel now reported to a Vice-President of Information Services, also known as the Director of MIS (Management Information Services) or the MOT (Manager of Technology). This organizational grouping recognized the importance of telecommunications and networking to all communications and the desire to rationalize these related activities and integrate them, not only for higher efficiency and lower costs but for greater effectiveness. This importance grew as PCs and workstations proliferated and the desire for downsizing increased. Organizations became decentralized and distributed, with end-users demanding their own computer access from their desktops to databases and computer peripherals that they did not have or could not afford. The end-users became clients of an interrelated computing system served not by central hosts but by servers dispersed not just in the organization but remotely. This led to the client server systems and downsizing, connected by LANs, to be discussed in Chapter 10.

Page 105: Telecommunications and Networks

Figure 9.1 Organizational structure of an IT department (under the Director of IT/CIO/MOT are four groups: planning, with planning staff such as policy analysts, problem analysts and a technology watcher; systems development, with programmers, maintenance, systems analysts, AI personnel such as knowledge engineers, OR/MS personnel and technical writers; operations, with operators, schedulers, control clerks, supply clerks and data entry supervisors and clerks; and support, with EUC/IC, DBA staff, telecommunications staff, a security officer, a standards officer and documentors)

Figure 9.2 Horizontal integration under a Vice-President for information services (the Vice-President of Information Services and Director of MIS oversee planning, including total systems planning and information flow coordination; records, covering data, text, duplication, printing, micrographics, distribution, storage and retrieval; reference services, including library service management and research services; communications, covering mail, telephone, telegraph, voice, data, text processing, teleconferencing, networking and facsimile; word processing, covering typing, filing and retrieval; EDP, covering computer operation, application analysis, application programming, systems programming and data administration; and the information centre)

Page 106: Telecommunications and Networks

In addition to an increase in the volume of processing, there has been an increase in its complexity. Computer processing and its remote communication are no longer confined to data and text but have now expanded to voice and image processing. A further extension would be their integration into multiprocessing.

Another change in the computing environment is that trade and other daily interactions are no longer confined to the organization but extend across the country and the world. Communications are no longer just national or regional but international and global. Thus telecommunications and network systems have to be global, which requires large bandwidths and large capacities for wide area networking. They ideally require a national telecommunications infrastructure (to be discussed in Chapter 15) and an international infrastructure (to be discussed in Chapter 16).

The importance of telecommunications and networking can be seen in the demand for networking personnel. Salaries are one indication of the demand for personnel. A look at the top ten positions in IT in 1994 will reveal that two of the top four and four of the top ten are network personnel. This is shown in Table 9.1 and is well corroborated by other studies (Chiaramonte and Budwey, 1991: p. 56).

The high salaries of network personnel are not just a reflection of the high demand for such personnel but also a reflection of low supply. The low supply is partly the result of the fact that these personnel have not only to be technical but also to have a very broad-based education. This is especially true for senior personnel in a public regulatory organization in telecommunications. The mix of professionals required by Oftel (Office of Telecommunications) in the UK is shown in Table 9.2.

Table 9.1 Ranking of the top ten positions in IT by salary

IS Director
Telecommunications Director
Data Centre Manager
LAN/WAN Network Manager
Senior Software Engineer
Systems Analyst/Programmer/Project Leader
Object-Oriented Developer
Client Server Administrator
Communications Specialist
Programmer Analyst

Source: Data Communications, May 1994, p. 17

Table 9.2 Mix of personnel in a regulatory agency

Profession             Number
Technical experts      8
Information Officers   3
Economists             2
Internal auditors      2
Librarians             2
Lawyers                2
Accountants            1
Technicians            1
Total                  21

Source: Telecommunications.

It shows that only a little less than 40% of all personnel were technical personnel. In a firm, the ratio of non-technical personnel will be lower, but the point to be made is that telecommunications and networking is a multidisciplinary field. This makes the recruitment of technical personnel for networking a little difficult, especially at the higher levels of network management, where non-technical and managerial skills are in high demand. At the lower levels, skills are mostly learned on the job. At all levels, personnel have to be attracted, trained and retained in the highly competitive world of telecommunications personnel.

What are these skills? What are the professions in telecommunications and networking and how are they organized in a department? What specialists are required? What is the role of consultants in networking administration? Are they on a retainer or hired ad hoc, or both? What is the relationship of telephone and mail management to network management? We shall now answer these and other related questions.

Structure of network administration
The structure of network administration will be greatly affected by the existing organizational culture, the personalities in top management, and the volume and complexity of the network applications portfolio. It also faces jurisdictional problems that could be very contentious, since telecommunications and networking will replace, or threaten to replace, many existing departments that are large and well entrenched because they are very labour-intensive. Take the example of internal mail, which has to be sorted and delivered. Can that be done by e-mail? E-mail has experienced exponential growth in the last decade and is the subject of Chapter 19. It may replace much of surface and air-mail, referred to as 'snail-mail' because it moves at the relative pace of a snail when compared to the speed of e-mail. This will cause much displacement and some unemployment, even though there will still be some traditional mail. Much will depend on the competitiveness of the Post Office and the PT&T, as well as the speed and effectiveness with which ISDN is implemented. Another area of contention with networking will be telephone communications management, which is still quite labour-intensive despite much automation of its services. The analogue signals of telephony may soon be partially replaced by ISDN. Yet another area is image processing. This will be partly replaced by ISDN, but it is not so labour-intensive and will not cause as many contentious jurisdictional problems as with telephones and the mailing room.

Page 107: Telecommunications and Networks

It is very likely that all these departments will coexist for at least some time in the future, but the battle lines are drawn and the battle is soon to come. Whatever the outcome, some organizational changes will be necessary for networking. One configuration is shown in Figure 9.3.

Teleprocessing and networking will report to the Director of Information Processing, or the Director of IT, or the CIO (Chief Information Officer), who is often a Vice-President. Sometimes an alternative title to CIO is MOT, Manager of Technology. This emphasizes the high-tech aspect of the job and is sometimes a preferred title.

There are many personnel involved with operations. We will discuss operations in the chapter on network administration, Chapter 13. In this chapter, we will briefly discuss, or at least identify, the other personnel required for telecommunications and network administration.

The core personnel are those involved in development. They will help the operational personnel, especially on the systems and subsystems that they developed. These personnel would be the network hardware engineer, the network software engineer, and the network engineer, who would most likely be a senior person with a knowledge of both network hardware and software. They will be supported by specialists, like those in voice processing and image processing. In small organizations, the voice and image processing specialists may be rolled into one person specializing in pattern recognition, a subdiscipline of AI, Artificial Intelligence. Alternatively, there may be an AI specialist.

Figure 9.3 Organization of telecom and networking (the Telecommunications/Network Manager oversees three groups: operations, comprising operational control, maintenance, and the LAN/MAN/WAN, client/server, installation and voice/image managers; development, comprising network, hardware and software engineers and specialists such as voice, image, data and artificial intelligence analysts; and support, comprising the planning office, librarian, standards officer, technology watcher, security officer and training/education)

Page 108: Telecommunications and Networks

In organizations where such specialists cannot be supported for lack of substantial projects, a consultant is engaged for the purpose. The consultant could either be on a retainer or be engaged on an ad hoc basis for consulting on important, complex and expensive decisions. Otherwise the consultants and full-time professionals could be organized on a project basis. Organization by project is not unique to IT and is used for systems development, especially in large projects that use the SDLC, the Systems Development Life Cycle. Even with prototyping used for development, the project approach is desirable because it involves end-users and management as participants and has a better chance of good problem specification, project implementation and, eventually, project acceptance.

The development staff as well as the operational staff have supporting personnel that include the Planning Officer, who in some cases may well be the Network Manager or Assistant Manager. Then there is the Librarian, a position not unique to IT. So also with the Standards Officer, whose duties, unlike those in, say, Systems Development, are mostly not generating standards but enforcing them. Telecommunications and networking is a service shared by many others, and so standards have to be national or regional, if not international. This subject is important enough to deserve a separate chapter to itself, Chapter 11.

Security is also a problem known to all of IT, but security in telecommunications is unique and very important. In these days of LANs, MANs and WANs, the danger of security breaches through communication channels is both serious and complex. It is the subject of Chapter 12. These issues are related to the operation and management of networks, the subject of Chapter 13.

Another support function of any department of networking (and telecommunications) is that of training and education. All these functions have to be planned. The Director/Manager of Networking will participate in this task and in small organizations may be totally responsible for it. This task will be the focus of the remaining part of this chapter. In it we will examine the nature of planning, the dynamics of planning, its process and its implementation.

Planning for networking
Planning is the visualization of the future and the taking of steps to achieve and strive towards this vision. But is planning for networks any different from corporate planning, or even planning in IT? The answer is yes, and no. Yes in terms of the concept and process, but no in terms of the inputs and outputs, dynamics and uncertainty involved. Take, for example, the IS department. We select this department since it is the highest paid in IT for many corporations and is responsible for most, if not all, of the mission-critical corporate applications. For example, the payroll is critical and affects everyone, but this criticality occurs only once a month or perhaps once a week. But for some firms, networking is critical during all working hours, as for example in the reservation systems for airlines, hotels or car rentals. For a firm with world-wide sales, production or suppliers, the working hours may be most of the day, and, for teleworking, all hours of the day all year round. And this criticality may become even more extensive as e-mail (and voice mail) and ISDN become more ubiquitous, not just in the corporate world but in all our daily lives. We cannot do much without communications. And so networking is very important and critical.

One problem with planning for networking is that it is so dependent on technology, which is so unpredictable and unstable. In IS, we have a stable technology and a fairly predictable response from end-users. The telecommunications and networking world is far more volatile and dependent on international standards and governmental actions. The dynamics are different.

Planning variables
A problem in planning is the transformation of long-range goals into strategic and operational objectives. This is especially difficult with a telecommunications and networking (and computing) technology that is often unpredictable. For example, consider the applications environment postulated in the case stated earlier in this chapter. Does that include image processing? Not explicitly, but if you know that the firm is in the manufacturing business, with factories and suppliers spread around the world, you should predict that the firm will want to send images in the form of drawings and blueprints on the network. Should you plan for image processing?


Why not ask management? The answer is that IS has taught us that management does not always know, and if it does know it cannot always articulate its needs in operational terms that a planner can use. There are numerous cases where management has been asked about an application and has responded, 'No, we will not need that application.' A little later the same managers demand that application, and if you remind them of their earlier denial of the need, they will quickly reply, 'Ah, but you should have known my needs. That is why you are paid such a high salary.' And so it becomes incumbent on the planner to predict the future and to do so correctly, or at least close to correctly.

Dynamics of network planning

The main players in networking are the governmental agencies, the standards organizations, the telecommunications and computer industry, and, of course, the corporate world and its end-users. Their interrelationship is shown in Figure 9.4.

The important interactions are between the corporate entities (with their end-users) and the industry (computer and telecommunications). The industry offers a product and the firms and end-users accept or reject it. Historical examples include the picture telephone that was demonstrated at the 1965 World's Fair, but the public was not ready for it. Even today, the integration of picture and voice, as well as of voice along with data, is not fully available or accepted in the market-place.

A more recent example is with PCs. The industry offered large mainframes but the end-user wanted the PC. When small firms responded, there was demand for the PCs to be integrated, and so the demand for LANs increased. Meanwhile the end-users became more computer literate and more demanding. They wanted decentralization and a distribution of computing power, and wanted it to gravitate to the desktop, with them as the clients having access to computers acting as servers of databases and other computing services. And so came about the client server paradigm, which the industry eventually recognized and then offered the necessary technology for servers and clients.

Technology is often driven by the industry, which in turn responds to market forces and demands from end-users. Thus the forces of change are both upstream and downstream. But often, and in many countries, technology is driven by the availability of international standards, without which there is less incentive to produce products that will have international acceptance. Technology is also driven and influenced by governmental agencies, depending on the country. In the US, the intervention is not direct but indirect, through funding for academia and industry, especially in the area of space exploration. In Japan the intervention is more direct, with seed-money to industry and the selection of products to be produced and market share to be achieved. In other countries, including those of Europe, there are the nationalized PT&Ts and intergovernmental agencies that have considerable influence. We shall discuss these dynamics in a later chapter, Chapter 15, but for now it is sufficient to say that the interrelationship exists and does influence the external variables in the planning process.

Figure 9.4 Main players in network planning (regulatory agencies and standards organizations influence the telecom/network industry and its technology; the corporate world, with its clients, suppliers and end-users, demands products and services and receives products such as client/server systems, advised by technology watchers and consultants)

Planning process

The planning process for networking is conceptually much the same as for IT and corporate planning. Planning is done for varying planning horizons, each feeding into the other. Thus the long range plans provide objectives and goals that are to be achieved by strategic medium range plans, which in turn dictate the short term plans through instruments like budgets. This process is shown in Figure 9.5. The inputs to this process are many and are summarized in Figure 9.6. They include data on external events as much as on the internal environment of goals, objectives and constraints. There is a dependence on advice from technology watchers and consultants. The consultants can be a source not only of objectivity but also of a Weltanschauung, a global view not often found within an organization. The consultants need to be experts not just on technology but also on the acceptance behaviour of the end-users and customers of networking.

Figure 9.5 Implementation of the telecom/network plan (technological advances, end-user/client pressures and the Manager of Networks feed the plan for operations; once approved by corporate management it drives network systems development projects and the operation of network services, with compare-and-control loops and evaluation feeding back into the plan)

Figure 9.6 Inputs to network planning (the Telecom and Networks Planning Committee draws on technological trends, political reality in the form of laws and regulations, people, economic conditions, end-user needs, backlogs and satisfaction levels, advice from technology watchers, consultants and consumers, the corporate culture, plans and power structure, and resources of funds, personnel and time to produce the telecommunications and network plan)

Implementing a plan

There is often a problem in going from a long range set of goals and objectives to a medium term and short term set of operational objectives. This problem occurs because the manager tends to make statements in rather general terms such as 'very reliable and accessible' and 'high quality and performance'. For one thing, the variables are not operational. What is meant by 'reliable' and 'accessible', and what are 'quality' and 'performance'? The other problem is to specify what is meant by 'very' (accessible) and 'high' (quality and performance). Lotfi Zadeh calls such qualifiers fuzzy and has an elaborate procedure for converting these 'fuzzy' variables into numerical values (Partridge and Hussain, 1994, pp. 242-53). It is unlikely that the typical manager will have the time to convert networking goals into numerical values, but the planner could state these goals in operational terms. For example, reliability could be stated in terms of delays or response times, such as an average delay of no more than 30 seconds or an average response time of 1 minute from request completed to start of response. The values could be in ranges, like an average response time of 1-2 minutes. And this may be the start of a dialogue until convergence is reached. This may take a few iterations and one or two plans and planning sessions before the fuzzy values are converted into more operational terms.
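
To make this concrete, the following sketch (in Python, purely for illustration) restates a fuzzy goal such as 'very responsive' as operational thresholds and checks measured response times against them. The threshold values and the sample measurements are invented for the example, not taken from the case study.

# Minimal sketch: turning a fuzzy goal ("very responsive") into
# operational thresholds and checking measurements against them.
# The thresholds and sample data below are hypothetical.

objectives = {
    "average_response_seconds": 60,   # 'very responsive': average of 1 minute or less
    "worst_response_seconds": 120,    # no single response over 2 minutes
}

# Measured response times (seconds) collected from network monitoring
measured = [32, 45, 58, 71, 90, 40, 55]

average = sum(measured) / len(measured)
worst = max(measured)

print(f"Average response: {average:.1f} s "
      f"(objective <= {objectives['average_response_seconds']} s)")
print(f"Worst response:   {worst} s "
      f"(objective <= {objectives['worst_response_seconds']} s)")

meets_plan = (average <= objectives["average_response_seconds"]
              and worst <= objectives["worst_response_seconds"])
print("Objective met" if meets_plan else "Objective not met")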

An example of operational objectives for networking is shown in Table 9.3. It incorporates the long range goals postulated in the case study stated earlier, and adds some applications based on what can be expected from the clientele and the technology. For example, e-mail and voice mail may be applications desired by the clientele, but may well be dictated by technology and available whether you want them or not.

Table 9.3 Possible objectives of network administration

Systems applications
  Should allow for the possibility of the following applications:
    Distributed processing (e.g. client server system)
    E-mail
    Telecommuting
    Image processing
    Voice processing
    Video processing

Desirable systems characteristics
    High speed access to LAN/MAN/WAN
    Easy access to LAN/MAN/WAN
    Access should be for asynchronous systems (i.e. real-time systems)
    Masking of complexity for the end-user (end-user friendliness of system)
    Delays and average accesses per successful attempt should be below a specified threshold
    Low down-time
    System should be cost-effective
    System should allow for: expansion, varying traffic patterns, maintainability, security, survivability

The performance variables are mostly those already known in IT or mentioned in an earlier discussion. One variable, survivability, may need a definition. It concerns the ability to operate despite a given probability of the presence of disruptive and dysfunctional influences. The concept is related to reliability. If the system is a real-time system, then high reliability is important and this will determine the topology to be used. You may recall that one topology can be more reliable than another. And so the objectives of a plan do have an important bearing on the design and operation of the system.
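
As a rough illustration of why the choice of topology bears on reliability, the sketch below compares the availability of a path whose links are all in series with that of a configuration offering a redundant parallel path. The link availability figure is invented; the arithmetic (series availability is the product of the link availabilities, parallel availability is one minus the product of the unavailabilities) is standard reliability reasoning.

# Rough sketch: availability of serial vs redundant network paths.
# Link availabilities below are illustrative values, not measurements.

def series(availabilities):
    """All links must work: multiply the availabilities."""
    result = 1.0
    for a in availabilities:
        result *= a
    return result

def parallel(availabilities):
    """Any one path suffices: 1 - product of the unavailabilities."""
    result = 1.0
    for a in availabilities:
        result *= (1.0 - a)
    return 1.0 - result

link = 0.99                               # each link available 99% of the time
chain = series([link, link, link])        # three links in series
redundant = parallel([chain, chain])      # two such chains in parallel

print(f"Three links in series: {chain:.4f}")      # about 0.9703
print(f"Two redundant chains:  {redundant:.4f}")  # about 0.9991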

Another design decision will relate to accessibility. If, say, high accessibility is required and there is a new building in the design stage, then the long range plan for networking should consider making the new building an 'intelligent' building, with network connections in the rooms where professionals will be working. But that is the long run. What about the medium term and short term? The answer will depend on the distribution of end-users and resources and on the computer literacy and attitudes of the end-users.


If there is a potential for many end-user clients and many servers of databases and computing resources that are distributed, then we have the potential for a client server computing paradigm. It is this computing paradigm that we consider in the next chapter.

Summary and conclusions

This chapter concerns the front-end of network management. We use the term network management for telecommunications management, which includes not only the network but also the hardware, software and all the necessary interfaces required.

In this chapter we examined the options for the location of network administration in the organization chart and discussed some of the alternative organizational structures for the network administration department. We also identified many of the functions of network administration and gave forward references to the chapters in this book where these functions are discussed in greater detail.

One of the functions discussed in this chapter was that of planning. Planning has many inputs. These are shown in Figure 9.6.

We recognize that telecommunications is strongly linked to market forces and to the telecommunications and computer industry. These in turn are under the pressures and influence of governmental agencies as well as standards organizations, both national and international. These pressures are top-down and are in addition to the bottom-up pressures from the end-users and corporate management to mask the complexities of the systems and make them end-user friendly without any loss of efficiency or effectiveness. This is a difficult act for the Network Administrator.

Case 9.1: Headaches for network management

In a survey, 427 corporate networkers at large global companies were asked to list the 'risks and threats' they felt. The percentage responses are as follows:

  Unauthorized systems access   28%
  Viruses and malicious code    24%
  Password exposure             24%
  Internet access               23%
  Disgruntled employees         21%
  Information leaks             18%
  Use of laptops                13%
  Natural disasters             13%
  PBX fraud                      8%
  Hackers                        6%
  Terrorism                      5%

Source: Datapro Information Services Group (Datran, NJ, USA), printed in Data Communications, Jan. 1995, p. 14.

Bibliography

Adler, J. (1989). The case for centralized LAN management. Telecommunications, 23(6), 58-62.

Chiaramonte, J. and Budwey, J. (1991). Finding and keeping good people. Telecommunications, 25(2), 55-59.

Davies, J.E. (1990). Professional development and the Institute of Information Scientists. Journal of Information Science, 16(6), 369-379.

Desmond, C. and Templeton, R.S. (1989). Planning a business communications network. IEEE Network, 3(6), 8-10.

Doll, D.R. (1992). The spirit of networking: past, present, and future. Data Communications, 21(12), 25-28.

Flanagan, P. (1994). Taming the network: how are telecom managers coping with change? Telecommunications, 28(8), 31-32.

Healy, P. (1991). Good advice, bad advice. Which Computer?, 14(6), 76-88.

Mingay, S. and Peattie, K. (1992). IT consultants: source of expertise or expense. Information and Software Technology, 4(5), 341-350.

McLean, E.R., Smits, S.J. and Tanner, J.R. (1991). Managing new MIS professionals. Information and Management, 20(4), 257-263.

Partridge, D. and Hussain, K.M. (1994). Knowledge-Based Information Systems. McGraw-Hill.

Torkzadeh, G. and Xia, W.-D. (1992). Managing telecommunications by steering committee. MIS Quarterly, 16(2), 187-196.


10

THE CLIENT SERVER PARADIGM

In the 21st century, it will no longer be sufficient to put computers into environments. They must be part of the environment.

Bill Joy, Vice-President of Sun Microsystems

Introduction

In the early days of computing, computers were expensive and computer personnel were scarce. Development and operations were centralized. Soon, there was a backlog of applications and a lot of 'noise' between the end-user and the computing providers. The reaction was a move towards decentralization into distributed processing, in which much of the centralized computing shifted to the nodes and later to the end-user. Meanwhile, there were two important technological developments. One was the arrival of the PC in the early 1980s, with the promise of performance/price ratios higher than the mighty mainframe and yet small enough to stay on the desktop. But the PCs were isolated from other computing resources like expensive peripherals and the corporate database. In the mid-1980s came the next computing development, the LAN (Local Area Network). This enabled PCs (and other computers) to 'talk' to each other and share the scarce resources of peripherals and databases. The LAN provided interconnectivity. What was also needed, however, were computer processors that would facilitate sharing and serve out these resources. Thus came about the servers. The file server was for data sharing and the printer server was for sharing printer resources. But soon this was not adequate. The number of PCs rose dramatically even within one organization. The number of end-users and clients rose too. Processors ranged widely in power and price. Peripherals became more versatile, requiring not just printer servers, but also image processing servers and voice processing servers. These servers, with the database and application programs residing on them, were accessible to any PC, mini or mainframe through a bus or a LAN. Such a client server system is shown in Figure 10.1.

There were many limitations to this PC-centric approach. One was the lack of adequate administrative control and systems management tools. There was also a high cost of data swapping and network traffic, as well as inefficient use of the processors both at the end-user end and at the server end. A solution was to distribute the load and responsibilities between the processors at the front-end (the client) and the back-end (the server). The client server paradigm that was thus born is the focus of this chapter.

We start with an operational definition of the client server system, identify some of its functions and advantages, and list some of its many implementations and success stories. With this background as motivation, we shall examine the components of such a system: the hardware (processor and user interfaces); software; communications facilities; the human resources needed; and, finally, the organizational environment necessary for the client server to be successful. We shall also examine the concern of corporate management with 'downsizing' and 'right-sizing', and consider the advantages of the client server paradigm as well as the obstacles involved in its implementation. We conclude with a discussion of the client server paradigm used by many end-users for cooperative processing.

Components and functions of a client server system

An overview of the client server approach is shown in Figure 10.2. It is compared with the earlier configuration of a host acting as 'master' and the connected PCs (or minis) acting as 'slaves'. With the client server approach, each PC is independent for local processing while sharing the centralized enterprise equipment (especially servers) through a LAN. In the case of widely dispersed clients and servers, the access could be through a MAN (Metropolitan Area Network) or a WAN (Wide Area Network). The corporate database and enterprise-wide application programs typically reside with the server and are accessed by the end-user through the client processor, the LAN and the server, as illustrated in Figure 10.3. Each of the components of the system will now be discussed in turn, following the components identified in Figure 10.3.

Figure 10.1 Schema for client server system (PC and mini/mainframe end-users attached to LANs, which connect them to a database server with its data/knowledge-base, a printer server, an image processor server and other peripheral servers)

Figure 10.2 Host computing vs. the client server system (host computing with a master-slave orientation, in which PCs hang off a single host computer, contrasted with distributed networking with a client-server orientation, in which PCs attach to LANs with servers and workgroup/departmental computers, linked through a WAN to an enterprise computer)


Figure 10.3 Components of a client server system (the end-user works through the client processor, which is connected by a transmission facility, often a LAN, and a cable connection to the server and its data/knowledge-base and application programs)

THE USER

The user is typically the end-user, who accesses the client for service. The end-user could be a corporate manager, a professional or employee of the corporation, or a customer. Here is where some confusion can arise. The customer in business and commerce is called a client, but that client is a human being, not to be confused with the client in computing, which is a processor. We shall have more to say about the user and end-user when we reach the end of the process of using a client server system, that is, when the user or end-user receives the output and results from the system. First, however, we discuss the client.

CLIENT

The client could be a powerful processor or a dumb terminal with no processing capability. Typically, it is at least a PC with its own operating system. Most of the processing is often done by the server, and the division of work is determined by a computer program, which is why programs for a client server system are different from those of a traditional transactional system. Thus, the client server system is an enabling technology with applications written specifically for it. Sometimes this means rewriting existing applications. Whether this means all applications, or selected applications, or even 'mission critical' applications will depend on the confidence in the client server system and the propensity for risk on the part of IT management.

The applications may include message processing like e-mail; accessing local files and databases (with or without a DBMS) for browsing and local computing; computing on server-supplied data; and (through the server) the sharing of computing resources like the corporate image processing system, an optical character recognition system, advanced graphic processing systems, a colour plotter, or just a fast laser printer. Furthermore, these peripherals may belong to a variety of vendors. Given a well designed application, a powerful server and a well tuned fast network, one can have quick results, with the entire system behind the client being totally invisible and transparent to the end-user, who does not need (or care) to know where or how the processing is done.


To facilitate query processing from a client, most client server systems use a Structured Query Language (SQL), a high level dialogue language competing in its end-user friendliness with many a 4GL. SQL is compared with its equivalent in English in Figure 10.4.

SQL, along with its relational database (residing with the server), is almost a de facto standard for client server systems, though the more recent semantics of access languages are increasingly object-oriented. Meanwhile, SQL does allow access to multiple database servers, providing a wide range of decision-making information and knowledge residing in remote and dispersed locations.

English: Find the number of employees working for Mr. Smith and making at least $40,000 annually.

Query for the client processor:

  SELECT COUNT(*)
  FROM PERSONNEL
  WHERE MANAGER = 'SMITH'
  AND SALARY >= 40000

Figure 10.4 English and its query language equivalent

An important component of a client processor is its UI, User Interface, through which the user communicates. For a user such as a programmer, the UI need not be very user-friendly, but for an end-user it had better be friendly or else it may not be used at all. A GUI, Graphical User Interface, is preferable for the end-user because it enables access through graphical icons rather than through programming commands. In the future, a GUI may well be able to accept and deliver information not just as numbers and text but also as graphics, voice, video, images and animation, thereby becoming a multimedia terminal. However, the more facilities available, the more powerful the client processor needs to be. For simple data manipulation even a laptop would suffice, but for more complex and large amounts of processing a workstation may be appropriate. However, a workstation can be three to six times as costly as a PC, depending on the 'bells and whistles' (features) attached. At the least, the workstation is faster and has more memory and access to more programming languages, depending on what functions have to be performed. If it is a general purpose workstation, the most popular business programs would be the word processor, the spreadsheet, e-mail, DBMS processing and business graphics. For a DSS (Decision Support System), greater speed, computing power for, say, business simulation, and access to programming languages, including simulation languages like SIMSCRIPT, are required. For a KBS (Knowledge-Based System) like an expert system, AI languages like Prolog and LISP will be necessary. For an engineering workstation one needs not only computing power but also powerful graphics capability for applications like CAD (Computer Aided Design) and systems simulation.
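
Returning to the query of Figure 10.4, the sketch below shows what the client side of such an SQL dialogue might look like. Python's built-in sqlite3 module stands in for whatever database server and SQL interface a real installation would provide, and the table contents are invented for the example.

# Minimal sketch of a client issuing the Figure 10.4 query.
# sqlite3 stands in for a remote database server; in a real
# client server system the connection would go over the LAN.
import sqlite3

conn = sqlite3.connect(":memory:")   # stand-in for the server's database
cur = conn.cursor()

# Invented sample data for illustration only
cur.execute("CREATE TABLE PERSONNEL (NAME TEXT, MANAGER TEXT, SALARY INTEGER)")
cur.executemany("INSERT INTO PERSONNEL VALUES (?, ?, ?)",
                [("Jones", "SMITH", 45000),
                 ("Garcia", "SMITH", 38000),
                 ("Lee", "SMITH", 52000),
                 ("Patel", "BROWN", 61000)])

# The query from Figure 10.4, sent by the client and run at the server
cur.execute("SELECT COUNT(*) FROM PERSONNEL "
            "WHERE MANAGER = 'SMITH' AND SALARY >= 40000")
(count,) = cur.fetchone()
print(f"Employees under Smith earning at least $40,000: {count}")  # prints 2

conn.close()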

A solution to the expensive workstation is the X terminal, which is essentially a workstation without a disk. It is designed specifically for the type of processing and functions of a client server system, with the focus on network communications, graphics performance and an end-user friendly interface. The X terminal is actually an 'application-specific' workstation optimized for running the X protocol. The X Windows protocol 'can support any number of windows with any type of font or window size. Also, the X terminal's bitmapped screen allows the application to display all sorts of data formats (text, images, drawings, and so on) simultaneously, thereby enhancing the user's productivity' (Socarras et al., 1991, p. 52).

An X terminal can be connected directly to a LAN, through a workstation by cable, or with more than one X terminal attached to one workstation.

The X Windows system is the de facto standard for the larger machines in the UNIX environment. For the PC, the de facto standard is Microsoft Windows. These de facto standards provide network transparency, so that an application run on a server thousands of miles away will appear on the screen of the client as if it were running on the local client processor.

'Ironically, the X terminal's greatest virtue is its lack of functionality. By having no programmability and no local diskette drive, it is impossible for anyone to introduce unqualified software at the desktop. As a result, all software on the application server is installed under the quality assurance of the IS organization. In addition, the inability to download files and copy them to a diskette improves the data security of the network.' (Connor, 1993, p. 52)

This brings us to the network itself.


NETWORK AND TRANSMISSION

The server and the client can be connected together directly by wire. However, when they are widely dispersed, they must be connected to a LAN, which in turn may be part of a MAN or WAN.

This enables a corporation to have an enterprise network that may be strung around the country and yet operate individually, in a workgroup, or at a department or local level. For this to occur, it is important that there is interoperability, i.e. the operation and exchange of information across a heterogeneous mix of equipment and software using the network. The essence of openness is that the components of the system are interchangeable, and therefore vendors have to compete with better products and better service. Both customers and vendors benefit from interoperability and the resulting competition.

Interoperability also implies an open network architecture. The earliest architecture was SNA, but this was proprietary and not open to other vendors. Some, especially those in Europe, objected and supported the OSI model proposed by the ISO, the International Standards Organization. But OSI did not get much acceptance in the US, where SNA was accepted by IBM users whilst TCP/IP had many supporters among UNIX user groups. Thus we do not have one universal open system, but the systems available are not closed either.

One very desirable feature of a client (and a server) is an open architecture, which ensures interoperability, that is, the hardware and software can operate interchangeably on each other's equipment despite their being non-homogeneous. This is not the vendors' preference, because they would like you to acquire all your equipment and software from them and so have a captive market. In conflict with the vendor, it is in the interest of the customer to have the flexibility to mix and match components of the client server system.
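
Before turning to the servers themselves, here is a minimal sketch of a client and a server exchanging a request and a reply over TCP sockets. It is not tied to any product discussed in this chapter; the address, port number and one-line 'protocol' are invented, and in practice the two programs would run on different machines across the LAN or WAN.

# Minimal client/server sketch over TCP sockets (illustrative only).
# Run the server in one process, then the client in another.
import socket

HOST, PORT = "127.0.0.1", 5050   # hypothetical address; on a LAN this
                                 # would be the server machine's address

def run_server():
    """Server: accept one request and send back a reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen(1)
        conn, _addr = srv.accept()
        with conn:
            request = conn.recv(1024).decode()
            # A real server would query its database or peripherals here
            conn.sendall(f"server processed: {request}".encode())

def run_client():
    """Client: send a request and print the server's reply."""
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
        cli.connect((HOST, PORT))
        cli.sendall(b"list printers")
        print(cli.recv(1024).decode())

if __name__ == "__main__":
    import sys
    run_server() if "--server" in sys.argv else run_client()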

SERVERS

Connectivity, though very important, is not all that is needed for efficient and effective sharing of computing resources. This was recognized with the increase in the number of PCs, when each manager of a PC could not afford the database and peripheral resources needed. The importance of resource sharing became obvious. This was achieved in the mid-1980s with servers: processors and software that enable and control fast and easy access to databases and application programs.

Databases evolved into knowledge-bases and increased not just in number but greatly in the types of models and in complexity. Also, there was often the need to share software, not just applications software, but also compilers for programming languages, such as the one for SIMSCRIPT II for the occasional user of simulation. And there was also the need to download some of the applications from the minis and mainframes to the client PCs. All this included an increased recognition of the importance of the client and the end-user, who demanded not just fast and easy access, but also a user-friendly environment that facilitates and even encourages the sharing of resources.

The new paradigm of sharing computing resources has its own demands on specialized supporting resources: it requires a network server OS (Operating System), multiple user interfaces, sometimes a GUI (Graphical User Interface), a dialogue-oriented client server language (such as a version of SQL) and a database architecture. Since the resources are spread out spatially, not just in one country but also abroad, there is the need for fast and reliable telecommunications and interconnectivity. And all this must be implemented while the hardware/software platform and the communications technology being used remain transparent to users. The user must pay too, by following the procedures and protocols required by the systems. The users must also learn and observe some of the rigidities of using the standard user interfaces and interface languages.

Much of the software required is available from vendors and software houses, and the client server systems vary greatly in emphasis and capabilities. They may be mainframe-centric (centred), PC-server-centric, or may have an emphasis on the data/knowledge-base as distinct from being communication-based. But, despite the availability of software packages, there is often a need for in-house software development. There is also the need to integrate the client server systems with existing information systems and to use the system not only independently as an end-user but also to work cooperatively among groups of end-users.

The server is a processor, conceptually very much like the client processor. Yet it is very different. For one thing it does not have a UI, let alone a GUI. It is designed for networking, database processing and applications processing. It is different from a general purpose processor and is sold as such: a server processor. Servers may differ as to their responsibilities and functions. For example, a server may act as a repository and store of information, in which case it is a file server; or it may perform data retrieval, in which case it is a database server.

Which type of server is desirable depends on the needs and objectives of the system. In any case, the server must be able to do multitasking (perform multiple functions simultaneously), use multiple operating systems, be portable, have scalability (the ability to upgrade upwards without loss of software performance), and have a fast response time despite the time required for teleprocessing. Because of these capabilities, servers are much more expensive than a client processor, in the range of $20 000 to $50 000 at 1995 prices.

Why are servers so expensive? Because they perform many functions. These functions include:

• Network management;
• Gateway functions, including outside access and public e-mail;
• Storage, retrieval and management of documents;
• File sharing;
• Batch processing;
• Bulletin board access;
• Facsimile transmission.

The platforms for server processors are PC-LAN servers, with mainframes and minicomputers as alternatives. The server platform and issues relating to the servers are hidden from the end-user, but these issues include: internetworking; disk space utilization; data management; gateway and other access control; backup and recovery; and fault tolerance. Another important issue is the management of data (and knowledge, if any).

Database processing

When data is processed at a server instead of at a mainframe, some established principles of data processing still apply. These include the issues of integrity, security and recovery of data. The enterprise data that is a corporate resource needs to be unified and integrated; access to data must be controlled to maintain security; and recovery must be possible after systems failures. This should be done without affecting access for legitimate sharing and without an uncontrolled proliferation of islands of data cropping up everywhere. Access optimization, security utilities and I/O (Input/Output) handling will vary among servers manufactured by different vendors, but should not affect the consistency of data or the responsiveness for the end-user. This improved responsiveness is important, especially for a query, which could now take a fraction of a second instead of 3-5 seconds under a mainframe, thus allowing the processing of customized 'wild' queries. Most of the processing, however, is data entry and could still be done in batch, with updating done in the day and processing done at night and transmitted by LAN for use in the early morning at the local client end. This enables multiple users (such as, say, small businesses or professionals like doctors) who cannot afford their own processing to use a client processor.

Much of data management (and resource sharing) is automated. Some of this is done through a DBMS that resides with the server, which controls access between multiple processing systems (and even multiple distributed databases) and integrates data access with network management.
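
The integrity and recovery principles just described are commonly enforced at the server through transactions: a group of related updates either completes as a whole or is rolled back. The sketch below illustrates the idea, again using Python's sqlite3 module as a stand-in for the DBMS residing with the server; the account table and the amounts are invented.

# Sketch of server-side integrity through transactions.
# sqlite3 stands in for the DBMS residing with the server.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE ACCOUNTS (NAME TEXT PRIMARY KEY, BALANCE INTEGER)")
conn.executemany("INSERT INTO ACCOUNTS VALUES (?, ?)",
                 [("OPERATIONS", 1000), ("DEVELOPMENT", 500)])
conn.commit()

def transfer(db, source, target, amount):
    """Move funds between accounts; either both updates happen or neither."""
    try:
        with db:   # the 'with' block is a transaction: commit or roll back
            db.execute("UPDATE ACCOUNTS SET BALANCE = BALANCE - ? WHERE NAME = ?",
                       (amount, source))
            db.execute("UPDATE ACCOUNTS SET BALANCE = BALANCE + ? WHERE NAME = ?",
                       (amount, target))
            # Integrity check: no account may go negative
            (low,) = db.execute("SELECT MIN(BALANCE) FROM ACCOUNTS").fetchone()
            if low < 0:
                raise ValueError("insufficient funds")
    except ValueError:
        pass   # the rollback has already restored the previous state

transfer(conn, "OPERATIONS", "DEVELOPMENT", 2000)   # fails, so it is rolled back
print(conn.execute("SELECT NAME, BALANCE FROM ACCOUNTS ORDER BY NAME").fetchall())
# [('DEVELOPMENT', 500), ('OPERATIONS', 1000)]  -- balances unchanged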

Applications processing

Data is used by application programs, and most of these reside with the server. There are some off-the-shelf client server applications now available, and these are increasing in number and scope. Still, however, many applications must be painfully developed from scratch. Application tools and lower prices have made client server systems development more competitive in terms of cost performance when compared with mainframe and minicomputer development. But client server application development does differ from traditional software development in some significant ways, which include:

1. Processing functions are distributed between client and server. The front-end client portion is run by end-users using languages like SQL that have simplified data request protocols and extract data from wherever it might be located, whatever computer stores it, and whichever operating system controls it.


2. UI, and more often GUI, are used because end-user friendliness is still very important, if not most important, to the end-user.

3. Advanced networking, mostly LANs, is used.

4. 4GLs and code generators are used extensively, though OO methodology is increasingly being used.

5. Development tools like SQL Windows, FLOWMARK, Progress, ObjectView, and Uniface for OS/2 are emerging.

6. CASE tools are being used with Rapid Prototyping.

SERVER BACK TO CLIENT

Once the applications are processed and the data retrieved as requested by the client, the results are sent back the way they came, through the LAN to the client processor. There the application results may have to be formatted for better display, or the data retrieved may have to be processed further. All this is done by the programs residing at the client. The result is then given to the user through the UI, User Interface. Thus far we have examined simplified line diagrams of the client server approach. A formal schematic diagram of a real-world application is shown in Figure 10.5.
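
As a small illustration of this client-side step, the sketch below takes rows as they might arrive from the server and formats them into a simple table for display to the end-user. The column names and rows are invented for the example.

# Sketch: formatting server results at the client for display.
# The rows below stand in for data returned over the LAN.
columns = ("NAME", "DEPARTMENT", "SALARY")
rows = [("Jones", "Accounts", 45000),
        ("Lee", "Engineering", 52000)]

# Work out a display width for each column
widths = [max(len(str(value)) for value in (name, *column_values))
          for name, column_values in zip(columns, zip(*rows))]

def format_row(values):
    return "  ".join(str(v).ljust(w) for v, w in zip(values, widths))

print(format_row(columns))
print(format_row(["-" * w for w in widths]))
for row in rows:
    print(format_row(row))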

Organizational impact

A user who is a computer programmer, unlike the end-user, may well be less than enthusiastic about the client server paradigm. Why? Because the programmer is concerned (justifiably or not) about the loss of control over the development, processing and use of information. Also, like the DBA (Database Administrator), the programmer is concerned about the loss of integrity and security of data. The programmers and the DBAs are supported in their concerns by the Manager of IT, but for an additional reason. The manager is concerned about a loss of power through a reduction of span of control and a loss of control over the acquisition of resources, powers and responsibilities that now gravitate to the department administering the client node.

While the professional computing personnel (including IT management) are not enthusiastically in favour of the client server system, corporate management is enthusiastically in favour. Why? Because there is an opportunity to reduce costs at the centre. Some cost components increase and some decrease, but the net effect is a decrease in the overhead costs of computing at the centre. There is another organizational consequence of client server systems: downsizing.

The client server paradigm is largely minicomputer- or microcomputer-centric and is competing well with the mainframe-centric systems where most of the 'mission critical' applications reside. However, the total costs of the client server system can be higher than those of the mainframe, even though the initial costs are lower. The annual costs per user in the fifth year (with 3584 mainframe users) in one study came to $1484 for the mainframe and $2107 for the client server system (Semich, 1994, p. 37).

The downloading of computer processing from the mainframes (and minis) is referred to as downsizing by corporate management. It decreases their overhead costs and so is downsizing for them. But downsizing in management parlance is the reduction of the labour force and the firing or displacement of personnel. This is popular in recessions and bad economic times, while upsizing and expansion are popular in good economic times. The right size at the right time is referred to as right-sizing. In terms of computing resources, however, there need not be any change in the labour force. There is a reduction of overhead computing costs at the centre, but not necessarily a reduction of total costs. What we have instead is a shift of costs (and responsibilities) to the client nodes. This shift is welcome to corporate management at the centre, though they do share the concerns about losing control over the development and processing of information.

Support for the client server system comes from the end-user, who can see the reduction of time and 'noise' and an increase in responsiveness in the development of systems. The end-user gains greater control over the system. End-users are now more computer literate and experienced and no longer fear the computer, which is becoming more robust and friendly. They are willing to take the responsibility of controlling the client-end and leave the server-end to the centralized processing centre. This coalition of corporate management and end-user, with help from the computer industry (in the delivery of appropriate systems), will overcome any resistance to the client server system, which will be increasingly implemented throughout business and industry.


Figure 10.5 A real-world application of the client server system


There are three alternatives in implementing a client server system. One is to go vertically and implement all the levels of implementation, but for only one business at a time. However, implementing, for example, a LAN for one business does not make economic sense, because once implemented a LAN can be used by other applications. Another approach is to implement horizontally, layer by layer: the LAN, the server, the clients, the applications and the end-users (their training). This approach is also flawed, in that some resources lie idle while others are being developed. The third alternative is to do it all at once, horizontally and vertically.

The third approach is often not feasible because of the high investment necessary in money and skilled personnel. It is also highly risky. The less risky approach is to implement vertically and to do so on a pilot study basis. The first implementation in one case cost $2 million, but the next implementation of similar scope was almost half the cost. So there is a learning curve involved. Also, in implementing a pilot study, it is desirable to select an application that is less risky and has low costs and low visibility.

Bad economic times and recession are allies of the client server system. Management wanting to downsize and reduce their centralized overhead will be more willing to give up some of their centralized control to the client, the server and the end-user.

It will be some time before IT personnel and corporate management are fully comfortable giving up their centralized processing, which in many cases is fairly well stabilized with its security regime, its back-up and recovery practices, its fault tolerant mechanisms, and an experienced IT staff running the development and operations of their information and knowledge base. But end-users are taking the initiative and the responsibility. They want more control over their operations and over the time required for development.

Downloading to the end-user's desktop is perhaps inevitable, especially with increasingly open architectures in hardware, open systems in telecommunications, and the refinement of multimedia equipment. The technology will work in favour of the end-users and the corporation and make the computer industry more competitive and responsive to the end-user's needs.

Advantages of the client server system

The advantages of a client server system are context-sensitive and depend on your vantage point. If you are a corporate manager, you will be thrilled with the advantages of downsizing and the reduction of costs at the centre. If you are an end-user, you will be pleased with getting control of the system, even though it may mean that you cannot blame computer personnel for things going wrong and instead have to take on certain responsibilities for the system (at the client-end). If you are one of the computer personnel, especially at the management level, then you may be somewhat happy at your reduced responsibilities for the system, but you are now concerned about the integrity and security of the system, especially that of the data/knowledge-base. On the whole, viewed from the standpoint of the enterprise, the advantages override the disadvantages, risks and limitations involved. These advantages are summarized in Table 10.1.

Obstacles for a client server system

In achieving these advantages, there are some obstacles and risks. Some have been implied in the discussion of the organizational implications: problems associated with downsizing and the need for greater coordination and cooperation among the end-users of the system. There is also the problem of resistance, which must be expected with all new technological changes. There are also problems of conversion, especially the need to train personnel in the use of the new system.

Training may be considered a technological obstacle. End-users have to be trained not only in using the client and knowing the functions of the server; they also need to be educated about networking and trained in navigating across the LAN and perhaps even the Internet. Other technological obstacles are the lack of development tools and products for the client server system; the lack of methodologies; the shortage of experience in the planning and implementation of a client server system; and the lack of national and international standards relating to the equipment and operations of the system.


Table 10.1 Advantages of a client server system

  Reduction of responsibilities and cost overhead
  Better local cost control of operations and development (original and modifications)
  Faster response time to requests for processing
  Greater access to corporate data/knowledge otherwise maintained in a highly protected and centralized data structure. The client server system strips data off transactional systems and stores it in the server to be shared for analysis and even manipulation locally
  Enables distribution of processing from centralized to desktop computing
  Offers cooperative processing between individuals and groups or departments across organizational boundaries, geographies and time zones
  Rewriting systems for the client server system is often an opportunity to purge obsolete software from the application portfolio and to consolidate and integrate the system, and make it more efficient
  Offers more friendly interfaces for end-users, especially knowledge workers and customers
  Greater involvement of end-users in IT implementations
  The open architecture and open systems offer flexibility in choosing different configurations of hardware, network and DBMS from multiple vendors
  There are greater possibilities for expansion by adding hardware (even laptop computers) to networks without replacing existing hardware. The plug-and-play possibility applies, at least in theory, when parts of a system can be replaced without impact on the rest of the system

These obstacles and others have been summarized in Table 10.2 and can be overcome only by close cooperation between IT personnel, corporate management and end-users.

Summary and conclusions

Alok Sinha has described the client server system well:

. . . one or more clients, and one or more servers, along with the underlying operating system and interprocess communication systems, form a composite system allowing distributed computation, analysis, and presentation.

Table 10.2 Obstacles in the way of a client server system

Organizational
  Lack of personnel skilled in the client server system and in networking
  Resistance to change and new technology
  Risks of downsizing
  Costs of conversion
  Need for greater coordination and control of more end-users

Technological
  Need for LAN/WAN infrastructure
  Lack of skills and equipment resources
  Lack of methodology and experience in planning for a client server system
  Lack of client server products and tools of development
  Lack of client server applications
  Lack of national and international standards for the client server paradigm

The resources and infrastructure needed for the client server paradigm are displayed in Figure 10.6, where we see that an end-user navigates through the client facility, the network infrastructure and the server facility, and then to the applications and data/knowledge-base. Having done the desired computing, the reverse path is taken back to the end-user.

The end-user can do some local processing on a local database at the client facility and also use the many software productivity tools such as word processing, spreadsheets and e-mail. Some of these productivity tools, like e-mail, will be used more intensively in cooperative processing, which connects the clients horizontally and improves horizontal communications. Productivity tools extend communication to different times but the same place, as well as to different times and different places. However, cooperative processing does increase the cross-currents between end-users, and thus raises new problems of collaboration and cooperation in intercommunications, and may lead to adhocracy.

We conclude with a view of future paradigms of client server systems, moving from the Ethernet era (of file servers, then database servers, groupware and transactional processing monitors) to the intergalactic era of the late 1990s and early 21st century, with object-orientation in distributed processing (Orfali et al., 1995, p. 122). This evolution of the client server system is summarized in Table 10.3.

Figure 10.6 Navigation in a client server system (the end-user works through the client, the network and the server to reach the application programs and data/knowledge-base)

Table 10.3 Paradigms of client server systems

1982-1986  FIRST WAVE (Ethernet Era): File Servers
1986-1995  SECOND WAVE (Ethernet Era): Database Servers, Groupware, Transactional Processing Monitors
1995-20xx  THIRD WAVE (Intergalactic Era): Object-Orientation in Distributed Processing

Case 10.1: Client server at the 1994 Winter Olympics

IBM installed a client server system for the Winter Olympics in 1994. It was designed to serve 100 000 potential users, including 50 000 accredited personnel, 2000 athletes, 8000 media representatives and over 100 000 visitors per day.

The application was 'mission-critical' and reliability had to be absolute, since there would be no second chance to capture, say, a ski slalom race or a luge run. And what if the results were inaccurate? That would not only cause a flurry but bring many an athlete and relatives to tears!

There was a token ring LAN connecting PS/2s (PC computers) at 16 sites, with clients and servers at over 3000 sites. The network architecture used was IBM's SNA, whilst the PS/2s handled all the accreditation and games management. The OS/2 graphical user interface (GUI) offered easy access to all users accessing a client. There was also an IBM ES/9000 mainframe which served as a central database, with over 250 000 files available for the network. In addition, there was the RISC System/6000, which was used for the design and planning of the games.

Case 10.2: Citibank's overseas operations in Europe

Citibank started moving workload from its 176 mainframes in 17 countries to a client server system. The strategy was one of re-engineering to have a more open and flexible platform that would support an object-oriented network, which would provide information for enhanced decision-making and lead to higher productivity.

The client server platform handled 30 000 transactions per day with a value of approximately $200 million. The objectives of the system were to have a very high level of fault tolerance, good contingency and a platform that would be scalable to the required processing power.

Citibank runs Windows NT on Compaq servers. These deliver transactions to other banks and do bookkeeping, accounting, MIS, reporting, drafts, DSS support and check writing. By 1995, the plan is for all locations in Europe to be linked by a wide area network and a document imaging system tied directly into the new platform, delivering transactions in an electronically structured format.

Source: George Black (1994). Citibank's big gamble. Which Computer?, May, pp. 42-44.

Case 10.3: Applications of client server systems

The Morning Star group used the client server approach to downsize from a 15 year old mainframe system and achieved a faster time-to-market response and greater productivity. It used the Hewlett-Packard HP 9000 Business Server.

Heinz Pet Products used the client server paradigm to increase productivity and improve customer service.

An electronics distributor built its client server system incrementally for tracking marketing information, increasing productivity, enhancing decision-making support, improving customer service and bringing products faster to the market.

The Bank of Montreal in Canada, with assets of $116 billion, re-engineered its operations to provide innovative and timely solutions. The bank worked with Digital Equipment, using an integrated suite of software products for enterprise information delivery developed by the SAS Institute.

Motorola, a $3.5 billion manufacturing enterprise, reports that its right-sizing effort halved the costs it had incurred under a mainframe (Connor, 1993, p. 56).

GE in one application showed a one-year payback of its start-up cost through a downsizing effort using a $6000 PC.

Unisys, a large manufacturer of computer systems, cut its information systems costs by a third and improved service to end-users by implementing client server technology.

Bibliography

Cameron, K.S. (ed.) (1994). Special issue on downsizing. Human Resource Management, 33(2), 181-298.

Canning (1992). Plans and policies for client/server technology. I/S Analyzer, 30(4), 1-16.

Connor, W.D. (1993). The right way to rightsize. UNIX Review, May, 45-55.

Datamation, 37(20) (1993), 7-24. Cover story on 'Client/Server Computing'.

Levis, J. and von Schilling, P. (1994). Lessons from three implementations: knocking down the barriers to client/server. Information Systems Management, 11(2), 15-22.

Liang, T.-P., Hsiangchu, Chen, N.-S., Wei, H.-S. and Chen, M.-C. (1994). When client/server isn't enough. Computer, 27(5), 73-79.

Miranda, M.H. and Tellerman, N.A. (1993). Corporate downsizing and new technology. Information Systems Management, 10(2), 32-38.

Muller, N.J. (1994). Application development tools: client/server, OOP, and CASE. Information Systems Management, 11(2), 23-27.

Orfali, R., Harkey, D. and Edwards, J. (1995). Intergalactic client/server computing. Byte, 20(4), 108-122.

Pinella, P. (1992). The race for client/server CASE. Datamation, 38(5), 51-54.

Semich, J.W. (1994). Can you orchestrate client/server computing? Datamation, 10(16), 36-43.

Seymore, J. (1994). Application server at your service. PC Magazine, 13(3), 277-302.

Sinha, A. (1992). Client server computing. Communications of the ACM, 35(7), 77-98.

Socarras, A.E., Cooper, R.S. and Stonecypher, W.F. (1991). Anatomy of an X terminal. IEEE Spectrum, 28(3), 52-55.

Schultheis, R.A. and Bock, D.B. (1994). Benefits and barriers to client/server computing. Journal of Systems Management, 45(2), 12-15.

Watterson, K. (1997). Client server meets the Web generation. 2(2), 93-96.


11

STANDARDS

If you think of 'standardization' as the best that you know today, but which is to be improved tomorrow, you get somewhere.

Henry Ford

Introduction

We discussed standards when we described the OSI model. It was to be the international standard for network architecture, but it did not gain public acceptance, largely because it did not 'complete its tortuous way through the standards process' (Amy, 1994, p. 52). So what is so tortuous about international standards? Are national or local standards any less tortuous? What are standards anyway? Are they necessary for telecommunications and networking? If so, what are their advantages and limitations? What do they cost? What is the process of standardization? Who formulates them and who enforces them?

It is these and related questions that we will examine in this chapter. We will also look at the national and regional organizations that have done most for the development of international standards. These are the Europeans, the Japanese and the Americans. In each case, though, we will look at a different perspective to give us a balanced and 'total' picture of the process. In the European case we examine a regional organization and its standards-making process; in the Japanese case we look at the national organization and its decision-making process; and in the American case we look at a case study in which an international standard was developed, that of the B-ISDN.

For a view of decision-making at the international organization itself, we look at the development of standards for the OSI model at the ISO, the International Standards Organization. First, however, we look at standards, their need and their rationale.

What are standards?

When measuring quantity, time or distance, we adhere to standards such as 12 to a dozen and 60 minutes to an hour. These are standards we all unconsciously acknowledge. Standards are accepted authorities or established measures of behaviour, operations or performance.

Sometimes things in life are not as standard as we would want, like the day of rest in the week, or even the number of days in a year. But these are not very crucial. Some standards are crucial, as when lives could be lost because we do not drive on the same side of the road in all countries of the world. Sometimes a lack of standards is just a big pain in the neck, as when we travel with electrical appliances and our plug does not fit the socket in the wall, or we do not have the right frequency of electrical current. Why can we not have one shape of plug that fits all sockets and just one frequency all around the world? This was not necessary in earlier times, when people did not travel with electrical appliances that were crucial for shaving or drying hair. But as the world gets more interrelated we need standards that are not only national but international. This is very true in telecommunications where, without international standards, equipment will not be interoperable, protocols will be different, transmission media may vary and international communications will be impossible. This will affect the community at large, but especially the business community engaged in international trade. Telecommunications standards are also needed because we are in an age of multiple vendors and multiple carriers, where interconnectivity and interoperability are necessary, and these can be achieved only through international standards.

A lack of international standards will hamper the marketing of products beyond their national borders; reduce information exchange between customers, manufacturers and retailers; and slow technological progress, because producers will create duplicate products instead of building on previous work. The existence of standards not only improves communication, but also provides portability, helps the planning and control of operations, reduces uncertainty, stimulates demand, spurs innovation, encourages competitiveness, and provides an organized way of sharing and transferring technology.

Standards in telecommunications are part of the larger problem of standards for the computing industry, where standards are needed for programming languages, operating systems, interface devices, chips and wafers, database and knowledge-base design, systems development, data representation: the list can go on and on. Without such standards we would not have the interoperability we now have. These standards are part of the work done by international standards organizations like the ISO, the International Standards Organization. In the early 1980s, they were active in many areas like those shown in Figure 11.1. Note that even then there was some interest in and acknowledgment of interconnections and communications. Since then, telecommunications has become more ubiquitous and standards have been developed in areas including specifications for wiring, modems, connectivity devices, transmission, ISDN, the OSI network architecture and protocols, voice and video processing, and so on. We look later at just one of these, the OSI.

We need international standards if we are to utilize the potential of telecommunications and increase world trade and international interactions. But international standards are far more difficult to achieve than national standards. In many countries there is a nationalized communications PT&T and a government that can dictate national standards. In countries like the US, there is no ministry that can dictate standards; instead there is a National Institute of Standards that must react to the need for standards. But even in the US it is easier to agree on national standards than on international standards, because the players involved in the latter are global, greater in number and more diverse in their interests. Theoretically it is all the countries in the United Nations that must agree to an international standard, but in practice it is a few industrialized countries that do the work and have the expertise. The rest follow. However, getting agreement among these developed and interested countries is still slow and difficult because they may have conflicting objectives and interests.

Figure 11.1 Topics under consideration by subcommittees of the International Standards Organization, Technical Committee 97 (ISO/TC97, Computers and Information Processing): SC1 Vocabulary; SC2 Character sets and coding; SC3 Character recognition; SC5 Programming languages; SC6 Digital data transmission; SC7 Problem definition and analysis; SC8 Numerical control of machines; SC9 Programming language for numeric control of machines; SC10 Magnetic disk packs; SC11 Flexible magnetic media for digital data interchange; SC12 Instrumentation tape; SC13 Interconnection of equipment; SC14 Representation of data elements; SC15 Labelling and file structure; SC16 Open end connection; SC17 Identification; SC18 Text and facsimile communication; SC19 Office equipment

Figure 11.2 Players in developing international standards: equipment manufacturers and suppliers; network service providers and carriers; organizations (national, regional and international); and others (promotional bodies, user groups, professional groups and the market of consumers)

The standardization players are shown in Figure 11.2. One set of players is the computing and telecommunications industry. This includes not only carriers but also manufacturers of equipment that is directly part of telecommunications, or that uses telecommunications, like the computer manufacturers, who are far from being a homogeneous group. In some countries the carriers and manufacturers are the same. This was true in the US until recently, when Western Electric produced equipment for its parent company, the carrier AT&T. With deregulation laws the two have split and now compete with different vested interests. Another set of players includes the consumer, consumer groups, promotion groups and user groups. They represent the market forces.

The relative strength and influence of the players in telecommunications standards (shown in Figure 11.2) is changing, as shown in Figure 11.3. The organizations that reigned supreme in the early years are now giving way to the technological forces, which in turn are giving way to the market forces, largely the consumer. This consumer-driven sector is also changing. From businesses using telecommunications for national trade and commerce, it is now a global market-place also used by households and individuals for e-mail and entertainment. Hence international standards have to be increasingly conscious of and sensitive to these needs, which makes the mix of the market more heterogeneous and more difficult to predict, and makes it harder to cover every aspect of future intended applications.

Not all parties and all players are involved all the time. Different players and parties are involved at different stages in the development of standards. We can see this in Figure 11.4, where the base standards, containing variant and alternative methods that are implementation-defined facilities, are proposed by an international organization (with participation by other players at a high policy level). These base standards develop into functional standards, also called profiles, which contain a limited subset of the permissible variants. After feedback and participation by regional and national standards bodies, trade associations and professional organizations, the functional standards are then tested. Testing may be done by independent organizations, where the emphasis is on conformity with the design specifications of the standards. In parallel, technical personnel from countries that are members of the international organization also perform testing, but with the emphasis on interoperability and performance first and conformance second.

Testing is also concerned with compatibility at different levels: multi-vendor compatibility, upgrading and multi-vintage compatibility, and product-line compatibility.

The process of decision-making for standards is both upstream, from the corporation and user to the international organization, and downstream, from the top down to the user and corporation. Ideally, there is much feedback all along the way, both upwards and downwards.

Figure 11.3 Forces driving standards, 1970-2000: organization driven (market, user groups, professional groups, promotional groups), then technology driven (carriers, suppliers, the computer industry), then market driven (consumers)

Figure 11.4 Decision-making process by international organizations: base standards from international organizations are analysed and developed into functional standards with input from regional and national standards bodies, RPOAs (Regional Private Operating Agencies), professional organizations, trade organizations, user groups and SIOs (scientific and industrial organizations); the functional standards are then tested for conformance and performance by independent organizations and by technical personnel from member countries before acceptance by international bodies

It is not always possible or desirable to have complete agreement by all parties (the innermost and shaded part of Figure 11.2). Sometimes standards proposed by the ISO are accepted around the globe. At other times an international standard is not adopted by all countries, as in the case of the OSI model. Fortunately, not all players have to agree on everything all the time, not even in the negotiating process, where only a few sets of players are involved. For example, in the development of standards for a modem or, say, a cable connector, the players involved are the suppliers and carriers, and the international standards organization represents the remaining parties. But there are some situations where many players are involved in a very serious way, with high stakes. An example would be the standards for network architecture and protocols, the OSI model. It is appropriate to discuss the process of producing this standard, for it is a technology that we have discussed in a previous chapter. Also, the standard has a far-reaching and important impact on global telecommunications, as well as being an interesting and controversial case from the organizational point of view.

The development of OSI

It took ten years to develop standards for all the seven layers of the OSI. This delay is partly due to the fact that the OSI was so comprehensive and important, and partly due to the thoroughness of the process of developing an international standard. The standards were developed in workshops with feedback from numerous groups and tested by yet another set of personnel. The organizational structure and personnel involved are shown in Figure 11.5. The names chosen for Figure 11.5 are somewhat different from the generic names used in earlier figures, but are not inconsistent. The profile bodies had representation from regional bodies around the world as well as from international and national governmental bodies. Most of the government participation came from Australia, Canada, Sweden, the UK and the USA. They developed the base standards, which were then passed on for feedback from numerous promotional bodies. The testing was done in parallel: one stream for conformity and one for interoperability. Again, this is not conceptually inconsistent with Figure 11.4.

As mentioned earlier, the OSI has been adopted in Europe but not in the US. There are many reasons for this. One is that OSI had to compete with models that were already operational in the US. One of them, SNA, was developed by IBM, and it was difficult to persuade IBM that their design was not the best. Another set of protocols, TCP/IP, was also popular and was closely associated with UNIX, a very popular operating system. Many of their concepts were adopted by OSI, but it is more difficult to build on an old structure than it is to start afresh from the bottom. Starting from scratch was what happened with ISDN, and it is one of the many success stories for the ISO. The second reason is perhaps that it took too long for the international standard to come out in what was a highly volatile technological environment. Timing is critical for the success of international standards. In the case of the OSI, the ISO came on to the scene too late and took too long. By the time the standards were ready, the environment had changed: PCs had proliferated, distributed processing had become popular and LANs had emerged. The assumptions underlying the start of the standards specifications had mostly changed during the long period of gestation. This long period was partly due to the fact that it was an international standard prepared by an international organization, and this process just takes a long time. Why? To answer this we must look at the organizational structure and process of international standards organizations.

Figure 11.5 Players in the work on OSI: under the ISO, profile groups (the National Institute of Standards and Technology OSI Implementors Workshop, the European Workshop for OSI, the Asia-Oceania Workshop, governmental groups mostly from Australia, Canada, Sweden, the UK and the US, and other industrial representations), promotional bodies (OOS, POSI, SPAG, X/Open), testing groups (on conformance and on interoperability) and national bodies (ANSI in the US, BSI in the UK, DIN in Germany and TTC in Japan)

The ISO

The OSI model may not have been too successful (in the US, that is), but it is only one of many ISO products. One indication of the work of the ISO and the growth of ISO standards can be seen from Figure 11.6. It shows the maintenance of 40 international standards in 1992. Not all of these are related to telecommunications, but many are, some directly and some indirectly.

                                            1989   1990   1991   1992
Drafts for publication                        33    116     90    106
Drafts for voting                              0     21    104    103
Published standards and technical reports      0     18     47    184
Standards being maintained                     0      0      0     40

Figure 11.6 Growth of standards and technical reports (source: International Herald Tribune, Oct. 14, 1993, p. 14)

The ISO (and its successor organization) has a vertical hierarchy of decision-making involving its international membership at the regional and national levels. The national levels have their own hierarchy. One configuration of the hierarchy of effort in the US is shown in Figure 11.7. The international body has its own internal hierarchy, which includes councils, directors, advisory groups and study groups, as well as regional conferences and an international conference (Irmer, 1994).

Working parallel to the ISO is the CCITT (Comite Consultatif International Telegraphique et Telephonique). The CCITT is part of the ITU (International Telecommunication Union), which took its present name in 1934 and in 1947 became an agency of the UN. In 1991 the ITU had 165 members, each exercising one vote. The members were mostly bodies concerned with telecommunications, such as governments and state-owned PT&Ts. The UK is represented by the Ministry of Trade and Industry, and the US by a mix of representatives from both government agencies and suppliers, coordinated by the Department of State (the foreign ministry).

In 1993 a structural reform of the ITU saw the demise of the CCITT and its resurrection as the standardization sector of the ITU (ITU-T). It has 15 study groups, including those on transmission equipment and service systems, modems, switching, voice network operations and maintenance, languages for telecommunications, and open systems.

International organizations have the difficult task of determining when to start on standards. You cannot start too early, before the technology has stabilized, or the standards will not be relevant because they are obsolete. You cannot start too late, because then vested interests dig in with a loyal following and universal acceptance becomes difficult.

Figure 11.7 Hierarchy of effort at ISO: ISO/CCITT at the international level; ETS (Europe) and T1 (North America) at the regional level; ANSI at the national level; and special industry groups and forums at the local level, with industry and private bodies taking leadership and follower roles

Figure 11.8 Organizations involved in international standards: at the global level, ISO/IEC, JTC1, the IEC, the ITU (CCITT and CCIR) and the IEEE; at the regional and national levels, bodies such as ANSI, BSI, DIN and TTC. (CCIR = International Radio Consultative Committee; JTC1 = Joint Technical Committee 1; IEC = International Electrotechnical Commission)

The ISO and the CCITT have numerous organizations with which they have lateral and horizontal relationships. These are summarized in Figure 11.8 to give a picture of the many organizations involved, which may explain the sluggishness of international organizations. The diagram also emphasizes that each one of them has a potential input through national organizations. If you wish to influence an international standard you will have to contribute your time and effort. The choice is yours.

You may still be in the dark as to the type of interactions that take place at the national and regional level. To throw some light on these processes, we discuss one regional organization (in Europe) and one national organization (in Japan) and look at their structures and decision-making processes. We make the choice largely on geographic distribution. Coincidentally, these are the organizations and countries that are most active in the formulation of standards.

European standards organizations

The European standards organizations are shown in Figure 11.9, with the important relationships between them identified. Not identified are the many interrelationships with the CCITT, the ISO and other international organizations, though some national bodies, like the national standards bodies and the national telecommunications administrations, do have additional direct links to international bodies, as indicated in Figure 11.9.

At the national and corporate level, the connections are through trade associations and user groups as well as through government. Thus both the private sector and the public sector are represented. Other inputs to the European standards bodies come from the national standards bodies and the national telecommunications administrations.

Figure 11.9 Regional standards bodies in Europe: CEN (European Committee for Standardization), CENELEC (European Committee for Electrotechnical Standardization), ETSI (European Telecommunications Standards Institute) and CEPT (Conference Europeenne des Administrations des Postes et des Telecommunications), with links to the ISO and the ITU, to national standards bodies and to national telecommunications administrations. ETSI membership: manufacturers 63%, public network operators 14%, users/service providers 13%, national administrations 10%

The ETSI (European Telecommunications Standards Institute) is the body that issues the standards. It has 269 members, with much of the detailed work being done in project teams appointed for a defined task and for a limited period of time. The proposals of the project teams go to the technical committees and are then passed on to the technical assembly for approval, where a weighted voting method is used.
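The weighted voting step can be made concrete with a small tally routine. The member weights and the approval threshold used in this Python sketch are illustrative assumptions, not ETSI's actual voting rules.

# Illustrative sketch of a weighted-vote approval check.
# The weights and the approval threshold are assumed values, not ETSI's rules.
def approved(votes, weights, threshold=0.71):
    """votes: member -> 'yes'/'no'/'abstain'; weights: member -> voting weight."""
    cast = {m: v for m, v in votes.items() if v != 'abstain'}
    total = sum(weights[m] for m in cast)
    yes = sum(weights[m] for m in cast if cast[m] == 'yes')
    return total > 0 and yes / total >= threshold

votes = {'A': 'yes', 'B': 'yes', 'C': 'no', 'D': 'abstain'}
weights = {'A': 10, 'B': 5, 'C': 3, 'D': 2}
print(approved(votes, weights))   # True: 15 of the 18 weighted votes cast are 'yes'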

The ETSI uses the 'working assumption' process shown in Figure 11.10. Proposals are based on the 'best' estimate of technology. 'In this way, concepts can be put together quickly and adjusted to take account of changes, while still providing a measure of stability so important for implementors.' (Mazda, 1992: p. 61).

Figure 11.10 Standards-making process in Europe: experts and national bodies contribute to a draft standard, which passes through analysis and public consultation before going to the assembly of the European standards institute, where members decide by weighted voting


Figure 11.11 Standardization process in Japan (adapted from Iida, 1994: p. 49): draft standards are reviewed and analysed, advanced briefings of the drafts are issued and studied for three weeks, and this analysis is followed by voting and, where necessary, counter-proposals

TTC in Japan

TTC stands for the Telecommunications Technology Committee. It is organized with a Board of Directors, a Council and a Technical Assembly, to which one Research and Planning Committee and five technical subcommittees (TSCs) report. Each committee and each of the five TSCs has between two and eight working groups, 27 in all in 1993. Their process of decision-making is shown in Figure 11.11. It is quite different from that on the other side of the Pacific, in the US, which is discussed in the case study for this chapter.

The TTC is the dominant organization in the Pacific and the Far East. It is perhaps the de facto regional organization, though Australia and Singapore are becoming increasingly active in telecommunications. The TTC is also a member of INTUG, which includes members from Australia, the UK and the US. The US is also a member of the North American regional organization on telecommunications, T1 (Telecommunications 1).

Summary and conclusions

International standards in telecommunications stand at the confluence of two sets of dynamic forces: the telecommunications industry and the telecommunications environment. The telecommunications industry is changing rapidly with the widespread installation of fibre optics, the digitization of networks, intelligent networks, EDH (Electronic Data Handling), B-ISDN, EDI (Electronic Data Interchange), evolving switching technology, and mobile and personal communications.

The telecommunications environment is changing because of the proliferation of PCs, the demand for distributed processing and client-server systems, the popularity of e-mail and multimedia, the pervasiveness of wire-less technology, and the increasing desire for high-speed access. These changes are made feasible largely by the liberalization, privatization and deregulation of the computing and telecommunications industries in some countries, and by the globalization of these industries.

To respond to these changes and to meet these diverse demands we need global standards that provide a platform for the compatibility and interoperability of telecommunications equipment and services. The standards not only have to be feasible, they must also be acceptable to the computing industry and to the important governments involved. Standards must also be flexible in an ever-changing environment. Just as important, the standards must be in place at the proper time: not too early, lest an undeveloped technology gets 'frozen' by the standard; nor too late, lest the vested interests of equipment manufacturers and carriers become too entrenched for them to compromise, causing them to resist having their products made outdated and replaced by products that follow international standards.

Once the right standards are in place at the right time, there are many beneficiaries. Businesses can expand their markets abroad, thereby increasing sales and achieving economies of scale, enabling them to offer a more competitive and better product at a lower price.

International standards cost millions of dollars every year. Little of this cost is covered by the sale of standards; most of it is borne by a few countries, many corporations and some individuals dedicated to working on standards. The coordination of this international effort is done by international organizations like the ISO and the CCITT, now the ITU. They produce many standards, one every 18 months on average, but some can take up to 10 years.


They are slow in delivery because of the inherently slow process of downstream and upstream feedback and consensus building between corporations, industry and trade associations, as well as national and regional bodies that often have vested commercial interests, if not national interests, to support. Self-interest is often at stake. Manufacturers of equipment want their designs to be internationally accepted, thereby increasing their market share, whilst carriers do not want to change what they already have any more than necessary; and if they must change, then the change should be easy and inexpensive. They agree to temporary alliances for specific projects among national and international competitors, for they all stand to gain from participation with peers. The ultimate beneficiaries are the public at large, who benefit from the many applications that telecommunications makes possible, especially in rural and remote areas all around the world. This is covered more fully in Part 3 of the book.

Despite this laudable effort, there are still some unresolved matters relating to international standards in telecommunications. One is the transfer of proprietary knowledge and technology to the public domain. This raises the delicate problems of extracting proprietary information for sharing, despite the actual (or perceived) loss for those who hold the proprietary knowledge, patents and experience, and of making the parties concerned participate in the development of international standards. They must be brought into the consensus-building and cooperation process of developing international standards.

Case 11.1: Development of international standards for the B-ISDN in the US

The development of international standards in the US is different from that in other countries, largely because it is greatly influenced by the US experience in the development of the 802 standards by the IEEE (Institute of Electrical and Electronics Engineers) and the development of the TCP/IP (Transmission Control Protocol/Internet Protocol) standards by the IETF (Internet Engineering Task Force). Encouraged by the success of the IETF and the IEEE 802 standards, another layer evolved beyond the traditional standards bodies, called the Forum. The Forum is a consortium representing a very wide spectrum of industrial players, highly focused on working on a tentative standard developed by the traditional standards organizations.

The new group then builds rapidly on the base to reach an agreement among commercial interests on the details of implementation. Because these agreements are driven by the impetus of commercial products and services, they tend to deal with short-term considerations. Such agreements, however, are quite important to mobilize many segments to develop actual products and services based on these technologies. (Amy, 1994: p. 53)

The Forum focuses primarily on customer-premises interests and produces a consensus-based standard. This was true of the ATM (Asynchronous Transfer Mode) Forum in the development of consensus on standards for the B-ISDN. The Forum, along with the IETF, interacts with the four technical subcommittees which are accredited to ANSI (the American National Standards Institute) via their parent committees. The Forum differs from the ANSI committees in that the ATM Forum focuses primarily on the interests of the customer premises, whilst the ANSI committees emphasize the interests of the public networks.

T1 is the North American standards organization. It has many working groups: the T1A1 group focuses on performance issues; T1E1 is concerned with the physical transmission interface; T1M1 focuses on network management issues; and T1S1 is concerned with network signalling protocols and network service definition. There are also interrelationships with international organizations, such as with the ITU (successor of the CCITT) through the US National Committee, which is coordinated by the Department of State (the foreign ministry).

The effort on the B-ISDN involves over 200 individuals coming from more than 60 organizations. Members who have voting rights pay a membership fee, and every sponsoring organization pays for the expenses and the time allocated to the project. This contribution is considerable, for it is estimated that over 200 000 impressions are made for a normal meeting.

Source: Amy, Robert M. (1994). 'Standards by consensus'. IEEE Communications Magazine, 32(1), 52-55.


Case 11.2: Networking standards in Europe

There are four organizations in Europe that are concerned with networking standards. These are: the ITU-T, the telecommunications standardization sector of the ITU; the ETSI, the European Telecommunications Standards Institute; the NMF, the Network Management Forum; and Tina-C, the Telecommunications Intelligent Network Architecture Committee.

The ITU-T is mainly concerned with the TMN, the Telecommunications Management Network. The TMN solution has its critics, including a spokesperson of the German giant Siemens, for having 'always been divided into four layers: element, network, service and business management. This certainly does not facilitate the rapid provision of comprehensive solutions because the standard interfaces are required to link the individual layers.'

Working against these criticisms of the TMN solution is the ETSI, which has helped to flesh out the TMN framework along with customer administration and fault and traffic management. The ETSI has also made recommendations to the ITU on UPT, Universal Personal Telecommunications.

The NMF is working on information module definitions that are available for transmission, switching, broadband and access management.

Tina-C embraces the TMN as well as IN, Intelligent Networks. It attempts to increase the IQ of the IN 'with focus on distributed processing and object communications . . . also addresses service management more specifically than TMN.'

Source: International Herald Tribune, Oct. 11, 1995, p. 12.

Bibliography

Burton, J. (1995). Standard issue. Byte, 20(9), 201-205.
David, J. (1992). LAN security standards. Computers and Security, 11(7), 607-619.
Duran, J.M. and Visser, J. (1992). International standards for intelligent networks. IEEE Communications Magazine, 30(2), 34-36.
Iida, T. (1994). Domestic standards in a changing world. IEEE Communications Magazine, 32(1), 46-49.
Irmer, T. (1994). Shaping future telecommunications: the challenge of global standardization. IEEE Communications Magazine, 32(1), 20-28.
Knight, I. (1991). Telecommunications standards development. Telecommunications, 25(1), 38-42.
Mazda, F. (1992). Standardization on standards. Telecommunications, 26(9), 54-61.
Mossotto, C. (1993). Pathways for telecommunications: a European look. IEEE Communications Magazine, 26(8), 52-57.
Mostafa, H. and Sparell, D.K. (1992). Standards and innovation in telecommunications. IEEE Communications Magazine, 30(7), 22-28.
Nak, D. (1994). Coordinating global standards and market demands. IEEE Communications Magazine, 32(1), 72-75.
Peterson, G.H. and Dvorak, C.A. (1994). Global standards. IEEE Communications Magazine, 32(1), 68-70.
Trauth, E.M. and Thomas, R.S. (1993). Electronic data interchange: a new frontier for global standards policy. Journal of Global Information Management, 1(4), 6-17.
Tristam, C. (1995). Do you really need ISO 9000? Open Computing, 12(5), 65-66.


12

SECURITY FOR TELECOMMUNICATION

Every new technology carries with it the opportunity to invent a new crime.

Laurence Urgenson

Introduction

Problems of computer security go back to days long before telecommunications and LANs became ubiquitous. A study for the period 1964-73 identified 148 cases of computer crime that were made public. Telecommunications has only increased the exposure to computer crime.

Anyone who can scrounge up a computer, a modem and $20 a month in connection fees can have a direct link to the Internet and be subject to break-ins or launch attacks on others . . . In European countries such as the Netherlands, for instance, computer intrusion is not necessarily a crime . . . some of the most respected domains on the Internet contain computers that are effectively wide open to all-comers, the equivalent of a car left unattended with the engine running . . . Computer science professors . . . assign their students sites on the Internet to break into and files to bring back as proof that they understand the protocols involved. (Wallich, 1994: p. 94)

The US Secret Service has estimated the annual cost of fraud by telecommunications at around $2.5 billion; industry numbers range from $1 billion to $9 billion. One reason for this high cost is that PCs are now very common and the population that knows how to use them is very large. The opportunity is great, and 'opportunity makes a thief'. There are also many people who are either using computer systems or have access to computer systems. One author has identified 55 sources of potential computer intrusion (Wallich, 1994: pp. 90-91). Some intrusions are the unintended consequence of corporate action: in their enthusiasm for downsizing, corporations pushed applications (sometimes mission-critical applications) on to remote servers which had to use a LAN or some other telecommunications system. This increased the points of vulnerability to intrusion and unauthorized access. With every change in technology comes an opportunity to violate the system, and the newer intrusions are always getting more creative and effective. In one case in 1993, a long-distance telephone company cardholder had his card compromised and 600 unauthorized international calls were placed on that card before the network specialists detected the problem and disconnected the violator. All this happened in less than 2 minutes. We have to be faster and smarter. We cannot continue the traditional approach of 'security through obscurity', which is the keeping of vulnerable data secret. We need to use technology and devise security policies that not only catch violators but dissuade them from breaking into the system. Such technologies and policies are the subject of this chapter. In it we examine the threats and responses to access, a new twist to the old problem of computer centre security. We also examine the unique threats to telecommunications, including the threat of computer viruses.

We will not discuss organizational considerations of security, such as administrative controls, the appointment of a security officer and operational security, because these considerations are common to corporate security (and not entirely unique to telecommunications). Hence they lie outside the scope of this book. The one exception is the assessment of risk and the determination of how much security is needed, because this is somewhat different and often more critical in telecommunications.


Figure 12.1 Layers of control: threats such as espionage, fraud and theft, inadvertent human mistakes (in programming, design, input, output, procedures and operations), invasion of privacy, and the danger of natural disasters and accidents are met by successive layers of legislative deterrents, personnel policies and screening, operational security, communications hardware/software, authorization controls, terminal use controls and plant security

Security

Security means that data (and information) are protected against unauthorized modification, capture, destruction or disclosure. Personal data are not the only vulnerable data. Confidential data on market strategies and product development must be kept from the eyes of competitors. Large sums of money transferred daily by electronic funds transfer must be protected against theft. The very high volume of business information processed by computers today means that the rewards of industrial espionage and fraud are of a much higher magnitude than in the past and are ever increasing.

Records must also be protected from accidents and natural disasters. For example, a breakdown in air-conditioning may cause some computers to overheat, resulting in a loss of computing facilities. Fire, flood, hurricanes, and even a heavy snowfall causing a roof to collapse, can cause the destruction of data and valuable computer equipment.

The security measures described below are designed to guard information systems from all the above threats. These measures can be envisioned as providing layers of protection, as shown in Figure 12.1. Some controls guard against infiltration for purposes of data manipulation, alteration of computer programs, pillage or unauthorized use of the computer itself. Other measures guard the physical plant, monitor operations and telecommunications, and regulate personnel. These controls are discussed below.

Terminal use controls

Badge systems, physical barriers (locked doors, window bars, electric fences), a buffer zone, guard dogs and security check stations are procedures common to restricted areas of manufacturing plants and government installations where work with secret or classified materials takes place. A vault for storage of files and programs, and a librarian responsible for their checkout, provide additional control.

With on-line systems using telecommunications, security is a greater problem, since stringent access controls to terminals may not exist at remote sites. The computer itself must, therefore, ascertain the identity of persons who wish to log on and must determine whether they are entitled to use the system. Identification can be based on:

• what the user has, such as an ID card or key;
• who the user is, as determined by some biometric measure or physical characteristic;
• what the user knows, such as a password.

Keys and cards

Locks on terminals that require a key before they can be operated are one way to restrict access to a computer. Another way is to require users to carry a card identifier that is inserted in a card reader when they want to use the computer. A microprocessor in the reader makes an accept-or-reject decision based on the card.

Many types of card system are on the market. Some use plastic cards, similar to credit cards, with a strip of magnetically encoded data on the front or back. Some have a core of magnetized spots of encoded data. Proximity cards contain electronic circuitry sandwiched in the card; the reader for this card must include a transmitter and receiver. Optical cards encode data as a pattern of light spots that can be 'read' or illuminated by specific light sources, such as infrared. In addition, there are smart ID cards that have an integrated circuit chip embedded in the plastic. The chip has both coded memory, where personal identification codes can be stored, and microprocessor intelligence.

The disadvantage of both keys and cards is that they can be lost, stolen or counterfeited. In other words, their possession does not absolutely identify the holder as an authorized system user. For this reason, the use of passwords is often an added security feature of key and card systems.

Biometric systems

Some terminal control systems base identification on the physical attributes of system users. For example, an electronic scan may be made of the hand of the person requesting terminal access. This scan is then measured and compared by computer to scans previously made of authorized system users and stored in the computer's memory. Only a positive match will permit system access.

Fingerprints or palm prints can likewise be used to identify bona fide system users. Such security systems use electro-optical recognition and file matching of fingerprint or palm print minutiae. Signature verification of the person wishing to log onto the computer is yet another security option. Such systems are based on the dynamics of pen motion related to time when the signer writes with a wired pen or on a sensitized pad. A biometric system can also be based on voiceprints. In this case, a voice profile of each authorized user is recorded as an analogue signal, then converted into digital form, from which a set of measurements is derived that identifies the voice pattern of each speaker. Again, identification depends on matching: the voice pattern of the person wishing computer access is compared with voice profiles in computer memory.

Biometric control systems, of special interest to defence industries and the police, have been under development for many years. Although technological breakthroughs that enable the discrimination of complex patterns have been made recently, pattern recognition systems are still not problem free. Many have difficulty recognizing patterns under less than optimal conditions. For example, a blister, inflammation, cut, or even sweat on the hands can interfere with a fingerprint match. Health or mood that changes one's voice can prevent a voiceprint match. A combination of devices, such as voice plus hand analysers, might ensure positive identification; but such equipment is too expensive at the present time to be cost effective for most operations in business.
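Whatever the physical attribute used, the matching step reduces to comparing a fresh set of measurements with a stored profile and accepting the user only if the two are close enough. The following Python sketch shows the idea; the feature values and the tolerance are invented for illustration and bear no relation to any real biometric product.

# Minimal sketch of biometric template matching: a fresh sample is accepted
# only if it lies within a tolerance of the profile stored at enrolment.
# Feature values and the tolerance are invented for illustration.
import math

def distance(sample, profile):
    return math.sqrt(sum((s - p) ** 2 for s, p in zip(sample, profile)))

def verify(sample, profile, tolerance=0.5):
    return distance(sample, profile) <= tolerance

enrolled = [0.42, 1.87, 0.95, 2.10]   # profile recorded when the user was enrolled
attempt = [0.44, 1.90, 0.93, 2.08]    # measurements taken at log-on
print(verify(attempt, enrolled))      # True: close enough, access granted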

Passwords

The use of passwords is one of the more popular methods of restricting terminal access. One example of a password system is the required use of a personal identification number to gain access to an automated teller machine at a bank.

The problem with passwords is that they are subject to careless handling by users. Some users write the code on a sheet of paper that they carry in their wallet, or they tape the paper to the terminal itself. When given a choice, users frequently select a password that they can easily remember, such as their birth date, house number or the names of pets, wives or children. Top of the list in Britain seems to be 'Fred', 'God', 'Pass' and 'Genius'.

Someone determined to access the computer will make guesses, trying such obvious passwords first. Even passwords as complex as algebraic transformations of a random number generated by the computer have been broken with the assistance of readily available microcomputers. Of course, the longer a password is in use, the greater the likelihood of its being compromised.
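A first line of defence is simply to refuse the obvious choices described above before a password is accepted. The word list and minimum length in this Python sketch are illustrative only, not a complete policy.

# Simple screening of obvious password choices; the word list and the
# minimum length are illustrative only.
COMMON = {'fred', 'god', 'pass', 'genius', 'password'}

def acceptable(password, birth_date='', pet_names=()):
    p = password.lower()
    if len(p) < 8 or p in COMMON:
        return False
    if birth_date and birth_date in p:
        return False
    return not any(name.lower() in p for name in pet_names)

print(acceptable('Fred'))                                   # False: too short and too common
print(acceptable('quiet-harbour-42', pet_names=('Rex',)))   # True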

One-time passwords are a viable alternative, but systems of this nature are difficult to administer. First of all, each authorized user must be given a list of randomly selected passwords. Then there must be agreement on the method of selecting the next valid password from the list, a method that is synchronized between computer and user. Finally, storage of the list must be secure, a challenge when portable terminals are used by personnel at remote sites where security may be lax.

Recently, a number of password systems have been put on the market that generate a new password unique to each user each time access is attempted. This is done with a central intelligent controller at the host site and a random password generator for each user. Typically, the system works as follows. To gain mainframe access, the user enters his or her name (or ID code) on a terminal keyboard. The computer responds with a 'challenge number'. This is input to the user's password generator. By applying a cryptographic algorithm and a secret key (a set of data unique to each password generator) to this challenge 'seed', a one-time password is generated. The user then enters this password into the computer. The central controller simultaneously calculates the correct password and will grant access if a match occurs.
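The exchange can be sketched in a few lines of Python. HMAC-SHA256 stands in here for whatever proprietary algorithm a particular product uses, and the shared secret key is invented.

# Sketch of the challenge-response exchange described above. HMAC-SHA256
# is a stand-in for a vendor's algorithm; the shared secret is invented.
import hashlib, hmac, secrets

SECRET_KEY = b'per-user secret held by the host and by the handheld generator'

def one_time_password(challenge):
    digest = hmac.new(SECRET_KEY, challenge.encode(), hashlib.sha256)
    return digest.hexdigest()[:8]                 # short code the user types in

challenge = secrets.token_hex(4)                  # host issues a random challenge
expected = one_time_password(challenge)           # host computes the expected response
response = one_time_password(challenge)           # the user's generator computes the same value
print(hmac.compare_digest(response, expected))    # True: access granted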

Such password management systems are difficult to compromise, because passwords are constantly changed. Only a short period of time is allowed for entry of the correct password. Furthermore, the control system is protocol dependent. This compounds the problems of a person trying to breach the system in a network having a variety of protocols. The advantage to the user is that the password generator is portable, usually a handheld device, and easy to use.

In recent years, much publicity has been given to hackers, usually youths, who often derive malicious pleasure from circumventing computer access controls.

Authorization controls

In addition to the identification systems outlined in the preceding sections, control systems can be installed to verify whether a user is authorized to access files and databases, and to ascertain what type of access is permitted (read, write or update).

Data directory

A computer can be programmed to reference a stored data directory security matrix to determine the security code needed to access specific data elements in files before processing a user's job. When the user lacks the proper security clearance, access will be denied. In a similar manner, the computer might be programmed to reference a table that specifies the type of access permitted or the time of day when access is permitted.

The data elements accessible from each terminal can likewise be regulated. For example, according to a programmed rule, the terminal in the database administrator's office might be the only terminal permitted access to all files and programs, and the only terminal with access to the security matrix itself. A sample printout from an access directory, sorted by user identification number, is shown in Table 12.1.

Table 12.1 Access directory

User identification: 076-835-5623
Access limitation: 13 hours (of CPU time for current fiscal year)
Account number: AS5842

Data element      Type of access   Security level   Terminal number   Time lock
Customer number   Read             10               04                08.00-17.00
Invoice number    Read             10               04                08.00-17.00
Cash receipt      Read/write       12               06                08.00-12.00

Assigning access levels to individuals within an organization can be a difficult task. Information is power, and the right to access it is a status symbol. Employees may vie for clearance even when they do not require such clearance for their jobs. Managers should recognize that security measures designed to protect confidential data and valuable computing resources may antagonize loyal employees. It is important that the need for security be understood by workers and that security controls be administered with tact.
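A directory of this kind can be consulted with a few lines of code before any request is honoured. The Python sketch below mirrors the entries of Table 12.1; the field names and the lookup structure are our own.

# Sketch of an access-directory check in the spirit of Table 12.1.
# Field names and the lookup structure are our own; entries mirror the table.
ACCESS_DIRECTORY = {
    ('076-835-5623', 'Customer number'): {'access': 'read',       'terminal': '04', 'hours': (8, 17)},
    ('076-835-5623', 'Invoice number'):  {'access': 'read',       'terminal': '04', 'hours': (8, 17)},
    ('076-835-5623', 'Cash receipt'):    {'access': 'read/write', 'terminal': '06', 'hours': (8, 12)},
}

def permitted(user, element, wanted, terminal, hour):
    rule = ACCESS_DIRECTORY.get((user, element))
    if rule is None:
        return False                              # no entry: deny by default
    return (wanted in rule['access']
            and terminal == rule['terminal']
            and rule['hours'][0] <= hour < rule['hours'][1])

print(permitted('076-835-5623', 'Cash receipt', 'write', '06', 10))    # True
print(permitted('076-835-5623', 'Invoice number', 'write', '04', 10))  # False: read only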

Security kernel

Unfortunately, the use of a security matrix does not provide foolproof security. In a multiuser system, data in a file can be raided by installing a 'Trojan horse' program. Figure 12.2 shows how this is done. Although the data directory does not authorize Brown to access File A, confidential data from that file is copied into another file that Brown is entitled to access, on the direction of a secret program, thereby circumventing system security.

The concept of a security kernel addresses the Trojan horse issue. A kernel is a hardware/software mechanism that implements a reference monitor, a systems component that checks each reference by a subject (user or program) to each object (file, device or program) and determines whether the access is valid according to the system's security policy. Figure 12.3 shows how Brown is foiled by a reference monitor in his attempt to raid File A.

A security kernel represents new technology still in the developmental stage. Although a number of projects have attempted to demonstrate the practicality of this security approach, results thus far have been mixed.
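The essence of the reference monitor can be shown in a few lines of Python: every reference is checked against the security policy, whichever subject (user or program) makes it, so the Trojan horse program cannot reach File A. The names and structures below are illustrative, not a description of any actual kernel.

# Minimal sketch of a reference monitor: every reference by a subject (a user
# or a program) to an object is checked against the policy before it proceeds.
# Names and structures are illustrative.
SECURITY_MATRIX = {
    ('Smith',     'File A'): {'read', 'write'},
    ('Smith',     'File B'): {'write'},
    ('Brown',     'File B'): {'read', 'write'},
    ('Program A', 'File A'): {'read'},            # a trusted program
    # Program B (the game with the embedded Trojan horse) has no rights to File A.
}

def reference_monitor(subject, obj, operation):
    allowed = operation in SECURITY_MATRIX.get((subject, obj), set())
    print(subject, '->', operation, obj, ':', 'allowed' if allowed else 'blocked')
    return allowed

reference_monitor('Program B', 'File A', 'read')   # blocked: the Trojan horse is foiled
reference_monitor('Program A', 'File A', 'read')   # allowed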

Figure 12.2 Raiding files: a Trojan horse program. Smith uses Program B, a game program with an embedded Trojan horse, which secretly copies data from File A (sensitive data that only Smith is authorized to read and write) into File B, a file that Brown is entitled to access, thereby circumventing the security matrix

Figure 12.3 How a reference monitor blocks a Trojan horse raid: when Smith uses Program B (the game program with the embedded Trojan horse), the reference monitor blocks the program's access to File A


Virtual machine

An entirely different approach to security in a multiuser environment is the virtual machine. With this systems structure, each user loads and runs his or her own copy of an operating system. In effect, this isolates one user from another, even though they use the same physical machine, because each virtual machine can be operated at a separate security level. With a virtual memory structure, several user programs can reside in computer memory simultaneously without interference.

Communications security

Computer processing is today closely linked with telecommunications, which allows the transference of computer data between remote points. Protecting the confidentiality of this data at the initiating terminal, during transmission itself, or when the transmission is received, has required the development of sophisticated security techniques. For example, a handshake, a predetermined signal that the computer must recognize before initiating transmission, is one way to control communications. This prevents individuals from masquerading, pretending to be legitimate users of the system. Most companies use callback boxes that phone would-be users at a preauthorized number to verify the access request before allowing the user to log on. A hacker who has learned the handshake code would be denied access with such a system. Protocols, conventions, procedures for user identification (described earlier in this chapter) and dialogue termination also help maintain the confidentiality of data.

During transmission, messages are vulnerable to wiretapping, the electromagnetic pickup of messages on communication lines. This may be eavesdropping (passive listening) or active wiretapping involving the alteration of data, such as piggybacking (the selective interception, modification or substitution of messages). Another type of infiltration is reading between the lines: an illicit user taps the computer when a bona fide user is connected to the system and is paying for computer time but is 'thinking', so the computer is idle. This and other uses of unauthorized time can be quite costly to a business firm.

One method of preventing message interception is to encode, or encrypt, data in order to render it incomprehensible or useless if intercepted. Encryption, from the Greek root 'crypt', meaning to hide, can be done by either transposition or substitution.

In transposition, characters are exchanged by a set of rules. For example, the third and fourth characters might be switched so that 5289 becomes 5298. In substitution, characters are replaced. The number 1 may become a 3, so that 514 reads 534. Or the substitution may be more complex: a specified number might be added to a digit, such as a 2 added to the third digit, making 514 read 516. Decryption restores the data to its original value. Although the principles of encryption are relatively simple, most schemas are highly complex. Understanding them may require mathematical knowledge and technical expertise.
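The two techniques can be made concrete with the digits used above. These toy Python routines are for illustration only; they offer no real protection.

# Toy illustrations of transposition and substitution; no real protection.
def transpose(text):
    """Swap the third and fourth characters, so '5289' becomes '5298'."""
    chars = list(text)
    chars[2], chars[3] = chars[3], chars[2]
    return ''.join(chars)

def substitute(text):
    """Add 2 to the third digit, so '514' becomes '516'."""
    chars = list(text)
    chars[2] = str((int(chars[2]) + 2) % 10)
    return ''.join(chars)

print(transpose('5289'))   # 5298
print(substitute('514'))   # 516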

An illustration of encryption appears in Figure 12.4. A key is used to code the message, a key that both sender and receiver possess. It could be a random-number key or a key based on a formula or algorithm. As in all codes, the key must be difficult to break. Frequent changing of the key adds to the security of data, which explains why many systems have a key base with a large number of alternate keys.

In the past, the transportation of the encryption key to authorized users has been an Achilles' heel of systems security. An additional problem is that there is sometimes insufficient time to pass the key to a legitimate receiver. One solution is a multiple-access cipher in a public key cryptosystem. This system has two keys: an E public encryption key used by the sender, and a D secret decryption key used by the receiver. Each sender/receiver has a set of D and E keys. To code data to send to Firm X, for example, a business looks up Firm X's E key, published in a public directory, and then transmits a message in code over a public or insecure transmission line. Firm X alone has the secret D key for decryption. This system can be breached, but not easily, since a tremendous number of computations would be needed to derive the secret key D. The code's security lies as much in the time required to crack the algorithm as in the computational complexity of the cipher, because the value of much data resides in its timeliness. Often, there is no longer a need for secrecy once a deal is made, the stock market has closed, or a patent application has been filed.
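The E/D arrangement can be sketched with textbook RSA on deliberately tiny numbers. Real systems use keys hundreds of digits long and a vetted cryptographic library; the figures below serve only to show a published encryption key and a secret decryption key at work.

# Textbook RSA with tiny numbers, only to illustrate a public encryption key E
# and a secret decryption key D. Real systems use very large keys and a
# properly vetted cryptographic library.
p, q = 61, 53
n = p * q                    # 3233, part of both keys
phi = (p - 1) * (q - 1)      # 3120
e = 17                       # public exponent: Firm X publishes (e, n)
d = pow(e, -1, phi)          # 2753: secret exponent kept by Firm X

def encrypt(m):              # anyone can encrypt with the published key
    return pow(m, e, n)

def decrypt(c):              # only the holder of d can decrypt
    return pow(c, d, n)

message = 65
cipher = encrypt(message)
print(cipher, decrypt(cipher) == message)   # 2790 True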

Cryptography, in effect, serves three purposes: identification (it helps identify bona fide senders and receivers); control (it prevents the alteration of a message); and privacy (it impedes eavesdropping). With the increased reliance of businesses on teleprocessing, much research has been done on cryptographic systems. But experts disagree about how secure even the most complex codes are. Some claim that persons bent on accessing encrypted data will be able to break all codes, using the power, speed and advanced technology of modern computers.

Figure 12.4 Encrypting and decrypting data in teleprocessing: the sender encrypts the plain-text message with a key drawn from a key base, the encrypted message is transmitted, and the receiver applies the same key to decrypt it and recover the plain text. In the binary example, adding the key 1010 to the original message 1100 gives the encrypted message 0110; subtracting the key restores the original message

Cryptography can be implemented through hardware or through software. The hardware implementation can keep up with high speeds, even gigabit speeds, but hardware implementations are more expensive in initial cost as well as in the space occupied and the running cost of power consumption. In contrast, software implementations may not always keep up with high speeds but are more versatile. Encryption can be mathematically complex and yet be decrypted, given enough time and motivation. One solution to this problem is to embed a cipher in a document identifying both the owner and the user. The user is the buyer, who can purchase a cryptographic key to decode the document. If a person purchased one-time rights, the document would turn into gibberish when put on a bulletin board or transmitted via e-mail. This protection against unauthorized transmission is attractive, since encryption algorithms protecting the privacy of documents can be protected by codes such as PGP (Pretty Good Privacy). PGP was written by a programmer in Colorado who has made it available on the Internet and some non-Internet bulletin boards for no charge. This has angered the US Government and frightened its agencies of law enforcement, defense and intelligence. They support 'key-escrow encryption', which allows the government to hold keys to any encrypted communication, to be used for occasions of national security and the fight against money laundering, terrorism and computer crime. The US government wants this code to be part of all encryption software that is exported. US exporters argue that foreigners will not buy such a code and will get encryption codes elsewhere, thereby losing the US a lot of exports. Also, it has been argued that this may be a violation of freedom of speech and expression. And so the controversy rages.

Security for advanced technology

Thus far, we have discussed the security considerations that are traditional to IT. However, computer technology is well known for its many entries (and exits) of innovation. Some of these are listed in Table 12.2. Space limitations do not allow us to examine all or even some of them in any detail, except perhaps one: image processing. This will illustrate the types of threat to security posed by one advanced technology. These threats include:

• unauthorized copying and downloading of images on terminals, PCs and workstations;
• unauthorized release of images by users (including end-users);
• the integrity of images and unauthorized modifications made to them;
• the authenticity of images stored in documents.


Table 12.2 Technologies affecting security

Teleprocessing
LANs/MANs/WANs/Internet
Superhighway
Global networks
PCN (Personal Computer Networks)
Wire-less communications
Fax connections
Image processing
Smart cards
NNets (Neural Networks)
Laptop computers
Palmtop computers
Pen-based computers
PDA (Personal Digital Assistant)
Personal communicators
Pocket pals
Teleconferencing
Video-conferencing

Since image processing requires large memory and fast computing power, it uses minis and mainframes, which are accessible through a network and telecommunications. This opens a wide set of threats. Also, the large computers necessary for image processing contribute to the temptation of using these powerful resources to break the security codes of other systems.

Each of the technologies listed in Table 12.2 presents a unique set of benefits, exposures to risk and potential losses, as well as a cost of implementing a response to the threat. Management must continuously evaluate the consequences of failing to protect its information assets and the cost of updating its protection against technological changes. To achieve this, management must maintain knowledge of technological changes and their potential impact on security; anticipate threats and vulnerabilities; and develop the protective defence measures necessary to combat the threat. The anticipation aspect is important, since the protective measures must be instituted in the design stage of development, and sometimes even earlier, in the planning stage and the user specification stage of development.

Continued violations of computer and telecommunications security should be anticipated, especially from the computer virus, the danger increasing with telecommunications and networking.

Computer viruses

A computer virus is software that, when entered into a computer system, can cause it to stop or interrupt operations. It can also corrupt data, destroy data/knowledge-bases, and cause errors or disrupt operations. The virus does not just appear or grow; it is inserted (knowingly and deliberately) into the system. It is definitely unauthorized.

A virus is one class of program. Another sort of computer vermin is the worm, a program that 'worms' its way through a system, altering small bits of code or data when it gets access to them. A worm may also be a virus if it reproduces itself and infects other programs. In contrast, there is the logic bomb, a computer program that is set to 'explode' when a specified condition is met.

The computer virus has the ability to propagate into other programs. However, the computer virus program must be run in order to reproduce or to do damage. For this to happen, a computer virus must be introduced into the system. One way a computer virus can enter the system is in the form of a Trojan horse, which is a computer program that seems to do one thing but also does something else. But the virus has a new and malicious twist to it. The virus can act instantly or lie dormant in the system until it is triggered by a specified date or an event, such as the processing of five programs, or the logging on of a specific user, or whatever.

How does a virus operate? Zajac (1990: p. 26) describes the Lehigh virus (named after Lehigh University in the US). This virus consisted of seven lines of code in Pascal and was placed in a DOS command file. It operated as follows:

. . . when a user typed a DOS command, the virus would check to see if there was a non-infected . . . file on the system. If so, it infected it and incremented a counter that kept track of how many other disks it had infected. The virus would then execute the user's command. All this unbeknown to the user . . . when the infection counter hit four, it would totally erase the hard disk.
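Because a virus of this kind must rewrite program files in order to spread, one common defence, not part of the Zajac account, is to record a cryptographic checksum of each important file and raise an alarm when any of them changes. A minimal Python sketch, with placeholder file names:

# Minimal sketch of file-integrity checking: record a hash of each file and
# flag any later change, which is how an infection that rewrites program
# files can be noticed. File names are placeholders.
import hashlib, json, pathlib

def digest(path):
    return hashlib.sha256(pathlib.Path(path).read_bytes()).hexdigest()

def record_baseline(paths, store='baseline.json'):
    pathlib.Path(store).write_text(json.dumps({p: digest(p) for p in paths}))

def changed_files(store='baseline.json'):
    baseline = json.loads(pathlib.Path(store).read_text())
    return [p for p, h in baseline.items() if digest(p) != h]

# record_baseline(['command.com', 'editor.exe'])   # run once on a clean system
# print(changed_files())                           # later: list any modified files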

The computer virus has many objectives and can have many consequences. Some cases illustrating its variety are listed below.


The Pakistani virus infected an untold number of PCs as it travelled around the world creating havoc and fear among PC users. The authors of this virus had the gall (or courtesy!) to announce its existence with the message: 'Beware of this VIRUS. Contact us for vaccination.' The message was followed by a 1986 copyright date, two names (Amjad and Basit) and an address in Pakistan. The virus is also known by its generic type, the BRAIN virus, named after the volume label of an infected disk, which reads '(c)BRAIN'. This naming is perhaps because the authors worked for 'Brain Computer Services'.

The BRAIN virus surfaced in the US at theUniversity of Delaware in 1987 followed a monthlater by the Lehigh virus.

The Cornell virus was a passive virus with the intent of collecting names and passwords. It infected thousands of mainframe computers throughout the world so that they could later be used as the intruder deemed desirable.

In 1987, a virus appeared in a computer net-work in California and interfered with the scancontrol on video monitors and caused one toexplode.

The Jerusalem virus followed the Delawarevirus by two months and its first strain appearedin the Hebrew University in Israel.

In the early 1990s, the 'Bulgarian factory' replaced Israel as the leading source of viruses, with over 100 viruses from Bulgaria infecting the Eastern European countries.

A magazine publisher in Germany distributedover 100 000 disks to its subscribers. Unknownto the publisher, each disk was infected with aSTONED II computer virus.

A student in California was given a disk with a free program. She used the disk (which, unknown to her, was infected) to do her homework assignments. She took the disk to her university and loaded it on the university network to continue with her assignments. Inadvertently, she infected all the students using the network and caused the system to operate incorrectly.

An employee in a large firm used a program (not known to be infected) available externally on a public network and downloaded it to her PC. She then used this program on the firm's network, thereby infecting the network and erasing many corporate files.

A 'cruise' virus is similar to the Cornell virus. It is a sophisticated passive virus that could infect disks that are distributed openly. When loaded onto a system it collects information like names and passwords, which can then be retrieved by the intruder and used to gain seemingly authorized access to the system. 'Once inside the system, the intruder unleashes the virus, which lurks there until an authorized individual decrypts material or enters the access sequence; the virus then attacks the material.' (Dehaven, 1993: p. 140).

The ‘stealth’ virus is named after the stealthbomber that attacked Iraq in the Gulf War. Itcruises stealthily inside a system for a long timebefore it strikes. It is known to be designed toelude most of the anti-virus systems. Its versionsinclude the 4096, the V800 and the V2000.

One may ask: How many viruses are around?The answer can be found in various studies. Onestudy in 1991 estimated that there are over 900known viruses. Over 60% of the 600 000 PCsstudied had been hit by a virus. Of the sitesinfected, 38% were confronted with corruptedfiles; 41% complained of unsolicited screen mes-sages, interference or lockup; and 62% reporteda loss in productivity (Sanford, 1993: p. 67).

How can an end-user protect a computer sys-tem from these infectious and dangerous viruses?One answer lies in knowing the possible moti-vations and the sources of the viruses. Themain motivations are greed for money, revenge(against an employer or a firm), and the ‘intel-lectual challenge’ to outsmart others.

There are two main sources of computerviruses: (1) an employee (insider) and (2) openlydistributed programs such as those distributed bymagazine publishers and software vendors. Theemployee is motivated by greed or by revenge.The intruder that uses the open distributionchannels is mainly motivated by the intellectualchallenge of breaking a system. This type ofintruder is a professional who does not getthe public media attention but accomplishes asinister mission none the less. The reward forthis type of intruder is a rise in ego.

The main counter-strategy for (1) is to:

• control access to programs, data/knowledge and susceptible media

• invest in people, who are the greatest threat and also the greatest asset to security, for they can often stop or at least discourage intruders

• monitor and control access to all 'vital' computer programs as well as data/knowledge, even when provided by insiders, especially those that may be disgruntled or unhappy with the organization


• control all access during the conversion phase of new systems. This is when the system has been satisfactorily tested and everyone relaxes, a perfect time for an intruder to sneak a virus into the system. A strategy to prevent this is for a copy of the tested system to be locked up and used periodically to check for any unauthorized insertions.

The important controls for (2) are:

• do not use unknown software

• mainframes and even PCs should have a 'quarantine box' to sample-check new software

• centralize software purchasing and purchase only from an approved list of vendors

• do not use freely distributed disks unless they are 'reliable' and tested in a 'quarantined' program

• control access to networks. Do not allow persons or 'workstations' access without the 'need' for such access. When access is allowed it should be logged. This logging may not always identify the intruder but it may dissuade the intruder

• regularly change the common systems passwords

• educate and instruct employees on the danger of viruses and their epidemics

• keep track of where your disks/tapes and programs (including updates) have come from and where they have been

Controls for both (1) and (2) are:

• latest and frequent back-ups to recover from, sometimes an extra generation deep

• anti-viral computer programs, scanners and filters (programs that check for 'signatures' of known viruses and alert the user to a possible danger); a minimal sketch of such a signature check follows this list

• keep abreast of virus technology; one journal on the subject is the Virus Bulletin, published in the UK.
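To make the idea of a signature scanner concrete, the following minimal sketch (in Python, with invented signature patterns) checks files for the byte patterns of 'known' viruses. Real scanners are far more sophisticated, but the basic principle of matching stored signatures is the same.

```python
# Minimal sketch of a signature-based virus scanner (illustrative only).
# The signatures below are invented byte patterns, not real virus code.

KNOWN_SIGNATURES = {
    "EXAMPLE-VIRUS-A": bytes.fromhex("deadbeef90909090"),  # hypothetical pattern
    "EXAMPLE-VIRUS-B": b"(c)BRAIN",                        # volume-label text, as in the BRAIN case
}

def scan_file(path):
    """Return the names of any known signatures found in the file at `path`."""
    with open(path, "rb") as handle:
        contents = handle.read()
    return [name for name, pattern in KNOWN_SIGNATURES.items() if pattern in contents]

if __name__ == "__main__":
    import sys
    for filename in sys.argv[1:]:
        hits = scan_file(filename)
        if hits:
            print(f"{filename}: possible infection ({', '.join(hits)})")
        else:
            print(f"{filename}: no known signatures found")
```

Such a scanner can, of course, only warn of viruses whose signatures it already holds, which is precisely the limitation noted below.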

A combination of filters (also called screens) and gateway(s) acts like a wall against viruses and other unauthorized traffic. These are called firewalls, and they prevent unauthorized traffic (as defined by network policies) from passing from inside to outside or the other way around.

The main danger of all these strategies is that they may lull the potential victims into a sense of being protected. The anti-viral strategies we have are against 'known' viruses only. Corporate managers and end-users must recognize that the intruder, especially the 'intellectually motivated' intruder, may be challenged by the control mechanism into finding a new strain and a new twist to an old threat, or into devising a new threat that we have not yet heard of or even thought of.

There is also a cost to all these controls and strategies against viruses. There is a possible loss of morale when employees are not fully trusted. There is also a loss in efficiency. Each layer and level of security has an overhead cost and a loss in productivity and performance. In addition to calculating these costs, one must estimate the probability of attack and the value of the loss entailed if the attack is successful. This analysis is necessary before a security system is designed and implemented to combat viruses.

To guard against viruses and other threatsto security, a firm needs policies that will helpmake the system secure, the next subject of ourdiscussion.

Policies for security

To implement and enforce any set of security measures there is a need for policies. The objectives of these policies could be:

• specify security measures and specify and assign roles, responsibilities and accountability to those who own and those who manage the messages being transmitted

• specify who should have what resources and for what purpose

• devolve responsibility (whenever possible) to the organizational point where controls are implemented

• ensure interoperability

• ensure a basis for security review and audit

• base security access levels on the risk involved so that there is no or little unreasonable imposition

• ensure sharing of the network in a responsible and controlled manner

• protect resources from misuse, unauthorized use, malicious actions and carelessness.

To carry out these objectives for security, there may be a need for a set of policies which can be carried out modularly in increasing progression of complexity, or instituted all at once as a comprehensive security plan. These policies (Symonds, 1994: pp. 479–80) can be:

Network security policy

This needs to be an overall policy stating suchprinciples as: the responsibility for network secu-rity lies with the individual managers of theend system or subsystem except when centralcontrol is needed for the benefit of all theusers. Domains for security control and manage-ment are assigned with specification of necessaryauthority and accountability.

Connection rules

These specify the operational implementation ofthe broad overall network security policy. Theserules include:

• rules for the construction of access media, whether these be badges, cards or passwords

• set up and manage accounts for users

• check for viruses

• check for backup and integrity of the telecommunications system

• set up logging procedures

• set up monitoring of successful and unsuccessful attempts at accessing the system

• set up displays for warnings of possible misuse or abuse of the system.

Access agreements

These are 'agreements', hopefully mutually agreed upon but if necessary imposed, that specify:

• definition of origin, destination, dispatch and delivery of messages

• definition of authorized origin

• definition of what constitutes 'authorized' traffic

• definition of the responsibility of the custodian and ownership of messages

• definition of the 'rights' of access by a remote user

• acceptance of compliance with the standards for telecommunication accepted by the organization

• responsibilities as to the protection and security of messages.

Administration of authorization

Authorization can be the consequence of a formal computation of the values of authorization variables, as shown in Figure 12.5. Here the objective attributes are such considerations as the importance and sensitivity of the object involved; the subjective variables include the role and record of the person seeking authorization; and the access requested will involve the type of access desired, such as WRITE, READ, MODIFY or DELETE. The algorithm will then determine whether a request for authorization should be granted or rejected. An algorithm may also be used to determine the level and type of authorization. This formalization is desirable in order to move the decision-making from a subjective and hence appealable decision to one that has the perception of being objective. If the decision is strictly subjective, then not only can time and energy be consumed in appealing the decision, but a lot of unnecessary bad feeling can also be generated by large egos and sensitive people.

[Figure 12.5 Process of authorization: a request for authentication, together with information on the subject, the object, the access type and the authorization authority, is analysed by an authorization algorithm which approves or denies the authorization.]
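As a concrete illustration, the sketch below (in Python) mimics such an authorization algorithm. The clearance scale, sensitivity classes and approval rule are invented for the example and are not taken from the text; a real policy would be set by the organization.

```python
# Illustrative sketch of a formal authorization algorithm in the spirit of
# Figure 12.5. All scales and the approval rule are assumptions.

OBJECT_SENSITIVITY = {"public": 1, "internal": 2, "confidential": 3}
ACCESS_RISK = {"READ": 1, "WRITE": 2, "MODIFY": 2, "DELETE": 3}

def authorize(subject_clearance, object_class, access_type):
    """Approve the request only if the subject's clearance (1 low to 3 high)
    covers both the sensitivity of the object and the risk of the access."""
    sensitivity = OBJECT_SENSITIVITY[object_class]
    risk = ACCESS_RISK[access_type]
    return subject_clearance >= sensitivity and subject_clearance >= risk

# Example requests: (clearance, object class, access type)
print(authorize(2, "internal", "READ"))        # True  - approved
print(authorize(2, "confidential", "DELETE"))  # False - denied
```

Whatever the exact rule, the point of such a formal computation is that the decision can be reproduced and audited rather than argued over.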

This authorization process may seem like a lotof bureaucracy for a simple operational matter,but experience has shown that more of a SecurityManager’s time is spent in personal matters thanone would expect. Personal emotions are oftenevoked on matters that involve authorization andsecurity levels for access.

The administration of network security poli-cies can be carried out in one of five ways:

1. centralized, where one person or a group ofpersons is authorized to grant and revokeauthorization;

2. cooperative, where more than one personmakes the decision;

3. hierarchical, where a higher level managerdelegates the authority to a lower level per-son who then exercises all the authority;

4. ownership, where the owner of the objectcreated assigns the authority for its access;

5. decentralized, where the authorization isassigned by the owner.

One decision not determined directly by poli-cies but equally important is the determinationof how much security is needed. This is our nextand last topic for this chapter.

How much security?

Security is costly. In addition to the expense of equipment and personnel to safeguard computing resources, other costs must be considered, such as employee dissatisfaction and loss of morale when security precautions delay or impede operations. In deciding how much security is needed, management should analyse risk. How exposed and vulnerable are the systems to physical damage, delayed processing, fraud, disclosure of data or physical threats? What threat scenarios are possible?

As illustrated in Figure 12.6, systems and user characteristics should be assessed when evaluating risk. Opportunities for systems invasion, motives of a possible invader, and resources that might be allocated to invasion should be considered. The resources available to deter or counter a security breach should also be appraised. The amount of security that should be given to systems should be based, in part, on an evaluation of the expected losses should the systems be breached. One way to calculate expected losses from intrusion is by application of the formula:

Expected loss = L × PA × PB,

where
L = potential loss,
PA = probability of attack,
PB = probability of success.

An insurance company or computer vendorcan help management determine the value of L.Probability values are more difficult to obtain.Rather than attempting to assign a specific value(0.045 or even 0.05 may be of spurious accuracy),relative risk (high, medium or low) should firstbe determined and a numerical value assigned toeach of these relative probabilities (for example,0.8, 0.5 and 0.2, respectively). The risk costs cannow be calculated according to the formula. Forexample:

Exposure      L          PA     PB     Expected loss
1             £500 000   1.0    0.2    £100 000
2              200 000   0.6    0.5      60 000
3               50 000   0.2    0.8       8 000
Total expected loss                     £168 000

Loss is determined for each exposure; the sumof the expected losses is the total risk value tothe system. If PA and PB are probabilities for theyear, expected loss is £168 000 per year.
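The arithmetic of the worked example can be expressed in a few lines of code. The sketch below (Python) simply reproduces the three illustrative exposures above; the figures are the book's worked example, not real data.

```python
# Sketch of the expected-loss calculation: expected loss = L * PA * PB,
# summed over all exposures. Figures are the illustrative ones used above.

exposures = [
    ("Exposure 1", 500_000, 1.0, 0.2),
    ("Exposure 2", 200_000, 0.6, 0.5),
    ("Exposure 3",  50_000, 0.2, 0.8),
]

total = 0
for name, loss, p_attack, p_success in exposures:
    expected = loss * p_attack * p_success
    total += expected
    print(f"{name}: expected loss £{expected:,.0f}")

print(f"Total expected loss per year: £{total:,.0f}")  # £168,000
```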

The application of this formula will help man-agement determine whether the addition of secu-rity measures is worth the cost and where thegreatest reduction of expected losses could occurby improving security.

The figures derived from the formula are approximations at best. We simply do not have the data to calculate reliable probabilities, because the computer industry is too new to have a long historical record of invasions on which to base probability assessments. Furthermore, firms are reluctant to publicize how their security has been breached lest their credibility suffer, so news of security invasions is seldom broadcast. This means data on security infractions are incomplete. More seriously, persons who design security measures are not always aware of the tricks and techniques used by perpetrators of crime to break systems security and so cannot plan countermeasures.

[Figure 12.6 Factors in assessing expected losses from systems intrusion: the exposure and vulnerability of the information system, system and user characteristics, the invader's resources, motivation and opportunity for invasion, and threat scenarios determine the probabilities of attack and success and the expected losses, which in turn drive the decision to safeguard through the assignment of resources, technology and procedures.]

Summary and conclusions

There seems to be universal agreement among knowledgeable commentators that computer crime figures are destined to rise unless the computer industry and organizations that use computers and telecommunications pay greater attention to security issues and devote more resources to the protection of information systems. All known protective mechanisms in telecommunications can be broken, given enough time, resources and ingenuity. Perhaps the major objective of security systems should be to make intrusion too expensive (in equipment, cost and risk) and too time consuming (in planning effort and time needed to actually break safeguards) to make attempted violations worthwhile.

Management’s dilemma is not whether secu-rity is needed but how much. Computer crimeis increasing at an alarming rate. This can beattributed, in part, to the temptation arisingfrom the large sums of money being trans-ferred by electronic fund transfer and to the factthat more criminals are becoming knowledgeableabout computer technology and are equippedwith powerful computers to help them plan andexecute their crimes. There are also individualswho are challenged simply to ‘beat the system’.

Risk analysis, and the assessment of expected losses and gains from security protection, is one method of helping management determine which security strategies are most cost effective and which policies need to be revised to stay relevant despite the changing computing environment.


Advances in computer technologies (like wireless communication, image processing, tele- and video-conferencing, smart cards, neural nets and intelligent systems) pose new and unique problems of security in teleprocessing. Teleprocessing security is often part of security in corporate IT. IT management and corporate management must take steps to appoint a security officer, plan for security needs, identify the assets to be secured, prepare risk analyses, institute policies and procedures for security, and then enforce and monitor these safeguards.

In the 1970s and even the early 1980s, there was great concern over privacy. In the late 1980s and the early 1990s, concern for privacy has given way to concern for viruses. This is a problem that became public in 1984 and since then has become both troublesome and complex. Despite the growing number of scanners, immunizers and memory-resident activity monitors that can defend against some viruses, the parasites seem to grow faster than they can be identified and information systems are compromised. Many viruses may be in their 'incubation' stage and unknown to us. Also, the viruses are getting better and more ingenious, making them difficult to identify let alone arrest. Furthermore, the problem has now become a global threat, with potential insertions of a virus that cruises through a globally accessed network. We need information laws and judicial systems with sufficient punitive penalties to dissuade the potential intruder. And we need better hardware, software and procedures to beat the intruder if he or she is not sufficiently dissuaded.

Threats to teleprocessing as well as possiblecountermeasures in technology and in organiza-tional policies are summarized in Figure 12.7.

[Figure 12.7 Summary of threats and countermeasures: threats include interrupted power supply, interception, disclosure of information, unauthorized access, contamination of information, virus infection, copyright infringement, logic bombs, Trojan horses, worms, hackers, piggyback riding and masquerading; countermeasures include biometric systems, authentication, authorization (by subject, object, type of access, place and time), firewalls, encryption, locks, audit trails, passwords, keys/tokens, badges and smart cards.]

Costs and benefits of security management are often intangible. For example, the costs of access can be measured, but the cost of refusing access is intangible. A rational decision on access may well be that access should be granted on a 'need-to-know' basis. Often, however, the 'want-to-know' is more than the need-to-know, and refusing such access can cause bad feelings, especially when the 'want-to-know' of a supervisor is higher than the 'need-to-know' of the one supervised. The security manager must use diplomacy and much handholding to defuse such situations if the need-to-know/want-to-know discipline is to be maintained. Thus security management is not just a problem of computer technology but an exercise in human resource management.

Corporate managers and network managers must be increasingly conscious and aware of the security of information systems, for they may be held legally responsible, not just for 'prudent' protection of the company's information assets, but also for the 'prudent use of the information available to the company in order to protect its customers and employees' (Fried, 1994: p. 63).

Case 12.1: Examples of hacking

AT&T security in the UK spotted an impostor, later known as 'Berferd', on its lines in 1991 and decided to follow him. In the next four months Berferd assaulted numerous organizations on the Internet, including 300 in just one night. The AT&T lawyers decided to halt the monitoring, fearing that they might be accused of harbouring hackers. At the time Berferd was in the Netherlands, where hacking was legal. The Dutch authorities did nothing about it till Berferd attacked their machines.

In 1983, three friends in Milwaukee, between the ages of 15 and 22, who belonged to a gang called the 414s (named after the telephone area code for Milwaukee in the US), penetrated three databases (a bank, the Sloan Kettering Cancer Center and the Los Alamos National Laboratory in New Mexico). They were caught and admitted that their only motivation was to 'have fun' and to demonstrate their technical wizardry.

Case 12.2: Examples of malicious damage

1. A ‘worm’ has destroyed many a computermemory and database. This use of a wormis the malign mutant of the useful worminvented by John Shoch of the XeroxCorporation in California. Shoch created aworm to wriggle through large networkslooking for idle computers and harnessingtheir power to help solve the problem ofunused resources. The malign mutant nowburrows holes through computer memory,leaving huge information gaps.

2. Loss of storage can also result from a 'Trojan horse', also known to security specialists as a 'logic bomb'. This happened to Dick Streeter when his screen went blank as he was transferring a free program from a computer bulletin board into his machine. Then the following message appeared: 'Got You'. Nearly 900 accounting, word processing and game files that were stored in Streeter's machine were erased.

3. A 'trap-door' collects passwords as users log on, giving the hacker an updated file of access codes. The technique was used to gain unauthorized access to hospital records at Manhattan's Memorial Sloan-Kettering Cancer Center.

4. A French programmer, after being fired, lefta logic bomb as a farewell salute in therecord-keeping software that he had beenworking on when fired. The bomb explodedtwo years later on New Year’s day, wipingout all the records stored on tape.

5. A logic bomb was placed in the Los AngelesDepartment of Water and Power. It froze theutility’s internal files at a preassigned time,bringing work to a standstill.

6. An Oregon youth in the US used his terminalto gain access to the computer of the Depart-ment of Motor Vehicles, then put the systeminto irreversible disarray just to illustrate itsvulnerability and to prove to himself that hecould ‘beat’ the system.

Case 12.3: German hacker invades US defence files

Suspicion that something was wrong at the Lawrence Berkeley Laboratory was aroused when Clifford Stoll, manager of a multiuser computer system at the laboratory, noticed a 75 cent accounting discrepancy. Eighteen months of detective work followed, in which Stoll cooperated with law enforcement officers to track down a hacker who had used 75 cents of unauthorized time. The hacker was subsequently arrested by German authorities under suspicion of espionage.

Stoll was able to monitor the hacker's activities and observe that he methodically invaded files in some three dozen US military complexes to sift out information on defence topics. But the hacker's identity remained a mystery until Stoll's girlfriend suggested setting up a trap: a fictitious file on the Strategic Defense Initiative (also known as Star Wars) was inserted by Stoll in the Lab's computer. The hacker, whose interest was aroused when he spotted the file, stayed on-line long enough to be traced.

German authorities believed that they had cracked a major ring that had been selling sensitive military, nuclear and space research information to the Soviets.

Source: Karen Fitzgerald (1989). The quest for intruder-proof computer systems. IEEE Spectrum, 24(8), 22–26. See also the delightful book by Clifford Stoll (1989): The Cuckoo's Egg: Tracking a Spy through the Maze of Computer Espionage. Doubleday.

Case 12.4: Buying the silence of computer criminals

The computer industry research unit in the UK reports that the practice of offering amnesties to people who break into computers and steal funds is widespread. Rather than prosecute, corporations keep silent on the crimes if part of the money is returned and the swindler reveals how the fraud was carried out. Employers fear that business might be lost if customers learn that their computer security is flawed.

In one such case, a programmer who illegally diverted £8 million to a Swiss bank account gave back £7 million for a non-disclosure agreement protecting him from prosecution. According to a member of Scotland Yard's fraud squad, employers who make such agreements may end up in court themselves, prosecuted for perverting the course of justice.

Source: Lindsay Nicolle and Tony Collins. TheComputer Fraud Conspiracy of Silence. The Inde-pendent, 19 June 1989, p. 18.

Case 12.5: The computer 'bad boy' nabbed by the FBI

Kevin Mitnick was long known for burrowing his way into the most secret silicon nerve centres of telephone companies and corporate computer centres. He even invaded the North American Defense Command computer and stole $4 million worth of software from DEC. In 1989 Mitnick was caught, convicted and put into a low-security jail; the judge ordered him to participate in a treatment program for compulsive disorders. Later, while on parole, Mitnick escaped and was soon back violating sensitive computer systems. His first mistake was to invade and steal software from Shimomura, a computational physicist, on (of all days) Christmas day. Mitnick's second mistake was to taunt Shimomura with mocking voice-mail messages. This angered Shimomura (30), who then cooperated with the FBI in a search for Mitnick (31). Also cooperating with the FBI was the Well network of 11 000 users. Mitnick had violated Well's system in January 1995. Then, on 16 February 1995, Mitnick entered a system 5000 km away in California and wiped out all the accounting records of one of Well's subscribers. It was later learned that Mitnick had made a typing error and accidentally destroyed the accounting records. But Well's management did not know that; they decided that they could not survive any more of Mitnick and had to cut him off (and thereby warn him) or risk their entire business. They tried to contact the FBI, but the FBI was on its way to arrest Mitnick and had shut off their cellular phones for fear of alerting him. Soon thereafter, at 1.30 a.m., the search ended in Mitnick's flat, where he was arrested.

Mitnick faces thousands of dollars in fines and decades in prison without parole. The FBI is pushing for a harsh sentence to deter future computer criminals.

Source: US News & World Report, 118(8), Feb. 27, 1995, pp. 66–67, and International Herald Tribune, Feb. 18–19, 1995, p. 3.

Case 12.6: Miscellaneous cases using telecommunications

VIRUS OFFENDER CAUGHT AND PUNISHED
Robert Morris is the son of a well-respected computer expert in the US. Robert inserted a virus in a computer network that impacted negatively on over 6000 users. He was caught and tried. He was sentenced to three years' probation, 400 hours of community service and a fine of $10 000.

Source: Ralph M. Stair (1992). Principles of Infor-mation Systems. Boyd and Fraser, p. 628.

BUYING JEWELS IN RUSSIA
A consultant for an American bank used his knowledge of password codes and transferred money to Switzerland to buy jewels in Russia. His mistake was that he bragged about it to friends and was caught. While on bail he made another unauthorized transfer and was caught again. The jewels were sold and the bank was the first to make money out of a fraud committed on it.

BREAKING A SECURITY CODE

Ronald Rivest, Adi Shamir and Leonard Adleman are the authors of the RSA public-key encryption scheme. In 1977 they offered a $100 reward to anyone who could break their RSA-129 code (named after the length of the code: 129 digits, or 429 bits). In 1993, Arjen Lenstra, a scientist at the Bellcore research institute, took up the gauntlet and in May 1994 won the bet and the award.

The breaking of the code has not exposed telecommunications to every hacker or computer criminal, because the encryption has since been strengthened by using keys of 512 to 1024 bits. It is estimated that a 1024-bit RSA key would require 3 × 10^11 MIPS-years to crack.

Source: Byte, Sept. 15, 1995, p. 154.

Supplement 12.1: Popular viruses

It is estimated that there are some 600 viruses cruising networks. The six most often encountered during a one-month study done in the UK, with their percentage occurrences, are:

Forms          19%
Parity Boot    16%
NYB             9%
AntiEXE         7%
Sampo           5%
Jack Ripper     5%

Sources: Virus Bulletin, Abingdon, Oxfordshire,UK; and Computerworld, Oct. 9, 1995, p. 49.

Bibliography

Bates, R.J. (1995). Security across the LAN. Security Management, 39(1), 47–50.
Bellovin, S.M. and Cheswick, W.R. (1994). Network firewalls. IEEE Communications Magazine, 32(9), 50–52.
Bird, J. (1994). Hunting down the hackers. Management Today, July, 64–66.
Cadler, E. (1991). Security strategies for the 1990s: security as an enabling technology. Computer Security Journal, VII(2), 19–25.
Cobb, S. (1995). Internet firewalls. Byte, 20(10), 179–180.
Dehaven, J. (1993). Stealth virus attacks. Byte, 18(6), 137–142.
Fried, L. (1994). Information security and new technology. Information Systems Management, 12, 57–63.
Ganesan, R. and Sandhu, R. (1994). Introduction (to special section on securing cyberspace). Communications of the ACM, 37(11), 28–31.
Gassman, H.P. (1991). Computer networks, privacy protection, and security. International Journal of Computer Application in Technology, 4(4), 203–206.
Hafner, K. and Markoff, J. (1991). Cyberpunk: Outlaws and Hackers of the Computer Frontier. Simon & Schuster.
Landwehr, C.E., Bull, A.R., McDermott, J.P. and Choi, W.S. (1994). A taxonomy of computer program security flaws. ACM Computing Surveys, 26(3), 211–254.
Molva, R., Samfat, D. and Tsudik, G. (1994). Authentication of mobile users. IEEE Network, 8(2), 26–35.
Murray, W.H. and Farrell, P. (1993). Toward a model of security for a network of computers. Computer Security Journal, IX(1), 1–12.
Nash, J.C. and Nash, M.M. (1992). Matching risk to cost in computer file back-up strategies. The Canadian Journal of Information Sciences, 17(2), 1–15.
O'Mahoney, D. (1994). Security considerations in a network management environment. IEEE Network, May/June, 12–17.
Parsons, T. and Hsu, A. (1997). Find the right firewall. 2(2), 61–74.
Peukett, H. (1991). Enhancing the security of network systems. Siemens Review, H & D Special, Spring, 19–22.
Salamone, S. (1993). Internetwork security: unsafe at any node? Data Communications, September, 61–66.
Sandhu, R.S. and Samarati, P. (1994). Access control: principles and practice. IEEE Communications Magazine, 32(9), 40–48.
Sanford, C.C. (1993). Computer viruses: symptoms, remedies, and preventive measures. Journal of Computer Information Systems, XXXIII(3), 67–72.
Sherizan, S. (1992). The globalization of computer crime and information security. Computer Security Journal, VII(2), 13–20.
Svigals, J. (1994). Smart cards: a security assessment. Computers and Security, 13(2), 107–114.
Symonds, I.M. (1994). Security in distributed and client/server systems: a management view. Computers and Security, 13(6), 473–480.
Wallich, P. (1994). Wire pirates. Scientific American, 270(3), 90–101.
Zajac, B.P., Jr. (1990). Computer viruses: can they be prevented? Computers and Security, 19(1), 25–31.

13

NETWORK MANAGEMENT

Network bottlenecks could be choking your company's power to operate efficiently, and so to grow, to export, to survive. Managing the network becomes a strategic issue for the whole company . . . As we begin connecting departments together, the network becomes a mission critical resource.
Chris Bidmead

The old world was characterized by the need to manage things. The new world is characterized by the need to manage complexity.
Stafford Beer

Introduction

In the 1980s, we witnessed the strong emergence of PCs (personal computers) as stand-alone computers on desktops. With trends towards decentralization and deregulation of the telecommunications industry in the US, innovation increased and prices dropped, especially the prices of PCs. As PCs became more robust and user friendly, the number of PCs on the desktops of most corporations increased. However, many corporations could no longer afford the desired peripherals, databases and computing power that they needed. They had to pool and share resources. Many of these resources were dispersed and had to be connected. The solution to the connectivity problem was networks: LANs (local area networks), MANs (metropolitan area networks) and WANs (wide area networks). In this context, a network is a set of nodes connected by links and communication facilities that have both physical and logical components. You need to manage and control networks where zipping 'across your network are thousands of packets of information. Information that's vital to your organization. Losing even some of that data can cost your company millions of dollars.' (Derfler, 1993: p. 277).

The increase in the number and use of PCs added to the complexity of networks. There were bottlenecks in message flow, concurrent access, defective devices, devices that flooded networks with junk signals and slowed systems performance, loose connections, overloaded components, and incorrectly connected components resulting in crashes of networks. To restore order to this chaos, there was a need for the management of networks to achieve most if not all of the following objectives:

• Increase productivity of end-users;
• Facilitate cooperative work between end-users;
• Function smoothly, continuously, efficiently and effectively without loss of integrity and security of the system;
• Be robust against errors and misuse;
• Detect fraudulent activity but verify and account for selected legitimate activities;
• Monitor the system to identify potential fault conditions and 'fix' faults in real-time with minimum loss of performance;
• Provide statistics (on the system and its components) necessary for planning and control of the system;
• Perform maintenance when needed and preventive maintenance to avoid or at least reduce breakdowns of the system;
• Plan hardware/software configurations for the growing needs of applications and information;
• Control devices remotely if necessary, as in cases of failures;
• Implement and enforce security zones around sensitive resources;
• Be able to integrate effectively within the corporate information system.

Achieving some of the above objectives is the function of the management of networks, also called network management. It is the subject of this chapter. In it, we shall examine the functions of network management identified by the ISO (International Standards Organization) as:

1. Fault/problem management;
2. Performance management;
3. Security management;
4. Configuration management;
5. Accounting management.

In addition, we will examine the software necessary for network management, the managing of the human element in network management, the development of networks and the acquisition of the resources needed. We will examine how our networks can be kept safe and running. We conclude with an overview of the process of network management and a look at future trends in networks and network management.

Management of networks

Fault/problem management

Perhaps the most important and certainly the most urgent function of network management is to detect and fix problems on a network. These problems could be a loss of performance, impaired transmission or a systems crash. They could immobilize an organization, and the problems must be quickly detected and corrected.

Fortunately, we have many aids and tools to help identify and trace a problem and to provide statistics for maintenance and planning. These tools include pollers, monitors and analysers. A traffic poller sends echo packets to a specific device to check if there is any fault in the transmission line; it is thus concerned with actively checking for faulty devices. In contrast to the poller, a traffic monitor passively 'listens in' while displaying histograms of traffic patterns. Then there is the analyser. One type of analyser is the packet analyser, which identifies the device that is clogging the network. Like the traffic monitor, the analyser is passive, but it provides more information about the packets. The analyser for a network, sometimes called the LAN analyser, provides still more information and for the entire network. It can search for duplicate network addresses, isolate failing nodes, identify the probable cause of failures, and collect clues on how the network can be improved.
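A traffic poller of the simplest kind can be sketched in a few lines. The example below (Python) assumes a Unix-like host whose ping command accepts the -c (count) and -W (timeout) options; the device addresses are illustrative.

```python
# A very small "traffic poller" sketch: send echo requests to a list of
# devices and report which ones fail to answer.

import subprocess

DEVICES = ["192.0.2.10", "192.0.2.11"]  # illustrative addresses

def poll(address, count=1, timeout=2):
    """Return True if the device answered the echo request."""
    result = subprocess.run(
        ["ping", "-c", str(count), "-W", str(timeout), address],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    for device in DEVICES:
        status = "reachable" if poll(device) else "NOT responding"
        print(f"{device}: {status}")
```

A monitor or analyser, by contrast, would watch traffic passively rather than inject echo packets of its own.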

There is a wide range of services offered by network analysers. At the low end, we have the software analyser, which will be adequate for a low density of traffic (around 50% of capacity), troubleshooting between two network nodes, one or two protocols, and limited functionality. For more complex systems, one needs a hardware/software platform handling high-density traffic, which may involve around 140 different protocols blazing across the network at any one time. Such analysers can perform remote monitoring and control, using a large number of predefined filters. A filter allows you to sift through traffic for a selected set of parameters such as addresses, frame types and even devices by manufacturer.

Analysers can be specialized, like the protocol analyser, which can trigger a specific action when a preset threshold is exceeded. Some analysers are portable and are dispatched to a trouble spot. This may not be adequate in a continuous-process production line, especially with a high value-added product, in which case the analyser is dedicated to a function or process and resides in a probe attached to the LAN, providing frequent or continuous information to central network management. If the analysers use a distributed client/server architecture, their tasks (of monitoring, filtering, analysing, offering graphic representations of data and alerting the operator to errors) can be distributed between the embedded analyser and the manager software at the centre.

In addition to the tools discussed above, there are other facilities for fault management, such as the trouble ticket. It records the time, date, place, operator, and alarm or action taken for each problem that occurred. Also recorded is the equipment involved and its vendor. Such information is not only useful in tracing the cause of the problem, but also helps maintenance and helps ensure that the problem does not occur again.
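A trouble ticket is essentially a structured record. The sketch below (Python) shows one possible shape for such a record; the field names and sample values are illustrative only.

```python
# Minimal sketch of a trouble-ticket record holding the fields mentioned above.

from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TroubleTicket:
    place: str
    operator: str
    alarm: str
    action_taken: str
    equipment: str
    vendor: str
    opened_at: datetime = field(default_factory=datetime.now)  # time and date

ticket = TroubleTicket(
    place="Building B wiring closet",
    operator="night-shift operator",
    alarm="link down on segment 3",
    action_taken="traffic rerouted, vendor notified",
    equipment="router R-17",
    vendor="ExampleNet Ltd",
)
print(ticket)
```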

In addition to the trouble ticket and analysing tools, there are other pieces of information necessary for fault management. These include aggregated and disaggregated statistics on error counts, traffic densities and network performance, as well as the notices, alerts and alarms generated. This information should be available in an easy-to-digest format, in addition to a graphical map representation of the network to locate problem areas.

The network administration staff, in testing the system and reactivating it, may need control over the network, its links and devices so that they can be initiated, closed down or restarted remotely from the central console. They should also be able to divert traffic from failing lines and devices without the end-user being inconvenienced or even knowing about it.

The problem of detecting faults can becomecomplex because many network systems (an orga-nization may have more than one) not onlyhave many different devices, but many of thesedevices may be from different vendors. Eachhardware device is associated with specific net-work software and protocols, as well as spe-cialized techniques to interpret the alarms andreports generated before other appropriate diag-nostic and tests are performed. However, throughthe use of loop-backs and tests, some faults canbe isolated all the way down to a particular seg-ment of a network, modem or other device on thenetwork. Also, invalid configurations and front-panel tampering on various types of equipmentcan be detected, often remotely from the centralnetwork management console.

Performance management

The management of performance involves measuring, monitoring and metering the variables relevant to operations, maintenance and the planning of a network. The variables include the availability of the network and its components (the mirror image of system downtime); response time (the elapsed time from query to output); utilization of various devices and resources, both hardware (like the CPU, disks and other memory devices, bridges, gates, routers, communication cards, buffers, repeaters, modems, multiplexers, switches, clients and servers) and software (like the NOS, software utilities and application programs); as well as traffic density by segment of the network. Such statistics, when properly analysed, can identify bottlenecks (current and potential), other potential problems, and areas that need expansion or contraction, and also help predict future trends for the planning and budgeting of future systems.

Metering and monitoring is not just ‘BigBrother’ snooping but provides data and statisticsnecessary for operations. For example, statisticson the use of licensed software can be very helpfulin deciding how many copies of each licensed soft-ware to acquire rather than support clients withdedicated copies that may seldom or never be used.

Monitoring can be selective and activated by set thresholds. For example, the monitoring of a file system may start when it is 80% full, or for a printer when the print queue is longer than five or ten minutes' wait. Likewise, licensed software can be monitored when, for example, 85% of its privilege has been exceeded.
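Threshold-triggered monitoring of this kind reduces to comparing readings against limits. In the sketch below (Python) the readings are hard-coded purely for illustration; in practice they would be collected from the file system, the print spooler and the licence server.

```python
# Sketch of threshold-triggered monitoring. Metric names, limits and
# readings are invented for the example.

THRESHOLDS = {
    "file_system_percent_full": 80,
    "print_queue_wait_minutes": 10,
    "licence_percent_used": 85,
}

current_readings = {
    "file_system_percent_full": 83,
    "print_queue_wait_minutes": 4,
    "licence_percent_used": 90,
}

for metric, limit in THRESHOLDS.items():
    value = current_readings[metric]
    if value >= limit:
        print(f"ALERT: {metric} = {value} (threshold {limit})")
```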

Security management

We discussed security management in the previous chapter, but that was largely from the viewpoint of planning and design. During operations, security management may require enhancements to existing features and the addition of new features. Hence there is an overlap in our discussion, but that overlap can serve as a review.

Many of the problems of security (and their solutions) are common to IT, as are the cases of controlling access to a database, an applications program, or even a device and peripheral. The principles of security management were discussed in the previous chapter. There are, however, special problems of security that arise in telecommunications and networking because of the exposure of messages during transmission. Messages are vulnerable to wiretapping, the electromagnetic interception of messages on communication lines. There may also be eavesdropping (passive listening) or active wiretapping involving the alteration of data, such as piggybacking (the selective interception, modulation, or substitution of messages). Another type of infiltration is reading between the lines. An illicit user taps the computer when a bona fide end-user is connected to the system but, while 'thinking', leaves the computer idle and unattended. This is a tempting occasion for penetration of the system, which, along with other unauthorized uses of computer time, can be quite costly to the organization.

Security management involves the definitionof the jurisdiction of network zones, which areavailable to everyone or only to users in selectedzones. This capability, for example, prevents theusers in one department from using an opticalscanner and printer in another department.

Another approach to security management in networks is to build a 'firewall', referred to briefly in the previous chapter. The term is taken from fire-fighting, where it refers to the technique of preventing a fire from going beyond a line of defence. We do something similar in business when we need to restrict access to certain buildings. One approach would be to demand identification and the logging of all those visiting the building. A more secure approach would be to page the person being visited, who would then escort the visitor in the building at all times. A still more secure approach would be to not permit the visitor to enter, but to leave a message (or package) at the front desk. In networking, all these approaches to building firewalls are used, depending on the preference of the network manager. Access is controlled especially at the connections between networks, such as between a proprietary network and a public network.
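In its simplest form, a packet-filtering firewall is a rule table consulted for every connection attempt. The toy example below (Python) uses invented addresses, ports and rules; production firewalls are, of course, far more elaborate.

```python
# A toy packet-filtering rule table: traffic is allowed only if an explicit
# rule permits it; everything else falls through to a default deny.

RULES = [
    # (source network prefix, destination port, allow?)
    ("10.1.", 80,   True),   # internal users may reach the web server
    ("10.1.", 25,   True),   # internal users may reach the mail gateway
    ("",      None, False),  # default rule: deny everything else
]

def permitted(source_address, destination_port):
    for prefix, port, allow in RULES:
        if source_address.startswith(prefix) and (port is None or port == destination_port):
            return allow
    return False

print(permitted("10.1.4.7", 80))   # True  - allowed by the first rule
print(permitted("192.0.2.9", 80))  # False - caught by the default deny
```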

Security management is also responsible forbacking up the system in the event of a failureor a disaster on the network and to do so quicklyand without loss of valuable data/knowledge.

Another problem is with viruses, mentionedbriefly earlier. This is not an issue new to ITand arises even with stand-alone systems that donot use networks. However, if the virus gets on anetwork the contamination can be great becausethe propagation can be fast and spread widely.A lot of network software is able to scan forknown viruses. But, the population of ‘unknown’viruses is increasing (six were added each day in1991) and it seems that a network system is nevercompletely secure. Network management needsto be increasingly and continuously vigilant.

Empirical data shows that a virus often infectsa corporate database through a corporate end-user swapping floppies, especially an end-userwho has a computer at home and swaps floppiesfrom home to the office. Other common viruseson networks are the boot sector virus, the file-infecting virus and the memory resident virus.Fortunately, there are numerous virus detectingcommunications software products with differenttypes of scanners (including signature scanners),memory resident activity monitors and immuniz-ers. In addition, there are many protective pro-cedures that should be adopted against viruses.These include: not putting executable files on theserver in directories where end-users can changethem, restricting dial-in access, and not leavingthe computer on and unattended all night or dur-ing the lunch break.

Another responsibility of security managementis to record all information that may be useful inthe future. This means that careful logs must bekept of all failures, intrusions, and unauthorizedaccess, and qualifying each with identifiers thatwill trace each problem and help avoid them, orat least minimize risks of security infringementin the future.

In security management, it is important not to overspecify or underspecify. If security is lax, then there are violations by intruders, and there may be some who are waiting for the opportunity. If security is too tight, then it is not only unnecessarily expensive to implement, but also has a high psychological cost of alienating the end-user, who may then either bypass the system, ignore the systems procedures (such as being careless with passwords), or just not use the systems as much or not use them at all.

Configuration management

All network systems need to be initialized andthen reconfigured. Reconfiguring may mean theaddition, withdrawal or modification of an exist-ing configuration. In practice, to configure adevice means assigning it a zone number and net-work number for routing purposes. These num-bers are then used in collecting statistics on theoperations of the devices.

Reconfiguring a system is also necessarywhen one needs to reroute networking traffic,either through different links and connections orthrough different permutations or combinationsof equipment devices.

Another function of configuration management could be the reconfiguration of the application programs and their updates. In the early days of computing, a network manager went around each office cubicle with a computer, installing new software packages or their updates. Fortunately, there is now network software that does the distribution automatically across all legitimate workstations and clients. This saves a lot of time and energy. Also, the programs loaded onto each client node automatically draw updates from the server.

Reconfiguration can also include the reroutingor bypassing of traffic from overloaded or failed(or failing) links or congested devices to othersthat have slack. When the quality improves, theoriginal configuration is restored. This automaticrerouting is very important for ‘mission critical’applications where unnecessary delays and out-ages cannot be tolerated.

Routers needed for reconfiguration are proto-col-specific, which means that more than one rou-ter is often required. New technology, however, isdelivering multiprotocol routers which are capa-ble of routing several protocols simultaneously.

Accounting management

This is not accounting in the financial and auditing sense but an accounting of network assets, both hardware and software. The accounting of each asset should be by physical location as well as by ownership so that each asset can be located quickly when needed (as in the case of a system crash) and for purposes of planning and operating the system.

Accounting of software is also important when the software is under licence. Such systems will keep track of use under the licence so that violations are avoided or at least kept to a minimum. This tracing of licences is important for the purpose of paying for the licence as well as for ensuring that no legal requirements are violated. It is therefore sometimes necessary to enforce the limitations of use, often by issuing warnings when thresholds are repeatedly violated.
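Licence tracking of this kind can be sketched as a simple concurrent-use meter that blocks (or could queue) users once the licensed number of copies is in use. The product name, seat count and behaviour below are invented for the example.

```python
# Sketch of simple licence metering: refuse a check-out once the number of
# concurrent users reaches the licensed number of seats.

class LicenceMeter:
    def __init__(self, product, seats):
        self.product = product
        self.seats = seats
        self.in_use = set()

    def check_out(self, user):
        if len(self.in_use) >= self.seats:
            print(f"{self.product}: licence limit reached, {user} must wait")
            return False
        self.in_use.add(user)
        return True

    def check_in(self, user):
        self.in_use.discard(user)

meter = LicenceMeter("WordProcessor", seats=2)
for user in ["ann", "bob", "carol"]:
    meter.check_out(user)   # carol is refused until a seat is released
meter.check_in("ann")
meter.check_out("carol")    # now succeeds
```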

Accounting of assets is also important forfinancial accounting where the use of resourcesmay be charged back to the user for paymenttoward maintenance and upgrading.

Software for network management

We have discussed the many functions of network management. But how are they implemented? In the early days, much was done manually. Nowadays, almost all such work is done by network software. Selected features of network management software are displayed in Table 13.1. Some of these features are available in stand-alone packages. Some features are 'bundled' in other packages. And some suite packages have most of the features.

These features match most of the features inthe list of functions discussed above (though notin the same order) as being part of the functionsof network management. However, the functionsneeded by any one network will be unique. It istherefore up to the network management person-nel to select one or more software packages or toacquire a suite of integrated programs which maybe more expensive but more comprehensive.

Table 13.1 Selected features of network management software

Diagnostic analysers
Server monitoring and reporting
    CPU utilization
    memory utilization
    log-on and security
    provides configuration information
Network traffic monitoring
Application metering
    enforces blocking when licence is exceeded
    queues users when licence is exceeded
Application distribution
Notification and alerting
Virus protection
    server
    client
Software distribution
Hardware and software inventory
    allows setup of licence limits
Reports
    predefined
    user-defined
    filtered
Automation and scheduling of tasks
    client automation
    server automation
    remote control of client PCs
Software support
Integration
User interface
Database
Free technical support

Such a choice is not unique to network management. Every user of a PC has to face the decision of buying the best software packages for each function, like word processing, spreadsheets, database management and even networking, or, alternatively, buying an integrated packaged suite that has all the desired features plus much more. The suite choice is expensive and not optimal for each function, but it does avoid some of the problems of making the different packages compatible with the operating system available and with each other. Furthermore, the format for each package is consistent and one does not have to learn a different set of commands and icons for each package. This is the equivalent problem faced by network managers: acquire a suite that does all or most of what you want and pay more, but do not have the hassle of packages being incompatible and having to learn more instructions and formats.

User management

Besides hardware and software, there is another important element in every network system: the end-user. Some of them are professionals with considerable knowledge of computers and telecommunications. There are, however, many who have little or no knowledge of computers or of telecommunications. They have to be trained and educated. For some, the use of networks can be a cultural shock. Take for instance an office with a large volume of interdepartmental mail that was traditionally delivered manually by a messenger. Now this is to be replaced by e-mail. The advantages do not appear instantly, nor are they obvious, but the immediate problem is having to learn about telecommunications and networks and having to change one's lifestyle at the work place.

End-users need to be trained not only inaccessing and navigating networks, but also onthe policies and procedures of handling datafiles; on trouble shooting and recovering froma crashed system; on the nature of networkmaintenance and network security; on protocols(rules and procedures that must be followedin transmission); and on the functions andlimitations of routers (facility that selects andprovides a path for a message).

End-users should also be trained on computerprograms for network management. For example,there is a profiler program that collectsinformation provided by the end-user. If notcompleted correctly by the end-user, thenprograms and updates due to the end-user willnot be forthcoming.

The timing of the training is also important.For example, training on the applications spe-cific to a LAN should be done before the LANis installed or else the ‘shock’ resulting from anunfamiliarity with the new application can bedemoralizing, causing a drop in productivity.

Introducing personnel to networking involvessome of the same problems of introducing laypersons to computers. The change has to be man-aged with great care, some psychology and a lotof patience. It would help if the network system,especially the interfaces, were end-user friendly.What does this mean? An example would be asystem with a fast response time, tolerance ofcommon human errors, and one that keeps theend-user fully informed. For example, if thereis a system crash, breakdown or slow-down inthe network, the system should acknowledge theproblem and estimate the time it would take tocome up again. Many end-users are tolerant andunderstanding of problems provided they knowwhat is happening and what to expect.

Development of networks

Thus far we have assumed the existence of a network which must be reconfigured, made secure, accounted for and maintained. But how about an organization without a network? Or how about one that has a network but wants another LAN, or considers the existing LANs to be inadequate and needing to be replaced? Then we need a project which must be developed. This requires a development methodology. The methodology appropriate for a complex project like a network is the SDLC, the Systems Development Life Cycle. The activities of the SDLC for a network are similar to other projects in IT: feasibility study, user requirements specification, design, implementation and making the system ready for operations. During operations, there are evaluations. If a modification is required and it is minor, then the system is maintained. If the modification required is deemed to be major, then the system is redeveloped. These activities and their iteration and recycling are shown in Figure 13.1. There are some differences in content from a typical IT project and these differences will now be examined.

The first decision to be made is what type of network is needed. The choice is between a LAN, a proprietary network, ISDN, a public packet-switching network and a public switched telephone network. There are also choices within these types of network, like the type of LAN that is best for the needs at hand. Also, the organization of the network (should it be centralized or a client/server system?) should be considered.

Another important consideration in the designof a network is to decide whether the entire sys-tem is to be supported by one vendor or by manyvendors and whether to make it an open system.The advantage of having one vendor is mostlyin the short term since this will avoid the has-sle when something goes wrong and one vendorblames another. With only one vendor the sys-tem is well integrated and easy to install, trainon, maintain and manage. However, not beingtied to one vendor has the great advantage ofbeing able to select the best (or cheapest) com-ponent and ‘plug and play’. This is the advan-tage of the open architecture (where architectureis the style of construction of structure includ-ing layers in the construction) and open systems,where there is interoperability between compo-nents available from different vendors (preferablyinternationally). In contrast, a closed architecture

145

Page 159: Telecommunications and Networks

Telecommunications and networks

Organization of personnel, education, maintenance and administration

[Figure 13.1 Development of a network: a flowchart running from feasibility study, user requirement specifications, design and implementation into operations; evaluations during operations lead either to maintenance (for a minor modification) or back to redevelopment (for a major modification).]

is proprietary technology developed by firms in the telecommunications and computing industry, like SNA by IBM, DECnet by Digital Equipment Corporation and XNS by Xerox, and those developed by governments, like the DDN in the US.

The important design decision of selecting an architecture often depended on where you were and what equipment you had. If you were in the US and had IBM equipment, the chances were that you would have selected SNA. If you had a UNIX operating system, then you would have selected TCP/IP. However, if you were in Europe, you would have preferred OSI. The OSI model was developed by the ISO (International Standards Organization) and was strongly supported by many countries in Europe. Europeans also supported standards developed by the CCITT (Consultative Committee on International Telegraphy and Telephony) and ECMA (European Computer Manufacturers Association). But all over the world, including the US, there was strong pressure to adopt international models such as the OSI model. The OSI model would add to flexibility, reduce response times, reduce costs and add to the availability of standard network products. Network managers had to decide what products to add so that these would cause the least disruption when a standard was ultimately adopted.

Another important set of decisions concerns software. At this design stage, we are not concerned with the software required for the management of networks, which was discussed earlier, but with the software required for operating a network. Such software is also concerned with protocols such as SNMP and CMIP. SNMP is the Simple Network Management Protocol, which defines systems/network management standards primarily for TCP/IP-based networks. TCP/IP (Transmission Control Protocol/Internet Protocol) is a system designed for the US Department of Defense and is a commonly used communications standard. In competition, though, is CMIP (Common Management Information Protocol), currently the only internationally ratified network management protocol standard for OSI. CMIP was intended to support business at different levels (local, metropolitan and wide area networks, i.e. the LAN, MAN and WAN). CMIP, however, has not yet caught on. It is likely, though, that in the late 1990s the CMIP and SNMP protocols will converge. Meanwhile, there are several network management platforms, such as OpenView, that are based on standards including SNMP and CMIP. OpenView is actually a family of over 130 solutions from over 80 vendors.
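
To give a feel for what SNMP-based polling involves in practice, the minimal sketch below queries a managed device for two standard MIB-II objects (sysDescr and sysUpTime) by calling the Net-SNMP snmpget command-line tool. The host address and community string are assumptions made for the illustration; a platform such as OpenView would wrap such polling in a far richer framework.

    # Minimal SNMP polling sketch. Assumes the Net-SNMP 'snmpget' utility is
    # installed and that the device at HOST accepts SNMPv2c read requests.
    import subprocess

    HOST = "192.0.2.10"      # hypothetical managed device
    COMMUNITY = "public"     # hypothetical read-only community string

    def snmp_get(oid: str) -> str:
        """Return the value of one MIB object from the managed device."""
        result = subprocess.run(
            ["snmpget", "-v2c", "-c", COMMUNITY, "-Ovq", HOST, oid],
            capture_output=True, text=True, check=True,
        )
        return result.stdout.strip()

    if __name__ == "__main__":
        # Two standard MIB-II objects: device description and uptime.
        print("sysDescr :", snmp_get("1.3.6.1.2.1.1.1.0"))
        print("sysUpTime:", snmp_get("1.3.6.1.2.1.1.3.0"))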

Other important design decisions include the selection of bridges and gateways, security features, back-up and recovery procedures, transmission mode, links to carriers, switches, and even surge protection.

The implementation of a network will most likely involve acquiring some network software (the 'buy' decision), with the rest developed in-house (the 'make' decision). Most of the hardware needed will be acquired and only seldom made in-house. In all cases, there are well-tried and true procedures of resource acquisition in IT that are applicable to network development.

146

Page 160: Telecommunications and Networks

Network management

During operations, there are evaluations. Here the decision needs to be made as to whether a modification needed is a minor one for maintenance, or a major one for redevelopment. This is an important and difficult decision. Such maintenance decisions are not unfamiliar to IT personnel, but the terms defining a minor or major modification are different. Consider, for example, three requests for modification of a network:

1. My LAN connection worked fine for a year and now is down. Can you please help?

2. I would like a LAN connection for my assistant, who is now located in Room 39 of Building 131.

3. The current network is inadequate for my needs. I think that I need a token ring network for my department. How soon can I have it?

Of the above three requests, the first is clearly a problem of maintenance and can most likely be covered by the maintenance budget and existing maintenance personnel. The second is minor if Building 131 is wired for a LAN and is connected to the corporate LAN. Otherwise, it is a major problem of modification and may well require redevelopment; this request needs further investigation and is placed under advisement. The third request requires a major modification entailing new project development and new funding.

The problem facing network management is to distinguish as clearly and quickly as possible which is a minor and which is a major modification. What network management needs, then, is a set of guidelines for policy and procedures relating to the maintenance of networks, so that the maintenance process is clearly stated and known to all end-users.
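
Part of such guidelines can be written down as explicit rules. The sketch below is a hypothetical illustration, not the book's procedure: it triages modification requests of the kind listed above into maintenance, redevelopment or further investigation, using invented criteria.

    # Hypothetical triage rules for network modification requests; the criteria
    # and their order are illustrative assumptions only.
    from dataclasses import dataclass

    @dataclass
    class Request:
        description: str
        existing_connection_down: bool   # an outage on a connection that used to work
        location_already_wired: bool     # is the building cabled and linked to the LAN?
        new_network_required: bool       # e.g. a new token ring for a department

    def classify(req: Request) -> str:
        if req.existing_connection_down:
            return "maintenance"          # minor: maintenance budget and personnel
        if req.new_network_required:
            return "redevelopment"        # major: new project development and funding
        if req.location_already_wired:
            return "maintenance"          # minor: just another connection
        return "under advisement"         # needs further investigation first

    requests = [
        Request("LAN connection down after a year", True, True, False),
        Request("New connection in Room 39, Building 131", False, False, False),
        Request("Token ring network for the department", False, True, True),
    ]
    for r in requests:
        print(f"{r.description}: {classify(r)}")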

Resources for network management

From the description of the nature and functions of network management it becomes clear that it is an important responsibility. It is also a difficult one, especially if there are hundreds if not thousands of nodes connected to the network, and when the operations, and indeed the survival, of the enterprise may depend on its network(s). The resources needed are hardware, software and personnel. These are discussed below.

Hardware and software

Hardware and software are needed to provide the information necessary for network management. What is the information that is needed? Much of it is needed for operations at the console in an on-line, real-time mode. The console operator should be able to see the network displayed (preferably in colour) at any level of aggregation, identifying the current and potential bottlenecks, which may be flashing. The status of each crucial physical and logical component can be traced (each physical and logical unit may well be identified by its parameters such as name, state, physical location, etc.). All monitoring information, and even statistics, should be available, either menu driven or command driven. The console operator should be able to get answers to questions like:

• What are the network loads on different sectors and at different times?
• What types of errors are occurring and where?
• What types of conflicts (like concurrency) are occurring and where?
• What channels are loaded or near loaded, and with what load factors?
• What are the waiting-line queues, and to what extent are they exceeding or approaching the set thresholds?
• Where is security weak or possibly being violated?
• What printer and print driver is location 119 using? (Are they compatible?)
• What is the printer and print-queue status of B562?

Answers to some of the above questions, and more, may come without a query. That depends on the network management systems program being run. Some systems may even have a simulation program that would give the answers necessary for smoothing the load and increasing the efficiency of the system. What-if questions asked may include:

• What if I added a workstation (or ports) at point 119? How would this affect the service time and the length of the queue?
• What if I added a router on segment 5?
• How will the performance be affected by the addition of specific hardware or software?
• What is the maximum distance travelled by end-users if I were to add a workstation at point 146? (The answer to this question will require

147

Page 161: Telecommunications and Networks

Telecommunications and networks

a database of floor plans, site plans, wiring closets, and equipment inventory by location.)

The answers to some of the simulation questions may be long-term solutions that are important to planning, but such capability is often useful in problem-solving and decision-making for a network.
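
To make the what-if idea concrete, the sketch below uses the standard M/M/1 queueing formulas to estimate how waiting time and queue length at a point of the network would change if capacity were added. The arrival rate, the service rate and the simplifying assumption that an extra workstation doubles the service rate are all invented for the illustration.

    # What-if analysis with M/M/1 queueing formulas (a deliberately simple model,
    # not the book's simulation program). The rates below are illustrative.
    def mm1(arrival_rate: float, service_rate: float) -> dict:
        rho = arrival_rate / service_rate          # utilization
        if rho >= 1:
            raise ValueError("unstable system: utilization >= 1")
        lq = rho ** 2 / (1 - rho)                  # average number waiting in the queue
        wq = lq / arrival_rate                     # average waiting time
        return {"utilization": rho, "queue_length": lq, "wait_time": wq}

    lam = 8.0    # requests per minute arriving at point 119 (assumed)
    mu = 10.0    # requests per minute one workstation can serve (assumed)

    print("before:", mm1(lam, mu))
    print("after :", mm1(lam, 2 * mu))   # crude what-if: an added workstation doubles capacity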

Some consoles have the ability to send and receive messages, with the receiving being prioritized. Some of these incoming messages are recorded and appear as reports for later analysis. Some systems have alarm filtering. Some perform the functions of monitoring and reconfiguration automatically. Some systems can even predict network faults based on historical data. Unfortunately, however, there is no system that is fully integrated and provides end-to-end management, partly because the different components of the system are manufactured by a proliferation of vendors. As a result, a network management centre for a large and complex system would have multiple consoles, each manned by a trained operator and performing one or more functions.
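
Predicting faults from historical data can be as simple as fitting a trend line to recent utilization samples and estimating when an alarm threshold will be crossed, as in the sketch below; the sample data and the 90 per cent threshold are assumptions for illustration only.

    # Least-squares trend on historical link utilization, extrapolated to estimate
    # when an alarm threshold will be crossed. Data and threshold are illustrative.
    import numpy as np

    hours = np.arange(10)
    utilization = np.array([52, 55, 57, 60, 63, 64, 68, 70, 73, 75], dtype=float)  # %

    slope, intercept = np.polyfit(hours, utilization, 1)
    threshold = 90.0

    if slope > 0:
        current = slope * hours[-1] + intercept
        print(f"rising at {slope:.1f}% per hour; "
              f"threshold in about {(threshold - current) / slope:.1f} hours")
    else:
        print("utilization is not rising; no crossing predicted")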

Many of these problems of network management will come close to resolution as we approach a truly open system (some systems use a buffer layer to approach an open standard architecture). The open standard will not only provide flexibility in operations, but will decrease delays, improve response times and reduce costs.

One desired feature on the wish list of many network managers is that network management systems programs be intelligent (by using AI techniques for making inferences). We already have some intelligent components, like the intelligent router, which could route a message between Paris and Frankfurt through New York. What we do not have is an intelligent hub or intelligent network management systems.

Personnel

To perform the functions needed in network management and to manage its resources, there is a need for a staff headed by a network manager, also called a network administrator. The staff can be organized either centrally or in a distributed fashion. The principles involved in the selection of a central or distributed organizational configuration are no different from those relevant to a distributed IT organization. The personnel involved are of course different and must be grouped for functional efficiency and effectiveness. The organization will also depend on the size and complexity of the network organization, the organizational culture of the enterprise, and the personalities involved both in IT and in corporate management. In the case of a small organization, there may be just one person (carrying a screwdriver) responsible for the network; in complex networks the number of personnel involved may be in the tens.

The functions of network management have been mentioned as being reconfiguration, monitoring and maintenance. These functions could result in additions and subtractions of nodes on the network. This can be a difficult political decision in the case of a zero-sum game, i.e. where additions must be balanced by subtractions. Which one goes and which one stays? A somewhat similar problem arises in assigning software and security levels. This may not be a zero-sum game, but withdrawing or not assigning high security levels or new software can cause ill feelings. Security levels can be a status symbol, and if a higher level of security (or new software) is given to a subordinate on a need basis, but not to the supervisor, serious problems of unhappiness and loss of face can result.

Such decisions, along with the other decisions needed for planning, developing and operating a network, fall on the network administrator, who, in addition to being a technician, must be comfortable with software (especially operating systems software), have a working knowledge of hardware, be knowledgeable about accounting and budgeting practices, and also be a politician.

To perform the functions of network management, the staff have to be organized. We discussed organization in an earlier chapter, but the emphasis then was on starting a teleprocessing department. However, as the volume of processing increases and as it becomes more diverse and complex, there is a need to adapt the organization to changing conditions. One such configuration, shown in Figure 13.2, is for the network of a medium-sized company. In it, each person may represent one function or multiple functions. For example, the librarian or the security officer may be part-time jobs assigned to the person most inclined and trained for the task, or the assignments may be made in order to 'smooth' and distribute the load. Also, one person may do other jobs as the need arises. For example, a software or hardware engineer assigned to maintenance

148

Page 162: Telecommunications and Networks

Network management

[Figure 13.2 Organization for network planning: an organization chart headed by the network administrator, with branches for planning and budgeting (design and implementation, acquisition; staffed by planners, economists, technology watchers and a financial specialist), operations and maintenance (hardware engineers, software engineers, network specialists and hardware specialists) and administration (personnel, security, standards, education and change management; staffed by administrators and specialists).]

may well help out in the installation and implementation of a system and may even contribute to a feasibility study. This rotation of work is good both for the personnel involved (providing variety of work) and for the organization (providing back-up and the knowledge necessary for integration).

Network management must not only be concerned with the operations of networks, but also with the planning of networks. This may include facilities planning. For example, if the organization is to build a new building, should it be wired for networks? If networking is needed after the building is built, then wiring will be much more expensive. Likewise, putting cable conduits underground in order to connect buildings for networking is much cheaper and easier if planned before the buildings are built. Sometimes such planning is not possible, but when possible, networking should be planned, because planning ahead is not only cheaper but much less disruptive for the organization.

Training is another important function of network administration. Typically, this means the training of the end-user and education in telecommunications and networking for corporate management and other corporate personnel. This is important, yet the principles involved apply to other IT functions such as those discussed earlier.

Network personnel, including the network manager and network planners, must be cross-trained on the central network console. Also important is the training and upgrading of the network management staff, including the network administrator. Their field is ever changing, with changes that are technological but have organizational and social implications. Network management must keep well abreast of these developments and of new networking products as they occur, and in some cases before they occur. Some of the changes in telecommunications and network technology will now be discussed.

Network management personnel have a virtual job that changes all the time, not only because the enterprise environment changes, but also because their technology changes. This may be said of many an IT professional, but more so of telecommunications personnel, because their technology is ever changing and changing very fast. Telecommunications is one of the sectors of computer technology that is changing the fastest, and this can have profound implications, not just for the enterprise, but for the nation and indeed all of us.

Networking is not just the networking of telecommunications and computer equipment; it is the networking of people. It not only brings people together through e-mail and bulletin boards, but it can facilitate group work and have an impact on decision-making and problem-solving.

Successful networking will require changes: changes required by corporate management and end-users as well as those required by maintenance (preventive or otherwise). Also, there is a lot of swapping of data, for example from the client to the server and vice versa. In all cases, this should be transparent to the end-user. End-users often get nervous with obvious changes and worry about how they will be affected. The satisfied

149

Page 163: Telecommunications and Networks

Telecommunications and networks

end-users are often the ones that are not required to make changes they did not initiate and do not want. Also, they do not always want to know about the bits and bytes of the system. When they need to learn about the system, they should be given what they need when they need it, not too long before it is needed and certainly not too late.

Training and education should be reactive, but also proactive. Education and training, as well as communication between the end-users and the technical personnel, are important strategies for reducing the stress of systems change. The relationship between the provider of communication services and the customers of those services must be reactive as well as proactive. There is often tension between the provider and the customer because the customer often feels that the provider is not giving all that is possible and that something is being held back. The customer and end-user may also feel that the provider can potentially reach further into their domain than they would like; the provider has more control over the customer's equipment than the customer would prefer. The customer may also feel that their equipment is not restored fast enough, and long after the system's performance has degraded.

There is sometimes a tension between end-users and corporate management on one side and network personnel on the other side. Why? Partly because telecommunications and networking personnel have great power, since possession of information leads to power. These personnel are what Boguslaw calls the 'computer elite'. They have complete access to the corporate and enterprise information that enters the network. This situation arises with other IT personnel, especially database personnel. But telecommunications personnel have great power of another kind. They can often start and stop any client and any server. They can 'listen' in to any message that comes in or goes out. These personnel may not use their power and commit computer crime, but they still have access to enterprise and corporate information that they may misuse to affect decision-making and problem-solving by corporate management. One way to overcome this problem is to build trust among the end-users and corporate management so that such misuse could never happen. Some managements find it prudent to bond all telecommunications personnel.

Selecting the best equipment configuration for networking is another important responsibility of network management. A state-of-the-art configuration might consist of a multimedia open system with clients and servers using PCs, workstations and smart terminals, as well as cellular and wire-less devices, connected to LANs, MANs and WANs. The other extreme could be a stand-alone system around a mainframe with a telephone and PBX connection. Alternatively, there may be in-between configurations based on

[Figure 13.3 Evolution of networks: telecommunication configurations and their mainstay CPU hardware plotted against time, evolving from stand-alone systems (telephone, PBX, mainframe), through PCs connected within the organization (data/telecommunications networks and fibre optics linking minis, mainframes, PCs, smart terminals and workstations), to interconnected organizations (LAN/MAN/WAN/Internet, client-server systems, open systems, multimedia and cellular).]

150

Page 164: Telecommunications and Networks

Network management

Table 13.2 Summary of functions of network management

Fault/problem management
  Troubleshoot, diagnose and predict potential faults/outages
  Identify, classify, analyse and report faults/outages
  Initiate selected automatic 'fixes' and restorals

Performance management
  Measure/meter/monitor:
    network availability
    response times
    downtime
    hardware utilization
    software utilization
    traffic
  Predict trends for planning, budgeting and maintenance

Security management
  Control of access to the network
  Define network security zones
  Detect and protect against viruses
  Erect and maintain 'firewalls'
  Backup and restore
  Report unauthorized use and misuse

Configuration management
  Initialize network entities
  Reconfigure systems (add/subtract/modify)
  Rerouting of traffic through links and devices

Account management
  Manage assets of inventory:
    hardware
    software (application programs)
  Track and enforce licences

User management
  Management of change
  Training
  Make the system end-user friendly

mainframes and minis using fibre optics and a data telecommunications network. In many large organizations, there is not one but more than one configuration, each serving different units of the organization. Each configuration has its own growth curve, depending not only on the network personnel but also on the end-users. By the time the top part of one curve is reached, another more advanced configuration curve is found that overlaps it and reaches higher levels of performance and service. Knowing which curve you are on, and when to switch to another growth curve or even leap-frog a curve, is an important decision that network managers must make.

Figure 13.3 shows the evolution in network configuration.

Summary and conclusions

A summary of the functions to be performed by network management is listed in Table 13.2.

Case 13.1: Networking in the British Parliament

Networking has made a wide range of information services instantly accessible to Members of the British Parliament. The network has an FDDI (Fibre Distributed Data Interface) backbone and spans three buildings, including the Palace of Westminster. The standard services include e-mail, word processing, bulletin boards, and access to a CD-ROM library containing archives of national newspapers and Hansard. The system already has over 400 users but is designed to serve up to 4000 end-users.

The plan for the ultimate network is still being implemented. The going is slow because cabling through the twelve-inch thick walls of the venerable building is a slow process. Also, any cabling should allow for the future. However, some of the future services planned face not technological problems but purely political and social ones. For example, take the proposal to pipe video of all the happenings and debates in Parliament into the offices of Members of Parliament. This is technologically possible. But is it politically desirable? If implemented, would Members of Parliament stay in their offices rather than go to the floor? Here is a good example of where network management is not just a technological matter but a political one also.

Source: George Black. All-party networking, Which Computer?.

Case 13.2: Analyser at Honda auto plant

The Honda automobile production plant in the US needs analysers for maintaining its large

151

Page 165: Telecommunications and Networks

Telecommunications and networks

network. Since it has a high value-added product, it cannot afford delays in its monitoring efforts or in detecting failures in its token ring network.

Management wanted probes connected to its critical LANs all the time, 'so we can get immediate alerts of problems and already have the diagnostic equipment in place.' The site has a LANvista system that comprises a master console with four remote token ring 'slave' probes.

Jeffers, at Honda, adds: 'Using the master console, we can toggle a view of any LAN, or all the LANs, to view any network activity, or perform diagnoses . . . Because probes are continually monitoring, we may already have data on problems, which is important if something has crashed to the point where we can't recreate the event'.

Because of the client-server architecture used, network management 'can make changes to the settings on the remote units, such as changing thresholds and trap requests, from the master console, [and] can monitor and control multiple screens from a single console.'

Source: Daniel P. Dern (1992). 'Troubleshooting remote LANs'. Datamation, 38(4), 53–56.

Supplement 13.1: Prices of LAN management software in 1995

Problem management (including the provision of monthly reports): $17K
Remote monitoring: $15K
Administration (manages user IDs/passwords and resources): $5K
Backup and recovery: $5K
Performance tuning (tracks baseline performance, analyses trends and recommends improvements): $5K

Source: Computerworld, Nov. 27, 1995, p.54.

Bibliography

Antaya, D. and Heile, R. (1995). Digital access devices: criteria for evaluating management options. Telecommunications, 29(6), 51–52.

Boehm, W. and Ullmann, G. (1991). Network management. International Journal of Computer Applications in Technology, 4(1), 27–34.

Briscoe, P. (1993). ATM: will it live up to user expectation? Telecommunications, 27(6), 25–30.

Broadhead, S. (1992). Network management. Which Computer?, 15(10), 11–125.

Chapin, A.L. (1994). The state of the Internet. Telecommunications, 28(1), 13–18.

Corrigan, P. (1997). Fundamentals of network design. LAN, 12(1), 93–103.

Derfler, F.J. Jr. (1993). An eye into the LAN. PC Magazine, 12(1), 277–300.

Henderson, L.B. and Pervier, C.S. (1992). Managing network stations. IEEE Spectrum, 29(4), 55–58.

Kosiur, D. (1991). Managing networks. Macworld, February, 152–159.

LeFavi, F.A. (1995). Network quality assurance: a checklist. Telecommunications, 29(7), 57–60.

Marx, G.T. (1994). New telecommunication technologies require new manners. Telecommunications Policy, 18(7), 538–551.

Muller, N.J. (1992). Integrated network management. Information Systems Management, 9(4), 8–15.

Nachenberg, C. (1997). Computer virus-antivirus coevolution. Communications of the ACM, 40(1), 46–51.

Rigney, S. (1995). LAN tamers. PC Magazine, 14(20), 237–267.

Schneier, B. (1994). Applied Cryptography. Wiley.

Steinke, S. (1997). Tutorial on SNMP. LAN, 12(2), 21–22.

Strehlo, C. (1988). A 10 point prescription for LAN management. Personal Computing, 12(7), 109–118.

152

Page 166: Telecommunications and Networks

14

RESOURCES FOR TELEPROCESSING

This is a Man's Age. The machines are simply tools which man has devised to help him do a better job.

Thomas J. Watson (of IBM fame)

I don't think the mind was made to do logical operations all day long.

Patricia

Introduction

We have discussed some of the resources needed for teleprocessing, such as the media for transmission, the devices for connectivity, network management resources, and telecommunications personnel. We also recognized that the traffic is rapidly changing from data to text and now to multimedia. As the complexity of the traffic increases and as the volume of traffic increases, we need more capacity for the processing of all this traffic. At the client end, we have PCs and workstations. At the server end, PCs and even minis were no longer adequate and we needed mainframes and file servers. But even this processing capacity may not be adequate, and soon we will need parallel processing and perhaps even neural computing. It is these computing processors that we will examine in this chapter.

There are two good reasons for discussing parallel processing in a book on telecommunications. One is that parallel processing will be the faster way to perform a large volume of computations simultaneously; the other is that parallel processing is very appropriate for pattern recognition, which is the procedure underlying voice and image processing and other applications of AI (Artificial Intelligence). Such processing, along with data and text, is the multimedia processing that will be important in the future mix of teleprocessing traffic.

Some processors will come with their own operating systems for telecommunications. But in addition we need software at the client end, referred to as comms software; software in the connectivity devices; and software that facilitates distributed processing of applications programs, referred to as middleware. Such software we will examine in this chapter. However, we shall not discuss applications software, because that is independent of telecommunications and beyond the scope of this book.

We start with parallel processing.

Parallel processing

One solution to the demand by AI systems for raw computational power is to move to parallel architectures, i.e. hardware that can use more than one processing unit on a given problem at the same time (in other words, processors working in parallel). So, if the AI system has to search many options in order to find a 'good enough' one, then perhaps it can search more than one option at once, i.e. the search can be executed in parallel. This is the sort of promise that parallel processing hardware offers for AI. But, at a finer level of detail, there are several rather different styles of parallel architectures, and there are a number of very different ways to exploit parallelism in AI programming.

To begin with, there is both coarse- and fine-grained parallelism to be found in AI problems. At the so-called coarse-grain level a problem is composed of a small number of significant sub-problems that can be processed in parallel. Thus, at the top level of the management decision-making task, there will be a number of quite distinct alternative possible decisions that could be made. A coarse-grained parallel implementation of such a problem may be able to use a small number of processors all working in parallel, each one processing a different alternative decision. At some point each independent processor must then report its findings as to the worth of

153

Page 167: Telecommunications and Networks

Telecommunications and networks

the specific decision it processed (this might be in terms of how well it solves the original problem, its cost and side-effects, etc.) to a central processor, which will then choose the best decision to make.
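
A minimal sketch of this coarse-grained scheme, written with Python's standard process pool, is shown below: each worker process scores one alternative decision and a central process collects the results and chooses the best. The alternatives and the scoring function are invented placeholders, not a real decision model.

    # Coarse-grained parallelism: a few worker processes, each evaluating one
    # alternative decision, with a central process choosing the best result.
    from concurrent.futures import ProcessPoolExecutor

    DECISIONS = ["expand the LAN", "lease WAN links", "outsource the network", "do nothing"]

    def evaluate(decision: str) -> tuple:
        """Score one alternative (a toy heuristic standing in for real analysis)."""
        score = sum(ord(c) for c in decision) % 100    # placeholder 'worth' of the decision
        return decision, score

    if __name__ == "__main__":
        with ProcessPoolExecutor() as pool:            # processors working in parallel
            results = list(pool.map(evaluate, DECISIONS))
        best = max(results, key=lambda pair: pair[1])  # central processor picks the best
        print("best decision:", best)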

Notice that there is always a need for some processor in the system to assume exclusive control from time to time. In the case of our simple example, this centralization of control was only necessary at the end of the task (and presumably at the beginning, in order to decide how and what alternative decisions to explore initially). But, in general, there is a continuing need for this sort of processor intercommunication throughout the process of complex problem-solving. In addition, it is often hard to predict, in advance, exactly when and what will need to be communicated between the processors working in parallel.

Hence the difficulty in designing systems to exploit such coarse-grained parallelism. It is relatively easy to construct computer hardware composed of a number of processors that are capable both of working independently in parallel and of intercommunicating. But it is very difficult to structure programs so that they can effectively exploit the coarse-grained parallelism inherent in a given problem. The construction of accurate and efficient simple sequential programs (i.e. conventional programs) is in itself a very demanding task. To construct parallel programs appears to be much more difficult. So, in this class of parallel systems work, the hardware is well in advance of the ability to use it effectively.

The answer is, of course, for the computer system itself to detect and exploit the parallelism inherent in a problem as and when it occurs. Then the programmer's task would reduce to the conventional one of providing a correct sequential algorithm (with perhaps some indications of parallel possibilities), and the computer system would, according to its sophistication, parallelize the algorithm for more efficient computation. It is quite conceivable that an operating system could exercise this sort of judgement on a program that it is running, and, although there are a number of system-development projects working towards this sort of goal, such systems are still largely a future event.

Coarse-grained parallelism, with its relatively small number of parallel processors, is sometimes known as mere parallelism, to contrast it with massive parallelism, which, as the name suggests, involves large numbers of parallel processes (also known as fine-grained parallelism, a term that emphasizes the elemental nature of the parallel processes). These are shown in Figure 14.1. Computer hardware is available for directly supporting this fine-grained parallelism. These machines (such as the Connection Machine; see Waltz, 1987) offer the programmer a large number of quite primitive processors, as opposed to the previous category, which made a small number of powerful processors available for the programmer's use.

A first point to clarify concerns the possibility of using such massive parallelism when we have just explained the great difficulties involved in the effective use of only a few parallel processors. In fact, these massively parallel machines are easier to use than the previous ones we have discussed. This is because the primitive processors either do not need to communicate with each other while executing, or because the necessary intercommunication is simple and predictable in advance.

When does this type of parallelism arise? It actually occurs in a number of somewhat different ways in AI problems. It commonly occurs in vision systems and also in many different types of so-called neural-network models, the new connectionism.

You should know that computer processing of visual input amounts to the processing of thousands of simple data items known as pixels. Furthermore, the early stages of this visual processing, e.g. noise removal and edge finding, often involve a very simple computation that has to be repeated on each pixel in the image. And finally, the operation to be performed on each pixel typically involves only the few nearest-neighbour pixels. Each quite simple operation can then be performed independently of most of the other pixels in the visual image, and so we have an application of massive, but simple, parallelism.
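
The sketch below illustrates this kind of per-pixel, nearest-neighbour operation: a 3 x 3 averaging step (a simple form of noise smoothing) applied uniformly to every pixel of a small synthetic image. On a massively parallel machine each of these independent pixel computations could be handled by its own primitive processor.

    # A per-pixel nearest-neighbour operation (3x3 averaging) on a toy image.
    # Every output pixel depends only on its immediate neighbours, so all pixels
    # could in principle be processed at once by primitive parallel processors.
    import numpy as np

    rng = np.random.default_rng(0)
    image = rng.integers(0, 256, size=(8, 8)).astype(float)   # toy 8x8 grey-scale image

    padded = np.pad(image, 1, mode="edge")                    # replicate the border
    smoothed = np.zeros_like(image)
    for di in (-1, 0, 1):                                     # sum the 3x3 neighbourhood
        for dj in (-1, 0, 1):
            smoothed += padded[1 + di : 9 + di, 1 + dj : 9 + dj]
    smoothed /= 9.0

    print(np.round(smoothed, 1))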

Such parallel hardware is so important for efficient image processing (and pattern recognition in general) that there is a whole range of different hardware products designed for just these sorts of applications. So we can confidently predict the coming importance of this type of parallel computer in business applications of the future.

The new AI subfield of connectionism is another potentially major application of massively parallel computers. At the moment, this is largely an unfulfilled potential because, although there are many connectionist models

154

Page 168: Telecommunications and Networks

Resources for teleprocessing

[Figure 14.1 Sequential, parallel and massively parallel processing: one step at a time in a traditional sequential architecture; several processors operating simultaneously and in parallel (mere, or coarse-grained, parallelism); and a network of processors interwoven in complex and flexible ways in massively parallel systems (fine-grained parallelism).]

being built, most of them are being implemented on conventional, sequential computers; we expect this to change. There are many, very different, schemes that are being explored under the banner of connectionism or neural computing.

The unifying theme in this AI subfield is that the main computational effort is distributed over a large number of primitive processors, networked together and operating in parallel. The analogy with brain structure and function, with its network of neurons operating in parallel and communicating via on/off pulses, is easy to see but not very productive. Currently, the analogy is only superficial; there are no deep similarities between human brains and so-called neural computers or connectionist models.

Nevertheless, neural-network models seem to offer problem-solving potential that is, in some significant ways, superior to that obtainable from more conventional implementations of AI problems. The neural computers themselves have proved to be powerful pattern-recognition devices.
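
As a flavour of this kind of pattern recognition in software, the sketch below trains a single artificial neuron (a perceptron) to separate two classes of points. The data, learning rate and number of passes are invented, and a real connectionist model would use many such units operating in parallel.

    # A single perceptron trained on a toy, linearly separable two-class problem:
    # the simplest possible 'connectionist' pattern recognizer. Values are illustrative.
    import numpy as np

    X = np.array([[0.1, 0.2], [0.9, 0.8], [0.2, 0.9], [0.8, 0.1], [0.7, 0.9], [0.1, 0.4]])
    y = np.array([0, 1, 1, 0, 1, 0])          # class 1 roughly where x + y > 1

    weights = np.zeros(2)
    bias = 0.0
    learning_rate = 0.1

    for _ in range(50):                        # a few training passes over the data
        for xi, target in zip(X, y):
            prediction = 1 if xi @ weights + bias > 0 else 0
            error = target - prediction        # the classic perceptron learning rule
            weights += learning_rate * error * xi
            bias += learning_rate * error

    print("weights:", weights, "bias:", bias)
    print("outputs:", [1 if xi @ weights + bias > 0 else 0 for xi in X])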

Software for telecommunications

There are many types of software (besides applications software) needed for telecommunications and networks. These include: operating systems for networks, called network operating systems (NOS); software on devices, especially smart and intelligent devices; software that facilitates linking to, say, a LAN (or MAN or WAN), called telecommunications or comms software; and then there is middleware, the enabling layer of application program interface software that sits in the 'middle', between the network and the application. We shall discuss each below.

Network operating systems

Just as an operating system for a PC enables the user to access peripherals and manipulate data

155

Page 169: Telecommunications and Networks

Telecommunications and networks

off a disk drive, likewise a network operating system enables a PC connected to a network to use peripherals. It controls access to, and use of, a network, ensuring correct and, one hopes, efficient use of these resources.

Comms software

We have implied the use of comms software many times in earlier discussions. For example, in the discussion of modems we asserted that modems have the capability of error checking and compression. Well, how does a modem have these capabilities? Through comms software. The comms software enables the linking of a computer to a LAN to perform on-line computing remotely. It has a combination of features that may include the following:

• Allow for a variety of transmission codes. For example, a PC would use the ASCII code, that is, a seven-bit code. In contrast, a mainframe may use the eight-bit EBCDIC code. The comms program must recognize differences, if any, and make the necessary conversion for compatibility. Another code used in a comms program is a control code, like a parity bit, which helps to guarantee proper transmission and arrival (a minimal parity sketch follows this list).

• A low level of communication may be emulation, as in, say, terminal emulation, where a terminal bypasses the local computer's resources (like the CPU and memory) and relies on a remote host computer for all the processing tasks; the terminal then behaves like a computer to the end-user.

• A second level of communication will be having the ability to send and receive data. Sending a file from a local computer (or client) to a remote host computer (or server) is uploading. Receiving a file that has been transmitted by a host is downloading.

• X-ON/X-OFF support, which allows flow control for accurate uploads and downloads.

• Answering an incoming phone call at an operatorless computer system, also called autoanswering.

• Dialling a phone number without assistance, also known as autodial.

• Recalling and performing a logon sequence of instructions for a remote computer system.

• Compressing data to reduce time and charges for transmission.

• Encrypting and decrypting data for security purposes.

• Checking file names on a disk without leaving the comms program.

• A wide range of communication speeds.

• Setting user-set defaults specifying speed, parity, etc., so that they do not have to be reset every time.

• Changing settings while still connected to another system.

• A break signal, which allows the sending of a signal instead of a control-key and letter combination to interrupt a mainframe computer.
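
The minimal sketch below shows the parity-bit principle mentioned in the first feature above: the sender computes an even-parity bit over a seven-bit ASCII character, and the receiver re-checks it to detect a single-bit transmission error. It illustrates the idea only, not any particular comms package.

    # Even parity over a 7-bit ASCII character: the sender appends a parity bit so
    # that the total number of 1 bits is even; the receiver checks this on arrival.
    def parity_bit(code: int) -> int:
        """Return 0 or 1 so that code plus parity has an even number of 1 bits."""
        return bin(code).count("1") % 2

    def transmit(char: str):
        code = ord(char) & 0x7F               # 7-bit ASCII code
        return code, parity_bit(code)

    def check(code: int, parity: int) -> bool:
        return (bin(code).count("1") + parity) % 2 == 0

    code, p = transmit("A")
    print(check(code, p))                     # True: arrived intact
    corrupted = code ^ 0b0000100              # one bit flipped in transit
    print(check(corrupted, p))                # False: single-bit error detected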

A combination of features will perform different functions. Some of the many functions include:

• Dial-up mail;
• Remote file access;
• Network access;
• Host links;
• E-mail;
• Connection to a BBS (Bulletin Board System);
• Connection to the Internet;
• Internet e-mail;
• On-line services.

We shall discuss the last five applications in later chapters when we discuss telecommunications applications. There are, however, other applications that are telecommunications-independent. These include spreadsheets, word processing, and data management. The comms program may need to be integrated with these applications in addition to telecommunications applications. In fact, modern computers include software with integrated applications or at least integrated comms programs and modems. The author bought a PC in 1993 that had a comms program with its in-built modem (including a fax modem).

Comms programs do meet most network problems. However, if 'you have a specialized communications need, there's probably a comms package designed to meet it. But the best bet for both the personal and the professional user is still a general-purpose package.' (Derfler, 1995: p. 210).

Software for linking devices

Most software for linking devices performs a specific function. However, some devices, like, say, a router, need to perform the function of routing messages optimally if possible. There are some

156

Page 170: Telecommunications and Networks

Resources for teleprocessing

operations research models that compute optimal solutions. Also, a device can be smart not in the sense that it has a computer in it and can compute as a stand-alone device, but in the sense that it has intelligence in the AI sense. Thus it needs to be able to make inferences given decision rules. This makes the device quite valuable, because it is no longer 'dumb' in the sense that it must do exactly what it is programmed to do; in addition it is able to adapt to changing conditions.
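
Optimal routing of this kind is classically computed with a shortest-path algorithm. The sketch below runs Dijkstra's algorithm over a small, invented set of link costs to find the cheapest route between two nodes; a real router would recompute such paths as link states change.

    # Dijkstra's shortest-path algorithm on a small, invented network of link costs.
    import heapq

    network = {                                   # link costs are purely illustrative
        "Paris":     {"Frankfurt": 4, "London": 2},
        "London":    {"Paris": 2, "New York": 7, "Frankfurt": 5},
        "Frankfurt": {"Paris": 4, "London": 5, "New York": 9},
        "New York":  {"London": 7, "Frankfurt": 9},
    }

    def shortest_path(source: str, destination: str):
        queue = [(0, source, [source])]           # (cost so far, node, path taken)
        visited = set()
        while queue:
            cost, node, path = heapq.heappop(queue)
            if node == destination:
                return cost, path
            if node in visited:
                continue
            visited.add(node)
            for neighbour, link_cost in network[node].items():
                if neighbour not in visited:
                    heapq.heappush(queue, (cost + link_cost, neighbour, path + [neighbour]))
        return float("inf"), []

    print(shortest_path("Paris", "New York"))     # cheapest route and its total cost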

Middleware software

Middleware is the software enabling layer that supports multiple communications platforms and protocols. In many computing enterprises, data is scattered across various incompatible networks, computers, operating systems, network architectures and protocols. Middleware is an application program with APIs (application program interfaces) that enable separate programs to interoperate even when they are running on different LANs, MANs or WANs with varying protocol stacks. Middleware has file-transfer capabilities and can hook into e-mail software for the development of mail-enabled applications, as well as support object-oriented programming techniques.

One can look at middleware in the context of the seven-layer ISO network architecture, as shown in Figure 14.2. Here the network traffic can be seen to split into different sessions, each corresponding to a separate applications program. The middleware is the interface between the sessions layer and the application programs. The APIs enhance the sessions layer functionality without reducing its functional complexity, but make it easier for the programmer at the applications end.

Middleware functions will vary with the vendor. The essential services offered will include distributed computing with security, balancing of load, location transparency, error recovery and transaction management.

There are many types of middleware:

• Database middleware, which gives the programmer a single API that can be used to access different databases.

• Remote Procedure Calls (RPC), which enable access by calling desired procedures located in different places on the network. When an RPC is invoked, a program executes a procedure call very much like executing a subroutine in a standard 3GL procedural language like Fortran or COBOL. The call is received by the remote server and the results are returned to the sender (a minimal RPC sketch follows this list). RPCs are not very popular in a client-server environment because procedures are not the best way for processes to communicate in advanced operating systems software.

• Object Request Brokers (ORB), which enable access to services offered by any object anywhere on the network. The objects could be programs or they could be resources.

• Structured Query Language (SQL) orientation, which means that the middleware will support the high-level SQL language found

[Figure 14.2 Middleware in the layered architecture: applications 1 to N sit above the middleware, which interfaces to the sessions layer; below it lie the transport, network, data link and physical layers and the transmission media.]

157

Page 171: Telecommunications and Networks

Telecommunications and networks

in many DBMSs (Database Management Systems) and other dialogue language systems.

• Message middleware, which enables programs to communicate via messages, which is important in distributed computing environments. Message middleware could establish a communications sublayer that could support other middleware, such as RPCs, ORBs or other software that needs to work in more than one place simultaneously.
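
As a hedged illustration of the RPC style (using Python's standard xmlrpc modules rather than any product named in the text), the sketch below has a server expose an add procedure and a client call it as if it were local, while the library handles the network exchange underneath. The port number and procedure are invented for the example.

    # Remote Procedure Call illustration with Python's standard xmlrpc modules.
    # The server registers an 'add' procedure; the client invokes it like a local
    # subroutine while the RPC machinery handles the session and data exchange.
    import threading
    import xmlrpc.client
    from xmlrpc.server import SimpleXMLRPCServer

    def add(a, b):
        return a + b

    server = SimpleXMLRPCServer(("localhost", 8000), logRequests=False)  # arbitrary port
    server.register_function(add, "add")

    # Run the server in a background thread so the client call below can proceed.
    threading.Thread(target=server.serve_forever, daemon=True).start()

    proxy = xmlrpc.client.ServerProxy("http://localhost:8000/")
    print(proxy.add(2, 3))      # looks like a local call, but executes on the server
    server.shutdown()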

For message middleware, a programmer who wants to communicate between programs, such as sending data from program A to program B, will issue a command like SEND, 'data point', 'data destination', i.e. specify the sending and receiving data points after the command SEND. After this call is made, the message-based middleware uses literally hundreds or thousands of lines of code to set up a session with the remote destination node, initialize the necessary protocols and ensure any recovery handling that may be necessary. This is a great saving for the applications programmer and makes the process much easier and more user friendly.

Message middleware is inherently asynchronous: it ensures that a process is never blocked and it processes the messages concurrently. In contrast, database middleware and RPCs are synchronous and may stay idle until they get clearance and permission to transfer. Thus messaging middleware is more efficient than database middleware and RPCs.

Message middleware comes in two types: process-to-process messaging, which requires both the sending and receiving processes to be available at the same time, and message queuing, which does not require simultaneous processes and can stack messages in a queue. Message middleware can also vary in its queuing characteristics: transactional queuing, which survives failures; persistent message queuing, which survives some failures and does not lose the message; and non-persistent message queuing, which does not survive across any failure.

Message queuing implies a queue manager, a software program that is responsible for supplying the message queuing services used by applications, including the ability to store messages in a queue for later delivery.
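
A toy version of the queuing style is sketched below: a 'queue manager' accepts SEND-style calls, stores each message under its destination, and delivers it later when the receiving program asks for it. The class and method names are invented for illustration and stand in for the hundreds of lines a real message middleware product would hide.

    # A toy message-queuing 'queue manager': messages are stored per destination and
    # delivered later, so sender and receiver need not be running at the same time.
    # These names are illustrative only, not a real middleware API.
    from collections import defaultdict, deque
    from typing import Optional

    class QueueManager:
        def __init__(self):
            self._queues = defaultdict(deque)        # one queue per destination

        def send(self, destination: str, message: str) -> None:
            """SEND-style call: hand the message to the middleware and return at once."""
            self._queues[destination].append(message)

        def receive(self, destination: str) -> Optional[str]:
            """Deliver the oldest queued message for this destination, if any."""
            queue = self._queues[destination]
            return queue.popleft() if queue else None

    qm = QueueManager()
    qm.send("program_B", "monthly traffic report")    # program A sends and carries on
    qm.send("program_B", "alarm: segment 5 overloaded")

    print(qm.receive("program_B"))                    # program B collects them later
    print(qm.receive("program_B"))
    print(qm.receive("program_B"))                    # None: the queue is now empty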

Summary on middleware

Summarizing middleware, we can say that middleware offers a high-level interface to

Table 14.1 Summary of middleware characteristics

APPLICATION ENABLING SERVICES
  Transaction workflow
  E-mail
  EDI (Electronic Data Interchange)

DISTRIBUTED SYSTEMS SERVICES
  Distributed services
    Location
    Time
    Security

COMMUNICATION SERVICES
  Database access
  RPC (Remote Procedure Call)
  ORB (Object Request Broker)
  SQL (Structured Query Language)
  Message middleware
    Process-to-process messaging
    Message queuing
      Transactional message queuing
      Persistent message queuing
      Non-persistent message queuing

cross-platform file servers and provides interoperability to total enterprise applications and network services (even something simple like printing). Also:

. . . the basic chore of middleware is to move data

. . . the APIs enhance sessions layer functionality while reducing its complexity . . . the goal is to find the API that has just the right level of abstraction and functionality for the job at hand . . . When middleware is at its best, the effect is that of a distributed operating system, not just a simple interprocess communications API . . . No single vendor of message-delivery APIs support all platforms, protocols, and programming languages, but between them they address all conceivable mini, micro, and mainframe over just about every protocol. (King, 1992: pp. 63, 64, 66).

The different characteristics of the types of messaging are summarized in Table 14.1.

Summary and conclusions

As teleprocessing grows in volume and complexity, processors will need to be powerful and versatile. Currently, mainframes are used as servers

158

Page 172: Telecommunications and Networks

Resources for teleprocessing

of teleprocessing services as well as for applications that need teleprocessing. With the teleprocessing of large volumes of real-time data and multimedia traffic like video-on-demand, the processing needs will increase. Even supercomputers may not be adequate. Parallel processing, and the emergent option in parallelism, neural computing, may become desirable. We may also see 'programming currently available hardware in the form of a distributed-memory multiple instruction stream multiple data stream computer . . . a multicomputer' (Norman and Thanisch, 1993: p. 263).

Along with processors for telecommunications, we need network connectivity hardware like bridges, gateways and routers, but we also need protocols and a network architecture. All these resources have been discussed in previous chapters. What had not been discussed thus far was software. Some software, like operating systems software, applications software and the software needed at the connectivity devices, is not discussed and is considered beyond the scope of this book. What is discussed is middleware, the software between the network and the application. This software, with its APIs, is especially necessary for distributed computing, where there is a diverse set of platforms, programming languages and protocols involved.

Modern teleprocessing has a wide variety of options for each of the resources needed. In hardware, one has the whole spectrum of scalability, from microcomputer PCs to supercomputers and some parallel processing. In architecture and protocols, there are choices in the US besides the OSI model and its protocols, which have many supporters in Europe. In connectivity, there are topologies and access methods for LANs, MANs and WANs. And for software, there is the choice of integrated packages with comms software, multiple operating systems and many programming languages, together with middleware software and its many types. This array of choices is good for the consumer and end-user but poses a difficult decision-making process for the network manager.

Whatever the choice made by the network manager, we now have APIs that are universally available to all applications for both client-server and peer-to-peer interactions. The distributed environment, with its many interfaces and services as well as development tools and

Table 14.2 Summary of services and languages in a distributed environment

SERVICES
  Transactional management
  Transactional queuing
  Message queuing
  Messaging middleware
  Protocol stacks
  Error checking
  Resource recovery
  Security

DEVELOPMENT TOOLS
  3GL (third-generation procedural languages)
  4GL (fourth-generation high-level dialogue languages)
  SQL (Structured Query Language)
  CASE (computer-aided systems engineering), integrated or otherwise
  OO (object-oriented systems)

programming languages available to the application programmer in a distributed environment, is summarized in Table 14.2.

Case 14.1: Replacement of mainframes at EEI

On 2 March 1995 the IBM 4381 mainframe was replaced, along with its 7.5 gigabytes of disk storage running VM. It cost $250 000 per year and had reached the end of its cost-effective life. It had a negative salvage value, and thousands of dollars had to be paid to haul the equipment away. The controllers (IBM 3174/3274) and the telecommunications hardware, which were older than the 4381, proved to be worth more and collected a few hundred dollars in salvage value.

The computer room was much quieter with Tricord servers running NetWare and various other PCs running e-mail and the communications programs. The servers now had greater storage capacity and the system ran on a token ring topology, reflecting the IBM heritage and the utility industry that EEI (the Edison Electric Institute) represented.

EEI took the existing Statistical Analysis System (SAS) and its COBOL applications

159

Page 173: Telecommunications and Networks

Telecommunications and networks

running on VM and rewrote them in the programming languages Magic (from Magic Software Enterprises Ltd) and Btrieve (from Btrieve Technologies Inc.) on the LAN. The new 'LAN-based applications allowed EEI to improve service to its members, avoid duplicate mailings, and increase the quality of its databases'.

Source: InfoWorld, 17(16), April 17, 1995, p. 64.

Supplement 14.1: Top world telecommunications equipment manufacturers in 1994

Vendor (ranked by revenue)   Revenues (US$ billions)
Alcatel                      15.94
AT&T                         14.28
Motorola                     13.41
Siemens                      12.78
Ericsson                      9.65
NEC                           9.08
Northern Telecom              8.87
Fujitsu                       4.92
Nokia                         3.68
Bosch                         3.23

Source: International Herald Tribune, Oct. 4, 1995, p. 14.

Bibliography

Bernstein, P.A. (1996). Middleware: a model for distributed systems services. Communications of the ACM, 39(2), 86–88.

DeBoever, L.R. and Max. (1992). Middleware's next step: enterprise-wide applications. Data Communications, 21(9), 157–164.

Derfler, F.J. Jr. (1995). Do we still need comm software? PC Magazine, 14(5), 201–204.

Dolgicer, M. (1994). Messaging middleware: the next generation. Data Communications, 23(7), 77–83.

Dragen, R.V. (1997). Java tools get real. PC Magazine, 12(1), 181–214.

King, S.S. (1992). Middleware. Data Communications, 21(3), 69–76.

Linthicum, D.S. (1995). Serving up Apps. PC Magazine, 14(18), 205–236.

Norman, M.G. and Thanisch, P. (1993). Models of machines and computation for mapping in multicomputers. ACM Computing Surveys, 25(3), 263–302.

Rao, B.R. (1995). Making the most of middleware. Data Communications, 24(12), 89–99.

Rose, R. (1997). Personnel training. LAN, 12(2), 65–71.

Schreiber, R. (1995). Middleware demystified. Datamation, 41–45.

Yoon, Y. and Peterson, L.L. (1992). Artificial neural networks: an emerging new technique. Database, 23(1), 55–58.

Zahedi, F. (1993). Intelligent Information Systems for Business: Expert Systems with Neural Networks. Wadsworth.

160

Page 174: Telecommunications and Networks

15

NATIONAL INFORMATION INFRASTRUCTURE

One of the greatest contributions technological development can make is in systematically, carefully, intentionally building a national infrastructure for information exchange.

George H. Heilmeier

Many of the technologies and players needed to construct the information infrastructure are already in place.

Andy Reinhardt, 1994

Introduction

Every country and even many cities have an infrastructure, be it for roads, electricity, water or sewage. It requires an investment that no one person or firm can afford, but it is built and supported by the government and its services are offered to its users free or for a price. Every so often, every few decades, advances in technology require an infrastructure that is very expensive but which has the potential of having a profound impact on society. One such infrastructure was the building of the interstate freeway system in the US under the Highway Act of 1956. It was 41 000 miles long and cost over $50 billion by the time it was completed in the 1970s. This infrastructure highway enabled one to travel from most points to other points in the US without stopping for a single traffic light or paying a toll. If you got tired on the way, you could stop for a rest and a picnic at rest areas located strategically on most freeways. And if you wanted to rest for the night, you would get off the freeway at one of the many exits where hotels and motels and restaurants have arisen to tempt the weary traveller. This freeway system has created new businesses and eliminated others. In fact, it has made towns out of ghost towns along old highways and created new towns around the new freeways. It has encouraged travelling for vacations and business and in many ways has changed the life and landscape of the country. And some three decades later, the effects of the

interstate highway are still to be seen. And now comes the possibility of another infrastructure, that for information, which may have an equal if not greater impact on our lives. On this information highway you can cruise for business, pleasure, adventure, entertainment (including films and movies) and knowledge. It may not create (or destroy) towns and cities, but they will change if people take to teleworking and reduce the need for high-rise buildings and parking lots. You may no longer need to go to library buildings but can download the books and articles that you want. Health-care, education and shopping could all be electronic and a very different experience.

In short, life will be different with an information infrastructure. There may well be a sea-change. The impact will be profound but, unlike the building of the interstate highway, the autobahn in Germany and the motorway in the UK, the information highway may not be so well planned or orderly. It may well be like the free-for-all competitive struggle of the construction of railroads in the 19th century, with highly competitive companies (and international alliances) striving to gain advantage and be the best and first to succeed.

There are questions that arise, such as: Where is the information highway headed? Will it be a 'free' way or a toll highway? Will it use the analogue telephone or the digital system or both? Will the use of the infrastructure be controlled and regulated by the government? What will be the role of the government in relation to the

161

Page 175: Telecommunications and Networks

Telecommunications and networks

businesses and industries involved? Who willbe the major users of the infrastructure.? Whatapplications will be transported? Will there beuniversal access or universal service or both? Willprivacy be secured and property rights protected?What will the system cost? Who will pay for it?When and how will it be implemented?

In this chapter we will attempt to answer the above questions. We will start by providing a scenario of what an NII (National Information Infrastructure) may look like. We will then examine the advantages, limitations and obstacles for its implementation. We will rely on the opinions of experts and chief executives in the field of telecommunications, networking and computing (Pelton, 1994). But first, we take an overview of what is happening in constructing NIIs around the world. We then look in greater detail at plans in the US.

NIIs around the world

We do not yet have an NII anywhere in the world, but there are many countries that are laying a foundation for NIIs. One is Japan, where NTT, Japan's largest common carrier, plans a $410 billion 'Next Generation Communications' infrastructure with a fibre backbone and ISDN by 2015. Its current fibre backbone already serves intercity and intracity communications for professional and business applications. But Japan does not have a large cable market, nor does it have the software or applications for the consumer and home market. It is being said that what Japan will have in 2015 is a very large and beautiful lake, but with no fish to inhabit it nor any boats or recreation facilities to take advantage of its capacity.

Neighbouring countries like Hong Kong and Singapore are upgrading their telecommunications capabilities. Singapore has declared the national objective of being a wired island.

In France, there is BETEL (Broadband Exchange over Trans-European Links), with research facilities connected in France and Switzerland. There is also Brehan, a complete communication system for teleconferencing, video transmission, LAN interconnection and circuit emulation. The best known project is Minitel. It started as a computerized telephone book in Paris. Your author recalls wanting a phone connection in Paris and being asked to wait for three years. Instead he inherited the phone that was in the apartment, but also inherited a phone number that belonged to the occupant of some twenty years before. The phone system was archaic. France then took the bold step of leap-frogging existing technology and going for the next generation. It calculated (not allowing for inflation) that the cost of a terminal would be less than that of a Parisian telephone book, and so gave everyone a terminal with the ability to access the telephone directory. Thus Minitel now has the same access to homes as did the telephone. Minitel now offers other services, including videotex, and is an excellent base for an NII.

Germany has installed fibre in about 80 large cities and this is the basis of VBN (Vermittelndes Breitbandnetz), with broadband connections for video-conferencing via satellite and connections to international video-conferencing networks. BERKOM (Berlin Kommunikation) is a pilot project with applications for business as well as for publishing and medicine.

Table 15.1 NII-related projects in Europe

DECT (Digital European Cordless Telecommunications): standards being developed for the cordless telephone, cordless switchboards and office networks.
ESPRIT (European Strategic Programme for Research in Information Technology): concerned, among other things, with compression techniques for interactive media.
GEN (Global European Network): expected to be absorbed into METRAN (Managed European Transmission Network), which supports transmission across Europe at rates up to 155 Mbps.
GSM (Global System for Mobile Communications): . . . twelve years and about 6000 pages later, the standard will be DCS 1800 for the 1.8 GHz band. A third-generation European standard is planned for 2002.
IMPACT (Information Market Policy Action): an effort to establish services in the areas of interactive multimedia and geographic information.
ONP (Open Network Provision): intended to make liberalization possible by eliminating the technical obstacles in the way of the deadline for the privatization of telecoms in the year 2000.
RACE (Research and Development in Advanced Communication in Europe): focused on integrated broadband communications and image/data communications.
VSATs (Very Small Aperture Terminals): standards for these devices, which appear on roof tops and behind garage doors, have been agreed upon and are now in the testing stage. A committee is now studying devices for satellite news-gathering systems.
TETRA (Trans-European Trunked Radio): incorporates the best in trunking technology, in standards for voice and data communication, for optimized packet data services and for private mobile radio use.

In the UK, one finds a very special case when compared to many PT&T countries in Europe: it has had a liberalized telecommunications industry since 1981, allowing telephone and TV on the same network and making fibre optics profitable for private companies, with cable companies making half their money from telephony. One nation-wide project is Energis, owned by 12 electricity companies, which piggybacks a fibre optic network on the power grid. There are other test beds, including one by BT in Ipswich. BT is also replacing all its copper wire with high-capacity fibre.

There are many projects with the joint participation of several countries in Europe. These are listed in Table 15.1. In addition, there are many projects with participation from both Europe and the US, including PEAN (Pan-European ATM Network), with AT&T as the American participant alongside 18 other countries in a high-speed network for data, text, voice, video and image transmission. And then there are projects within the US itself, between US companies and by US companies themselves.

NII in the US

The US has been on the path to an NII for the last thirty years or so; not by design but by accident, as an unintended consequence of the cold war. With the cold war between the US and the USSR heating up, the American Department of Defense wanted a communication system that would not be paralysed in the event of unfriendly action, and at the same time a network that would enable its researchers in academia and research institutions to communicate among themselves. And so was born the ARPANET, which was the test bed for many of the topologies and switching methods that we use today. Then there were developments in architecture and protocols, giving us SNA and TCP/IP. Alongside, network products were being developed, which included hubs, routers, switches, LAN connectivity products and networking software (Data Communications, 1994: 9–208). But perhaps the biggest milestone on the path towards an NII is the Court Judgement in 1982 which broke up the monopolistic telecommunications industry in the US, allowing it to go into computing and at the same time allowing computer companies to go into telecommunications. This liberalization (like that in the UK) freed the creative juices of the industry. We also saw a number of networks being implemented, some public and some private, but all from private enterprise. Some of the private networks are international, large and very effective, like those implemented by DEC, Texas Instruments and IBM. Then there were networks built by individuals in the computing profession (with many contributions, like CERN's, from Europe) that gave us the Internet, a successor to the ARPANET. The Internet in the mid-1990s became the de facto national and even international network. We shall discuss the Internet in Chapter 20 and the global network in the next chapter. For the remaining part of this chapter we will look at plans for an NII in the US and examine its need, its limitations and the obstacles in its path.

Who are the players in this business of the information highway infrastructure? One may be tempted to say 'the government', since the government is often responsible for infrastructures, even in the US, as in the case of road highways. But not so in the case of the information highway in the US, at least not when it comes to implementation of the infrastructure. Even so, industry would rather have the government participate actively, because the government has the power to regulate the industry. The government responded in the mid-1990s when it announced goals for connecting all schools, medical facilities, libraries, community centres, businesses and homes in something like the scenario depicted in Figure 15.1. The government also wanted the system to be safe from criminals and abuse, and it wanted the right to tap the system for what it may consider security reasons. There was also another condition: access to computing had to be 'fair and equitable' and essential services were to be available universally.

Government must choose its points of intervention carefully. It must bring creativity to the problems of supporting universal service. It may have to settle for universal access to the NII (i.e. access for the poor, the disadvantaged, minorities and the economically deprived).

Figure 15.1 A scenario for an NII: the NII linking schools, medical facilities, homes, offices, businesses, services, government agencies, libraries and community centres

Do universal service and universal access mean that the poor and the disadvantaged have to be subsidized? And if so, will the government do that or must the industry bear that burden? This is not explicit, but there is support for the NII at high levels. The US in the early 1990s had a president and vice-president who did know the difference between a potato chip and a computer chip. They each even had an e-mail address.

And there was support in the legislature too. In the early 1990s, the leader of the House of Representatives, in a TV address to the nation, held a strand of fibre optics in one hand, compared it to the bulky copper cable that now connects much of the capital, and called for the modernization of telecommunications. So the vibrations are good, but they are in a cloud of uncertainty that makes the industry cautious and nervous.

And who is involved in industry? There are of course the hardware manufacturers, who stand to gain by producing the many computers, microprocessors, interfaces and interconnecting products. Even the multibillion-dollar software companies are interested in developing software that will give access to the NII and the global network with a click of the mouse and, with a few more clicks, access to an encyclopedia or video-on-demand.

The software industry in the US should not be discounted, for it accounts for twice what is spent on hardware, which by one estimate will be around $700–800 billion (Pelton, 1994: p. 27). But the biggest players are the carriers of the gigabits of information that will roar down the information highway. They are the carriers of telephone, TV and cable. Each has billions invested in its technology and is pushing it hard for national adoption. The war is on.

Watching the rise of NIIs are industries that stand to gain from an infrastructure, including advertising, entertainment, banking, publishing, game producers and software houses. All industries look to an NII for increasing productivity, creating new products and services, improving competitiveness, and increasing trade.

Since the stakes and risks are high, test beds are being used to determine consumer preferences. Some of the pilot studies testing these preferences are listed in Table 15.2. An example of the type of question being tested is the consumer demand for any film at any time. This would require a vast library available electronically at all times, which could be very expensive. Or would consumers be happy with a limited set of films, say the top ten ranked at the time? This would be relatively easy to provide, for each film could be run at ten-minute intervals, with all ten films shown on ten different channels, so that there would never be more than ten minutes to wait (a small sketch of this arithmetic follows Table 15.2). And then, of course, a large number of permutations is possible. Selecting the right option in the correct mix is a serious problem, but a managerial one, not a technological one.

Table 15.2 Pilot studies in the US

FEDERALLY FUNDED PROJECTS
AURORA: has 2.4 Gbps channels using packet transfer mode.
BLANCA: test bed links for FDDI LAN with SONET-based ATM switches.
CASA: uses fibre optic lines at 633 Mbps for remote supercomputer use.
MAGIC: employs SONET links and ATM to create a gigabit WAN.
NECTAR: uses ATM to link gigabit LANs and WANs to one another and to supercomputers.
VISTANET: concentrates on medical imaging with gigabit networks.

COMMERCIALLY FUNDED PROJECTS
COLLABORATORY ON INFORMATION INFRASTRUCTURE: attempts to find solutions to practical problems of end-user interface and network navigation.
NIXT: prototyping data highway concepts using the Internet, FDDI, frame relay and ATM.
SMART VALLEY: to promote the data superhighway through private and public partnership.
XIWT: to work out the kinks in bringing gigabit technology to the business desktop and to the home.
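The 'top ten' option rests on simple scheduling arithmetic: if one film is restarted at a fixed interval across enough channels, the worst-case wait equals that interval. The sketch below (Python, with purely illustrative numbers that are not taken from the text) makes the trade-off concrete.

    import math

    # Illustrative arithmetic only: "near" video-on-demand by staggering
    # repeated showings of one film across several channels.

    def channels_needed(film_length_min: float, max_wait_min: float) -> int:
        """Channels required to restart one film every max_wait_min minutes."""
        return math.ceil(film_length_min / max_wait_min)

    def worst_case_wait(film_length_min: float, channels: int) -> float:
        """Worst-case wait when one film is staggered evenly across channels."""
        return film_length_min / channels

    # Example: restarting a 100-minute film every ten minutes takes ten channels
    # devoted to that film; the viewer then never waits more than ten minutes.
    print(channels_needed(100, 10))    # 10
    print(worst_case_wait(100, 10))    # 10.0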

This is not to say that there are no serious technological problems. There are many, and the most basic is the selection between telephone, cable and TV as the medium for information transfer. The telephone has the advantage of being the oldest, with an entrée into many millions of homes.

But the telephone has a heavy bag of disadvantages. One is that there is a heavy investment in analogue equipment when the world is going digital. Also, telephone systems are based on copper wire, a $60 billion investment, when fibre has a much greater capacity (for the same weight and volume) and is easier to maintain. Copper cables can be converted to fibre, but at a high conversion cost.

TV has many advantages, one being its entry into most homes in the country, having been the most popular home appliance for a very long time. And it promises the choice of 500 channels. But TV has a serious problem for telecommunications: the image on the screen. The quality of the image on the screen is not as good for data and text as it is for pictures, and it is text that will occupy a large part of the traffic in the future, especially with the recent exponential growth of e-mail.

Cable has a much greater capacity than the telephone, but it is mostly a one-way street: what is carried is whatever is scheduled (as determined by the management of the cable company), with little if any interaction by the consumer. You cannot interact by asking questions about a product and placing an order. Cable firms are in general financially weaker and are heavily indebted. But they have the great advantage that, with the addition of a box, they can upgrade their systems to pump torrents of digital data in addition to their images. Video-on-demand (one in every five people in the US rents a film video once a month) and interactive video games are also possible, but all these enhancements will cost more, around $250–350 per home.

Neither cable nor TV has the nifty switching facilities or the communication skills and experience of the technical personnel of the telephone companies. It would seem that cable and TV have many characteristics that are complementary to the telephone, and that they should cooperate and produce a better joint product and better services.

Instead, it seems likely in the highly competitive world of the US that these industries will compete in each other's markets. This may well expand the market, but it may still be somewhat of a zero-sum game where one's gain is the other's loss. The winner in the final analysis may not win on technological grounds but on how they finance expensive projects and how they package their technology with content that the end-user wants to consume. The technology that the end-user may well want is at least one channel that is ubiquitous, multimedia, interactive and end-user friendly. The experience for the user should not be difficult or frustrating; in fact, it should be an adventure and even some fun. But what mix of technology and content will the consumer prefer? 'What is truly impossible to foretell is how much ordinary people will pay for the new offerings . . . But at present, no firm quite knows what people are willing to pay for, nor how to deliver it to them at an attractive price.' (Economist, Feb. 25, 1995, p. 63).

A comparison of telephone, cable and TV on important technological characteristics is shown in Table 15.3. These carriers are testing their packages of technology and content (news, weather maps, shopping, etc.) in different test beds across the nation. The decision on which package to offer will be made not so much on technological grounds as on consumer response: if you give the consumer a choice, you will get a response in the market-place. Testing this response are numerous pilot studies, a selection of which is listed in Table 15.4. Some of these projects have since been abandoned and others may be on the way to being abandoned. So the risk is high, because the funding involved for each player is not just in the billions but in the tens of billions of dollars. And so the carriers are looking for partners in this adventure of high risk and high profits.

Management is also concerned about control, so that they may protect their existence and survival. One example is the publishing industry. Publishers recognize that telecommunications technology can soon transfer a 20-volume encyclopedia in a matter of seconds. So why would anyone own a set of encyclopedias, or even buy books or go to the marbled library building, when they can scan any book, decide if it is worth reading, and then download it from a national digitized library, all while sitting comfortably at home? The same logic applies to films: why go to a cinema when one can see a film-on-demand? And so the moguls of the film industry in Hollywood are concerned. They have the content but not always the control over its distribution on the highway. Thus there has arisen a set of incredibly complex potential alliances of computer manufacturers, software houses, carriers, publishers and Hollywood studios. Often one firm is allied with another in one alliance and competing against the same firm in another alliance. What is at stake is the confluence of the telecommunications industry, the computer industry, the publishing industry and the film industry. Each player has to decide what mix of content and distribution it wants to control, all within a cloud of uncertainty about governmental intervention and consumer response.

Table 15.3 Comparison of telephone, cable and TV

                Cable          Telephone         TV
Affordability   Fairly good    Good              Fairly good
Availability    Good           Very good         Excellent
Bandwidth       Good           Very poor         Good
Acting          OK             Excellent         OK
Capacity        Very high      Not high          High
Content         Questionable   Depends on user   Regulated somewhat
Ease of use     Very good      Excellent         Very good
Security        Not so good    Poor              Not so good
Reliability     Not so good    High              High
Popularity      Not much       High              High
Interactivity   Not possible   High              High

Table 15.4 Sample trials for interactive services on TV

Year   Location     Company         Targeted end-users in 1995
1994   Florida      Time Warner     4,000
1994   Omaha        US West         40,000
1994   New Jersey   Bell Atlantic   7,000
1994   New York     Nynex           800
1994   New Jersey   Bell Atlantic   7,000
1995   California   Viacom          Not available
1995   Washington   TCI             2,000

One cannot push the government, lest it make too many regulations, but one can test the consumer response. The main options are compared in Table 15.5. They are being tested in different pilot studies, some of which are listed in Table 15.2. In these projects, what is being tested is not just technology and content but also the managerial skills and viability of alliances with a mix of partners. There have been failures of alliances, which may partly be due to stock prices and foreign exchange rates dropping, but may also be due to the difficulties of managing the complexity of international, multi-industrial alliances. This may have been the reason for the loss of $3 billion by Sony on its purchase of the Hollywood giant Columbia Pictures, and the $7 billion sale of the film company MCA by the Japanese electronics giant Matsushita. At the time of the purchase in 1990, Matsushita called the MCA purchase a pillar of its multimedia strategy for providing entertainment software that could use its electronic hardware.

Significant mergers and alliances are not always about content, as witnessed by BT (British Telecommunications), which is making strategic acquisitions and alliances in the telecommunications and computer industries not just in Europe but in the US. The search for the right alliance and the proper mix of services offered, with adequate control, was on and will most likely go on right through the 1990s.

Table 15.5 Alternative strategies and their characteristics for an NII

                Cable                        Telephone                        Internet
Topology        Unswitched                   Star or circuit switched;        Packet switched, routers
                                             good switching capability
Protocols       Proprietary analogue         ATM, ISDN                        TCP/IP
Backbone        Analogue, fibre, satellite   Fibre optic                      NSFnet
Orientation     Entertainment                Communication (verbal)           Communication (written)
Key users       Homes                        Everybody                        Business, government and individuals
Relationship    One-to-many                  One-to-one                       Many-to-many
Interactivity   None                         Some                             Some


Issues for NII

There are many issues involved, both technological and organizational. One technological issue concerns the choice of a network architecture and set of protocols. In the US there is strong competition between SNA and TCP/IP. The former has the clout of its designer, IBM, but the latter has the support of the UNIX operating system, and the reality is that it has been adopted by the ubiquitous Internet. In Europe they do not have such a difficult choice. Being the home of international standards organizations, Europe has the OSI model from ISO. Europe has also adopted B-ISDN.

There is also the problem of bandwidth management and of determining what bandwidth will be required and where. Who will want interactive computing and who will not? Who will use information for consumption and who will want it for communication? Or both, and then what would be the mix?

Another technological decision relates to having an ATM that can allocate bandwidth on demand, handle interactive multimedia in large quantities at high speeds, and assign priorities to data even at the cell level, all without any loss of content.
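As a purely conceptual illustration of priority at the cell level, and not a model of real ATM switching, the following sketch uses a Python priority queue so that delay-sensitive cells leave a switch buffer before bulk data; the class and field names are invented for illustration.

    import heapq
    from dataclasses import dataclass, field
    from typing import Optional

    @dataclass(order=True)
    class Cell:
        priority: int                    # 0 = highest, e.g. interactive video
        seq: int                         # tie-breaker preserving arrival order
        payload: bytes = field(compare=False)

    class CellScheduler:
        """Toy buffer that always transmits the highest-priority cell first."""
        def __init__(self):
            self._queue = []
            self._seq = 0

        def enqueue(self, payload: bytes, priority: int) -> None:
            heapq.heappush(self._queue, Cell(priority, self._seq, payload))
            self._seq += 1

        def transmit_next(self) -> Optional[bytes]:
            return heapq.heappop(self._queue).payload if self._queue else None

    # Usage: a video cell (priority 0) leaves the buffer before bulk data (priority 2).
    sched = CellScheduler()
    sched.enqueue(b"bulk-data", priority=2)
    sched.enqueue(b"video-frame", priority=0)
    print(sched.transmit_next())         # b'video-frame'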

As for organizational issues, there is the problem of determining who controls the content of what is transmitted. Surely those who own the films will want to control them, but what about the control of crime in films? What about pornography? And what about the content of what an individual user can send and receive on the NII? Can someone put a computer program on the net that he or she has written, even if it is designed to break a government code? Each of these issues is contentious and is being contested in the courts. Many of these issues are addressed in the US Telecommunications Act of 1996, along with the provision of a V-chip to give parents control over what their children can and cannot see. But these control issues of the 1996 Act are being hotly contested in the courts.

Industry wants government to be interventionist, but only in selected areas like the hooking up of medical specialists with rural clinics; the connection of schools in rural areas to educational institutions; and the connection of poor communities to community centres and national libraries.

Surely the government has some role to play, but a balance has to be found between control of information and entertainment and freedom of expression. How much protection should the couch potato be given if he (or she) wishes to watch mud wrestling all day long? Or gamble half the day away?

Whatever the decisions, whether they be technological, organizational or political, they should result in the NII being open, easy to use, affordable even for the poor, multipurpose, seamless in accessibility and protective of privacy, as well as having a rich and balanced information content. Governments and industries should move towards common standards that are not too rigid, so that technology can grow and innovate without one segment getting an advantage over another.

We conclude this chapter with a positive note from a CEO in the communications industry: 'We are poised on the verge of an information infrastructure that I believe will serve us well in the next century. Its development is inevitable.' (Heilmeier, 1993: p. 34).

Summary and conclusions

We started this chapter by comparing the interstate highway in the US with the information highway infrastructure. It may be no coincidence that the information infrastructure is called a highway, because there is a desire for it to be as successful as the interstate highway was and still is. There are of course commonalities, but there are also differences. The interstate highway was ordained and paid for by the government. It had origins and destinations in a static system bounded by geography. It had a grand design. In contrast, the information highway is not ordained by law or the government but may well evolve through private enterprise and popular support. The infrastructure for the information highway has origins but no destination, and no assurance of where it is going or what traffic mix it will carry. It has no specific design, except that there must be a seamless dynamic web of networks equipped with intelligent ATM switches that will direct (switch) the multimedia traffic roaring down at gigabits per second to where it needs to go, without any loss of content integrity, privacy or security. For local connectivity we may have local telecommunication networks in addition to some private networks and LANs, as well as wire-less and wired telephones.

The NII that will emerge may well not be a monolithic system but many highways all interconnected seamlessly together, with gigabit-per-second speeds and even terabit-per-second capacity. Also, the system will most likely be open, with universal or near-universal protocols and access. Ideally, we may even hope for the integration of 6GCS (sixth-generation computer systems) with communication carriers and suppliers, as well as with the electronics industry, to offer content that is rich and varied.

An NII may lead the country to greater productivity and even greater competitiveness in business and industry. It may create more jobs and may improve our health services, enhance our knowledge, and give us greater access to entertainment. But if this mix of services is used mostly for playing games or entertaining the couch potato, then the information infrastructure will have failed. It would also fail if it did not meet the real-world needs of its end-users and citizenry, or if the system were too expensive or too complex to use. The infrastructure would also be a failure if it were inequitable and left out those who could not access it, be they children, business people, civil servants or blue-collar workers. It is here that government can play a part and subsidize services that give access to schools, community centres and hospitals that would otherwise be inaccessible. The government may also subsidize pilot demonstration projects that are too risky for private enterprise, thereby providing important feedback on end-user needs and acceptance thresholds. This would help raise the levels of interoperability and end-user friendliness of interfaces, and prepare the public for the changes in technology that may well affect the way they work and play. Furthermore, the government should not withdraw its support for free enterprise in computing, or support monopolies or the separation of the telecommunications and computer industries as happened in earlier days in the US.

Standards will be crucial, not just to an NII nationally but also internationally, enabling it to be connected globally. This is the subject of our next chapter.

Case 15.1: Alliances and mergers between carriers

British Telecommunications PLC plans an alliance with VIAG Interkom AG in Germany, aiming to compete with Deutsche Telekom. The joint venture will offer domestic and international private virtual networks for voice and data traffic, with international connections to be handled by Concert, which is the result of a joint venture with MFS Communications in the US. VIAG Interkom will use the 4000 kilometres of fibre optics owned by a regional power company that VIAG owns.

MFS Communications in the US has won a limited licence to operate a fibre network in Paris; the carrier will free users from having to purchase leased lines from France Telecom.

Telecom Finland Ltd has signed an international agreement with MFS to operate an ATM backbone extending to various European cities, including St Petersburg, Russia.

MFS is operating a similar network (as with Telecom Finland) in Frankfurt, Germany, linked by an international backbone to networks in London, Stockholm and the US.

MFS has won an international carrier licence to own an end-to-end subsea fibre between the US and Sweden.

Source: Data Communications, Feb. 1995, p. 18, and July 1995, p. 48.

Case 15.2: Share of the European VAN market

A VAN (Value-Added Network) provides services including e-mail, EDI, reservation transactions, videotex, card authorization and enhanced fax. In Europe, these VAN services are dominated by the PT&Ts, as shown by their shares of the market:

France Telecom   20%
DBP Telekom      13%
Telefonica       10%
Reuters           7%
BT                4%
IBM               4%
Swift             3%
Unisource         3%
Stet              3%
Geis              3%
Others           31%

Source: Ovum Ltd (London), printed in Data Communications, Aug. 1993, p. 18.

Case 15.3: Telecommunications law in the US

In 1995, the telecommunications industry in the US had revenues of over $700 billion, around 6% of the entire US economy. The growth came despite the industry being burdened under the Federal Communications Act of 1934. At that time there were only three TV-cable companies and one telephone monopoly. In the 1980s, there was some deregulation with the dismantling of the one monopolistic AT&T into seven large local telephone companies. Then came 400 long-distance telephone companies (dropping rates by 65%); numerous TV and cable operators; 30 communications utilities; the increasing place in our daily lives of computing, information processing and satellite services; a proliferation of PCs; the use of digitized voice-mail and the merging of data, text and voice transmissions; and the popularity of e-mail and the Internet. Such changes in the environment required a change to the 1934 Act, which resulted in the Telecommunications Act signed in February 1996. The 1996 Act is a comprehensive rewrite of the regulations governing the communications and computer industries, including the deregulation of long-distance and local telephone companies, cable operators and computer manufacturers, allowing each to compete on the others' otherwise protected turf. The 1996 Act allows cable rates to be gradually deregulated by 1999 (allowing the high rates in the short run to provide capital for integration and expansion), and also extends the limit on the number of TV stations allowed in a single company from 25 to 35.

Between 1934 and 1995, the FCC (Federal Communications Commission) supervised the monopolistic carriers. With the 1996 Act, the FCC will encourage creativity and a free-market economy for the communications and computer industries, whilst insisting that all competitors be interconnected and share certain resources and facilities. One problem is that long-distance companies do not have the wires into homes and must rely on local providers. This reliance represents 45% of total long-distance revenues. Economies can be achieved by integrating long-distance and short-distance transmission, resulting in efficiencies and cost reductions that could (theoretically) be passed on to the consumer.

The contrasting philosophical approach of the 1996 Act, compared to the 1934 Act, is that in 1934 the government tried to predict all possible future problems and formulate regulations for each of them; the 1996 Act recognizes that the telecommunications and related industries are too volatile and shifting to enable prediction of the future, and so the industries are allowed to develop and innovate unfettered, with detailed regulations to be legislated as and when the need may arise. The Act has already resulted in a flurry of acquisitions, joint ventures, alliances, partnerships and mergers in attempts to 'bundle' the services of the residential phone with long-distance calling, TV, cable, fax, video, wire-less and cellular communications, paging, on-line information services, Internet services, entertainment like video-on-demand, and interactive services. Conceivably, most if not all services will be available with 'one-stop shopping' through one carrier or utility, in one bundled package and charged for in one monthly bill.1

The 1996 Act does not formally institute an NII, or even a formal national infrastructure, in the US. However, one stated objective of public policy is to have computer access in all homes, libraries and health-care facilities, especially in poor and rural areas, by the end of this century. The FCC has been empowered to encourage computer access in all schools, libraries and health-care facilities, especially those in rural and isolated areas, through inducements and incentives to the appropriate industries.

It is expected that the 1996 Act will encourage investors, reduce the inhibitions of entrepreneurs threatened by regulations, open markets (resulting in reduced rates), and spur innovation that will bring new products and services to the market-place, thus giving consumers greater communication choices and rate options. The Act will trigger an explosion in learning by people of all ages, at all income levels, and in all areas (urban and rural). It is also expected that soon medical doctors and other professionals will be able to offer multimedia state-of-the-art services and consulting advice to all persons who need them, irrespective of where they are. For the telecommunications industry, the Act allows integration, mergers and expansion. Consumers will have integrated and end-user friendly services at a lower price (it is predicted that phone prices will drop by 20% in three years).

There are two additional features of the 1996 Act that were extensions to the 1934 Act. These relate to the V-chip and the display of indecent material on the Internet.

The V-chip provision requires that all TV equipment manufacturers install a microcircuit (in all equipment with a diagonal screen of 13 inches or more), costing about $1 each, in order to enable parents to 'block' material that they consider unsuitable for children. To help parents make the blocking decisions, broadcast programme rating boards are expected to assign a 'grade' classifying each of their programmes based on violence and 'indecent' content.
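The blocking logic that the V-chip idea implies can be sketched in a few lines. The rating scale and names below are hypothetical, invented only to illustrate the threshold comparison; they are not taken from the Act or from any broadcast standard.

    # Hypothetical illustration of rating-threshold blocking in the spirit of the
    # V-chip: a programme carries a grade assigned by a rating board, and the set
    # blocks anything stricter than the limit the parents have chosen.
    # The rating scale below is invented for illustration only.

    RATING_ORDER = {"all-ages": 0, "parental-guidance": 1, "mature": 2, "adult": 3}

    def is_blocked(programme_rating: str, parental_limit: str) -> bool:
        """Return True if the programme exceeds the parents' chosen limit."""
        return RATING_ORDER[programme_rating] > RATING_ORDER[parental_limit]

    # With the limit set to "parental-guidance", a "mature" programme is blocked
    # while an "all-ages" one is shown.
    print(is_blocked("mature", "parental-guidance"))      # True
    print(is_blocked("all-ages", "parental-guidance"))    # False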

The indecency provision makes it criminal to 'knowingly' publish material that is 'indecent for minors' on public networks like the Internet. Violators of this provision face a fine of $25 000 for individuals and $500 000 for companies, in addition to a possible jail sentence of up to two years. The term 'indecent' is open to interpretation, and it is feared that it could well include anything on, say, abortion, rape, sex, nudity, obscenity, and even crime, violence and terrorism. The term 'indecent' is to be defined by the FCC in the 'public interest'. This will not rule out challenges in the courts. The day the bill was signed into law as the 1996 Act, the American Civil Liberties Union and 19 other organizations filed a suit challenging the law as unconstitutional for violating the First Amendment, which protects freedom of expression. Also, 49 national organizations (including one in New Zealand) organized the 'Coalition to Stop Net Censorship'. The Coalition argues that the law will chill free speech, such that public discussion would be diluted to the level of that which is acceptable only to children.2 Other organizations also protested, including some 150 international Web sites, which turned their pages black in mourning for 48 hours.

Judge Greene, known as the telecommunications czar in the US for having made many important rulings concerning communications, including the 1984 decree to break up AT&T, is concerned that the 1996 Act may not 'prevent domination' by a few large corporations over 'what is rapidly becoming the central factor of American life'.3

While the resolution of the legal battle over the 1996 Act is still uncertain, there is less uncertainty about the economic consequences: the restructuring of the communications industry and the likely dominance of the US in the global telecommunications market.

Sources:
1. The Washington Post, Feb. 2, 1996, p. A15.
2. Gopher at gopher.panix.com:70/0/vtw/exon/faq.
3. Wall Street Journal, Feb. 12, 1996, p. B2.

Supplement 15.1: Milestones towards the development of an NII

1966  ARPANET, birth of networks.
1972  SNA, the first network architecture.
1976  X.25, the first public networking service.
1978  ETHERNET, the first LAN standard.
1980  Internet, the dawn of networking for the masses.
1982  Modified final judgement and the break-up and deregulation of AT&T.
1985  Deregulation of the telecommunications industry in the UK.
1993  British Telecom buys 20% of MCI and the rush to merge and acquire telecommunication assets starts globally.
2015  Planned completion of the 'Next Generation Communications' infrastructure in Japan.
2015  Planned date for a nation-wide fibre infrastructure in the US.

Bibliography

Benhamou, E. (1994). NII development: where do we go from here? Telecommunications, 28(1), 23–24.
Heilmeier, G.H. (1993). Strategic technology for the next ten years. IEEE Communications Magazine, 31(2), 30–37.
Jackson, D.S., McCarroll, T., Ressner, J. and Woodbury, R. (1995). Battle for remote control. Time, Spring, 69–72.
Kay, K.R. (1994). The NII: more than just a data superhighway. Telecommunications, 28(1), 47–48.
Lippis, N. (1994). The new public network. Data Communications, 23(17), 60–64.
Mercer, R.A. (1996). Overview of enterprise network developments. IEEE Communications, 34(1), 30–37.
Pelton, J.N. (1994). CIO survey on the national information infrastructure. Telecommunications, 28(12), 27–32.
Reinhardt, A. (1994). Building the data highway. Byte, 19(3), 46–74.
Taylor, M. (1996). Creating a European information infrastructure. Telecommunications, 30(1), 27–32.
Telecommunications in Europe: creating new links. International Herald Tribune, 14 October 1993, 13–24.


16

GLOBAL NETWORKS

The world has become a market-place for information . . . Computerized data recognizes no border check-points, customs duty or immigration officers.

Wayne Madsen

Introduction

In earlier discussions we mentioned or implied many problems, like bandwidth capacity limitations, control of 'content', and privacy and security considerations. All these problems are magnified in global networks. There are also problems of global telecommunications relating to developing countries. In addition, there are problems unique to the globalization of telecommunications, like international standards, international protection of intellectual property, transborder flow of information, and the brain drain resulting from global outsourcing (the contracting of computing services abroad). These are some of the topics that we shall examine in this chapter.

An NII is a subset of a global network, and since we do not have any NIIs it would be logical to conclude that we do not have a global network. And yet we do, at least a de facto global network: the Internet. We will examine the Internet in great detail in another chapter, and we will see that it was not the consequence of any grandiose careful design, or indeed of any design at all. Also, it has no formal organization for its maintenance and control. It certainly was not expected to carry the traffic for businesses, though you can do some business on the Internet and can order flowers in one country to be delivered in another. However, you would be wise not to make any large financial transaction on the Internet, as it is not yet safe or secure for monetary transactions. We have e-mail, but we do not have universally accepted e-money (electronic money). If we had, we could cruise the souks of Istanbul and the bazaars of Bombay. We could drop in at the Victoria and Albert Museum in London or the Louvre in Paris for an interactive look while still sitting in our chairs in Australia or New Zealand. We could get sick in one country and have lab results sent to another country for an international expert's opinion. We could order a rare book, a best seller or a professional book available anywhere in the world and read it in the comfort of our homes. What would that do to global levels of education and development? What would that do to our paradigm of higher education? How will telecommunications affect the development of developing countries? Will it improve communications and bring the world together, or will there be a gap between the information rich and the information poor? How will that affect our life-style and standard of living? We shall examine these implications and the applications of telecommunications in four chapters later in this book. In this chapter we will be concerned only with applications that have an important global dimension. We will make no value judgements; we are more interested in the technological feasibility and prerequisites of these applications. For example, the solution of the international protection of intellectual property rights is a prerequisite to having free access to books in a digital library.

It is these subjects that we will address in this chapter. We examine one important consequence of global networks, that of global outsourcing, and then discuss some outstanding issues in global networks, namely the transborder flow of data, the protection of intellectual property and the viability of a secure means of making transactions on the global network. First, however, we will take an overview of global networks.


Global networks

A global network connects PCs, workstations and data centres around the world through fibre optic, satellite and microwave links. There were, in the mid-1990s, over 70 countries with full TCP/IP Internet connectivity, and about 150 with at least e-mail services through IP or via more limited forms of connectivity. Many of these countries in the 1990s are seeing the convergence of the telecommunications, information and entertainment industries, along with the rise of multimedia traffic across their national borders.

Some applications are mission critical, where timely data is crucial, as with the Amadeus reservation centre in Germany, but the 'killer' applications are e-mail and messaging. Global systems for information processing are not yet viable, partly because of the lack of standards in important areas, the absence of consistent end-user interfaces, the shortage of well-trained personnel (administrative and end-users) to absorb the technology, the narrow spectrum of services offered globally, and the lack of a predictable global legal and regulatory framework for fair competition and for incentives to invest in telecommunications and networking. This list implies that there are prerequisites for a successful global network. More specifically, the prerequisites include: technological infrastructure, the solution of international issues, government support, control of content, and international standards. These will now be discussed below.

Technological infrastructure

What is needed for telecommunications is a reliable technological infrastructure that includes an uninterrupted power supply, a trained cadre of technical personnel, and experience in computing. A physical telecommunications system is necessary so that it can be coupled with at least grassroots connectivity like Fidonet, which has a simple protocol for storing and forwarding messages and a strong error-checking mechanism for ensuring correct transmissions.
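As a conceptual sketch only, and not the actual FidoNet protocol, the following fragment shows the essence of store-and-forward with error checking: a node accepts a message only if its checksum matches, holds it, and later passes it on to the next hop. All names and the choice of checksum are invented for illustration.

    import hashlib
    from collections import deque

    # Conceptual store-and-forward sketch (not the actual FidoNet protocol).

    def checksum(payload: bytes) -> str:
        return hashlib.md5(payload).hexdigest()

    class StoreAndForwardNode:
        def __init__(self, name: str):
            self.name = name
            self.outbox = deque()          # messages held until the next connection

        def receive(self, payload: bytes, declared_checksum: str) -> bool:
            """Store the message only if it arrived intact; else ask for a resend."""
            if checksum(payload) != declared_checksum:
                return False
            self.outbox.append(payload)
            return True

        def forward_all(self, next_hop: "StoreAndForwardNode") -> None:
            """Pass every stored message on to the next node, with a fresh checksum."""
            while self.outbox:
                payload = self.outbox.popleft()
                next_hop.receive(payload, checksum(payload))

    # Usage: a relay accepts an intact message and later forwards it onward.
    relay, destination = StoreAndForwardNode("relay"), StoreAndForwardNode("dest")
    msg = b"hello from a rural clinic"
    relay.receive(msg, checksum(msg))
    relay.forward_all(destination)
    print(list(destination.outbox))        # [b'hello from a rural clinic']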

The technological infrastructure includes equipment for telecommunications, whose availability varies greatly in different countries with varying priorities. In Russia, there are $40 billion in projects to lay fibre optics. In Thailand, the emphasis is on cellular phones. In Vietnam the declared desire is to go digital. In Hungary, the drive is towards an advanced cellular digital communication system. And in China, it is estimated that over $50 billion is needed, including the cost of increasing its telephones by some 80 million. For some countries, backwardness may be an asset: when a country goes from little or no infrastructure to the latest technology, it will leap-frog entire stages of development. This happened with the telephone industry in France in the 1970s and the 1980s.

International issues

An organization with a global network must understand and follow the many different regulatory standards concerning the transmission of data across national borders. It must also observe working hours that allow for the different time zones and be familiar with the different languages used.

Government support

A key success factor for any information systems application is the support of top corporate management. For global systems it is also necessary to have the support of the government and its PT&Ts, where much of the power resides. They may have to build and maintain a national backbone and help provide gateways for other nations. The attitude of governments towards open systems and the privatization of the telecommunications industry is important. Though there are some 40 countries that have some privatization and liberalization, there are only two, the UK and the US, that have a liberalized telecommunications industry. The deregulation of AT&T (in the US) in the 1980s saw the rise of MCI and Sprint, the plummeting of rates and an increase in services in the US.

Not all countries recognize that privatization is important or even relevant for them. Many countries attach low priority to privatization and claim that attending to more basic needs, like food and hunger, clean water and shelter, poverty and debt payments, is more important than the distribution of data and information. Poor countries with high unemployment assign a very low priority to labour-saving devices, and computing and telecommunications fall into that category.

Some countries think that the free distribution of information can be more detrimental than beneficial to their environment, especially the free movement of pornographic or violent material. Other countries acknowledge the need for a free information flow for their industrialization and development, which may even help them to leap-frog stages of development.

Despite the contradictory advantages and disadvantages of regulation and privatization, many countries have found a balance in favour of at least privatization and universal access, even where their PT&Ts hold a monopoly of 100% of the voice service market. A good example of this is Singapore, which has made many moves towards making the Internet more accessible to its citizens despite its strong concerns about the importation of pornography. Other examples in Asia are Indonesia and Malaysia, with Australia and New Zealand coming close to being in the zone of 'no-regulation'. Europe is not too far behind, despite the slowness of the adoption of the European Commission's proposals to pull down barriers to the privatization and liberalization of their telecoms.

The monopoly of telecom companies often gives them power over pricing, and so prices cited by information providers may vary up to tenfold across countries. In Pakistan and the Philippines, prices for telecommunications are raised in order to attract foreign exchange.

Control of content

Many countries question the value of the content of much that may roar down the global networks. To them, the advantage of 2000 newsgroups or 500 channels is irrelevant when the signal-to-noise ratio is very low anyway and bandwidth is very expensive. They argue that telecommunications are important to high-tech personnel but of marginal value to countries with low-tech people, where 'small is beautiful'. They also argue that telecommunications will carry advertisements for products of developed countries which not all countries can afford or need. They do not necessarily want to be influenced by the life-styles and cultural values of the sending country. This is an old argument that has been around UNESCO for years and runs into the philosophical argument of censorship versus freedom of speech and expression, and the right of a government to determine what is best for its population. 'Many countries believe that data pertaining to economic potential and structure is a national resource and that their governments should be able to exercise control over its collection, use and distribution.' (Woody et al., 1991: p. 36).

Standards

Acceptance of international standards is important for the interoperability of telecommunication devices and hence for interconnectivity. The acceptance, however, can come only after international standards are agreed, and this is where the bottleneck often appears. Getting different countries to agree to international standards is often not a technological but a political issue, which requires the harmonization of conflicting national policies and local regulations, a process that is slow and frustrating at best.

Consequences of global networks

The satisfaction of the prerequisites for global networks can result in many applications. These are discussed in four chapters in the next part of this book. The most common applications globally are e-mail and file transfer. These are very end-user friendly. Putting a 'home page' on the Internet requires the use of a simple language called HTML, which defines the look of the page so that it can be read by anyone on the Internet irrespective of their hardware and OS configuration. However, some applications cannot become common until some outstanding issues concerning global networks are resolved. For example, we cannot have the advantages of a digitized library (despite what this could mean for education and development) without resolving the problem of the protection of intellectual property. Likewise, we cannot expect businesses to use global networks very much unless data and funds can be transferred securely and safely across national borders. These issues will be examined later in this chapter.
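To make the point about HTML concrete, the fragment below simply writes out a minimal page of the kind described; the page content and file name are invented for illustration, and any browser on any hardware or operating system could render the result.

    # A minimal 'home page': a small text file of HTML mark-up that any
    # browser, on any hardware or operating system, can render.
    # Content and file name are invented for illustration only.

    HOME_PAGE = """<html>
      <head><title>A Simple Home Page</title></head>
      <body>
        <h1>Welcome</h1>
        <p>This page can be read on any hardware and operating system.</p>
        <a href="http://example.org">A link to another page</a>
      </body>
    </html>
    """

    with open("home.html", "w") as f:
        f.write(HOME_PAGE)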

Important consequences of global networks are: the impact of telecommunications on developing countries, global outsourcing, transborder flow and the protection of intellectual property. We start our discussion with the consequences of telecommunications for developing countries.


Telecommunications and developing countries

Telecommunications can help developing countries (and developed countries) through increased trade, including trade with multinational companies. For some developing countries this may result in a negative balance of payments, because the imports may tend to be larger than the exports resulting from telecommunications. However, if the imports (of raw materials and capital goods) are judiciously selected, then they can improve productivity and gross national product in the long run. Developing countries can also benefit from businesses and industries now possible because of telecommunications; one example is financial services. Telecommunications also facilitates all business dealings and their conclusion through EFT, electronic funds transfer.

Another very important use of telecommunications is for faster and more reliable communication through e-mail and file transfer using global networks like the Internet. This is partly between administrators and managers in developing countries who once studied in developed countries and can still benefit greatly by maintaining communications with their contacts, teachers and professional mentors there. Traditionally, the contact was through mail, telephone and even diplomatic pouch. But at best this is slow (even on the phone, where getting an international connection is not fast or easy, even after allowing for time differences) and not always reliable.

Also, there are many consultants in developed countries who have worked in developing countries and have much knowledge of, and even dedication to, these countries, but all this is lost because of a lack of communication. Now these consultants can keep in touch with their contacts in developing countries, giving advice and even downloading materials and computer programs over the Internet as and when needed. Communication with experts, consultants and professional mentors can be fast and even interactive, or at least on-line, on both personal and professional levels. These contacts can refer a query to others when necessary and provide information on the latest state-of-the-art technology.

The process of entering the global community is as much one of entering a new culture as it is one of learning about and exploiting the new set of technologies . . . future generations of information services and products that live on the network, including intelligent user agents and knowbots, are not likely to find prenet analogues as easily. To understand the electronic network culture and be most effective in exploiting it, one must live in and explore it. (Sadowsky, 1993: p. 46).

To achieve electronic communication, a developing country must have a basic telecommunications infrastructure. This infrastructure is partly technology, which will include the availability of adequate and reliable bandwidth, reliable telecommunication media, connectivity between critical network links, a fairly uninterruptible power supply, a supply of spare parts, and access to maintenance services.

This infrastructure is capital intensive and may not be affordable by countries that have to consider the need of foreign exchange for more mundane but essential imports for daily living, like food and medical supplies. These basic needs have a higher priority than the desire for interactive communications, interconnectivity and better navigational tools for the Internet, even if this means access to an information-rich environment.

The need for a telecommunications infrastructure cannot always be justified on the platform of communications. Not all developing countries want unfettered communications on the Internet, for various reasons. Some do not want material that they consider pornographic, even if developed countries may consider it artistic. Some countries may also want to restrict information that they consider harmful to their society, or may object to material that they consider subversive. For example, in 1995, Chinese dissidents in the US sent a sea of anti-governmental material over the Internet on the occasion of the anniversary of the Tiananmen Square incident. The government was caught by surprise and may well prevent that from happening again. But there may be other surprises in store. As with security, you can build a 'firewall', but there are many creative people around who want to break that wall, for purposes that may be ideological or even just for the fun of breaking the rules. Controlling content is also a problem in developed countries, where parents want to control the material that their children see on the Internet. This is possible with software that can block material from senders with obvious screen names or a history of sending certain types of material. A 1996 telecommunications law in the US requires a V-chip that allows control over content. But controlling content world-wide, even with international agreements, may be very difficult.

Discriminating whether information coming from MIT (Massachusetts Institute of Technology) is technological or ideological would be difficult.

The electronic infrastructure also includes the knowledge and ability to navigate, locate and retrieve information over the world-wide global networks. Also, you need the equipment to send and receive electronic information. In some developed countries there is a PC on most office desks and in most homes, and the typewriter is no longer in local production. In many developing countries they do not even have typewriters on every desk, and when they do, the typewriter is very much part of their way of communicating. Changing from the typewriter to the PC for daily communications requires a sea change in attitudes and life-style.

Global outsourcing
Outsourcing is the subcontracting of services. In the early days of outsourcing it was mainly the preparation of data, but data collection has since been largely automated. The need for outsourcing has not diminished, however; it has shifted to programming, and in such services the developing countries have a comparative advantage. All the advantages, risks and limitations of outsourcing within national borders apply to outsourcing abroad, plus much more. It is these additional advantages and limitations that are the subject of the rest of this section.

The most obvious advantage of global outsourcing is cost. The ratio of MIPS (millions of instructions per second) cost to labour cost in a developing country is almost 10 to 50 times that of a developed country (Lu and Farrell, 1990: p. 290). This is partly because the cost of the computing equipment required (the numerator of the ratio) is high in developing countries, owing to the high cost of transportation, import tariffs and the many middlemen involved. The other reason for the high ratio is that the cost of labour (the denominator) is low in developing countries, even for skilled computer specialists. However, these costs may drop over time, partly because of the large and increasing pool of computer specialists. In India alone, there are over a quarter of a million new graduates per year who are technically trained, speak English and are 'abundantly available at very low salaries' (Apte, 1990: p. 293).
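The figures below are invented purely to illustrate the arithmetic behind that 10 to 50 times ratio; they are not the values used by Lu and Farrell.

# Illustrative arithmetic only: how the MIPS-cost-to-labour-cost ratio can
# differ by a factor of up to 50 between a developed and a developing country.

def mips_to_labour_ratio(cost_per_mips, annual_programmer_salary):
    return cost_per_mips / annual_programmer_salary

developed  = mips_to_labour_ratio(cost_per_mips=1_000, annual_programmer_salary=50_000)
developing = mips_to_labour_ratio(cost_per_mips=2_500, annual_programmer_salary=2_500)

print(f"developed country ratio:  {developed:.3f}")                 # 0.020
print(f"developing country ratio: {developing:.3f}")                # 1.000
print(f"developing / developed:   {developing / developed:.0f}x")   # 50x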

Good programmers and analysts are not the preserve of developed countries. One American hiring programmers in Austria and Eastern Europe in the 1970s was asked why he was looking outside America when Americans had the best departments of computer science and were the leading manufacturers of computer equipment. The answer was that many programmers outside America have fewer resources of computer memory and run time, so they have to work harder and write more efficient programs. This statement may well be true of many a developing country in the 1990s. However, many of the professionals involved in the outsourcing of computer services work in family businesses with little or no experience of large, complex information systems development.

There are other advantages to developed countries in the global outsourcing of computer development projects. These include access to a pool of professionals in developing countries without having to pay generous fringe benefits or contend with the unions of developed countries. Global outsourcing also offers knowledge of, and a point of entry into, the market of the vendor country.

Countering the advantages of global outsourcing are the many risks, disadvantages and limitations involved. The risks include the political and social instability of the vendor country, changing national laws regarding foreign investment, and national political ideologies that are sometimes hostile to collaboration with foreigners.

The disadvantages of global outsourcing to developing countries are that it requires a much greater degree of monitoring (and control) of quality, process and time of completion. There are also the additional costs of telecommunications and travel, and the cost of training vendor personnel in the organizational and national culture of the host country. There can be problems with foreign exchange, time zones and the negotiation of legal contracts for the outsourcing. There is also the problem of finding appropriately trained information systems personnel with an adequate infrastructure of telecommunications and other required computing resources. Moreover, there may be resistance among systems professionals in the developed countries, especially in times of recession, when outsourcing personnel may become potential competitors.

One important obstacle to global outsourcing to developing countries is their lack of experience in many types of information systems development project. Akinlade of Nigeria makes the following observation:

In a typical developing country, several application generators are available so that developing non-trivial software systems from scratch is not only unnecessary but is also uncommon . . . the software engineer in a developing country may never be able to put into practice, by working on real-life, large, complex systems, the theory that he has learned . . . his working environment does not motivate him to play an active part in the country's evolution of this very exciting discipline. (Akinlade, 1990: pp. 71-2)

There is one important problem arising partly from global outsourcing: it causes a brain drain from the vendor country to the developed country. There is already a brain drain resulting from students of developing countries who do not return from the developed countries where they go for their advanced education. The developed country does not always discourage this brain drain, because in many cases it pays for the education through scholarships and jobs. To add to this, there is the brain drain resulting from global outsourcing. This, argue the developing countries, is unfair and even an act of stealing by the developed countries. The professionals involved are mostly educated in the developing countries and trained on local problems. They go to the developed countries to meet their outsourcing clients and are then hired by those clients. For the developed countries these human resources are relatively marginal, but for the developing countries they are crucial to economic development. Some developed countries compound the problem with liberal immigration policies. A skilled programmer or analyst can get an immigration visa to the US, and there are firms in the US that aggressively import computer programmers and analysts and facilitate their immigration, which only increases the brain-drain problem.

Given all the drawbacks and limitations of global outsourcing, it is still attractive to many developed countries to have their software constructed abroad, including networking software and even entire network services. The basic players and content of global outsourcing are summarized in Figure 16.1. One can expect global outsourcing to grow vigorously in the late 1990s and early 21st century. What inhibits greater global outsourcing are the constraints on transborder flow and the lack of protection of intellectual property. These topics we discuss next.

Transborder flow
Changes in the world political scene offer opportunities for trade and communications across national borders. Large masses of people in countries like the former USSR, Eastern Europe and China have joined market economies.

[Figure 16.1 Types of outsourcing: under contract, the outsourcing firm supplies the vendor with raw data, program specifications and specifications for services; the vendor returns machine-readable data, working programs and the services delivered.]


Other countries, like India, that were market-oriented economies but unfriendly to Western computing companies, are now more open and less afraid of foreign domination. The GATT agreement of 1994 has reduced trade barriers and acknowledged the intellectual property rights of the computing industry. If these rights are respected by the important trading countries, this will contribute to the opening of new markets for information-based services and lead to the globalization of IT. The growth will come partly from multinational companies that will prosper, and the international firm will become the norm rather than the exception. Information systems will share data across the many barriers of borders and geography whilst overcoming the risks of foreign exchange and national political instability. There are, however, still problems posed by cultural differences between managers, end-users and developers, and by the wage differentials between countries.

Since transborder flow is the concern of multiple countries, it has been the subject of discussions in international bodies. In 1985, the 24 countries of the OECD (the developed countries) adopted a Declaration on Transborder Data Flows, in which they agreed to:

1. promote access to data and information and related services, and avoid the creation of unjustified barriers to the international exchange of data and information;

2. seek transparency in regulations and policies relating to information, computer and communications services affecting transborder data flows;

3. develop common approaches to dealing with issues related to transborder data flows and, when appropriate, develop harmonized solutions;

4. consider the possible implications for other countries when dealing with issues related to transborder flows. (Gassman, 1992: p. 204)

Part of the problem of seamless transborder data flow is concern for the security and privacy of the data that flow across national borders. Sixteen of the 24 OECD member countries had by 1988 adopted guidelines which state eight principles (Gassman, 1992: p. 204):

1. collection limitation,
2. data quality,
3. purpose specification,
4. use limitation,
5. security safeguards,
6. openness,
7. individual participation,
8. accountability.

Telecommunications and networking can bring instant international communications across national borders, though there may still be restrictions on the transborder flow of data and information. International telecommunications has helped global outsourcing to the advantage of developed countries, for it has made developing countries more accessible. But there is a downside: developed countries are attracting computer personnel from the developing countries, and so there is a brain drain from developing countries. Furthermore, developing countries may export (by design or accident) computer viruses across national frontiers; one such virus, made in Pakistan, devastated billions of bytes of data all over the US. Telecommunications across the oceans has also encouraged unauthorized access to computer systems, such as the espionage for the former Soviet KGB by hackers in West Germany. The threat of espionage from the former Soviet countries may have decreased, but international sabotage is still a threat. There is a story that the software bought by Iraq for the control of its air defence system contained a computer routine that was activated remotely and destroyed the Iraqi command and control system just before the Gulf War. If this story is true, it may never be confirmed. But it does raise grave possibilities, such as the 'bugging' of a country's software systems by its enemies or even by terrorists. There is also the danger that espionage will shift from the political to the economic stage. And so the need for security is a very important concern of all managers of development: not just concern for international violations of security, but even for violations of a firm's database. This is of concern not only to multinational firms but to all firms that have secret and confidential data and are vulnerable to unauthorized access. Firms must also be concerned with their proprietary software, despite the GATT agreement of 1993-94 that protects copyrights on software, because there is some uncertainty about the observance and enforcement of the protection of intellectual property rights. Every year billions of dollars' worth of software is pirated, and development managers must continually try to secure their systems against such piracy.


Protection of intellectual property
Transfer of technology can be inhibited by the lack of protection for intellectual property, which includes patents, copyrights and trade secrets. From the point of view of the management of information systems development, copyrights are the most important. Under the Copyright Act of 1976 (in the US), computer programs and data are classified as non-dramatic literary works deserving copyright protection. Software piracy is the unauthorized copying of computer programs for the purpose of selling the illegal copies, or for unauthorized commercial or even private use. Programs that would sell for $600 are sold by software pirates for as little as $15 in shopping malls in Hong Kong and Seoul. Computer dealers download unauthorized copies of software to the hard disks of computers that they sell, or offer pirated software free as an incentive to buy hardware. Thus there is little incentive for customers to buy programs legally constructed by software houses in developed countries.

It is estimated that in 1993 the US software industry lost $2.1 billion in Japan and about $10-12 billion world-wide in developing countries because of software piracy (Gwynne, 1992: p. 15). The BSA (Business Software Alliance), operating in the US, estimates that 90% of all software in use in Asian countries is pirated (Weisband and Goodman, 1992: p. 87).

What makes software piracy different from other thefts is that its occurrence is not as detectable or obvious as, say, the theft of jewels. Also, software piracy is cheap, easy and efficient. There are ways to protect against unauthorized copying, but there are also ways for the pirate to get around them by opening up the programs and rewriting them.

The other problem is that software piracy is often not considered wrong or unethical. For some, it is like driving a car at 60 or even 80 m.p.h. in a 55 m.p.h. zone. Some think of it as a transfer of technology to developing countries that is long overdue. Whatever the motivation or justification, the temptation to pirate software is great.

The developed countries, however, object to the 'free ride', especially when software production world-wide is expected to grow from $36.73 billion in 1989 to around $340 billion by 1996 (Schware, 1990: p. 101). The US is the source of much new and innovative software and is very concerned. It is proceeding at all levels: international, bilateral and unilateral.

At the international level, the US demanded a strengthening of the protection of intellectual property lacking under the multilateral Berne Convention, and this was an important objective of the Uruguay round of GATT in 1993. It was achieved, and over a hundred countries have signed the agreement. But there are countries outside the GATT agreement, and enforcement of the agreement, from the point of view of developed countries, is somewhat inadequate or even in doubt.

At the bilateral level, the US has agreements with many countries, but not all of these agreements have been enforced. For example, under the agreement with South Korea it took four years before the government brought its first copyright action. In China, there was no mechanism to enforce the copyright protection law agreed in June 1992, a source of great friction between the two countries between 1994 and 1996.

If the efforts at the international and bilateral levels fail, the US government can act unilaterally. Section 301 of the US Trade Act was amended to toughen penalties for countries that do not protect American intellectual property rights. Then, in 1993, the US government suspended duty-free status for Cyprus on the grounds that Cyprus had imported almost four million blank cassettes, possibly to copy and then export both software and other intellectual property such as films and music. This action against Cyprus was intended as a strong signal that the US will no longer tolerate intellectual piracy that has gone unpunished for many years.

Outside government, there are associations for software protection like the BSA and the Software Publishers Association in the US; CAAST, the Canadian Alliance Against Software Theft, in Canada; FAST, the Federation Against Software Theft, in Germany; and INFAST, the Indian FAST, in India. In the US, trade associations like the International Intellectual Property Alliance and the Washington-based BSA conduct their own crusade against software piracy. For example, they have identified Ong Scow Peng, based in Singapore, with illegal programs worth several million dollars; the stated goal of the BSA for Mr Ong is incarceration. Thus far the attempts at incarceration have been confined to individuals, but action against an entire nation, even by a software industry acting independently of its own government, could be quite harmful to developing countries. In our world of ever-changing computer technology, which emerges and becomes obsolescent very rapidly, it is important for developing countries to have good cooperative relationships with the leaders of computer technology in the developed countries.

The actions by the US (governmental and private) are very bad news for developing countries, for they may now have to pay for the software (and copied hardware) that they use. But in the long run, control of software piracy is good news for developing countries. It will increase investment and licensing by developed countries and will strengthen the computer industry (especially the software industry) of developing countries. It will also increase outsourcing by developed countries, for they will be less afraid of their software and data/knowledge being pirated. So, in the long run, developing countries will benefit. But, paraphrasing John Maynard Keynes, the software pirates may say: in the long run we are all dead and gone anyway, so why not make money while we can. And there are other ways to make money: doing business on the Internet.

Global network and business
The combination of processing text and images on the Internet makes it a good target for use by business. A business can pay a fee of around a thousand dollars a year and have a 'home page' of advertising material that is theoretically accessible by around 25 million people connected by over 20 000 computer networks run by universities, businesses and governments. The customer can access the system, for example the ISN (Internet Shopping Network), by entering an address as simple as http://shop.internet.net. The customer can then access the ISN server and browse through the ISN catalogue, which lists up to 20 000 computer hardware and software products. If an order is placed, the customer gets confirmation messages via the Internet and receives the goods within two days if in the US. But there are problems with shopping in the electronic shopping mall. Businesses face computer fraud, and customers are afraid of giving their credit card numbers on the Internet. To solve some of these problems, a consortium of large electronic firms called CommerceNet, with $12 million in state and federal funds, is working towards making on-line business easier and safer to use. Meanwhile, there is the netiquette that disciplines those who take advantage of the system. For example, spamming (flooding the system with the same posting) is looked down upon, as is the regular repetition of the same blatant advertisement. However, netiquette, as well as the protocols on the Internet, is changing all the time, and any details stated here will need updating to remain valid.
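For readers unfamiliar with what 'entering http://shop.internet.net' involves, the sketch below shows the equivalent operation in a modern scripting language: an HTTP request for a shop's home page. The address is simply the one quoted above and may no longer resolve; it is used here only as an illustration.

# A minimal sketch of fetching a shop's home page over HTTP.
import urllib.request

def fetch_home_page(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as response:
        return response.read().decode("utf-8", errors="replace")

if __name__ == "__main__":
    try:
        page = fetch_home_page("http://shop.internet.net")
        print(page[:200])          # show the first part of the catalogue page
    except OSError as exc:
        print("Could not reach the shop:", exc)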

The greatest interest to business lies in the confluence of PCs, commerce and money. However, for business to benefit globally from this confluence and from global telecommunications, there has to be a way to transfer funds globally. We already transfer funds by EFT (electronic funds transfer), but that is between banks. What is needed is a means for individuals to transfer money to any other party. Credit cards are such a vehicle, but transferring funds by credit card globally on the Internet is asking for trouble. A secure transfer system must be able to confirm tens and hundreds of transactions a day. Software houses and credit card companies are now working on ways to encode credit card numbers on-line without any danger of misuse, but it may take time before such a system is thoroughly debugged and certified as easy and safe to use globally. Smart cards with stronger safeguards against misuse are being developed by Mondex, a consortium of British banks. Another approach is 'money tokens', which are prepaid smart cards. But what if you wish to spend more than you prepaid for? What is needed is e-money, electronic money, that can be transferred electronically, safely, quickly and cheaply.

The concept behind e-money is to provide an electronic signature (like the watermark on some bank notes), with the recipient checking it with the issuer. Each issue is identified uniquely, as a bank note is by a serial number. If copied, not much harm can be done, for the copied signature is only good for one transaction, and large transactions will be verified before being accepted.
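A minimal sketch of that idea, using assumed names and a keyed hash as the 'signature', is given below: the issuer signs each token's serial number and amount, and keeps a record of spent serials so that a copied token is only good once. Real e-money schemes use public-key cryptography and far more elaborate protocols.

# Sketch of issuer-signed tokens with one-time serial numbers.
import hmac, hashlib, secrets

ISSUER_KEY = secrets.token_bytes(32)      # the issuer's secret signing key
spent_serials = set()                     # serial numbers already redeemed

def issue_token(amount: int) -> dict:
    serial = secrets.token_hex(8)         # unique serial, like a bank-note number
    message = f"{serial}:{amount}".encode()
    signature = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    return {"serial": serial, "amount": amount, "signature": signature}

def redeem_token(token: dict) -> bool:
    message = f"{token['serial']}:{token['amount']}".encode()
    expected = hmac.new(ISSUER_KEY, message, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, token["signature"]):
        return False                      # forged or altered token
    if token["serial"] in spent_serials:
        return False                      # a copy: this serial was already spent
    spent_serials.add(token["serial"])
    return True

t = issue_token(25)
print(redeem_token(t))        # True  - first use accepted
print(redeem_token(t))        # False - the copy is rejected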

Various approaches to e-money are being pursued by firms like DigiCash in Holland and CyberCash, an American firm working out of Austria. An electronic payment system may also make life easier for money launderers and tax dodgers. Thus, the issues raised are not just for banks but for the national agencies trying to prevent such practices, especially from crossing their borders.

E-money can be called digital money, because the code for the money and the parties involved are digitized. Digital money does provide a medium of exchange, but it does not fulfil the second role of money, that of being a store of value. Digital money can be refused as legal tender. In our traditional monetary system, people deposit money in a bank and, against it, take loans, draw credit or even demand cash. For this to happen with digital money, we would need to have legal money in the real economy deposited for each unit of digital money. This is actually what CyberCash proposes to do: it will hold real money in escrow for its equivalent in digital money. But this raises other questions. Should the equivalent real money earn interest and, if so, at what rate? Can digital money be used as collateral for a loan? Can it be exchanged for foreign currency and, if so, at what rates? Answers to these questions rest for now with national central banks and national monetary policies. National governments may well hesitate to abdicate these decisions to cyberspace or cyberpunks.

There is a demand for digital money because traditional money, as bills, can be copied by high-quality copiers, and guarding paper money is expensive. ATMs (automatic teller machines) are popular but not always safe from muggers. Credit cards are also popular but not secure against fraud or violations of privacy: they make it possible to construct a digital profile of the customer and trace a person's card transactions with 'decidedly discomforting intimacy'. Memories of George Orwell's Big Brother looking over your shoulder immediately come to mind; privacy is under threat. What is needed is not just a 'digital signature', which requires establishing an authenticity that can be traced back for undesired purposes, but a 'blind signature' that is easy to use and has the elegance and anonymity of paying cash, yet retains the store of value. This approach will protect some (especially the money launderer, even across borders) but will conflict with governments' desire to control such illegal transfers.
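The 'blind signature' mentioned above is a real cryptographic technique, due to David Chaum. The toy sketch below shows the RSA version with deliberately tiny, illustrative numbers: the bank signs a blinded serial number, so the resulting signature verifies normally yet the bank never sees which token it signed. Production systems use large keys and additional safeguards.

# Toy Chaum-style RSA blind signature (illustrative key sizes only).
import math

p, q = 61, 53
n, e = p * q, 17                      # toy RSA public key
d = pow(e, -1, (p - 1) * (q - 1))     # toy RSA private key (Python 3.8+)

def blind(serial, r):                 # customer blinds the serial number
    return (serial * pow(r, e, n)) % n

def sign_blinded(blinded):            # bank signs without seeing the serial
    return pow(blinded, d, n)

def unblind(blind_sig, r):            # customer removes the blinding factor
    return (blind_sig * pow(r, -1, n)) % n

def verify(serial, signature):        # any merchant can check the signature
    return pow(signature, e, n) == serial % n

serial, r = 1234, 2029                # r must be coprime with n
assert math.gcd(r, n) == 1
signature = unblind(sign_blinded(blind(serial, r)), r)
print(verify(serial, signature))      # True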

The problem is one of balancing the legitimate need for privacy protection against violations of privacy and potential surveillance; the understandable desire for security against fraud; and the safeguarding of the national monetary system against misuse such as money laundering. There is also a conflict between the needs of financial institutions like banks and those of individuals. Individuals are concerned that privacy and security may be compromised, while banks are concerned to have a system that is cheap and fast to use for transactions across borders.

What some people think we need, then, is digital money. Actually, many transactions today are already conducted with digital money, if you consider all the EFT transfers which are digitized in cryptographically sealed streams of bits moving between government clearing houses. What is not yet digitized satisfactorily is the last mile in money transactions, that of payments in cash. In addition to e-money and digital money, there is also the possibility of the electronic wallet or electronic purse: a palm-held, calculator-sized reader of smart cards that can be used at petrol stations, toll booths, retailers, fast food restaurants, convenience stores, school cafeterias and other places where cash is now essential.

You'll download money from the safety of your electronic cottage. You will use . . . cards in telephones (including those in the home), as well as an electronic wallet, disgorging them whenever you spend money, checking the cards on the spot to confirm that the merchant took only the amount that you had planned to spend. The sum will be automatically debited from your account into the merchant's. (Levy, 1994: p. 176)

Summary and conclusions

Information and telecommunications technologies play an important role in world trade, helping to define markets, design and manufacture products, sell goods, transport commodities and exchange payments. At each stage of the trade cycle, data are recorded, stored and processed by IT, with the output delivered to managers, workers and clients largely by telecommunications. To be competitive in world markets, business people in all countries need access to information technologies. For this we need transborder flow of information, a secure means of monetary transaction and the protection of intellectual property.

A definitive statement of the issues of global telecommunications may never be achieved, because information and telecommunications technologies are continually changing through research and development. In addition, the number of applications is expanding. Finally, issues of global telecommunications are resolved in bilateral and international forums where world views and national interests continually shift, and where power politics complicate negotiations. IT managers must remain alert to these interests and power shifts and adapt to them.

There is a high correlation between poor telecommunications and developing-country status. The information infrastructure in developing countries is often inadequate for robust and sustained support of networking activities. Developing countries that have a weak capital structure also have weak planning for investment in infrastructure and a poor payoff from telecommunications. Yet electronic network connections have a potentially higher payoff for developing countries, because of the lack of reliable alternative delivery mechanisms in the developing world (Goodman et al., 1993: p. 43).

The 1990s will be a period of flux for the international issues of IT. All countries may agree on the need for global telecommunications, but how they go about it will differ. The developing countries will want to leap-frog the growth curve without giving up any of their existing privileges. The developed countries will want to concentrate on integration and consolidation, which will include the assertion of their international rights, especially as regards the protection of intellectual property, including computer software. There will be more confrontations like the US threat to impose stiff tariffs on selected Chinese goods worth over $1 billion if China continued to refuse to protect American intellectual property rights, including those in computer programs, satisfactorily. There was a last-minute compromise, and there will be more. Both sides seem to be preparing for a test of strength to see who blinks first. Hopefully, global telecommunications will not suffer from such contests and will continue to grow and prosper in the future.

Case 16.1: Global outsourcing at Amadeus
Amadeus Global Travel Distribution SA (Madrid) is a computerized reservation system with its central processing done at Helder, Germany. It serves more than 104 000 PCs through its two large centres in England and the US and through 10 outsourced access networks: BT/Concert (UK); the Spanish travel industry; the French travel industry; the SITA access network; the Scandinavian travel industry; X.25 networks in Belgium and the UK; and the German travel industry. In addition it serves the airlines SAS in Scandinavia; Icelandair in Reykjavik, Iceland; Thai Airways in Bangkok, Thailand; Iberia in Madrid, Spain; Air Inter in Paris, France; Air France in Valbonne, France; Lufthansa in Frankfurt, Germany; and Finnair in Helsinki, Finland.

Amadeus owns and manages most of the backbone equipment at the two centres in Atlanta (US) and London (UK). They are connected by separate 384 kbps transatlantic lines to Erding in Germany. They are configured with uninterruptible power supplies and routine monitoring of traffic, coordinated with the telecommunications and processing centre in Germany, which is equipped with a cluster of IBM mainframe computers. These centres, as well as the other networks, offer services not just to travel operators for airline tickets but also for reservations for theatres, car rentals, hotels and other travel-related items for agencies that own PCs. Amadeus collects a small fee for each booking, and in 1994 there were around 135 million bookings.

Evaluating the experience of running such a large outsourced processing network, the lessons to be learned were less technological and more organizational in nature. They included: dividing the outsourcing between at least two operators, so that competition drives prices down; measuring availability end-to-end, including access lines; keeping the SLA (Service Level Agreement) out of the contract itself, so that it can be adapted to changing conditions without a battery of lawyers coming into play; and recognizing that the 'PTT infrastructure monopolies make outsourcing mission critical international backbones too risky at present. And the PTT alliances now being struck throughout Europe may make it even tougher for providers of managed network services to offer international outsourcing.' (Heywood, 1994: p. 80)

Source: Peter Heywood. Global outsourcing: what works, what doesn't. Data Communications, Nov. 21, 1994, pp. 75-80.

Case 16.2: Telstra in Australia
Telstra is the largest integrated telecommunications carrier in the Asia-Pacific region and 'provides a comprehensive range of telecommunications solutions, including international data/voice network solutions, network management and service performance.' In 1994 Telstra was the 16th largest telecommunications company in the world, with telecom revenue of US$9.8 billion and 8.9 million main lines.

Telstra's MobileNet digital network is one of the largest in the world in terms of geographic coverage, including international roaming in 150 cities in 27 countries, a two-way paging and voice-mail computer messaging service, and a fax and data service. Telstra also has analogue coverage which is one of the world's most extensive, reaching 89% of Australia's population. The analogue coverage is planned by the government to be phased out by the year 2000 and taken over by the digital system.

Telstra is also part of the TINA consortium formed in 1993 with about 40 of the world's major network operators as well as switching and computer manufacturers. The consortium is attempting to develop technology that will enable the information systems that make up a network to collaborate more effectively with each other by integrating databases and providing services.

Source: International Herald Tribune, Oct. 4, 1994, p. 16.

Case 16.3: Telecom leap-frogging in developing countries
The ITU estimates that 4 billion of the 5.7 billion people in the world in 1994 did not have basic telephone service. Developing countries have a great fear that they will never catch up with developed countries. One approach to leap-frogging the telecommunications gap is to use fixed wireless, which serves customers from a radio station to antennas on homes and offices that perform like regular phones. It is expected that the majority of the 800 million subscribers in 2000 will have such a service. Fixed wireless systems are already being installed in Brazil, Chile, Colombia, Finland, Germany, Ghana, Malawi, Mexico, Russia, Spain, Sri Lanka, Vietnam and Zambia.

GSM, the Global System for Mobile communications digital cellular system, is being installed in Eastern European countries and is planned for extension into 100 countries.

Some of the reasons for the inability of developing countries to achieve telecommunications parity with developed countries are:

• old infrastructures;
• a know-how deficit;
• red tape, high tariffs and restrictive governmental policies;
• high costs: for example, a 64 kbps connection could cost around US $8000, with equipment costing around US $30 000;
• lack of funds: not all developing countries reinvest their profits into the network, but spend them elsewhere. Consequently they cannot afford the telecommunications infrastructure that they so desperately need. Without the infrastructure they cannot integrate into the global market, and without integration they cannot pay for the infrastructure: a catch-22 situation.

In 1995, the 24 countries of the OECD (the Organisation for Economic Co-operation and Development) generated 85% of the world's telecommunications service revenues. The high-income countries, which account for 15% of the global population, have 71% of the world's telephone lines. To overcome this disparity, the World Bank has estimated, will require $55 billion (about 10% of the world's annual spending on telecommunications) every year over a six-year period to finance the necessary development in developing countries, including the former Eastern European bloc.

Source: International Herald Tribune, Oct. 5, 1995, p. 16, and May 17, 1995, p. 19.

Case 16.4: Slouching towards a global network
Guenther Moeller, Director General of EUROBIT, the European Association of Manufacturers of Business Machines and Information Technology, has a list of six principles for a GII (Global Information Infrastructure):

1. protection of intellectual property rights,
2. universal access to networks,
3. privacy and security,
4. access to research and development,
5. interoperable systems and applications,
6. new applications.

'Looking globally, we see a patchwork of incompatible communications networks marked by high costs, low-quality services and very little interoperability between systems. What we need is a common worldwide infrastructure to communicate information at reduced cost.'

Source: International Herald Tribune, Oct. 8, 1995, p. 11.

Case 16.5: Alliance between French, German and US companies
In 1995, a consortium between France Telecom, US Sprint and Deutsche Telekom was announced. France Telecom is the telecommunications arm of the French nationalized PTT; Sprint is the third largest American long-distance carrier; and Deutsche Telekom claims to have the most advanced ISDN, the densest network and the most extensive cable network in Europe. The alliance of these three large companies is expected to produce a serious competitor in the global telecommunications market.

Supplement 16.1: World-wide software piracy in 1994
Total losses to international piracy were US $8.08 billion, including (in billions of US dollars):

Japan                                  1.31
US                                     1.05
France                                 0.48
UK and Ireland                         0.24
Others, especially China, Russia,
  Thailand, India and Pakistan         5.0

Large gainers from internationally pirated software (pirated software as a percentage of their total software in use):

China                 8
Russia               95
Thailand              2
India and Pakistan   87

Source: Fortune, July 10, 1995, p. 121.

Supplement 16.2: Index of global competitiveness
Using 1991 performance data, a telecompetitiveness index was calculated from 43 specific factors measuring critical areas of telecommunications. Performance in 10 categories was measured and converted to a scale of 1-10, where 10 is the best score. A few of these categories, and the overall index of telecompetitiveness (T-C), appear below:

Country     Infrastructure  Productivity  Penetration  Quality  R&D   T-C Index
Canada           7.1            6.3           9.5        8.2     3.8     6.4
France           7.8            3.0           3.7        4.7     2.5     5.9
Germany          3.4            4.3           5.6        N.A.    5.2     4.7
Japan            6.0            6.3           4.2        6.7     5.8     5.9
Singapore        7.9            6.3           5.3        8.0     2.2     6.6
UK               4.7            4.5           4.8        4.4     1.6     4.9
USA              7.0            7.1           8.8        7.9     5.1     6.2

Source: Stentor, Dec., 1995, p. 3

Supplement 16.3: Telecommunications media for selected countries in 1994

Country (units per 100 people): Phone lines, PCs, Cable TV

Argentina 4.1 1.7 13.2
Australia 49.6 21.7
Brazil 7.4 0.9 0.3
Canada 57.5 17.5 26.9
China 2.3 0.2 2.5
Czech Republic 20.9 3.6 5.7
France 54.7 14.0 2.8
Germany 48.3 14.4 18.0
Greece 47.8 2.9
India 1.1 0.1 1.1
Indonesia 0.3 1.3
Israel 39.4 9.4 13.3
Hong Kong 54.0 11.3 0.6
Japan 48.0 12.0 8.3
Korea (South) 39.7 11.8 5.8
Malaysia 14.7 3.3
Mexico 9.2 2.3 2.2
Netherlands 50.9 15.6 37.5
Portugal 35.0 5.0
Russia 16.2 1.0
Singapore 47.3 15.3
Sweden 68.3 17.2 21.9
Switzerland 59.7 28.8 32.3
South Africa 9.5 2.2
Taiwan 40.0 8.1 14.1
Thailand 4.7 1.2
Turkey 20.1 1.1 0.4
UK 48.9 15.1 1.6
USA 66.2 29.7 23.2
Venezuela 10.9 1.3 1.0

Source: Extracted from IEEE Spectrum, 38(1), Jan. 1996, p. 40.

Supplement 16.4: Telecommunications end-user services available in regions of the world

Region           ATM            Frame relay     ISDN            Private lines
Africa           None           None            None            Urban
Australia        None           None            None            Urban & Rural
Eastern Europe   None           None            None            Rural
N. America       Limited Urban  Urban & Rural   Urban & Rural   Urban & Rural
Pacific Rim      Limited Urban  Urban           Urban           Urban & Rural
Rest of Asia     None           Limited Urban   Urban           Rural
S. America       None           None            Urban           Urban
W. Europe        Urban & Rural  Urban & Rural   Urban & Rural   Limited Urban

Bibliography

Apte, U. (1990). Global outsourcing of information systems and processing services. The Information Society, 7, 287-303.

Denmead, M. (1994). International carriers team up for global reach. Data Communications, 23(17), 85-88.

Derollepot, F. (1995). Software piracy. Personal Computer World, 404-413.

Gassman, H.P. (1992). Information technology developments and implications for national policies. International Journal of Computer Applications for National Policies, 5(1), 1-7.

Goodman, S.E., Press, L.I., Ruth, S.R. and Rutkowski, A.M. (1994). The global diffusion of the Internet: patterns and problems. Communications of the ACM, 37(8), 27-31.

Gupta, U.G. (1992). Global networks. Journal of Information Systems Management, 9(4), 44-50.

Guynes, J.L. (1990). The impact of transborder flow regulation. Data Communications, 7(3), 70-73.

Gwynne, P. (1992). Stalking Asian pirates. Technology Review, 95(1), 15-16.

Levy, S. (1994). E-money: that's what I want. Wired, 2(12), 174-177, 213-215.

Liebermann, L. (1997). Website traffic management: coping with success. Internet, 2(1), 59-66.

Malhotra, Y. (1994). Controlling copyright infringements of intellectual property: Part 2. Journal of Systems Management, 45(7), 12-17.

Sadowsky, G. (1993). Network connectivity for developing countries. Communications of the ACM, 36(8), 42-47.

Strauss, P. (1993). The struggle for global networks. Datamation, 39(17), 26-34.

Weisband, S.P. and Goodman, S.E. (1992). International software piracy. Computer, 25(11), 87-90.

Woody, C.V. and Fleck, R.A. (1991). International telecommunications: the current environment. Journal of Systems Management, 42(6), 32-35.


Part 3

IMPLICATIONS OF NETWORKS


17

MESSAGING AND RELATED APPLICATIONS

Within three years, e-mail will become the critical application at virtually every Fortune 500 company. . . . As e-mail goes mission critical, the downsized data center may return as the full-sized mail center.

Daniel M. Gasparro, 1993

Money is really information, and as such can reside in computer storage, with payments consisting of data transfers between one machine and another.

James Martin

Introduction
Early messages were conveyed by telephone or by mail. The telephone was mostly a one-to-one relationship and, when a many-to-many relationship was needed, we had teleconferencing; with more advanced technology, we now have video-conferencing and multimedia conferencing. Conferencing is useful for bringing people together on-line and in real-time consultation, but to conclude transactions, especially in business, agreements have to be formalized in writing and then sent by mail. Traditional mail is slow, however, and with computers it became possible to send mail electronically. This is known as electronic mail, or e-mail for short, as distinct from all other mail, which e-mail enthusiasts refer to as 'snail mail'. E-mail was unstructured and unformatted, suited to personal and informal mail; for business transactions, the messages were structured and often special forms were required. Such messages were sent electronically by EDI, Electronic Data Interchange. With telecommunications and networks, business transactions were conducted with the necessary messages transported swiftly between buyers and sellers, wherever they might be and wherever the price was right. But for a transaction to be completed, money had to be transferred. Funds were then transferred electronically between computer systems and over distances using telecommunications. This required another set of messages to be transported and is known as Electronic Funds Transfer (EFT). Meanwhile, there was cooperative processing, where telecommunications and networking were used by people working together and sharing contributions to the same data/knowledge-bases. These approaches to messaging had some overlap but were not mutually exclusive, so they coexisted but needed to be coordinated. The coordination could be done by groups in cooperative or collaborative processing, as in a client-server environment, as discussed earlier. And whether collaborative or not, it was desirable for all messaging in an enterprise to be integrated. The approach to such integration is referred to as the MHS, the Message Handling System. MHS is still evolving, but some directions are becoming clear. We shall examine these approaches in this chapter, along with some of the messaging approaches mentioned above, as well as teleconferencing. We shall not discuss non-electronic messaging such as the telephone and the post office. Nor shall we discuss e-mail here; not because e-mail is unimportant but, on the contrary, because it is so important that a discussion of it would make this chapter unduly long. We therefore defer the discussion of e-mail to Chapter 19, where it is discussed in the context of its greatest use, by knowledge workers and teleworkers. We also defer the discussion of video-conferencing to the next chapter, where it is discussed in the context of distributed multimedia processing.


Teleconferencing
Teleconferencing is a method of electronic communication that permits interactive exchange between two or more persons through their clients or terminals. With electronic mail, a message is delivered instantaneously to an electronic mailbox, but there may be a delay before the intended recipient collects it. Teleconferencing eliminates this time lag, since participants are waiting to respond to messages that are sent to the screens of linked workstations or desktop PCs, or between computers in conference rooms as far apart as one or more continents. A teleconference program handles the logistics of this communication and determines who has the 'floor' when several users want to 'speak' at the same time. Some systems include a gag feature to prevent a single person from monopolizing communication channels. Often the terminals of conference participants serve as electronic flip charts, enabling the participants to access specially prepared conference materials from a database. Teleconferencing has been used extensively in discussing product development and production, and has thereby greatly reduced the product development life cycle.
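As an illustration of the floor-control and 'gag' logic just described, the sketch below queues requests to speak, grants the floor to one participant at a time, and never grants it to a gagged participant. The class and method names are invented for this example; real conferencing systems implement such logic inside their signalling protocols.

# Hypothetical floor-control sketch for a teleconference program.
from collections import deque

class FloorControl:
    def __init__(self):
        self.queue = deque()       # participants waiting to speak, in order
        self.gagged = set()        # participants barred from the floor
        self.holder = None         # who currently has the floor

    def request_floor(self, participant):
        if participant not in self.gagged and participant not in self.queue:
            self.queue.append(participant)

    def gag(self, participant):
        self.gagged.add(participant)

    def next_speaker(self):
        """Release the floor and grant it to the next eligible participant."""
        while self.queue:
            candidate = self.queue.popleft()
            if candidate not in self.gagged:
                self.holder = candidate
                return candidate
        self.holder = None
        return None

conf = FloorControl()
for person in ("Ann", "Bob", "Carla"):
    conf.request_floor(person)
conf.gag("Bob")                    # Bob has been monopolizing the channel
print(conf.next_speaker())         # Ann
print(conf.next_speaker())         # Carla (Bob is skipped)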

Teleconferencing saves travel time and travel costs. However, many professionals like the travel and believe that interpersonal relationships are important when conducting business, so face-to-face communication is often preferred over teleconferencing. The high cost of teleconference equipment is another disadvantage. However, when an executive's time is too valuable to be spent in air travel, and frequent business trips over great distances are necessary, the investment in teleconferencing is often the preferred business choice.

Electronic data interchange (EDI)

EDI is the direct computer-to-computer exchange, between separate organizations, of standard business documents such as purchase orders, invoices, bills of lading and the related business documents necessary to perform specific transactions. The transactions are recorded on standardized forms that specify content and allow the checking of content and of certain errors. EDI thus differs from e-mail, which is primarily text that may or may not be formatted and structured. Also, while e-mail often includes personal correspondence, EDI is designed for business information exchange.

Process of EDI

The process of EDI as compared to traditional processing is shown in Figure 17.1. Traditionally, inquiries were made by telephone and then confirmed by mail. With early computer processing, the paper hard-copy agreements were converted into machine-readable form for computer processing and then the transaction was consummated.

[Figure 17.1 Traditional messaging vs. EDI: traditionally, buyer and seller deal by telephone and mail; with an EDI link, the buyer's computer system (governed by the buyer's policies and procedures) calculates the inventory level, computes when a reorder is needed, calculates the reorder quantity and issues a purchase order, while the seller's computer system processes the order, routes the invoice to the shipping department, which prepares the order and sends the items ordered together with the bill.]


[Figure 17.2 Alternative EDI links: a vendor and a customer may be connected by a direct EDI link, or each may connect over an EDI link to a third-party EDI provider.]

Under EDI, the process is greatly automated, eliminating the conversion from hard copy. The buyer's computer system, governed by the policies and procedures of the buyer (as embodied in a computer program), determines when to buy. This determination is made by an inventory control program which determines the reorder point, given expected demand and inventory on hand. The necessary quantity to be ordered is specified in a purchase order and sent to the seller of the inventory ordered. The seller's computer system acts on the order by rules embedded in its own program. The order is processed and instructions are sent to the shipping department, which then ships the goods to the buyer. At the same time, an account receivable is raised and sent to the billing department, which then collects the money, perhaps by a cheque mailed to the seller, or the money is transferred between the banks of the buyer and seller through EFT. Thus the process is greatly automated, reducing the costs of data entry (and of correcting the errors that do occur) and of processing, as well as the time required for both. Automation also reduces the transcription and conversion errors that would otherwise occur during the manual conversion of data from hard copy to machine-readable form. Such a system can give the seller a comparative advantage over a seller that does not have EDI, and can 'lock in' the customer through convenience, lower costs and reliability. The speed of processing can reduce the inventory otherwise held by the buyer and so reduce inventory holding costs.
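A minimal sketch of the buyer-side logic just described is given below: an inventory rule decides when and how much to reorder, and the result is emitted as a simple structured purchase-order record of the sort an EDI link would carry. The field names and thresholds are invented for illustration and do not follow any particular EDI standard.

# Sketch of reorder-point logic producing a structured purchase-order record.
from datetime import date

def reorder_quantity(on_hand, expected_daily_demand, lead_time_days, target_stock):
    reorder_point = expected_daily_demand * lead_time_days
    if on_hand > reorder_point:
        return 0                                  # no order needed yet
    return target_stock - on_hand                 # order up to the target level

def build_purchase_order(buyer, seller, item, quantity):
    return {
        "document": "PURCHASE_ORDER",
        "date": date.today().isoformat(),
        "buyer": buyer,
        "seller": seller,
        "item": item,
        "quantity": quantity,
    }

qty = reorder_quantity(on_hand=120, expected_daily_demand=40, lead_time_days=5, target_stock=600)
if qty:
    po = build_purchase_order("Buyer Ltd", "Seller plc", "WIDGET-10", qty)
    print(po)      # this record would be transmitted over the EDI link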

EDI between two trading partners does require that the partners have computers and an EDI communication link. This may not seem to be a problem at first sight, because most businesses have computers. If their systems are compatible, then a direct link is possible, as shown in Figure 17.2. Unfortunately, however, many computer systems are not compatible in hardware and operating systems software. The partners then go through a third-party provider that serves as a go-between or clearing house, converting the transaction from one system into a form acceptable to the other. This third party can also serve as a store-and-forward point, as does a post office, where messages are stored, sorted and sent on to their destination when desired. The third-party intermediary can add cost and time, and so large businesses demand that their suppliers have compatible computer systems, and large suppliers wanting the business will comply.

This discussion enables us to specify the conditions necessary for successful EDI:

• The two trading partners must have compatible computer systems (hardware, software and processing procedures).
• There must be an EDI transmission link, either direct or through a third-party provider.
• Transaction documents and product identification must be standardized in format between the two trading partners.
• Mailbox facilities must be available for storing, sorting and forwarding messages.

Standardization
We have mentioned standardization on two occasions: first, the standardization of document format; and second, the standardization of EDI transmission. The former is usually settled between the firms and corporations that are trading partners, and includes the format of purchase order forms and packing slips. The other type of standardization, that of EDI transmission, is more difficult because it involves not just telecommunications vendors but also many countries when international commerce is involved (Gordon, 1993). With the increase in global outsourcing and multinational firms, the exchange concerns are global; initially, the exchange was within a firm. In Europe, where EDI is at around one-fifth of the level of the US, there were around 5000 EDI sites in 1990. For international EDI, there are two competing standards. One is X12, developed by ANSI (the American National Standards Institute) in the US, which covers over 80 business or transaction sets that are mostly generic, with some that are industry specific (Trauth and Thomas, 1993). In contrast, there is the more recent European standard, EDIFACT (FACT being the acronym for 'For Administration, Commerce and Transport'). It is very different in approach and philosophy from X12. EDIFACT is centralized, systematic, anticipatory, a tool of industrial policy, and responsive to governmental direction and national policy. In contrast, X12 is distributed, pragmatic, reactive, entrepreneurial and individualistic, and attempts to maximize the role of the private sector. For X12, international standards can only be a guide, whilst EDIFACT is a single European standard adopted by all European trading partners. Whilst EDIFACT is attempting to make its standards international with or without American support, European firms doing business with Americans must follow the X12 standard. But as multinational trade in IT increases between developed and developing countries, the need for compromise may move us towards a truly universal international standard. Global standards other than for EDI are necessary not just for global outsourcing but also for the transborder flow of data and the globalization of IT.

Despite the problems of standardization, EDI has great potential both nationally and globally. EDI will be used increasingly to automate transactions between computers without human intervention, the output of one trading partner's application becoming the input for another's. One study in the eastern states of the US found that over 25% of processing costs were accounted for by the re-entry of data, and that over 70% of computer output became the input for another computer system; EDI eliminates these re-entry costs, an important component of the total costs of processing. This matters more and more as trade and commerce increase and become more global. EDI is often associated with business transactions between buyer and seller, but there are many other transactions that can be performed by EDI.

A summary of the above discussion on EDI appears in Table 17.1.

Table 17.1 Summary on EDI

PREREQUISITES OF EDI
Both trading partners have computer systems
The two computer systems must be linked, either directly (as with compatible systems) or through a third-party EDI provider
Both parties must have a 'mail-box' capability
Both parties must follow standards for document format and EDI transmission

ADVANTAGES OF EDI (over the traditional modes of telephone and mail)
Fast
Lower costs of data entry and processing
Certain operations are automated
Errors, especially in data entry, are reduced
Reliable and well-established system
There are many spill-over services, including those in outsourcing

DISADVANTAGES AND LIMITATIONS
Closed system
Not universally used
Real-time but can be 'slow'
Cost of software installation

Electronic transfer of funds
In 1965, Thomas J. Watson, then chairman of IBM, made the following prediction:*

In our lifetime we may see electronic transactions virtually eliminate the need for cash. Giant computers in banks, with massive memories, will contain individual customer accounts. To draw down or add to his balance, the customer in a store, office or filling station will do two things: insert an identification into the terminal located there; punch out the transaction figures on the terminal's keyboard. Instantaneously, the amount he punches out will move out of his account and enter another.

* Reprinted with permission from IBM.

[Figure 17.3 Components of an EFT system: customers make transactions at point-of-sale terminals; after authentication, the computers and databases of Bank 1 and Bank 2 exchange the transactions and services through an automated clearing house (ACH), with its own computer and database, which settles the transfers between the two banks.]

As Watson predicted, this same process, repeated thousands, hundreds of thousands, millions of times each day, now occurs. Billions 'change hands without the use of one pen, one piece of paper, one check, or one green dollar bill'. A network of terminals and memories extends across city and state lines for electronic funds transfer (EFT), although we have not altogether eliminated paper money and coins. Watson's prophecy has today become a reality. Billions of pounds sterling are moved daily from one set of accounts to another using computers and telecommunications, without any currency changing hands or paper to record and process the transactions. Information on cheques (payee, amount, account number, cheque writer, depositor, institution) is converted into electronic impulses and transmitted through a telecommunications channel to the nearest automated clearing house (ACH), a computerized version of the traditional cheque clearing house. (The cheque itself, that is the paper on which the information is written, is not physically moved from one location to another.) The ACH then transfers the amount to the bank where the intended recipient has an account. (See Figure 17.3 for the components of EFT.)
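The sketch below is a deliberately simplified model of that flow, with invented account names and balances: a clearing-house function receives one electronic instruction, debits the payer's bank and credits the payee's bank. Real ACH processing is, of course, batched, authenticated and audited.

# Simplified model of an ACH settling one EFT instruction between two banks.
accounts = {
    ("Bank1", "cheque-writer"): 1_000.00,
    ("Bank2", "payee"):           250.00,
}

def clearing_house_transfer(from_bank, from_acct, to_bank, to_acct, amount):
    """Settle one EFT instruction between two member banks."""
    if accounts[(from_bank, from_acct)] < amount:
        raise ValueError("insufficient funds")
    accounts[(from_bank, from_acct)] -= amount     # debit the payer
    accounts[(to_bank, to_acct)] += amount         # credit the payee
    return {"from": (from_bank, from_acct), "to": (to_bank, to_acct), "amount": amount}

record = clearing_house_transfer("Bank1", "cheque-writer", "Bank2", "payee", 150.00)
print(record)
print(accounts)    # balances after settlement: 850.00 and 400.00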

The US Treasury Department uses EFT for recurring payments, such as the transfer of funds for government employees' life insurance programs. In the private sector, many corporations deposit weekly paycheques by EFT and preauthorize account debits such as monthly interest payments and insurance payments. International systems like CHIPS (Clearing House Interbank Payments System) and SWIFT (Society for Worldwide Interbank Financial Telecommunication) have provided EFT between banks for the past twenty years.

However, paper-free banking is still far from being realized. The reasons for this are described below.

Float

When payments are made by cheque there is usually a two- to seven-day period between the time the cheque is written and the time that it is cashed. The amount of money in transit but not yet collected is called float. Float is, in effect, an interest-free, short-term loan to the cheque writer.

Instantaneous EFT payments eliminate float and the possibility of earning interest on these short-term loans. In fact, a company that pays bills by EFT but receives payment by conventional cheques stands to lose money. The loss of float is a principal reason why cash managers oppose EFT.

On the other hand, if all expenditures and receipts were EFT transactions, the loss of float might be balanced by early receipts. In addition, the fast processing of transfers and exact information on the status of funds might enable firms to reduce demand deposits and mobilize idle money. The conversion of liquid holdings into 'near-money' assets such as Treasury bills and commercial paper would increase profits.
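The arithmetic below illustrates, with invented figures, what the float on a single large cheque is worth: a payment amount, a number of days in transit within the two-to-seven-day range mentioned above, and an assumed short-term interest rate.

# Illustrative calculation of the interest value of float on one cheque.
def float_value(amount, days_in_transit, annual_rate):
    return amount * annual_rate * days_in_transit / 365

benefit = float_value(amount=100_000, days_in_transit=5, annual_rate=0.06)
print(f"Interest earned on the float: {benefit:.2f}")   # about 82.19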

Cost

Many banks do not charge people who write cheques provided that they maintain a given balance in their cheque accounts, whereas a service fee is charged for EFT. That is to say, bank customers are given no financial incentive to switch from cheques to direct deposits. It may be because the cheque system works well and cheque processing is low in cost that bankers appear more interested in maintaining the status quo than in promoting EFT and building a workable low-cost ACH mechanism. The transition to a chequeless society is unlikely until both banks and their customers save money with EFT.

Lack of public confidence

Most people have had personal experience of computer errors and are wary of computer reliability. The public is not yet willing to eliminate the paper backup of cheques.

In addition, the ease with which financial records can be accessed by EFT raises concerns about the security of financial data processed by EFT. When identification cards are required for EFT access, as in ATM transactions, the danger exists that stolen cards will be used to make fraudulent fund transfers. Password codes can be breached by determined thieves, and systems security based on voice, hand forms and fingerprint recognition systems, more reliable

than passwords, is too costly for widespread use at present. Many people, too, object to these types of system on moral and political grounds.

Financial data in transit can be protected by coding and by security measures, such as packet switching, described in Chapter 5. Fewer incidents of attempted theft are reported from EFT than from conventional cheque systems, but the potential loss per incident is much greater.

Another concern of many critics is that EFT might lead to an erosion of privacy rights. The problem arises because EFT records contain data on spending patterns, and an analysis of these records can reveal personal data such as travel movements, drinking habits, debts, health status, and so on. Acceptance of EFT may have to wait until the public is assured that the privacy of individuals who use EFT is safeguarded.

Preference for cash

Many people like the feel of cash; it gives them a sense of satisfaction. Although efficient, EFT will not win public support until human needs are addressed. Perhaps user-friendly features can be added so that using a terminal for financial transactions outweighs the pleasure of handling cash.

EFT spin-offs

Point-of-sale terminals linked to banks and credit agencies by telecommunications are an electronic application in retailing that is technologically feasible but not yet widespread. Electronic shopping and home banking, much touted, are likewise spin-off applications of EFT that have had only limited success to date. The basic problem is the lack of the high transaction volume necessary to pay for these services. We next explore the promise of these systems and the reasons that public reception has been poor.

Point-of-sale terminals

Electronic point-of-sale (POS) terminals that record sales transactions are found in many retail establishments. Most transmit sales data to a store computer which processes the data to produce accounting and inventory reports

Page 206: Telecommunications and Networks

for management. Less common is the use of POS terminals to monitor charge accounts with credit limits within a given store. The sale of merchandise recorded on the POS terminal debits the customer's credit limit by the amount of the sale.

POS terminals can also be used for cheque and credit card authorizations. In this case, the terminals are part of a telecommunications network linking them with banks and credit agencies. Some POS systems immediately debit the amount of purchase from the customer's bank account and deposit the money in the store's account. This use of EFT requires a telecommunications link from the store to the customer's bank and the ability of that bank and the store's bank to handle electronic financial transactions.
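
The authorization decision a POS system makes, whether against a credit limit or an account balance, reduces to a comparison. The sketch below uses a credit limit; the function name and the amounts are hypothetical.

def authorize_sale(credit_limit, balance_owed, sale_amount):
    """Return True and the new balance if the sale fits within the limit."""
    if balance_owed + sale_amount > credit_limit:
        return False, balance_owed           # decline, balance unchanged
    return True, balance_owed + sale_amount  # approve, debit the account

approved, new_balance = authorize_sale(credit_limit=1000.0,
                                       balance_owed=850.0,
                                       sale_amount=200.0)
print(approved, new_balance)   # False, 850.0 -- the purchase would exceed the limit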

Home banking

For a fee some banks provide home banking, another EFT spin-off, for people with a personal computer and modem. Not only can bank customers access account information and pay bills through interaction with the bank's computer, but they can review bank statements, transfer funds between accounts, open new accounts, and buy or sell securities from home.

A good idea, perhaps, but home banking services have few subscribers. The problem appears to be that individuals, like their corporate counterparts, are reluctant to give up the float. Most people who own PCs lack modems: to purchase one costs from £80 to £240, an investment that most people do not seem to think is worth the cost. Besides, home banking does not eliminate paper cheques: it merely shifts cheque writing to the bank, which writes and mails its own cheque to the payee after payment has been authorized through home banking.

Home banking is not dead, however. Many banks are shifting their electronic banking strategies to small business customers and offering financial planning software for added value. Other banks are forming consortia and joint ventures to help pay for the cost of developing and marketing home banking. The challenge to home banking suppliers is to demonstrate that the services they provide save the customer money and are more convenient to use than traditional cheque accounts.

Home shopping

With EFT, home shopping (also called teleshopping) becomes feasible. Selection of goods is through catalogues or newspaper advertisements called to the computer screen; payment is authorized on the terminal by EFT. Although the pleasure of window shopping and the ability to handle merchandise is lost, many home shopping systems allow viewers to rotate items displayed on the screen or view them in close-up.

As with home banking, home shopping has not been well received and many videotex information services that featured home shopping have failed. But entrepreneurs seem confident that once the technology improves and a broad customer base is established, which will lower costs, the idea will be profitable. In the future it may become possible to key descriptions of wanted items on a home terminal and let the computer search for stores with the items for sale.

If such home shopping systems ever take hold, marketing patterns will change drastically. Merchants will have to index their goods and provide these indexes to information utilities. Shopping centres will lose customers, shop assistants will be displaced, and retail space will be replaced by warehouses that transact business electronically.

Before teleshopping becomes more widespread, however, there will probably be a surge in electronic marketing. Special computer programs will merge computer tapes of credit bureaux, listings in commercial and private user telephone directories, licence and association lists, mailing lists, census and other data to obtain a master customer list. Marketers will then call a large test sample from the list in order to create buyer categories, and then try to determine 'buying windows', the eligibility factors required for a good buyer/product match, using computer software to evaluate current customer-need profiles against the benefit elements of available products and services. The purpose of this marketing approach would be to identify groups that might be responsive to advertising for particular products so that advertising campaigns can be directed to them.
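
Computationally, the list-merging step described above is a merge followed by a filter. The sketch below joins two invented source lists on a customer key and keeps only records that pass a toy 'buying window' test; every name, field and threshold is hypothetical.

def build_master_list(*sources):
    """Merge several customer lists, de-duplicating on a customer key."""
    master = {}
    for source in sources:
        for record in source:
            master.setdefault(record["id"], {}).update(record)
    return list(master.values())

def in_buying_window(record, min_income=20_000, wants=("travel",)):
    """A toy eligibility test standing in for the 'buying window' analysis."""
    return record.get("income", 0) >= min_income and \
           any(w in record.get("interests", ()) for w in wants)

credit_bureau = [{"id": 1, "income": 32_000}, {"id": 2, "income": 15_000}]
mailing_list  = [{"id": 1, "interests": ("travel",)}, {"id": 3, "interests": ("music",)}]
master = build_master_list(credit_bureau, mailing_list)
print([r["id"] for r in master if in_buying_window(r)])   # [1]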

Smart cards

An innovative use of information technology that will affect the computer, retailing, banking and credit card industries, among others, is an integrated circuit charge card, called a smart card.

Page 207: Telecommunications and Networks

Smart cards resemble credit cards but instead of having a magnetic strip on the back with credit information about the card holder, most contain an embedded microcomputer chip and permanent memory that does not lose its information when the power is shut off. (An alternative technology is a laser card which stores credit information as tiny black dots burned onto the card surface.) When a purchase is made, the sales assistant inserts the smart card into a card-reader module that is tied to a host computer which decides whether the purchase should be authorized based on account parameters, such as the card-holder's credit limit, which is stored in the card's memory. (Unified cards add battery power, a two-line display screen and a keyboard device to a smart card so that information can be entered or read from the card without a computer terminal. Midland Bank is using a contactless card that transmits signals to the card reader so that no direct contact is needed.)

Another use of smart cards is as follows. At the beginning of each month, a certain amount of money is transferred to the card's account according to an agreement reached between the card holder and his or her bank. The first time the card is inserted in a reader each month, the amount is automatically added to the previous balance recorded in the card's memory. When paying for goods, the card will record the date and value of the transaction and a new balance will be calculated by the microchip.
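
The monthly top-up scheme just described can be expressed as a short sketch; the SmartCard class and its fields are illustrative assumptions, not the data layout of any actual card.

from datetime import date

class SmartCard:
    """Illustrative smart card: monthly allowance added on first use each month."""
    def __init__(self, monthly_amount):
        self.monthly_amount = monthly_amount
        self.balance = 0.0
        self.last_topup_month = None
        self.transactions = []      # (date, value) records kept in the card's memory

    def insert(self, today):
        # First insertion in a new month adds the agreed amount to the balance.
        month = (today.year, today.month)
        if month != self.last_topup_month:
            self.balance += self.monthly_amount
            self.last_topup_month = month

    def pay(self, today, value):
        if value > self.balance:
            raise ValueError("insufficient balance on card")
        self.balance -= value
        self.transactions.append((today, value))  # date and value recorded by the chip

card = SmartCard(monthly_amount=200.0)
card.insert(date(1997, 3, 1))
card.pay(date(1997, 3, 2), 35.50)
print(card.balance)    # 164.5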

An advantage of smart cards is their memory size (8, 16 or 32 kilobytes), which enables them to store much more financial information than magnetic strip credit cards (e.g. health insurance information, telephone numbers, appointments or other records). Smart cards are used at universities to store students' records and timetables, and by the US Army to replace dog tags. In France, they are used to pay for calls from public telephones and to pay highway tolls; at Loughborough University in England, to pay for purchases in campus shops and restaurants. (In these applications the cards represent electronic money. The card is purchased for a given amount and each transaction is recorded on the card and debited until the given amount is spent and the card becomes invalid.) With their ability to store digitized fingerprints, photo and voice prints, smart cards have attracted commercial interest for application in building security. The health industry is promoting the use of the cards for medical records which will carry warnings

about medications, allergies and chronic illness of the card holder plus the name and telephone number of the family doctor. In case of accidents, such cards might save lives provided, of course, the card's microchip is not damaged.

Besides their capacity and versatility for information storage, smart cards are favoured for financial transactions because they authenticate transactions, saving retailers the time and cost of telephoning for credit authorization. Bad debts should shrink because the chip on the card keeps a running tab on purchases and will not approve those that exceed the holder's credit authorization.

Although a sizeable investment would be required to install the readers necessary for widespread retail use, industry experts claim that the cost would be defrayed by reduced credit card fraud (smart cards are difficult and costly to duplicate). MasterCard, which states that the big selling points for the cards are convenience, protection and flexibility, has begun testing the cards and has talked of a five-year changeover from magnetic strip credit cards. Visa, more cautious, is waiting to see whether smart cards will save enough money to justify investment in them: the cost of a smart card is more than three times that of an ordinary credit card.

Smart cards are not a new concept: a key patent on the idea was granted in 1974 when the semiconductor technology was barely two years old. But the idea, like home banking and home shopping, has been slow to catch on. However, if MasterCard starts mass distribution of the cards as planned, market pressure may force other financial service companies to issue smart cards, which will in turn force businesses which rely on credit card payments to install the card readers. At that point, use of smart cards will gain momentum.

Cooperative processing

In our discussion of the client server system in Chapter 10 we saw that it was originally designed to facilitate communications between the end-user (client) and the computer processor (server). But what about communications between end-users using the same database(s) and server(s)? This configuration could improve processing horizontally between colleagues, customers, suppliers and other relevant personnel in organizations. It could also help decision-making within a firm or corporation by making the decision-making process on-line whilst sharing enterprise resources of computing

Page 208: Telecommunications and Networks

as well as data/knowledge-bases. Such processing is called collaborative processing or groupwork. It is facilitated by software known as groupware and is also referred to as cooperative processing, group writing or group scheduling. Such processing involves multitasking instead of single-task processing. It utilizes client server computing, but facilitates interaction among end-users in addition to the dissemination and routing of the necessary shared data/knowledge from groupware servers as well as application and file servers. Groupware binds the separate activities of end-users and decision-makers into an ensemble of cooperative processing. It retains the advantages of distributed processing and downsizing.

Groupware is possible because of the many electronic tools used on a LAN that may or may not use a server but facilitate horizontal communications. These tools include:

• Group calendaring and scheduling, which enables the scheduling of meetings and travel, thus facilitating concentration on other productive tasks (a scheduling sketch follows this list).

• Group document handling, including groupwork software utilities and development tools, and more intensive use of e-mail messaging.

• Group meeting and teleconferencing. This enables some eye-to-eye contact and instantaneous reactions which, unlike video-teleconferencing, can be spontaneous, taking place at any time and in any place.

• Group decision-making and problem-solving support through downloading of some decision-making from the boardroom to the desktop. Cooperative processing captures the flow of ideas (and discussions in teleconferencing) and creates a continuous database and knowledge-base of all information relevant to decision-making and problem-solving.
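
As a small illustration of the first tool above, group calendaring amounts to intersecting the free time in several electronic diaries. The diaries and working hours below are invented for the example.

def common_free_hours(diaries, hours=range(9, 18)):
    """Return the working hours (9-17) at which every participant is free."""
    return [h for h in hours if all(h not in busy for busy in diaries.values())]

diaries = {                        # busy hours per participant (hypothetical)
    "ann":    {9, 10, 14},
    "bharat": {11, 14, 15},
    "carla":  {9, 15, 16},
}
print(common_free_hours(diaries))  # [12, 13, 17]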

Personal productivity programs such as e-mail, spreadsheets and word processing can still be used along with a DSS and EIS, but integrated with group work seamlessly and without any special commands and procedures.

Cooperative processing is no substitute for the interpersonal relations of face-to-face meetings and the telephone, but it can reach a much larger audience responding to specific questions. It can overcome the barriers of time and place and offer continuous communication at one's need and convenience.

Cooperative processing has many advantages, discussed and implied above, but it also has organizational implications that can be far-reaching: it can affect the structure of traditional decision-making and lead to adhocracy.

Cooperative processing compared with traditional processing is summarized in Table 17.2.

Message handling systems (MHS)

There are other message handling approaches like the Bulletin Board Systems (BBS) and the

Table 17.2 Cooperative processing compared with traditional client server processing

                        Traditional processing             Cooperative processing
Tasking                 Single                             Multiple tasking with human interaction
Architecture            Open                               Closed
Software                Client and server software         Client and server software
                        are independent                    are integrated
Applications            Distributed                        Host-based and enterprise-wide
Control                 Resides with workstation           Resides with host or server computer
Security and integrity  A problem because of the           Better integrity and security control
                        distributed control                because it is centralized
Path                    Hierarchical                       Many cross-currents, leading to problems
                                                           of cooperation and collaboration
Time and place          Same time and same place, i.e.     Same time and different places;
                        face-to-face meetings; same time   different times and different places
                        and different places, i.e.
                        video-conferencing

Page 209: Telecommunications and Networks

Newsgroups. These are often services (along with e-mail, electronic shopping, etc.) offered by Information Service Providers and will be discussed in Chapter 19. In this concluding section of this chapter we are concerned with the integration of message handling systems that include e-mail, BBS, Newsgroups and the workgroups of collaborative processing.

There are four main approaches to the integration of message handling. The most popular and successful in the mid-1990s was Notes, the software produced by the firm Lotus. This was in direct competition with the other contender in this field, IBM. Then, in 1995, IBM made a hostile and successful takeover of Lotus. Whether the creative programmers and management responsible for the success of Notes will thrive under the large umbrella of IBM or miss their informal and 'small' environment of Lotus will not be known for years. This leaves two other main messaging vendors in the field: Microsoft with its Exchange system and Novell with its Collaborative Computing Environment. All these approaches basically extend messaging into discussion databases, group scheduling and electronic forms within an overall and overarching strategy for its deployment throughout the enterprise. Many vendors have acknowledged the corporate need for downsizing and, in using the client server platform for the enterprise, have left the mainframes behind. All the vendors incorporate multimedia processing as part of the broadband approach to information transfer. Electronic messaging is thus fast becoming an infrastructure providing much of the services that the traditional operating system provided. It is one of the technologies that manages, synthesizes and communicates information.

Despite the commonalities between the messaging vendors, there are some differences in the architecture of the systems. IBM had a middleware platform with a multiprotocol backbone for scheduling, workflow automation and other message-based applications. Lotus had a backbone for routers, inter-enterprise connectivity, and a tight integration of its e-mail program and Notes. Microsoft had a back-end server and front-end client with an integrating platform for enterprise-level communication. And Novell offered cross-compatibility with any server through the application components provided by its groupware (Bragen, 1995).

Summary and conclusions

Message handling has seen an evolution in technology and organization. The earliest messaging systems were centralized, in both the private and the public sector. The public sector technology consisted of the telephone and the mail. This was the first generation of message handling, which lasted till the mid-1980s. The second generation, which lasted till 1990, saw teleconferencing, e-mail, EFT and EDI. The third generation of technology saw video-conferencing, cooperative processing and the developing MHS, the integrated message handling system. This evolution is summarized in Figure 17.4.

EDI can be characterized as application-to-application or computer-to-computer processing using teleprocessing and networks that has automated many an office and business process. These processes include price generation, product availability reporting, purchase ordering, invoice processing, shipping and receiving of goods, accounts receivable and payable, and responses to FAQs (Frequently Asked Questions).

However, EDI is appropriate only for highly structured and well defined processes and hence does not cover many of the processes in a typical office and business. It cannot handle problems that need a human dialogue and is not considered a very end-user friendly system for clients and suppliers. There are also problems with standards, especially for the transmission of EDI documents. Despite these difficulties, EDI is very cost-effective and has a fast response time for applications where it can be used.

EFT is concerned with the transfer of funds once the paperwork has been satisfactorily completed by EDI. One advantage of EFT is that a large number of transactions involving large sums of money can be processed quickly and efficiently. The accelerated income velocity of money under EFT has a side-effect: corporate managers must contend with the loss of float and the possibility that the security and privacy of financial transactions may be breached. Also, there is resistance to EFT which can be traced to the fact that people like to have cash literally 'in hand'. Not all groups of potential end-users have the same reaction to EFT because they are affected by the advantages and limitations in different ways, as summarized in Table 17.3.

Technologically, EFT and EDI may lead us to a cashless society. The application of information

Page 210: Telecommunications and Networks

Figure 17.4 Evolution in message handling: first generation (1978-83), centralized private and public systems, telephones and mail/post; second generation (1984-90), teleconferencing, e-mail, EDI and EFT; third generation (1990 onwards), video-conferencing, cooperative processing and MHS.

Table 17.3 For and against EFT

                          For                                  Against
Corporations:             Better cash management               Loss of float; security problems
Retailers:                Quick credit approval and transfer   Capital costs
Financial institutions:   Lower transaction costs              Costs of ATMs; fear of monopoly
Consumers:                Direct payments/receipts; home       Loss of float; possible invasion
                          transactions now possible            of privacy

technology and telecommunications to home banking and home shopping is moving more slowly than many had expected. One reason is that ATMs (Automated Teller Machines), POS (Point of Sale) terminals and the consolidation of back-office automation in banks have high priority and are consuming the scarce technical resources. The cost of EFT and EDI is also a drawback. The replacement of plastic cards with smart cards may help reduce the use of cash and cheques if they can be secured, especially when the transactions are between individuals and businesses, as is the desire on networks such as the Internet. We will return to this topic in Chapter 20 on the Internet.

Case 17.1: GE bases global network on teleconferencing

General Electric Corporation (GE) has launched a two-phase, three-year plan to use video-teleconferencing to promote information sharing between its US operations and its international joint-venture businesses. Implemented by AT&T, France Telecom and British Telecom, the network will link 20 offices with offices in 25 other countries, providing seven-digit dialling for GE offices world-wide. The network, which will be managed by GE's network-control centre in Princeton, New Jersey (US), will comprise many T-1 links and three intelligent nodes at Princeton, London and Paris, with the Rembrandt series of codecs by Compression Labs Inc. as the computer system driving the video-teleconferencing. (A T-1 link is a telephone line, leased from a common carrier, that can carry both voice and digital messages at 1.54 megabits per second.)

Teleconferencing will help GE customize its products for specific markets in different countries and implement new production and engineering techniques. The firm will be able to use a few dozen internal video-conferencing rooms to hold meetings with 30 to 40 customers and suppliers around the world. It hopes this technology will help the firm solve day-to-day problems as if the factory were in the same building rather than thousands of miles away.

Source: Information Week, June 5, 1989.

Page 211: Telecommunications and Networks

Case 17.2: Electronic data interchange (EDI) in the UK

Over 2000 companies in the UK are engaged in EDI, sometimes known as 'paperless trading'. They use networked computer-based systems to process orders, invoices, freight and forward notices, and customs declarations, in order to save manpower, reduce errors and minimize the amount of paperwork their offices handle. EDI appears to be a major facet of the proposed Police National Network. Slashing paperwork should help reduce the time that prisoners are held on remand before trial by speeding the time it takes to process and exchange information between the police, customs and the Home Office.

Two factors contribute to Britain's lead over other European nations in EDI activity: the liberalization of British telecommunications and the ease with which value-added network suppliers can enter the market. (The latter coordinate EDI networks.) Nevertheless, less than 1% of all business transactions in the UK are via EDI.

Sources: Financial Times, July 19, 1989, p. 18 and Computer Weekly, Dec. 1989, p. 8.

Supplement 17.1: Costs of message handling and related processing

                                            1974       1994
Telex                                       $25+
Fax                                         $10-20
Electronic messaging                                   $1.70
Processing (MIPS)                           $90        $20+
Storage (gigabytes)                         $10        $3
Sending 1 megabyte of data across the US    $0.95      $0.20

Source: Fortune, July 1995, pp. 119, 121.

Cost of EDI in 1994 (with an Internet service provider)

Start-up costs    $50
Telecom costs     $14/month
Access costs      $45/month
(based on 500-2000 documents/month)

Source: Information Week, Oct. 2, 1995, p. 70.

Bibliography

Aldermeshain, H., Ninke, W.H. and Pile, R.J. (1993). The video communications decade. AT&T Technical Journal, 72(1), 2-6.

Bragen, M.A. (1995). Four paths to messaging. PC Magazine, 14(8), 139-151.

Coopock, K. and Cukor, P. (1995). International video-conferencing: a user survey. Telecommunications, 26(4), 33-34.

D'Angel, D.M., McNair, B. and Wilkes, J.E. (1993). Security in electronic messaging systems. AT&T Technical Journal, May/June, 7-20.

Ellis, C.A., Gibbs, S.J. and Rein, G.L. (1991). Groupware: some issues and experiences. Communications of the ACM, 34(1), 38-58.

Emmelhainz, M.A. (1990). Electronic Data Interchange: A Total Management Guide. Van Nostrand Reinhold.

Friend, D. (1994). Client/server vs. cooperative processing. Information Systems Management, 11(3), 7-14.

Gordon, S. (1993). Standardization of information systems technology at multinational companies. Journal of Global Information Management, 1(3), 5-15.

Journal of Information Technology, 9(2), 1994, 71-136. Special issue on 'Organizational Perspectives on Collaborative Processing'.

Labriola (1995). Desktop videoconferencing: candid camera. PC Magazine, 14(8), 221-236.

Morley, P. (1992). Global messaging: the role of public service providers. Telecommunications, July, 19-30.

Muiznieks, V. (1995). The Internet and EDI. Telecommunications, 29(11), 45-48.

O'Keefe, R. (1995). Do the business. The Internet Magazine .net, Issue 10, 64-66.

Riaczak, K. (1994). Data interchange and legal security signature surrogates. Computers and Security, 13, 287-293.

Richardson, S. (1992). Videoconferencing: the bigger (and better) picture. Data Communications, June, 103-111.

Senn, J.A. (1992). Electronic data interchange. Information Systems Journal, 9(1).

Thuston, F. (1992). Video teleconferencing: the state of the art. Telecommunications, January, 63-64.

Trauth, E.M. and Thomas, R.S. (1993). Electronic data interchange: a new frontier for global standards policy. Journal of Global Information Management, 1(4), 6-17.

Wright, T. (1992). Group dynamics. Which Computer?, 15(7), 38-48.

Page 212: Telecommunications and Networks

18

MULTIMEDIA WITH TELECOMMUNICATIONS

In an age when computers will be voice, touch, joysticks, keyboard, mice and other devices; television is inherently passive, a couch potato medium.

George Gilder in Microcosm, 1989

Have nothing in your house that you do not know to be useful or believe to be beautiful.
William Morris

Introduction

Multimedia is nothing very new conceptually speaking; however, there are many applications of multimedia. A list of current and potential applications appears as Figure 18.1. In some PCs, with most workstations, and with interactive TV, multimedia comes as a standard capability. Then there are specialized devices with the ability to compose music and do computer-aided learning (CAL). But all these devices do not necessarily require telecommunications. Some applications do use telecommunications, like telecommuting (using computers for work at home), e-mail that delivers multimedia documents, teleshopping, telenews, telebanking and financial applications, cooperative processing, and games. But multimedia in these cases has only enhanced applications that already existed without telecommunications. Then there are applications that are not feasible without telecommunications, such as video-on-demand, the digital library, video-conferencing, distance learning and telemedicine. It is these latter applications that we will examine in this chapter. But to appreciate multimedia applications we need to understand the nature of multimedia, its characteristics that are relevant to computer processing, and the resources needed for these applications. This is where we start this chapter.

Multimedia and distributed multimedia

Multimedia is an extension of the traditional media of numeric data and text. Graphics were added later, but they did not have the resolution and quality of pictures. The addition of graphics, images and pictures, as well as audio, animation and full feature films, is what is often referred to as multimedia. The extension of multimedia processing comes when it is available from a distance through networks and telecommunications. Then we have access not just to the local library of multimedia but to all the libraries strewn across the country and indeed around the world.

Multimedia not only adds to knowledge but also to the friendliness of the computer interface. It is so much easier to communicate with all the senses of sight, sound and touch (of the keyboard). The sound of one's name and a welcome message, or the sound of 'You have mail', is often much more refreshing than just seeing the message on the screen. Perhaps we will get used to the novelty of voice messages, but then there will be other multimedia approaches to make you feel welcome and happy working with a computer.

Multimedia systems include multiple sources of various media, either spatially or temporally, to create composite multimedia documents. Spatial composition links various multimedia objects

Page 213: Telecommunications and Networks

Figure 18.1 Applications of multimedia, from the home PC or workstation: stand-alone (music composition, computer-aided learning (CAL), interactive TV); enhanced (telecommuting, compound document e-mail, telenews, telebanking/financial transactions, teleshopping, collaborative processing, games); distributed (passive and interactive video and films, video-on-demand, digital library, video-conferencing, distance education, telemedicine).

into a single entity . . . dealing with object size, rotation, and the placement within the entity. Temporal composition creates a multimedia presentation by arranging the multimedia objects according to temporal relationship . . . (Furht, 1994: p. 53).

Multimedia can also be viewed as a set of objects and data entities which are a multi-dimensional array of numbers derived from various sensors that record images, audio and videos. These are filtered and then available for browsing and sometimes even manipulation. Doing this using networks enables one to do collaborative processing using this vast and rich base of data and knowledge in all its many manifestations in different media. This enables richer video-conferencing, more meaningful learning and training even from a distance, and even better diagnostics by a doctor who is otherwise not available on hand. It will contribute to the way we work, learn, communicate and even play. It could improve one's enjoyment of films and video, not only by increasing the range and variety, but by making them available whenever and wherever they are wanted. When interactive, multimedia could help in our work by improving communications and increasing the dissemination of information and knowledge.

Requirements of multimedia

The distribution of multimedia makes many demands on computing resources. To better

understand these demands and needs it is necessary to look at the basic characteristics of multimedia transport which create these needs. One basic need is the timing of the multimedia components and the need for low latency, which is best appreciated by looking at Figure 18.2.

The reduction of delay is important in real-time applications such as video or the showing of a film. Say that you are seeing a close-up of a singer. Would you not be unhappy if the sounds were not precisely synchronized with the corresponding movements of the lips? This synchronization is referred to as isochronous processing: real-time communication that ensures minimum delay by synchronizing to a real-time clock and establishing a virtual channel. Isochronous processing is thus similar to point-to-point circuits rather than packet switching. In, say, e-mail delivery, synchronization is not crucial: the processing is not isochronous and data is moved in packets along possibly different routes at different times, with small delays that are hardly perceptible. That is why packet switching is feasible for many text-based applications, including e-mail on the Internet, which uses the TCP/IP protocol.
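
The difference between isochronous and packet delivery can be made concrete with a small playout check: a receiver that buffers for a fixed delay can keep to a real-time clock only if every packet arrives within that delay. The delay figures below are invented.

def playable_isochronously(arrival_delays_ms, playout_buffer_ms):
    """True if every packet arrives within the playout buffer, so the
    real-time clock at the receiver never runs out of data."""
    return all(d <= playout_buffer_ms for d in arrival_delays_ms)

video_delays = [12, 15, 11, 48, 13]    # one packet delayed by congestion
print(playable_isochronously(video_delays, playout_buffer_ms=30))  # False: a visible glitch
email_delays = [120, 800, 450]         # much larger delays, harmless for store-and-forward mail
print(playable_isochronously(email_delays, playout_buffer_ms=30))  # False, but it does not matter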

The second basic and important characteristic of multimedia is that it is very memory intensive. This has two consequences. One is that more storage space is required at the server and at the client receiver end; secondly, large capacities are required for the transfer of the multimedia messages, which means more bandwidth. Networks

Page 214: Telecommunications and Networks

Figure 18.2 Latency defined: access time runs from the instant at which the control unit initiates the call to the instant at which delivery is completed, and consists of latency followed by transfer time.

designed and optimized for data and textual traffic are not suited for multimedia traffic like video and films.

There are many techniques for overcoming the memory intensive problem. One is the use of techniques for economizing on memory. For example, in image processing of a film it is not necessary to capture and store all consecutive image stills in sequence; instead, a base image is captured and stored along with only the changes to that base, the computer processor doing the construction of each changed image. Even then, storage is a problem and one must resort to compression.

Compression over the years has greatly improved, as shown in Table 18.1. However, despite compression and high compression ratios, storage can still be a serious problem, as demonstrated in Table 18.2 where different types of multimedia are compared.

Once compressed, multimedia must be efficiently stored, transported, retrieved and manipulated in large quantities and at high speeds.

Table 18.1 Progress in compression over the years: data rate acceptable for video

Year    Bandwidth (bits/s)
1978    6 000 000
1980    3 000 000
1982    1 500 000
1984      760 000
1986      224 000          A factor of 75
1988      112 000          over 14 years
1990      112 000
1992       80 000

Source: Adapted from Joseph Braue (1992). The rise of enterprise computing. Data Communications, 21(12), 27.

Table 18.2 Storage requirement for typical applications (in megabytes)

Text of 500 pages (normal standard)               1.0
100 fax line images, uncompressed                 6.4
10 minute animation (15:1 compression)            100
100 colour images (15:1 compression)              500
10 minute digitized video (30:1 compression)      550
1 hour digital video (200:1 compression)          1000

Source: Adapted from Borko Furht (1995). Multimedia systems: an overview. Multimedia, 1(1), 48.
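
The storage figures in Table 18.2 follow from a simple calculation: raw size divided by the compression ratio. The sketch below recomputes the 10-minute digitized video row under assumed frame dimensions (640 by 480, 24-bit colour, 30 frames per second), so the result is of the same order as the 550 Mbytes in the table rather than an exact reproduction of it.

def video_storage_mb(width, height, bytes_per_pixel, fps, seconds, compression_ratio):
    """Approximate storage in megabytes for digitized video."""
    raw_bytes = width * height * bytes_per_pixel * fps * seconds
    return raw_bytes / compression_ratio / 1_000_000

# 10 minutes of 640 x 480, 24-bit colour video at 30 frames/s with 30:1 compression:
print(round(video_storage_mb(640, 480, 3, 30, 600, 30)))   # roughly 553 Mbytes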

Different media have different needs for bandwidth depending on the nature of the application, as shown in Figure 18.3. For example, video and audio stream playback and teleconferencing are both real-time applications but with different latency and delivery requirements: teleconferencing requires very small latencies, while playback applications require guaranteed delivery of real-time messages.

Resources needed for multimedia processing

The large bandwidth needs of multimedia require switching capacities for large megabyte and gigabyte transfers and will need equipment like ATM (Asynchronous Transfer Mode), SMDS (Switched Multimegabit Data Service) and SONET (Synchronous Optical Network), discussed in earlier chapters.

The need for large bandwidths, low latencies and real-time delivery makes many demands of computing resources. These resources have to be managed along all the paths between the end-user at a client device and the server, the

Page 215: Telecommunications and Networks

Figure 18.3 Demands for bandwidth, not to exact scale: bandwidth from about 10 kb/s to 100 Mb/s plotted against time for the analogue phone, digital videophone, desktop video-conferencing, visualization and integrated video-conferencing.

Figure 18.4 Multimedia via telecommunications: video and audio held in multimedia storage and served by multimedia server hardware over the network.

server itself, and the client. An overview of the main components of a multimedia system is shown in Figure 18.4. We will discuss these in turn, starting with the server end.

Servers

The servers are specialized processors designed for handling large masses of data. They will vary with the type of multimedia being processed, but whatever the media the server has to be fast

and powerful. It must also be supported by large and often very large storage capacity. The magnitude of storage space required for multimedia has been mentioned before. Now multiply the storage for one delivery by the large number of deliveries that will soon be in demand. Even a modest choice of films and video can quickly run storage into gigabytes. These storage devices must be fast enough to serve the high volume of multimedia without being a bottleneck in processing. For simultaneous access, servers must also be equipped with multiple disk drives, with video files for each

Page 216: Telecommunications and Networks

drive. Storage can be done by storing multimedia as objects in the object orientation methodology. Objects are also embedded in applications but can be retrieved by OLE, the Object Linking and Embedding methodology from Microsoft, which is fast becoming the industry standard for large PCs and workstations. The objects could then be data from a spreadsheet, text from a word-processing program, or a fully fledged film or video.

Networks

We have mentioned the need for bandwidth earlier. One approach to the problem of bandwidth is microsegmentation, where users are assigned to segments of the LAN. This reduces the users per segment and better serves the users in terms of the bandwidth available to them. Microsegmentation, however, does rely on better switching as well as its integration with bridging and routing, especially if a hub is being used. Intelligent hubs, it is hoped, will be able to do this and provide bandwidth-on-demand as needed and provide virtual switched circuits.
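
The effect of microsegmentation is easy to quantify: the shared capacity divided by the number of users per segment. The 10 Mbit/s LAN and the user counts below are assumptions for illustration.

def bandwidth_per_user_mbps(lan_capacity_mbps, users, segments=1):
    """Average bandwidth available per user when users are spread evenly
    across the given number of switched segments."""
    users_per_segment = users / segments
    return lan_capacity_mbps / users_per_segment

print(bandwidth_per_user_mbps(10, users=100))               # 0.1 Mbit/s on one shared LAN
print(bandwidth_per_user_mbps(10, users=100, segments=10))  # 1.0 Mbit/s after microsegmentation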

Besides bandwidth, there is a problem of architecture with multimedia. It corresponds to the OSI layer architecture discussed earlier and now compared in Figure 18.5. Multimedia needs a new applications layer to accommodate multimedia applications, and the transport protocol needs an additional control capability to prevent

congestion at the destination and at the routers. Also, the transport protocol needs the means to package timing traffic with audio and video to prepare them for transmission. Most upgrades to the transport and other layers required by multimedia can be handled by software upgrades.

Multimedia needs the addition of a base layer and a node layer. The base layer is added to the OSI model in order to coordinate the data access and make the data generated by these queries systems independent. The node layer, residing over the transport layer, represents the logical storage of multiple forms of data contained in the multimedia information . . . By providing an organized method for computers to logically communicate, the layer model ensures data can be transferred independent of systems platform . . . The interoperability ensures that different platforms can equally access multimedia information distributed over networks regardless of the incompatibilities in their operation. (Domet III et al., 1994: pp. 42, 41).

Clients

The client processor has to be large and powerful, though not as powerful as the servers. Early clients were the 386 or 486 processor. Nowadays, the Pentium is recommended for any respectable performance in multimedia processing. Besides the CPU requirement there is the

Figure 18.5 ISO vs. multimedia architecture: the ISO layers (application, presentation, session, transport, network, link, physical) compared with the multimedia architecture, which adds a node layer above the transport layer and a base layer to the stack.

Page 217: Telecommunications and Networks

need for the audio and visual coder and decoder. Some systems come with a decoding board or a coder/decoder called the codec, with which both the client and the server need to be equipped. The codec is the multimedia equivalent of the modem (modulator/demodulator) we visited earlier. Many codecs also perform compression. Some codecs achieve 175 000 pixels/frame, which comes close to high quality TV at between 190 000 and 200 000 pixels per frame.

The client could also have a set-top box, also called a home communication terminal, that has interactive program guides, movie-on-demand capability, robust graphics, full user interactivity and personalized information channels.

Standards

Multimedia needs its standards and there are plenty of them. The most prominent is the MPEG series, where MPEG is the acronym of the organization that developed the standards: the Motion Picture Experts Group. MPEG deals with moving images while the JPEG standards deal with still images. There are other standards organizations involved in developing multimedia standards, at least seven in the US alone. The MPEG-1 standard defines a video resolution of 352-by-240 at 30 frames per second. This gives a quality comparable to the videos on rent in many stores, assuming a bandwidth of 150 kbytes per second.
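
A rough calculation shows the compression MPEG-1 must deliver: the raw data rate of 352-by-240 video at 30 frames per second far exceeds a 150 kbyte per second channel. The 24-bit colour depth assumed below is for the arithmetic only.

raw_bytes_per_sec = 352 * 240 * 3 * 30           # pixels x bytes per pixel x frames/s
channel_bytes_per_sec = 150_000                  # roughly the 150 kbyte/s channel
print(raw_bytes_per_sec)                         # 7 603 200 bytes/s uncompressed
print(round(raw_bytes_per_sec / channel_bytes_per_sec))   # compression of about 51:1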

The MPEG standard is the best known since it is an international standard approved by the ISO, though it does not carry the ISO prefix. MPEG is being watched by the profession because it is an exception to the rule that you need a stable technology before a standard is defined. Here we have a standard before the technology has stabilized. MPEG will test the rule of which comes first, the chicken or the egg: the standards or the technology. The widely accepted standard may encourage the industry to bring out new products, but it might also inhibit innovation and freeze the technology of multimedia.

There are also special standards being developed for codecs, such as H.261 by the CCITT. It defines display formats with specific resolutions and provides processes for reading and decoding video information and organizing it into blocks and groups of blocks. H.243 defines the exchange of a unique encryption key at the start of compression. There are other standards like H.221, H.230, H.242 and the umbrella standard H.320.

Applications of distributed multimedia

Even without discussing the more exotic multimedia applications of telemedicine and digital libraries, we can see that the implementation of distributed multimedia is going to be neither easy nor cheap. It is therefore safe to assume that implementation will come in stages and will come first in the office rather than the home, because offices will be able to afford it and make good use of it. Also, implementation will come first in the areas where existing applications do not yet have multimedia and need it. Examples are telecommuting, e-mail and retrieval of multimedia information for decision-making and problem-solving. But even then there will be a high cost, since these applications will be very storage-intensive. For example, an e-mail containing a one-minute full-motion video will require on the order of 10 Mbytes for VHS quality. Other combinations of e-mail are also storage (and bandwidth) intensive, like text augmented by audio, text with images, and images with audio. Sending video mail in the store-and-forward mode does not require isochronous processing, but video mail does demand minimum delays and consumes prodigious bytes of storage. Eventually we will have compound document e-mail with text, voice and video that can be sent in one form and read in another. It is already possible to send an image document and review it as text (using an optical character reader) and have text read aloud (using voice synthesis).

Video-conferencing

Another important application of multimedia in the office is video-conferencing. It may be considered an extension of existing teleconferencing, but it has many important characteristics that make it complex to implement. These characteristics are:

1. It has bandwidth needs that are on-demand but of unknown size, unlike, say, the downloading of a full-feature film or video of known size. There is some uncertainty about how many participants there will be and how much each will participate.

2. It is a two-way interactive communication, with both directions possibly carrying multimedia, unlike the one-way downloading of a film or video.

Page 218: Telecommunications and Networks

3. It is real-time and hence requires isochronous processing.

Multimedia conferencing utilizes the concept of a shared virtual workspace, which describes the display at each client station and where each participant can send or receive data, voice or video as part of collaborative processing. Furthermore, the functions performed include real-time control of audio and video, dynamic allocation of network resources, synchronization of the shared workspace, multiple call set-up, and graceful degradation when faults occur (Furht, 1994: p. 58).

The basic process of video-conferencing is shown in Figure 18.6, where the object to be transmitted over the networks is found stored at a server, compressed, and then communicated through the layers of transport, network and link to the network. On reaching its destination client, it flows through the layers of link, network and transport and is then decompressed before appearing on the screen as the desired image with its multimedia package. The process for the opposite direction is the process in reverse. For both cases we do not show all the layers nor all the components involved, and stick to the basics to keep things simple.
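
The path in Figure 18.6 can be caricatured as a two-ended pipeline. In the sketch below, zlib on a byte string stands in for a real video codec so that the example runs; it is not the compression scheme a video-conferencing system would actually use.

import zlib

def send_frame(frame_bytes):
    """Server side: grab, encode and compress a frame, then hand it to the
    transport/network layers (represented here by simply returning the packet)."""
    return zlib.compress(frame_bytes)

def receive_frame(packet):
    """Client side: take the packet from the network layers and decompress it
    before display."""
    return zlib.decompress(packet)

frame = b"pretend this is a grabbed and encoded video frame" * 100
packet = send_frame(frame)
assert receive_frame(packet) == frame
print(len(frame), "bytes reduced to", len(packet), "for transmission")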

One of the unresolved problems of multimedia conferencing is the optimum communication architecture for composite streams of audio, video and images in transmission. Other implementation problems concern hardware interrupts and interoperability, and the global adoption of international standards on video-conferencing quality despite the existence of many standards for video-conferencing, like the H standards from the CCITT concerning interoperability: H.261, H.221, H.230, H.242 and the umbrella standard H.320. What we need is high resolution video and audio overlayered with spreadsheets, engineering drawings, text and other relevant multimedia information.

Despite these difficulties, there are some distinct directions that video-conferencing is taking. These include the upgrading of telephones and the downgrading of meeting rooms, leading to desk video-conferencing; improvements in the size of image, quality, picture and colour support; the initial use of the PC with live video in a window of the screen; the ability of participants in video-conferencing to take notes, change memos and diagrams, jot annotations, and manipulate data on spreadsheets, all during the video-conference; and the development of a library of all relevant multimedia objects that are easily and quickly retrievable.

Figure 18.6 Video transmission: at the server station a frame is grabbed, encoded and compressed, then passed through the communication transport and network layers and the link and network adapter onto the network; at the user station it passes back through the link and network adapter and the transport and network layers and is decompressed.

Page 219: Telecommunications and Networks

Video/film-on-demand

This is where full-length films or videos are available on demand and delivered to the office, but most likely in the home and on the home entertainment device. Conceptually, there is little difference between the processing of video/film-on-demand and that of video-conferencing or the very common teleconferencing. The main technological differences lie in the fact that the multimedia in video/film-on-demand is transported in only one direction, and that the storage at the server end has to be very large in capacity and very fast in its retrieval. The great advantage that both video and film-on-demand have is that they can be viewed, stopped and replayed, rolled back, zoomed for any frame, run in slow motion or at fast speed, all under the control of the end-user while in the relaxed comfort of the library or bedroom at home.

We have already discussed the storage considerations, but it is important to note that what is stored is mostly entertainment and that brings the entertainment industry into the picture. They become important players in the market for they control the content. However, they also want to control the distribution, and so there is a confrontation between the holders of content and the distributors which is resolved in the market-place through mergers, alliances or takeovers. We shall discuss these relationships later, but only after we identify another group of players, the educational sector, for they also generate and own content. This group includes the digital libraries and educational institutions that offer distance learning. Before we examine these applications, we look at another application which is somewhat similar to video-on-demand, that of telemedicine.

Telemedicine

Telemedicine is the delivery of medical care to a patient by an expert (or an institution like a hospital) separated by distance but connected by a network and a multimedia communication system. A package of information on the patient is multimedia in that it may contain CAT or MRI scans and other computer-generated pictures; the entire medical case history in text and graphics; and a verbal opinion or set of questions by the patient's doctor to the expert. The expert looks at the multimedia package and may ask questions, and a dialogue may ensue after which a diagnosis is made. The multimedia

package may be sent only one way, but the dialogue is a two-way dialogue, perhaps not between many people as in a video-conferencing situation but between at least two people on an on-line, real-time basis, for time could be crucial in many a situation. Thus telemedicine is not as complicated as video-conferencing in that multimedia is often not exchanged but transported one way. Also, this package is not grabbed from a server as in video/film-on-demand but is created in real time, with current pulse rate and heart beat transmitted as they occur. This may require a lot of sophisticated equipment for data acquisition at the patient's end, but it may be the report of a nurse on a helicopter escorting a person wounded in a car crash (or in a war zone). This raises many important issues in telecommunications and networking, along with moral and ethical issues of who has priority in traffic congestion.

We will now define telemedicine as the collection of data on a patient in digital form to be transmitted remotely to another point for analysis or diagnosis. It could be used for getting an expert opinion, or any opinion, because of the urgency involved, such as in the case of injuries on a battlefield (incidentally, telemedicine was first supported by the US military), or for getting medical attention where it is not otherwise possible, such as in rural areas or when travelling away from the family doctor.

The collection of data would include the recording of data such as pulse, blood pressure, etc.; the observation of physical conditions like the colour of the eyes or face through imaging devices; observations made by special cameras that can go into orifices like the ear, nose and throat; testing of blood samples; collection of data from special instruments; and the past medical records of the patient. All this is sent by telecommunications using a wideband telecommunications system capable of handling multimedia traffic.
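
In data terms, a telemedicine package of the kind just listed is a structured record mixing readings, notes and references to images. The field names and the use of JSON below are assumptions for illustration, not a medical data standard.

import json

def build_patient_package(patient_id, pulse, blood_pressure, notes, image_files):
    """Bundle vital signs, a verbal note and references to image files
    (e.g. CAT or MRI scans) into one record for transmission."""
    record = {
        "patient_id": patient_id,
        "vital_signs": {"pulse": pulse, "blood_pressure": blood_pressure},
        "doctor_notes": notes,
        "images": image_files,       # large binary objects would travel separately
    }
    return json.dumps(record).encode("utf-8")

package = build_patient_package(
    "P-0042", pulse=88, blood_pressure="140/90",
    notes="Suspected fracture; please advise on the attached scan.",
    image_files=["scan_left_arm.dcm"])
print(len(package), "bytes ready for the wideband link")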

The impact of telemedicine is on both the patients and the doctors. One impact will be on the patient, who will no longer have to go to a family doctor but to a medical facility equipped for telemedicine and have a diagnosis made by a doctor or a specialist. In some cases, treatment may include an operation. Some operations have been performed remotely by a surgeon on a dummy with all the movements being transmitted electronically to a robot working on the patient. And what if the patient lives in a remote area of the

Page 220: Telecommunications and Networks

country, or is remote from home, or just lives in a developing country without good medical facilities? Telemedicine may then be the only good answer available.

The impact on the family doctor will be instant access to expert opinion but a loss of patients over time. The family doctor will no longer have a monopoly over the local patients but may have to start seeking remote patients. Many issues are raised by telemedicine, such as:

• How is the cost of the visit for telemedicine split between the local doctor and the remote expert?
• Given that medical practice has to be licensed by each country and even each state (or province or county) in each country, at what location should the doctor be licensed? At the location of the patient, the expert, or both?
• Who is responsible if there is a misunderstanding on 'this' or 'that' artery, limb or organ resulting in the wrong actions being taken and an 'injury' to the patient?
• What happens if a document or message for one patient gets switched to another patient? Who is responsible? The doctor with the patient, the telemedicine facility, the transmission equipment manufacturer, or the telecommunications carrier?
• How is the telemedicine technology to be regulated?
• What is the liability of the providers of the technology that makes telemedicine possible, including that of the telecommunications and network provider?

Digital library

In the 1960s and 1970s there was much talk of an electronic library. John Kemeny, the co-designer of the programming language BASIC, envisioned one library for all of the US receiving queries on a book or article. The end-user could browse through the index and even glance through the contents or the book itself on a terminal and, finding what is wanted, would have the article or book downloaded to the terminal. There was much talk about the distribution of royalties, and then there was a long lull. Then in the late 1980s there was a rebirth of the idea and the term digital libraries surfaced. A grant from the NSF in the US in 1993 to six universities for a digital library initiative did

not hurt. But the transformation to digital was perhaps largely the result of the emergence of the ISDN (Integrated Services Digital Network) and the promise of B-ISDN, with high capacity bandwidth designed for multimedia transmission. The interest was also sparked by the falling costs of digital storage relative to the costs of library shelf-space and the increasing costs of the labour-intensive activity of cataloguing, checking-out, checking-in and reshelving of books in a library. (The operating costs of the Bibliotheque Nationale de France are approximately $260 million for its 5 kilometres of computer controlled belts that deliver books to some 150 points.) And then there is the capital cost (the Bibliotheque Nationale that opened in 1995 had cost around $2 billion). Also, advances in networking made the transmission of book material feasible and even affordable.

But there are many unresolved problems and issues. One concerns the conversion of all the material in our libraries from tangible objects into electronic digital representations such that it can be easily accessed and retrieved, copied and then distributed. This will change the printing and publishing of books and magazines as we now know it. There is also the issue of the protection of intellectual property rights. This is an old issue and is not getting near to any solution. In fact, things are getting worse, since networking and the availability of holdings in the digital libraries may soon make them accessible anywhere in the world by countries (especially in the Far East and the Pacific Region) that have not agreed to the adoption or enforcement of international agreements to protect intellectual property rights. Also, there is some philosophical and legal argument about the relevance of intellectual property law to software protection. Some in the software industry believe that 'existing property laws are fundamentally ill-suited to software. The problems are rooted in the core assumptions in the law and the mismatch with what we take to be important about software' (Davis et al., 1996: p. 21).

Books in a library on some scientific and high-tech subjects become outdated sometimes even before they are printed. One relies therefore on journals, which also have a long lead-time for publication. Using a digital library (or the Internet), a document can be available immediately after it is written and put in machine-readable form. Here books and articles can be searched by content, title or author.

Page 221: Telecommunications and Networks

Special hyperlinked versions will facilitate search and use.

What will be the impact of a digital library on levels of education and the acquisition of knowledge? How will it affect the delivery of educational services? Do students need to go to the library, or can they retrieve what they want from a room in the dormitory or from their homes? In addition to books being delivered electronically, why not also have lectures delivered electronically? Why can a student not learn from the best instructor and researcher in the country or the world, just as a patient gets the best advice from medical experts wherever they are? Why can any student not have quick and easy access to the lectures and research findings of Yale, Stanford, Harvard, Oxford and Cambridge? How will this affect international education? Will the digitization of information make it less accessible to a class of information-poor who must then compete with the information-rich class? How will digitization contribute to economic development? How will this affect productivity and competitiveness (nationally and internationally)? Will it improve our life-style and increase our standard of living?

Whatever the answers to all the above questions, there is a general acceptance that digital libraries will contribute to formal learning, informal learning and professional learning, and may even contribute to digital schools. They will greatly support distance learning, which is our next topic.

Distance learning

Distance learning is education in a place that is remote from the sources of learning, whether these be instructors or teaching materials. This is achieved by transmitting educational material, like courseware designed for remote learning in multimedia packages, to the point where it is needed.

The process of distance learning can be visualized as in Figure 18.7. The instructor, using a multimedia workstation, directs courseware to be sent through a network to a pupil or student who also sits at a multimedia workstation to receive the material. Also accessed are materials from data/knowledge-bases (which may include multimedia dictionaries, encyclopedias, atlases, etc.), either by the student directly or on the recommendation of the instructor, or both. This mode of education, which blends networking with educational materials (including appropriate courseware with varied media providing richer insights), implies not only different modes of instruction but also different modes of learning.

Distance education, along with digital libraries, if accepted as a viable concept, will have a profound impact not only on the way we educate the next generation but also on the infrastructure needed for such changes.

In technological and social changes there is always an interplay, a tension, between the forces of conservation and innovation.

Figure 18.7 Distance education (the instructor at a multimedia workstation, with courseware on a video server, connected through a network to students at their multimedia stations and to a data/knowledge-base)

Page 222: Telecommunications and Networks

Cultures and communities do not, and should not, lightly let go of structures and practices in which they have invested heavily. The task in the years ahead will be to decide which existing practices and structures to let go and which to retain, and which innovations to reject or adopt (Levy and Marshall, 1995: p. 83).

Multimedia electronic publishing

Electronic publishing is the publishing of material by electronic means, like a computer with a fast laser printer. In the early 1980s we had a special computer program, TeX, that electronically published books for computing, with the symbols used in mathematics and science. The turn-round time from a finished manuscript to the bookshop was reduced from what was then around 1-2 years to 1-2 months. And yet electronic publishing is not very common, partly because there is a high cost of capital and conversion, and partly perhaps because publishers are thinking of multimedia publishing. A contract for a book in the 1990s often had an additional clause giving the publisher the copyright for use of the material in all multimedia publications to be done electronically if deemed profitable. Publishers are thinking in terms of making all publications multimedia and even interactive: a fiction book would have appropriate background music for each chapter; a text in chemistry would have a film or video of an experiment being performed; and an encyclopedia entry on space travel would have film clips of the actual space launch. With the click of a mouse, the user, even a child, can get more information (or action) from any of the objects on the screen. Such interactive multimedia can add greatly to learning, especially to electronic distance learning. It will also greatly reduce the costs of books once book publishing is integrated with distribution through telecommunications and networks. This could be done using a digital library or through communications media, be it cable TV or by telephone to a computer screen or to a combination device. This explains the publishing industry's great interest in communication carriers and the telecommunications industry, and the many alliances and mergers taking place not just in the publishing and entertainment industries but with carriers and the computer industry.

Integration of electronic publishing would have many advantages such as:

Reduction in current prices of publishing and distribution
Reduction in the time required for production and distribution
Ease and speed of updates, revisions, and later editions
Easier search and retrieval by different identifiers, using computer indexing and hypertext methodology
An environment-friendly publishing industry, saving the cutting down of trees for the paper currently used in publishing.

Table 18.3 Comparison of modes of delivery

                   Telephone           Cable TV            Networked (Internet)   Multimedia networked
Main users:        Everyone            Homeowners          Anyone                 Everyone
Content:           Low                 Low/medium          Medium/high            High
1/2 way:           2 way sequential    1 way only          2 way sequential       2 way interactive
Bandwidth needed:  Very low            High                Low/Medium             High to very high
Security:          Good                Not needed          Not good               Poor/good
Usage:             Very easy to use    Very easy to use    Not so easy            Easy
Access:            Excellent           So so               Not so good            Not so good
Topology:          Bus/star            Trunk & branch      Routed                 Star
Switching:         Circuit switching   Unswitched          Packet switched        Switched/unswitched
Protocols:         POTS                Proprietary         TCP/IP                 ISDN, analogue
Billing:           Per unit used       Per connection      Poor facility          Good facility

Source: Adapted from Furht (1995: pp. 28-9).

Page 223: Telecommunications and Networks

Integration requires the consideration of different modes of delivery, which are compared in Table 18.3.

We have thus far discussed the publishing of books, magazines and journals. But the publishing industry includes the publishing of newspapers too. If their publication and distribution can be integrated, then why not have one office do all the editorials, collect all the raw data, compose the news stories, add artwork and coloured pictures, and then transmit this to the regional and local points where local or regional news can be added and printed on fast electronic printers? Wouldn't this be cheaper and more efficient? But would this also lead to a concentration of power, assuming that information is power? If so, is that all that bad? If yes, then can the dangers of concentration of publishing power be contained and curtailed?

There is no question that telecommunications and networks make many an application now possible, but along with them there arise many social and ethical questions and issues that we may not be ready to handle and cope with. There may be organizational problems too.

Organizational implications of distributed multimedia

For all the many applications of distributed multimedia, we have examined the technological implications but not the organizational implications. The important implication results from the involvement of many large and important firms and industries in the distribution of multimedia. We mentioned earlier that the entertainment industry has become most interested in the distribution of multimedia films and videos-on-demand. This includes the publishing industry, with Goliaths like Simon & Schuster in the UK. We have also mentioned the end-users implicitly as including libraries, health-care institutions like hospitals, and governments, which are interested in the welfare of their citizens and subsidize some organizations like libraries and hospitals. Another group interested is the computer industry, which generates the products used for multimedia. This group includes manufacturers of computers, software houses, and manufacturers of peripherals and supporting telecom equipment. And finally, there is the group of providers of communication services, which includes the companies that offer telephone, cable TV, wire-less, satellite and Internet services. There is fierce competition amongst some of these firms, especially in the communications provider industry. For example, the telephone and cable TV companies each think that they can produce one device that will do away with, if not marginalize, the other. This then could be a battle for survival. Each, however, has a comparative advantage, as shown in Table 18.3. It is likely that all will survive but only after much restructuring and realignments.

The industries involved in multimedia communications face each other in the marketplace as depicted in Figure 18.8.

Figure 18.8 Players in the multimedia market (the entertainment industry: film, video, publishing and others; the providers of communication services: telephone, cable TV, Internet, satellite and wire-less; the computer industry: hardware manufacturers, software houses, telecom suppliers and others; and the end-users: government, libraries, education, health-care and others)

Page 224: Telecommunications and Networks

Table 18.4 Evolution of multimedia computing

                 1st Generation    2nd Generation        3rd Generation         4th Generation
                 1989-91           1992-94               1995-96                1996-
Media:           Text              Moving, still &       Full motion            On-demand video/film
                 B/w graphics      colour-bit images     HDTV                   Multimedia video-
                 Bitmap images     Motion video                                 conferencing
Technology:      JPEG              Motion JPEG           MPEG-2                 Wavelets
                                   MPEG-1                MPEG-3+                MPEG-4 ????
Base platform:   25 MHz 386        50 MHz 486            100 MHz Pentium        ??????????
                 40 MB disk        240 MB disk           2600 MB disk           + RISC processor
Authoring:       Hypertext         OO multimedia         Integration of OO      Integration with
                 Hypermedia                              with operating         corporate KBIS
                                                         system
Network:         Ethernet          FDDI (100 Mb/s)       FDDI (500 Mb/s)        Multi-Megabyte
                 Token ring        ATM                   ATM
                                                         Isochronous Ethernet

Most of the interrelationships are between the four groups shown, but some interrelationships exist within groups, like the government supporting more than one library and hospital or more than one educational institution at the same time. These interrelationships, if drawn on Figure 18.8, would be one jumble of lines, with many being in the north-west corner between the communication service providers and the entertainment industry. This is where the large and very large multi-billion dollar companies are trying to stake out their share of the market. They will either do it through fair competition, through friendly mergers and alliances, or through unfriendly take-overs and buyouts. This will be decided in the near and far future and so will be discussed later in the chapter entitled 'What Lies Ahead?'. Whatever happens, it will certainly be exciting.

Summary and conclusions

Multimedia has come a long way since the first generation of applications in 1989-90. Technology is improving, especially with transmission at a trillion bps, to the point that one can predict (with some caution) that we will soon be able to see any full-feature film or video, browse through any data/knowledge-base or surf anywhere on the Internet, all from the comfort of a home theatre. One could also do video-conferencing from the desktop and, like the home theatre, have access to knowledge wherever it may be through personalized information channels. This evolution is summarized in Table 18.4.

There are few if any technological roadblocks to the above scenario. However, test beds show no great enthusiasm for it. There is the fear that the costs of delivery will be so loaded with bells and whistles (features), not all of them needed, that it may not be economically viable for many an office and family. The end-user also needs confidence in being able to control the content of what comes gushing down the information highway.

Multimedia applications can be classified in many ways. One way is shown in Figure 18.9.

There are two crucial characteristics of multimedia: one, high sensitivity to delays and lack of synchronization in transmission; and two, high volume and size of transmission. These factors vary with applications. Virtual reality applications will have high sensitivity as well as high volume, running into the gigabit zone. On the other side of the spectrum of computer applications are the non-multimedia applications that have low sensitivity to delays and low to high volume. In between are the multimedia distributed applications, especially video/films-on-demand, digital libraries, and telemetry and other data needed for telemedicine. They have medium to high sensitivity and medium to high volume, sometimes going into the gigabyte zone. These applications are shown in Figure 18.10.

Multimedia applications are easier to use, and even to absorb, than many traditional computer applications, but they are more difficult to produce.

Page 225: Telecommunications and Networks

Figure 18.9 A classification of multimedia (end-user to end-user, end-user to computer, end-user to document, and computer to end-user; interactive access, broadcast, and object-oriented manipulation; examples include conferencing, training, education, graphical user interfaces, hypermedia access to documents, real-time film/video/shopping/banking/medicine, presentations, information kiosks, newsletters, distance learning, collaborative processing, data/OO knowledge-bases, document e-mail and telemedicine)

Figure 18.10 Data sensitivity and traffic size (delay sensitivity plotted against traffic size and volume: virtual reality applications lie in the high-sensitivity, high-volume gigabit zone; multimedia distributed applications lie in between; non-multimedia applications such as e-mail and file transfer have low delay sensitivity)

They have special resource requirements that are summarized in Figure 18.11.

The applications of multimedia examined in this chapter are no 'killer' applications as, say, e-mail was in the early 1990s. But the application of telemedicine may be a killer application in the sense that its absence may kill people. Also important are the applications of the digital library and distance learning, for they may well save people from remaining illiterate and living on the periphery of society. The value of spreading knowledge and increasing learning cannot be measured. Nor can you assign a monetary value to the saving of lives. In this sense the value of multimedia applications will be infinite even if they never become killer applications but become mainstream applications. And what is the value of mass entertainment on demand? This is somewhat questionable because that application has a downside of sometimes delivering what you do not want, either as violence or pornography or as being undesirable for children.

Page 226: Telecommunications and Networks

Figure 18.11 Resources needed for multimedia (network: ATM, FDDI, SONET, high throughput rate (compression), high transfer rate (bandwidth), isochronous service, T1; protocols/architecture: OSI extended, TCP/IP, ISDN; hardware: high-performance workstations, special-purpose servers, peripherals, high-capacity storage, ATM; standards: H series, MPEG-1-encoded audio of CD quality, MPEG-2-encoded video of HDTV quality, NTSC-quality video, JPEG; organizational arrangements: merger, alliance, takeover)

This poses a problem for technology: should technology be responsible for the control of content, or should it be the owner of the system (the parent in the case of children using the system)? The problem is less a technological than a moral and ethical one, and the response will vary greatly among societies around the world, which will make the global use of multimedia all the more difficult to define, much less resolve. Businesses will benefit greatly from distributed multimedia applications. It is very possible that every worker will have the capability to video-conference from the desk without the elaborate equipment now available only in special rooms equipped for video-conferencing. Document processing through e-mail will improve business communications, and teleshopping using multimedia may even increase sales and commerce. Businesses will have to devise new strategies for advertising and selling, while consumers will have to adjust to new ways of buying and receiving the products they want as well as the entertainment they desire.

Case 18.1: MedNet

MedNet is a medical collaborative and consultative system that was in its third phase of operations at the University of Pittsburgh Medical Center.

MedNet started in 1985 and is on-going on a daily basis at seven hospitals and multiple diagnostic and research laboratories. A major challenge in implementing MedNet was 'the development of techniques for processing and display of real-time multimodel medical information . . . One area where a collaborative system can substantially improve patient care is clinical neurophysiology, a consultative service of diagnostic and monitoring techniques used to assess nervous system functions . . . During brain surgery, monitoring techniques help prevent damage to nervous system structures by continuously measuring activity recorded directly from the brain in real time. MedNet provides real-time monitoring and multiparty consultation and collaboration during brain surgery for approximately 1600 cases per year.'

MedNet has also developed communication control strategies for the wide variety of data, including audio, video and neurophysiological data, that arise in a medical environment.

Source: Robert Solomon. Multimedia MedNet, IEEE Computer, 28(5), May 1995, p. 65.

Case 18.2: Electronic publishing at Britannica

Encyclopedia Britannica and Wide Area Information Services (WAIS) Inc. of the US agreed to jointly develop improved information retrieval for the Internet. The new search-and-retrieval engine optimizes the original WAIS engine, producing efficiencies of up to 20%, with easier and faster access to the 44 million word base of the Encyclopedia Britannica. Systems Britannica, the publishing platform that incorporates the new searching technology, serves as an infrastructure for Britannica's electronic products, including natural language searching which allows the end-user to enter a question like 'Why does the moon loom larger on the horizon?'.

Page 227: Telecommunications and Networks

to enter a question like ‘Why does the moon loomlarger on the horizon?’. Also possible is an elabo-rate systems of hypertext links that make it easy tonavigate among the related entries in the database.

The Britannica’s Home-Page on the Internet is:http:/www.eb.com.

Source: Information Today, 1995, p. 2.

Case 18.3: The access projects at the British Library

The British Library, with a collection of over 18 million volumes, published a commitment in its Strategic Objectives for the Year 2000 of 'providing maximum access to the collection using digital and networking technologies for onsite and remote users'. Some of the projects include:

The Patent Express Jukebox holds over 34 million patents. Software searches and prints high-quality copies for users in under two minutes.
The Electronic Beowulf holds the unique manuscript of the 11th-century Anglo-Saxon epic. 'Letters and words erased by the original scribes, damaged by the fire in 1731, or hidden by 19th-century restoration, are now discernible and test images have been mounted on the Internet to allow international scholars to see the progress of the project.'
The Electronic Photo Viewing System holds a major photographic collection, including Victorian spiritual photographs, accessible by subject, and provides a hyperlink to the descriptive text which accompanies each image.
The Network OPAC has over 6 million bibliographical records. 'Beta sites are now being selected in the US to work with the library on establishing communications links and usage requirements for a future international Network OPAC.'

Source: Jonathan Purday. The British Library’sinitiatives for access projects. Communications ofthe ACM, 38(1), p. 65.

Note: For digital library projects at Stanford University, the University of California at Berkeley, Alexandria in California, the University of Illinois and the University of Michigan, see Ibid., pp. 59-64.

Case 18.4: Video-conferencing in telemedicine at Berlin

Video-conferencing is part of the Desktop Conferencing (DTC) used extensively in the Bermed Project at the German Heart Institute and the Rudolf Virchow University Hospital in Berlin. Advanced multimedia and communications technology support all areas of medicine, from computer tomography and magnetic resonance imaging to intensive care monitoring units and administrative support systems. Diagnostic information comes from a variety of imaging techniques and formats as well as reports, graphs, charts, reference books, handwritten notes, film and video sequences, audio recordings, and conversations with patients and colleagues. However, systems do not operate optimally and many systems operate in isolation with little or no integration with other related systems.

A self-evaluation concludes: ‘The use of multi-media DTC has shown the need for video, highquality audio, gesture support and overall sim-plicity. The future success of multimedia DTC inmedicine critically depends on interoperabilityand solutions to security problems. Final accep-tance in routine medical practice, however, willdepend not only on technological factors but alsoon the many social and organizational issues,including changes in work practices to obtainmaximum benefits.’

Source: Lutz Kleinholz (1994). Supporting cooperative medicine: the Bermed project. IEEE Multimedia, 1, 44-52.

Case 18.5: Telemedicine in Kansas

Increasing in popularity in the US is the use of telemedicine, as in the case of the Kansas Medical Center, where the patient is 300 miles away and the doctor, Gary Doolittle, a 37-year-old oncologist, talks to the patient and uses two-way television along with stethoscopes, X-ray transmission and lab tests that are performed remotely using telecommunications. The cost of such an equipment configuration is currently $50 000 but is expected to drop by over 50% as technology improves and it becomes easier to transmit large blocks of data by wire and fibre-optic cable. Many insurers will not cover telemedicine of this type, and many lawyers worry about malpractice and privacy.

Page 228: Telecommunications and Networks

The important resistance comes from local physicians, who say that they are afraid of out-of-state quacks who are not governed by state licences, but who actually fear the economic threat to their practice from clients who can now use telemedicine to go across state borders (and maybe across national borders) to seek medical treatment, or at least get a second opinion through a televideo telemedical appointment with a world-famous medical specialist. Telemedicine is thus becoming another form of electronic commerce, much like teleshopping, telebanking and securities trading, by eliminating the middlemen and bringing new competitors into previously insular markets.

Arthur Caplan, professor of bioethics at the University of Pennsylvania, argues that 'technologically, telemedicine is already here', but that overcoming the economic barriers 'may take 20 more years . . . as medicine moves into cyberspace, we need a new system of checks and balances'.

Source: Bill Richards, Doctors can Diagnose Illnesses Long Distance to the Dismay of Some, Wall Street Journal, Jan. 17, 1996, pp. A1 and A10.

Bibliography

Akselsen, S., Eidsvik, A.K. and Folkow, T. (1993). Telemedicine and ISDN. IEEE Communications Magazine, 31(1), 46-58.
Aldermeshain, H., Ninke, W.H. and Pile, R.J. (1993). The video communications decade. AT&T Technical Journal, 72(1), 2-6.
Bleecker, S.E. (1995). The emerging meta-market. The Futurist, May/June, 17-19.
Chang, Y.H., Coggins, D., Pitt, D., Skellern, D., Thapar, M. and Venkatraman, C. (1994). An open-systems approach to video on demand. IEEE Communications Magazine, 32(5), 68-79.
Coopock, K. and Cukor, P. (1995). International videoconferencing: a user survey. Telecommunications, 26(4), 33-34.
Davis, R., Samuelson, P., Kapor, M. and Reichman, J. (1996). A new view of intellectual property and software. Communications of the ACM, 39(3), 21-30.
Deloddere, D., Verbiest, W. and Verhille, H. (1994). Interactive video on demand. IEEE Communications Magazine, 32(5), 82-88.
Domet III, J.J., Rajkumar, T.M. and Yen, D. (1994). Multimedia networking. Information Systems Management, 11(4), 39-45.
Furht, B. (1994). Multimedia systems: an overview. Multimedia, 1(1), 47-58.
Furht, B., Kalia, D., Kitson, F.L., Rodriguez, A.A. and Wall, W.E. (1995). Design issues for interactive television. IEEE Computer, 28(5), 25-39.
Gemmel, D.J., Vin, H.M., Kandlur, D.D., Rangan, P.V. and Rowe, L.A. (1995). Multimedia storage servers: a tutorial. IEEE Computer, 28(5), 40-49.
Grosky, W.J. (1994). Multimedia information systems. Multimedia, 1(1), 12-23.
Levy, D.M. and Marshall, C.C. (1995). Going digital: a look at assumptions underlying digital libraries. Communications of the ACM, 38(1), 77-84.
Lippis, N. (1993). Multimedia networking. Data Communications, 22(3), 60-69.
Little, T.D. and Venkatash, D. (1994). Prospects for interactive video-on-demand. IEEE Multimedia, 1(3), 14-23.
McQuillan, J.M. (1992). Multimedia networking: an applications portfolio. Data Communications, 21(12), 85-94.
Nahrestedt, K. (1995). Resource management in networked multimedia systems. Computer, 28(5), 52-63.
Ozer, J. (1995). Shot by shot. PC Magazine, 14(7), 104-163.
Peleg, A., Wilkie, S. and Weiser, U. (1997). Intel MMX for multimedia PCs. Communications of the ACM, 40(1), 24-30.
Reisman, S. (1994). Multimedia computing: an overview. Journal of End-User Computing, 6(4), 26-30.
Richardson, S. (1992). Videoconferencing: the bigger (and better) picture. Data Communications, 24(9), 103-111.
Rabina, L.R. (1995). Towards vision 2001: voice and audio processing considerations. AT&T Technical Journal, 74(2), 4-13.
Strauss, P. (1994). Beyond talking heads: videoconferencing makes money. Datamation, 10(19), 38-41.
Snider, J.H. (1996). Education wars: the battle over the Information Age. The Futurist, 30(3), 24-33.
Tobagi, F. (1993). Multimedia: the challenge behind the vision. Data Communications, 22(2), 61-64.
Tolly, K. (1994). Networking multimedia: how much bandwidth is enough? Data Communications, 32(13), 44-52.
Thuston, F. (1992). Video teleconferencing: the state of the art. Telecommunications, January, 63-64.
Wiederhold, G. (1995). Digital libraries, value, and productivity. Communications of the ACM, 38(1), 85-96.
Woolfe, B.P. and Hall, W. (1995). Multimedia pedagogues. Computer, 28(5), 74-80.


Page 229: Telecommunications and Networks

19

TELECOMMUTERS, E-MAIL ANDINFORMATION SERVICES

Networks connect people to people and people to data.

Thomas A. Stewart

The general idea was that the industrial revolution had taken people out of their homes, and now the telecommuting revolution will allow them to return.

Tom Forester

Introduction

Knowledge workers handle knowledge, not necessarily in the sense of Artificial Intelligence, where knowledge includes heuristics and inferences, but knowledge in the sense of the basic units of data and information. One subset of knowledge workers comprises those who work at home; they are referred to as telecommuters.

With the coming of computers, knowledge workers have increased in number and importance. With the confluence of computers and telecommunications, telecommuters are increasing in importance and numbers. In this chapter we examine what knowledge workers and telecommuters do and how they do it.

All knowledge workers, and especially telecommuters, need interconnectivity; that is, their computers must be connected to other computer systems. This interconnectivity can be local within an organization, or regional within a community, or international, connecting the whole world. International connectivity is currently achieved through the Internet and e-mail. The Internet we examine in the next chapter; in this chapter we discuss e-mail.

Telecommuting

Knowledge workers work mostly in offices, but some work at home using telecommunications to keep in contact with their corporation instead of being in the office physically. These workers are called teleworkers, or telecommuters. There is a common perception that many teleworkers are mothers who stay at home and join the workforce to work around feeding their children and doing their household chores. But this perception does not hold for the teleworker, at least not at Rank Xerox, where all the teleworkers are men. Rank Xerox in the 1980s called it networking. There are other names for this type of work at home: electronic homework, virtual work, distance workplace, flexiplace and the electronic cottage. The emphasis is variously on work being independent of place and time. But why is teleworking a relatively new phenomenon? Because it has at least four prerequisites that have to be satisfied: one, a cheap, fast and robust computing device, which is satisfied by the PC or workstation; two, a connection of this device with the corporate office, customers, suppliers and other business players, which is supplied by telecommunications and networks; and three, a worker at home who is at ease and comfortable with a computer and telecommunications. This third condition is largely satisfied, with many in the workforce being computer literate and comfortable with the PC, which is end-user friendly, though perhaps not as end-user friendly as some would like it to be. The fourth condition for telecommuting is that there are professions where telecommuting is feasible, viable and desired by corporate management. This condition too is often satisfied, largely on economic grounds: corporations do not have to pay for the high overhead of an office building, nor for parking and other facilities, for their telecommuting employees.

Page 230: Telecommunications and Networks

There is also greater flexibility in hiring, because telecommuting adds to the labour supply. Telecommuter applications can be stored in the database and called upon when needed, as part-time employees for the duration needed. Thus, telecommuting offers corporate management flexibility in the hiring of workers when needed, whilst dropping them when demand drops, all without any cost of firing or any hassle with the unions. This part-time arrangement has tax advantages in some countries and is an opportunity for people who want to stay at home and still be part of the productive workforce adding to the GNP (Gross National Product) of their country.

Telecommuting is the substitution of telecommunications and computers for transportation to the office. It was first talked about during the Arab oil embargo in the mid-1970s. It was then recognized that roughly 5% of US oil was used for commuting and that a saving of 190 000 barrels of oil a day could be gained if 20% of the commuters stayed at home. A few corporations did experiment with home-workers at the time, but not till the 1980s did the idea come of age. By that time reports on the success of experimental alternative work-site programs were in circulation and the benefits of telecommuting were better understood. In addition, the personal computer was on the market and at an affordable price, another factor contributing to the economic feasibility of working at home.

The percentage of time an employee telecommutes varies from one organization to another. Some companies have no main office at all. All work is done at dispersed workstations joined by a network. Informal meetings take place in low-rent neighbourhood centres or at the homes of employees. Sometimes a company will establish a small satellite office with an electronic link to the main office. This type of alternative work style is classified as telecommuting since the time and cost of commuting to an office is avoided. Other organizations design work around assignments so that employees who operate out of their homes spend one or two days each week at the main office.

Telecommuting reduces the costs of carrying employees not needed all the time and the cost of the fringe benefits that must be given to full-time employees. When teleworkers are not union members, the unions are unhappy at losing membership and the clout and power associated with large memberships. Thus, the unions are unhappy with telecommuting and are often actively opposed to it. Unions argue that teleworkers are exploited with low wages and no fringe benefits compared to what is paid to full-time employees for the same work. The teleworker argues that in return for not being paid like full-time employees who work in the office, the teleworker does not have to dress up for work and drive to work. They can work when and where they please, though when not in the office they miss the socializing and time off they sometimes get in the office.

Implementation of teleworking

The equipment required by a telecommuter could be quite modest: a terminal with a modem. For complex tasks it may be necessary to have a workstation equipped with peripherals. Often, though, the peripherals run by the corporate computer system can be used. Once connected to a corporate LAN, the telecommuter can have access not only to expensive peripherals but also to the main computer system, its servers and the corporate database. Such a system is depicted in Figure 19.1. If the corporate LAN is connected to the Internet (which is a net of other nets) then the telecommuter can access any customer or vendor (or friend and relative) anywhere in the world and at any time desired. Such an interconnection through the Internet is shown in Figure 19.2.

Careful planning is required for telecommuting to be successful. The people selected for telecommuting must work well on their own. Their assignments must be jobs that can be done away from the office. IT personnel, including data preparation clerks, programmers, and even some systems engineers, are good prospects. Technical or documentation writers and IT training specialists who must develop educational materials are also good candidates for telecommuting to do part of their jobs. Outside IT, telecommuting is viable for sales persons, architects, accountants, auditors, reservation clerks, some secretarial positions and other white collar professions.

Persons selected for telecommuting should be trained to cope with off-site employment, while managers should be trained to supervise subordinates from a distance. Remote work does not mean all contact with co-workers is lost. On the contrary, training should include ways to remain in contact through electronic mail, phone, fax, regularly scheduled meetings, memos, routine reporting, and so forth.

Page 231: Telecommunications and Networks

Figure 19.1 Possible access to equipment by teleworker (the teleworker's client connected through the corporate LAN to an optical scanner, image processor, database server and database, the corporate computer system, an application server, a printer, a copier, a video-conferencing facility, PC and workstation clients, and other LANs)

Figure 19.2 The Internet as a net of other nets (the teleworker's corporate LAN connected through our national LAN and our national WAN to the Internet, and through it to WANs, MANs and LANs abroad)

There is some evidence that persons who opt for telecommuting look only at the short-term benefits (less commuting and reduced expenditures for petrol, meals, clothing, etc.) and do not realize the problems associated with working alone. A training period can help employers evaluate whether the right candidate for telecommuting has been chosen and help employees gain a realistic view of telecommuting as a work mode away from the social contacts of a city and an office.

Besides the economic, social and technical considerations of telecommuting, there are new problems for management. For one, how do you evaluate workers with whom you have no (or little) eye contact and whom you do not observe while at work? Work performed at home by telecommuting can be evaluated quantitatively and perhaps, to an extent, qualitatively also. But many evaluations are qualitative, subjectively judged by observing the worker's attitude and behaviour at work. This is not possible in telecommuting, and new ways of evaluation have to be devised appropriate to the tasks and personnel involved. This may involve some face-to-face meetings, but the rest of the

Page 232: Telecommunications and Networks

evaluation must be based on measuring and monitoring productivity at home.

The benefits and limitations of telecommuting

One of the primary benefits of telecommuting is increased productivity. Workers claim that there are fewer distractions than in the central office. Gains in productivity of 20 to 40% are not uncommon among professionals. However, it should be recognized that many persons work best in a structured environment or when they interact socially with others. Not everyone has the work habits, discipline and planning skills to direct their own activities. Careful screening of candidates is necessary to ensure that productivity is not lost, but instead gained, by telecommuting.

In service-oriented organizations, telecommuting is a good way to extend hours of operations. For example, employees at home can take orders and answer customer queries. J. C. Penney, a large retailer in the US, uses telecommuting for some of its catalogue-order telephone centres. Companies that have seasonal peak volumes, or certain hours of the day with heavy demand, can handle the load by telecommuting without tying up office space.

When organizations are recruiting new employees, telecommuting can help attract applicants. Many persons like the idea of flexible scheduling instead of the standard 9-5, Monday to Friday work week. When working at home, employees can choose their hours of work. Organizations trying to hire professionals when the demand for them exceeds the supply will find that the option of telecommuting gives them a recruiting edge.

Telecommuting is also attractive to persons with family responsibilities (a young child or aged parent) who may be unable to work at certain hours of the day, or cannot take the time to commute, or who just do not like to commute. Persons with medical disabilities who have difficulty leaving home can now enter the work-force through telecommuting.

Telecommuting can also be an alternative to corporate transfers. Many employees today resist relocation. A common reason is that a spouse is employed and unwilling (or unable) to uproot. Whatever the objection, a valued employee who might resign rather than move can be retained when telecommuting is an alternative to transfer. In addition, the employer avoids the cost of the relocation or the cost of hiring a replacement.

Figure 19.3 Advantages of telecommuting (for the corporation: cost savings, productivity increase, recruitment and retention of staff; for the teleworker: morale, productivity, opportunity for the disadvantaged/disabled and the geographically isolated, flexitime, a more 'friendly' working environment, combining work with leisure, quality improvement, creativity enhancement; for society: rural development, urban development, lower energy consumption, a larger work-force, less unemployment, less pollution, less transportation, fewer offices, fewer car parks)

Page 233: Telecommunications and Networks

Table 19.1 Advantages and disadvantages of teleworking

Advantages
Offers flexibility to corporations in the hiring of transient personnel for peak and unexpected loads.
Offers teleworkers the flexibility of working at their choice of time and place.
Provides a possible 'creative' environment of the home.
Provides access to 'piece-work' and part-time work.
Expands the workforce to: housewives, disabled people, people in remote regions and even farms.

Limitations and disadvantages
Raises problems of: providing proper access to information needed, integrity and security of data/information, evaluating the teleworker.
Raises issues with unions over: membership, work conditions, payments, direct and indirect.
Psychological and social needs of teleworkers may not be met.
Conflicts and 'over-crowding' at home.
Inability to separate work and pleasure.
Possible 'burn-out' of working at home.

Finally, telecommuting is advantageous because it reduces office space and furniture requirements. With telecommuting, such overhead expenditures are minimized. Furthermore, telecommuting means that space is not a constraint when changes in the workload are proposed. Organizations can expand rapidly without worrying where to put people, or retrench without being saddled with expenditures for unused space and furnishings.

The advantages of telecommuting are summarized in Table 19.1 and Figure 19.3.

When telecommuting?

When the advantages can overcome the disadvantages and limitations of telecommuting, there are many people who would want to telecommute. These people (see Ursula Huws, 1991: pp. 28-9) include:

people who wish to work at home in order to avoid the commute to work;

individualists who find the large corporation so antipathetic an environment that they wish to set up their own office at home;
women who cannot find child care arrangements;
women who wish to stay with their children at home and yet want or need to work;
people who feel that they are exploited and underpaid at work;
people who use computers at work and might as well use them at home;
workers who are 'loners' by temperament and would rather work at home than in an office;
people with disabilities but who can work at home;
people who want an extra job;
firms and organizations that find telecommuting a way to downsize their organization and find telecommuting of some of their employees an economical thing to do.

Future of teleworkers

Before making predictions about teleworkers, one must remember that predictions of the use of computers and telecommunications in the home have been made in the past and have been notoriously wrong. Witness the less than expected use of home shopping and home banking through videotex, both in the US and in Europe: Prestel in the UK, Viewtron in Florida, Keyfax in Chicago, Gateway in California and Bildschirmtext in Germany. There were some 50 videotex services operating in 1982 in 16 countries, and all of them operated below expectations and often below the breakeven point of profitability. There was one exception: Minitel in France. The success of Minitel was due partly to high end-user acceptance but also to a combination of political intrigue and the deep pockets of government support.

Most predictions in some countries went wrong partly because the predictions were made by vendors and vendor-biased commentators, and partly because the predictions were based on poor assumptions about the behaviour of parties at home and their ready acceptance of computer technology.

Despite these reservations, there are some predictions that can perhaps be safely made. The home-office of the teleworker will soon be the confluence of computer technology, AI technology (including expert systems), networking (local

Page 234: Telecommunications and Networks

Figure 19.4 Interconnectivity for a teleworker (the teleworker's typical PC/workstation on a LAN, with access to parallel processors, supercomputers, specialized computers, mainframes, minis, specialized workstations, laptop portables/PADs, wireless systems, a PC connected to the Internet, and other LANs and the Internet)

and global), wire-less technology, signalling over fibre optics, image processing and the fast processing of massive volumes of data and knowledge. There will be the capability of faxing documents and using palmtop computers when needed. All these, or at least most of these, devices will be 'smart' because of their use of AI programs. They will have the ability to process not just data but knowledge, not just discrete variables but 'fuzzy' variables, and not just on-line but in real time too. The telecommuter will have access through a network to many different peripherals that are not available at home. One such scenario is shown in Figure 19.4. These are no great predictive leaps into the future but merely a projection of what is technologically possible and what is currently being done (as, for example, with PADs and the Internet computer).

Access by telecommuting may even extend to certain professions such as the salesperson who works out of a van or car. The electronic office on wheels will enable the salesperson to give video and animated graphic demonstrations to clients at the client's site and respond to queries that will bring instant responses from the decision-maker in the corporate headquarters.

The changing technology and changing work environment of the 'electronic briefcase', the electronic office on wheels or at home, will require organizational responses. Bureaucracies will be reduced and hierarchies will flatten out. In many cases bureaucracies and hierarchies will be replaced by adhocracies, because there is little time for critical information to go up and down the rigid chains of hierarchical command and percolate slowly through the many layers of bureaucracy. There will be continuous contact between management and customer and back to managers making strategic decisions, often through the intermediary of the teleworker, sometimes not so. The worker is now liberated and dispersed, without set locations or schedules. Some workers may well be displaced from their long-held jobs, if not unemployed. For many workers, the roles and tasks may require designing, restructuring and re-engineering; new standards for work and performance must be instituted and monitored by corporate management; supervisory roles and leadership roles redefined; and, most important perhaps, employees must be educated and trained for the new work environment. This requires a corporate plan for the migration of work that was done in traditional ways to the newer ways using the latest technology, both efficiently and effectively.

An important resource for any knowledge worker and teleworker is the ability to receive and send messages quickly and easily. This is achieved by e-mail, provided both parties have an e-mail connection.

E-mail

We start with a definition of e-mail and compare it with other means of communication like the telephone, the telegraph, the post office and face-to-face discussions. We then examine the resources needed for e-mail. For local communications a local information service provider may

Page 235: Telecommunications and Networks

be sufficient. To communicate across the world, there is the Internet. We shall briefly discuss the Internet as preparation for our discussion of cyberspace in the next chapter.

E-mail is electronic communication. You enter the message you wish to send on a computer, provide the 'address' you wish to send the message to, press a few buttons, and the message is sent to the desired 'address'. Of course, you must follow the protocol (procedures) required for the transmission of the message and you must provide the address of the destination according to a prescribed format. The computer system uses a numeric address, but that may be difficult for the average user to remember, and so the address you provide is often alphabetic with some special symbols, and the system's protocols do the translation. For example, you may use the DNS (Domain Name Service) format, which translates the address provided by the user to the numeric address used by the machine. The address you provide starts with the 'screen' name of the receiver, then the network ID, working towards the finest locational information, with the '@' sign and dots '.' separating different fields (parts) of the address. The e-mail address is somewhat like the addressing of an envelope in Germany: after the name, you start with the area code, name of the city, street and then the number of the house. Thus the author's e-mail address is [email protected], i.e. K. M. Hussain at (@) AOL (the information service provider), with com short for 'commercial service'. Some addresses have more subdivisions separated by dots, and international addresses may even have a country alphabetic code at the very end. This is just one class of addressing. There are at least three other classes, but we shall not be concerned with them except to recognize their existence in case they turn up sometime.
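As a minimal sketch of this addressing scheme (using an invented, purely hypothetical address, since the author's own is not reproduced here), the fields of a DNS-style address can be pulled apart as follows; the short program is illustrative only:

# Hypothetical illustration of the DNS-style e-mail address described above.
# The address and names are invented for the example.
def parse_address(address):
    """Split an e-mail address into its 'screen' name and domain fields."""
    user, _, domain = address.partition("@")    # name before the @ sign
    fields = domain.split(".")                  # fields separated by dots
    return user, fields

user, fields = parse_address("someone@provider.com")
print("Screen name:", user)          # someone
print("Provider:   ", fields[0])     # provider (the information service)
print("Last field: ", fields[-1])    # com, short for 'commercial service'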

In the postal service of many countries, we go the opposite route to e-mail: we start with the name, then the street and then the city. But in principle, e-mail is much like any post office. Both use store-and-forward technology. Your e-mail messages, like your letters, are delivered to a local sorting centre where they are sorted for the address of the destination, and then forwarded through more sorting desks (as in post offices) before they are delivered at the address, where they can be collected when convenient. The delivery point may be the home PC (or workstation) or the office. If a printer is available, both incoming and outgoing messages can be printed in addition to the soft message on the screen. The transmission in e-mail is swift, very much faster especially for international communication. And it is much cheaper too, both locally and internationally.
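A small sketch of the store-and-forward idea under this postal analogy (the 'sorting centre' names and the address are invented; a real mail system involves many more steps):

# Hypothetical store-and-forward delivery: a message is passed from one
# sorting centre (mail server) to the next and held in a mailbox until
# the recipient chooses to collect it.
hops = ["local sorting centre", "regional exchange", "destination server"]
mailboxes = {}                            # messages waiting to be collected

def send(message, recipient):
    for hop in hops:                      # stored at each hop, then forwarded
        print("stored and forwarded at:", hop)
    mailboxes.setdefault(recipient, []).append(message)

def collect(recipient):
    return mailboxes.pop(recipient, [])   # collected when convenient

send("Draft contract attached.", "someone@provider.com")
print(collect("someone@provider.com"))    # ['Draft contract attached.']

Neither sender nor receiver needs to be available at the same time, which is the asynchrony discussed next.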

Another commonality of e-mail with the post office is that both are asynchronous processes; that is, the two people involved, the sender and the receiver, do not have to be available at the same time, as is the case with a telephone call or a face-to-face meeting.

The great advantage of e-mail compared to face-to-face meetings and telephone calls is that you have time to think of an answer. You can break off at any time by simply not responding. You are not on the spot (as with a telephone call) and under duress to express your opinion instantly and seem intelligent and profound. You do not stand the risk of being misunderstood, as in a face-to-face relationship, resulting in unintended emotional responses. You can take your time, and respond when you are ready. This is somewhat like telegraph communication, which is also swift once you have composed your message; but, unlike the telegraph, e-mail is often not charged by the length of the message, and, if it is so charged, the cost is not as prohibitive as that of the telegraph.

One advantage of e-mail is that when a message is sent you know it is delivered; if for some reason it cannot be delivered, you will get a message from the e-mail postmaster explaining the reason for non-delivery, like an unknown computer address. There is no dead letter box for e-mail as there is with the post office. You are never sure of delivery in surface mail (unless you get a reply), especially with foreign mail, for you do not know if the postman has taken a liking to your stamp.

E-mail increases secretarial performance by a factor of two to three. Retrieval and file management capabilities are particular strengths. In addition, the system reduces the need for office storage since cabinets of correspondence files can now be replaced with computer memory.

It must also be recognized that e-mail is just one of the many modes of electronic message transfer. Mailgram and fax are alternative methods. Fax currently is slow in transmission, taking between 1/2 and 6 minutes to transmit a document copy comparable in quality to a Xerox copy. The process is also costly.

Page 236: Telecommunications and Networks

A document page in facsimile form requires 200 000 bits. That is 1000 times the number of bits required for a typical telegram, and 60 times that of a typical office memo.
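A quick check of the message sizes implied by these ratios (the conversion to characters at 8 bits each is an assumption added here for illustration):

# Sizes implied by the ratios quoted above.
fax_page_bits = 200_000
telegram_bits = fax_page_bits // 1000    # about 200 bits for a telegram
memo_bits = fax_page_bits // 60          # about 3 333 bits for an office memo

# At an assumed 8 bits per character, that is roughly:
print(telegram_bits // 8, "characters in a typical telegram")   # ~25
print(memo_bits // 8, "characters in a typical office memo")    # ~416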

In spite of its obvious advantages, corporate managers tend to be ambivalent in their attitudes towards electronic mail. They like the speed, the retrieval and cross-indexing capabilities, being shielded from constant interruptions by jangling phone calls, and receiving and responding to messages when convenient. But e-mail demands a change in style of management. E-mail often eliminates the need for voice and personal contact in business. Business people, especially the gregarious ones, miss the human interplay of visual messages, or a phone call, or a visit to an associate.

E-mail has its drawbacks, especially when compared to face-to-face meetings. You cannot hear the sounds of anger or pleasure; you cannot express joy or sadness through a change of tone or inflection; you cannot see emotions in a blush of embarrassment or of joy; and you cannot sense the atmosphere, which may be tense or frivolous. You cannot interact. You are alone. You have no physical clues of how others are thinking or reacting to what you have just expressed. But you have total control over what you can say, how you say it and when you say it. For these and the other advantages of e-mail, you must pay a price: you need computing resources.

Resources for e-mail

There are three important resources needed for e-mail: a computer, a modem and a connection to an e-mail provider (who provides you with the software necessary for the connection). Let us consider the modem, though it has been mentioned earlier.

A modem stands for modulator and demodulator and is used for communication by a computer (digital) over a telephone line (analogue). It is a device that transforms a computer's electrical pulses into audible tones for transmission over the phone line to another computer. This is the modulator part. The modem also receives incoming tones and transforms them into electrical signals that can be processed by a computer. This is the demodulator part of the modem. Essentially, a modem allows us to convert the digital data of a computer into analogue data for the telephone and vice versa.
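As an illustration of this modulation and demodulation, the toy sketch below maps bits to two audible tone frequencies and back, in the spirit of simple frequency-shift keying; the frequencies are illustrative and no real signal processing is attempted:

# Toy modulator/demodulator: each bit is represented by one of two audible
# tones (frequencies in Hz), roughly how a simple FSK modem signals 0s and 1s.
TONE_FOR_BIT = {"0": 1070, "1": 1270}            # illustrative frequencies
BIT_FOR_TONE = {tone: bit for bit, tone in TONE_FOR_BIT.items()}

def modulate(bits):
    """Digital bits -> sequence of tones for the telephone line."""
    return [TONE_FOR_BIT[b] for b in bits]

def demodulate(tones):
    """Incoming tones -> digital bits for the computer."""
    return "".join(BIT_FOR_TONE[t] for t in tones)

tones = modulate("01001000")       # the eight bits of the letter 'H'
print(tones)                       # [1070, 1270, 1070, 1070, 1270, ...]
print(demodulate(tones))           # recovers '01001000'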

A modem often comes built into a computer. If not, then it can be added on. Do not let the computer buff confuse you with bandwidths and megahertz speeds, for they are significant only if you have large volumes of data or are transmitting voice, images and sound, which you will most likely not be doing if you only need to know the basics about a modem.

Besides a computer and a modem, what is also needed is a connection and the software to be able to do e-mail. There are at least three possibilities:

1. Get an Internet connection through an Internet gateway, which then enables you to do e-mail not just nationally and regionally but also internationally. In addition you have all the facilities of the Internet. We discuss such a connection and the Internet in general in Chapter 20.

2. Get connected to an Information Service or Business Communications Service provider that offers e-mail as just one of its services. These are LAN-based systems and the most cost-effective for most users. We shall examine such providers and their services later in this chapter.

3. Build your own e-mail backbone with dial-up access and also access to the Internet. This will be considered a private network, as distinct from the other two alternatives, which are public backbones. This private systems approach is more expensive, but it allows customization, better administrative and management control, independence, and even better performance and scaling than the shared public systems. But it is only the large organization that can afford, or indeed needs, such private systems.

Whatever the approach to connection, one must use a protocol for messaging. In the case of the Internet, it is TCP/IP (Transmission Control Protocol/Internet Protocol); for the information service provider, there is a PPP (Point-to-Point Protocol) or SLIP (Serial Line Internet Protocol) connection; and for the private e-mail system, there are several alternatives: TCP/IP, PPP, SLIP, or SMTP. SMTP is the Simple Mail Transfer Protocol. It is also used by the Internet and is simple in the sense that it handles simple messages of text and numeric data. If, however, one wants to transmit pictures, sound and video, then one needs an additional protocol, MIME, the Multipurpose Internet Mail Extensions. MIME is

223

Page 237: Telecommunications and Networks

Telecommunications and networks

the protocol for multimedia e-mail and can beused also with STMP.
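As an illustration of how SMTP and MIME fit together, the sketch below builds a MIME message with a text part and an image attachment and hands it to an SMTP server using Python's standard library. The addresses, server name and file name are illustrative assumptions, not details from the text.

    # Hedged sketch: a MIME message (text + image) sent over SMTP.
    import smtplib
    from email.mime.multipart import MIMEMultipart
    from email.mime.text import MIMEText
    from email.mime.image import MIMEImage

    # Hypothetical addresses, server and attachment; substitute your provider's details.
    msg = MIMEMultipart()
    msg["From"] = "alice@example.com"
    msg["To"] = "bob@example.com"
    msg["Subject"] = "Report with chart"

    # The plain-text body travels as ordinary SMTP text...
    msg.attach(MIMEText("The quarterly chart is attached.", "plain"))

    # ...while binary content such as an image is wrapped in a MIME part.
    with open("chart.png", "rb") as f:
        msg.attach(MIMEImage(f.read(), name="chart.png"))

    with smtplib.SMTP("mail.example.com", 25) as server:
        server.send_message(msg)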

The protocols mentioned above are the most common approaches in the US, where there is a spectrum of vendors and corporations that are large enough to go their own way and to embrace a message handling system that includes e-mail as just one of many message transport systems. In Europe, however, there is greater respect for the ISO standards. The X.400 standard was prepared by the ITU (International Telecommunication Union) and endorsed by the ISO. X.400 is the alternative to SMTP as a mail backbone and it is also the standard for EDI in both Europe and Japan.

Having just mentioned EDI, it may be the right time to establish the relationship of e-mail to EDI. They can be pictured as a layered cake, with e-mail and EDI as the top layers; X.25, switches and leased lines as the bottommost layer; managed data on top of the lowest layer; and information retrieval on top of that. This hierarchy is illustrated in Figure 19.5. What must be deduced from the cake structure is that you cannot simply choose one layer without the layers beneath it, at least the bottommost layer. And so the consumer is sold a package with layers that they may not want or need. They must thus seek the package that serves their needs best and is most cost-effective.

An irritation for the consumer, and perhaps the greatest obstacle to an easier and cheaper system, is that there is no common global directory or universally accepted standard for e-mail addresses. For example, in the US you may have an alphabetic address (with an 'at' and 'dot' symbol) and yet a relative or best friend may have an address in a quite different alphanumeric and symbolic format.

In Europe, there is X.500, which is more widely used and is the directory standard for X.400 (and X.435 for EDI on X.400). Creating and maintaining comprehensive lists poses great technical and organizational problems when the scope of the addressing is not just national and regional but also international. There is already an acknowledgment that a larger width of the fields for addressing will be necessary for any future global messaging system before we can link heterogeneous networks and synchronize distributed directories of networks and e-mail users. There is a need in e-mail standards for file formats, transports, directory services and e-mail APIs (Application Programming Interfaces), as well as better directory synchronization. This will greatly help in building the messaging infrastructure which would facilitate not just e-mail (and fax) routing but also database access, document sharing, group work and ultimately decision-making, as in the DSS and EIS. E-mail may yet become an important enabling technology for enterprise computing. For corporate management, this might result not just in better and faster communication but perhaps also in the flattening of management, the improvement of control and tracking, faster offering of products to the market, changes in work structures, and increased productivity.

One annoying problem faced by e-mail users is the volume of junk mail that comes through. If you are on various mailing lists, it is easy to get 200 to 300 messages a day, which may well consume around two hours just in reading time. There are some programs around that will filter your mail. Such a program could check the address of origin against a list that you approve, or it could filter by subject matter according to key words that you provide.

[Figure 19.5 E-mail in the hierarchy of layers: EDI and e-mail at the top, then information retrieval, then managed data, with X.25, switching methods and leased lines at the bottom]


The filtering and screening of e-mail can control the stream of messages, but its effectiveness will depend greatly on how well the lists used for the filtering are kept up to date.
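A minimal sketch of such a filter is shown below, assuming messages arrive as simple dictionaries; the field names, sender list and key words are all illustrative.

    # Hedged sketch of filtering mail by approved sender and subject key words.
    APPROVED_SENDERS = {"boss@example.com", "family@example.com"}
    BLOCKED_KEYWORDS = {"free offer", "win big", "act now"}

    def keep_message(msg):
        """Return True if the message should reach the inbox."""
        sender = msg["from"].lower()
        subject = msg["subject"].lower()
        if sender in APPROVED_SENDERS:
            return True
        return not any(word in subject for word in BLOCKED_KEYWORDS)

    incoming = [
        {"from": "boss@example.com", "subject": "Budget meeting"},
        {"from": "ads@spam.example", "subject": "FREE OFFER just for you"},
    ]
    inbox = [m for m in incoming if keep_message(m)]   # keeps only the first message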

Despite its problems, e-mail is growing rapidly in volume and scope. In scope, it is extending from local to global communications; from sending mail to customers and suppliers to sending mail to friends and family; from access to databases to collaborative applications; from business correspondence to electronic form routing and process automation; from people-to-people to process-to-process (virtual users); and from a formal correspondence vehicle to an e-mail transport in the enterprise-wide messaging infrastructure. In numbers, e-mail use in the US rose over 60% from 1992 to 1993, and from 5.9 million users in 1992 to over 38 million in 1995. And this is with only 42% of the 47 million computer users in 1993. As the total number of computer users and the percentage using e-mail both increase, e-mail will also increase. Will it increase faster than our capacity to use it fully? Or will there be a backlash from the inefficiency and unfriendliness of the system? Only time will tell, but with awareness of the problems and attempts to overcome them, there is a good chance that e-mail will continue to grow and prosper.

Information service providers
The tricky question now arising is how to find an e-mail provider among the many information service providers. Such a provider is the equivalent of the post office: it does the storing, sorting and forwarding of messages and provides you with an e-mail address and the software to receive and send messages. Early systems had a command-driven language but nowadays you can get a GUI (Graphical User Interface) which is menu driven and has icons (graphical symbols that perform functions, for example a scissors icon to cut a selected passage). Different providers have interfaces with varying degrees of end-user friendliness. But e-mail is not the only service that the information service provider provides. Its primary function is to provide information on subjects like

• investing and finance,
• travel, business careers,
• news: local, national and international,
• weather,
• entertainment,
• health,
• sport,
• membership services,
• billing services,
• hobbies, leisure,
• games,
• shopping.

E-mail is only one of the services offered. Some providers are good at e-mail and others are not. Each should be evaluated for appropriateness to one's environment. Criteria for such a comparison and evaluation could be the following:

• content for areas of interest,
• ability of parents to control what children and teenagers may watch,
• type of 'community' in chat and discussion groups and forums,
• interface,
• GUI, graphical user interface,
• features of end-user friendliness,
• installation,
• e-mail,
• ease of use,
• features such as an address book and its use in composing mail,
• ease of installation,
• Internet connectivity,
• cost, fixed or variable,
• e-mail fee, if any,
• access,
• local/long distance, affecting costs,
• speed of access, affecting waiting time in a queue.

Summary and conclusions
E-mail is fast, reliable and cheap if you have a connection to some service provider. E-mail capability can be acquired through an information service provider, an Internet provider or a LAN. E-mail services are often offered through a GUI, graphical user interface, that is menu driven, often in colour, and end-user friendly.

E-mail is becoming increasingly central to the way we work in the office and at home as teleworkers. It is also central to a corporate messaging system. One study showed that, as the parties involved double in number, the messages increase fourfold. Thus intracompany and intercompany messaging increases rapidly (Burns, 1995: p. 108).


[Figure 19.6 Classification of telecommuting: by location (at home, at site/mobile, satellite centre, neighbourhood centre), by organization (centralized, decentralized, on-line, off-line) and by equipment (corporate LAN, Internet PC, workstation, printer, other)]

As information processing and knowledge processing increase, there will be a great demand for the knowledge worker. Many of these knowledge workers will be teleworkers. There are many types of teleworkers. One classification is shown in Figure 19.6.

The impact of teleworking is summarized in Table 19.2. Not obvious are the very intangible benefits to society in the form of better and cheaper urban development with fewer high-rise office buildings, fewer car parks, and fewer roads, motorways and highways. The greatest impact will be on society if it transforms working in the office into telecommuting and working at home. This will not only change our life-style, but could also affect town planning for banks, offices, car parks, transportation, and pollution.

Just as the industrial revolution brought the worker out of the home, the new information era of telecommunications and computers has brought the worker back to the home. But this is true only of certain professions and certainly not true of the agricultural, manufacturing, and service sectors of the market economy. Despite its limited scope, telecommuting has raised many new issues and conflicts and will need new management strategies and ways of resolving them.

Table 19.2 Impact of telecommuting

ON THE INDIVIDUAL
Allows employment for part-time work
Allows employment for those who do not want to or cannot leave home easily, like mothers and the disabled
Requires a new discipline for working at home

ON THE CORPORATION
Enables transient and piece-time employment without the burden of fringe benefits and working with unions
Reduces office overhead
Requires new approaches to scheduling work, evaluation and control of remote work

ON SOCIETY
Reduces pollution from traffic by office workers
Reduces urban development and transportation infrastructure
Increases potential labor force and potential GNP
Makes unions unhappy by decreasing their potential membership

We conclude with some thoughts by Ursula Huws, a long-time commentator on telecommuting:


The extent to which the electronic cottage becomes a reality, and the specific forms which that reality takes, will depend on the decisions taken by a range of social actors: large employers, entrepreneurs, creative individualists, women with dependants, and planners. These decisions will not be unidirectional, nor will they be permanent. They will interact with each other to produce new and unexpected patterns; new ideas of conflict will arise and, in the resolution of these conflicts, new social forms will be negotiated. (Huws, 1993: p. 29)

Case 19.1: Examples of teleworkers

Telecommuting at Bell Atlantic

In 1991, Bell Atlantic, a telephone company in the US, had 100 managers working on trial as teleworkers. In 1994, the number grew to 16 000 and Bell Atlantic is working with its unions to increase that number to over 50 000. (The Futurist, March/April 1994, p. 13)

Ursula Huws

Ursula Huws is a freelance writer who writes on telecommuting, including the book Telework: Towards the Elusive Office. She works out of a room at home that contains, according to her, 'three computers, two telephones, a fax machine, two printers, five filing cabinets, six book cases, two desks, four chairs, . . . clothes, shoes and household linen and, above it, my bed.' (Huws, 1991: pp. 19, 21)

Thomas Hubbs

Thomas Hubbs has been called a knowledge worker, a telecommuter and a virtual worker. He is the Vice-President of Verifone, which dominates the market for credit-card verification systems used by merchants. Hubbs's nominal headquarters is in California and his CEO lives over a thousand miles away in Santa Fe. They meet face-to-face every six weeks or so.

Hubbs spends 80% of his time on the road, often getting behind the walls to link his Hewlett-Packard laptop computer to an outside line. He rarely uses his cellular phone to ship faxes or e-mail because 'right now the costs are so high it doesn't make sense, except in an emergency'. Verifone's e-mail system runs on a VAX minicomputer.

At Verifone, 'everyone from the chairman to the most junior clerk is issued a laptop computer and is expected to learn how to use it. Internally, paper memos are banned; secretaries, a rarity at the company's offices, are prohibited from handling an executive's e-mail.' (Business Week, Oct. 17, 1994: pp. 95-6)

Tom Bacon

Dr Tom Bacon, an instructor in Operating Systems in California, was offered a job with a software house in the East of the US. The two parties agreed on all contractual matters but there was one hitch: Mrs Bacon was not in agreement. She did not want to go to the winters of the East, nor did she want to pull her children out of schools in California. So the firm made another offer. Dr Bacon had to come three times a year to the main office for face-to-face meetings and the rest of the time he could stay at home and use teleprocessing for work. All parties agreed, but Mrs Bacon still had to make a sacrifice: she had to give up the bedroom of the fourth daughter for an office. Some fifteen years later, the Bacons are still happily married and Tom is still a teleworker. However, Dr Bacon now has a two-room office and lots of 'bells and whistles' on his office computer system.

Case 19.2: Comments from teleworkers

'My professional and home life have become so intertwined that I can never get away from work.'

'I've gained forty pounds. My terminal is too close to the refrigerator.'

'I feel isolated at home. There is no incentive to work hard, no sense of personal accomplishment, no social contact, no feeling that I am part of a team that is accomplishing something.'


'Everyone interrupts me: the dog, the kids, the postman, the neighbours, even my spouse, who knows better. I am just too available.'

'There have been no raises, no promotions, since I've had a home office. I'm afraid that I'm jeopardizing career advancement by being a teleworker.'

'I'm getting claustrophobia (we simply lack the space) and my wife doesn't want me home during work hours.'

Case 19.3: Telecommuting at American Express
In 1991, management at American Express Travel Services faced a problem. Its many women travel counsellors wanted to work at home instead of having to brave the drive (sometimes three hours) to work and back each day. To set them up at home would cost the company $1300 each, but it would release $3400 in office space for each. The big question was the effect on productivity. And so American Express started its Project Hearth, which eventually involved 1000 travel counsellors (10% of all employees) in order-entry at 15 locations.

In 1993, studies showed that a typical agent was able to handle more calls at home than in the office, resulting in a 46% increase in revenue from bookings. Supervisors retain the ability to monitor agents' calls, ensuring tight control.

Source: Fortune, 128(7), Autumn 1993, pp. 24-8.

Case 19.4: Advice from teleworkers

1. The first and maybe the hardest is to 'try and convince your boss to let you try it . . . Don't stress how it will benefit you; emphasize how the company will profit from your time in the home office.'

2. 'Remind your boss that given fax machines, modems and communication technology, you can be as hooked in at home as you are in the office.'

3. For hardware, get a system for 'all home and office work . . . Get a notebook with a docking station. The docking station stays on your desk at the office. When you come in to work you plug your notebook into the docking station, and it hooks up to the network, provides power, and gives you expansion capability.'

4. Get a network administrator.

5. Get remote control software that allows your modem to transfer files between computers using phone lines.

6. Have some basic troubleshooting software on hand.

7. During working hours, check and respond to your e-mail at least once every two hours. Send as many messages as makes sense and this will make your colleagues think that you are working in the office.

8. Close the door of your office at home with a sign 'Do not disturb'.

9. If you are an animal lover, then there is great potential for distraction and destruction. Cats have been known to pounce on keyboards and dogs have chewed up many a floppy disk.

Source: Preston Gralla, Work @ Home, ComputerLife, Jan. 1995, pp. 106-10.

Case 19.5: Teleworking at AT&T
In 1994, AT&T declared a Teleworker Day for all its professional workers. Over 25 000 people worked at home. Of the people surveyed, 70% said that they were more productive working at home as opposed to working in the office.

According to plans of AT&T, half of its 123 000 US managers will try working from home by the end of this century.

The Link Research firm estimates that 43.2 million Americans will work at home at least part of the time (although 12.7 million of these are self-employed). 'The number is expected to grow by a whopping 15% a year, so by 1997, 56 million people will be working at home.'

Source: Preston Gralla, 'Work @ Home', ComputerLife, January 1995, p. 107.

Case 19.6: Teleworking in Europe

As part of the RACE programme, the EU has been promoting a series of studies to stimulate transborder teleworking. One such study concluded that no single core of tools and platforms for teleworkers is realistic.


Different communication and network services are required and there are different applications for teleworking. Further, it was observed that:

. . . the degree of technical complexity used in teleworking applications depends on a number of factors, such as the nature of the work undertaken, pre-existing equipment within the organization, communication costs, and the availability of value added networks. Thus the communication requirements adopted by teleworkers will reflect both the nature of the task (is access to a host computer or LAN required? Is a private e-mail service available? Are voice communications required?) and the prevailing network conditions (the extent of national ISDN service, the cost of the service, or the availability of public e-mail or data services).

Source: Julian Bright, Teleworking: The Strategic Benefits, Telecommunications, Nov. 1994, p. 81.

Case 19.7: Telecommuting in the US (in 1994)

• 3.9 million Americans commute to their jobs either full-time or part-time through their PCs.
• 3.2 million Americans communicate through the Internet, with 541 949 British, French and Germans, as well as 347 888 Australians and Canadians.
• Average productivity increase per telecommuter (measured by employers) was 10-16%.
• Self-employed persons with a PC generate almost $70 000 in household income, 42% more than otherwise.
• 85% of telecommuters are equipped to communicate with their employer's system miles away.
• The average work time spent in the office per teleworker was one day a week.
• The average work time increase per teleworker per day was 2 hours.
• The annual savings in facility costs per commuter was $3000-$5000.
• The typical home-based system cost $3000 including software bought at the time of purchase, about $1000 more than the average cost of a PC system purchased in the US.
• The average telecommuting equipment was a 386 processor with a fax modem and a dot-matrix printer.
• Over 2 million self-employed professionals do not use any accounting software.

Source: Jonathan L. Yarmis, Telecommuting Changes Work Habits, 1994; Inc. Technology, 16(13), 1994; and U.S. News & World Report, Feb. 27, 1995, p. 14.

Case 19.8: Holiday cheer by electronic mail
A Christmas cheer chain-letter sent through IBM's internal network illustrates the benefits and problems of e-mail. The computer program that forwarded this Christmas tree greeting was designed to rifle through each recipient's files in search of automatic routing lists, and then pass the greeting to each name on that list. The result: the card boomeranged through the network, clogging communication channels so that important business mail was delayed. No provision was included in the program for checking the duplication of names on different mailing lists, so the global link was in knots for hours.

The problem was eventually resolved, according to a spokesman, by trapping the files and deleting them. The company was to review operational procedures to prevent future disruptions of its electronic mail service. For example, personnel have been told not to execute or store any messages if the sender's specific purpose is unknown.

Source: Patricia Keefe, Holiday Cheer Brings IBM Net to Knees, Computerworld, Dec. 2, 1987, p. 2.

Bibliography
Ayre, R. and Raskin, R. (1995). The changing face of on-line. PC Magazine, 14(4), 108-175.
Barrett, T. and Wallace, C. (1994). Virtual encounters. Internet World, November/December, 45-47.
Beard, M. (1994). The mail must go through. Personal Computer World, September, 384-387.
Burns, N. (1995). E-mail beyond the LAN. PC Magazine, 14(8), 102-175.
Caswell, S.A. (1988). Electronic mail: the state of the art. Telecommunications, 22(8), 27-30.
Engler, N. (1994). Buying e-mail is harder than ever. Open Computing, 11(9), 88-91.
Gasparro, D. (1993). Moving LAN e-mail onto the enterprise. Data Communications, 22(18), 103-112.
Griesmer, S. and Jesmajian, R.W. (1993). Evolution of messaging standards. AT&T Technical Journal, May/June, 21-45.
Griffiths, M. Teleworking and Telecottages. The Computer Bulletin, 2(9), 14-17.
Hurwicz, M. (1997). E-Mail: old meets new. LAN, 12(2), 87-91.
Huws, U. (1991). Telework projections. Futures, 23(1), 19-30.
Huws, U., Korte, W.B. and Robinson, S. (1990). Telework: Towards the Elusive Office. Wiley.
Martino, V. and Wirth, L. (1990). Telework: a new way of working and living. International Labour Review, 129(5), 529-552.
Reichard, K. (1995). Mr.Postman@INTERNET. PC Magazine, 14(8), 111-151.
Reinhardt, A. (1993). Smarter e-mail is coming. Byte, 18(3), 90-108.
Richardson, R. (1997). E-Mail detail. LAN, 12(1), 57-62.
Roberts, T. (1994). Who are the high tech home-workers? Inc. Technology, 6(13), 31-35.
Runge, L.D. (1994). The manager and the information worker of the 1990s. Information Strategy: The Executive's Journal, 10(4), 7-14.
Sproull, L. and Kiesler, S. (1991). Computers, networks and work. Scientific American, 265(3), 116-123.
Strauss, P. (1993). Secure e-mail cheaply with software encryption. Datamation, 39(23), 48-50.
Trowbridge, D. (1993). Now remote computer users never have to leave home. Computer Technology Review, 13(8), 1, 10-15.


20

INTERNET AND CYBERSPACE

Clearly, the Internet and its associated technologies and applications are taking us into new intellectual and business territory. There are powerful 'leveling' effects to be seen in which elementary scholars and senior statesmen correspond without necessarily being conscious of age and experience distinctions.

Vinton G. Cerf

In a few years, there may be more people talking to each other on the Internet than on the telephone.

International Herald Tribune, 1994

INTRODUCTION
At the University of Canterbury in New Zealand, a professor in MIS (Management Information Systems) regularly assigns his students the reading of advertisements in professional magazines. He argues that commercials are the best predictor of future technology. He is perhaps right, at least in the case of the commercial by IBM in 1995. In it, there are nuns walking in a monastery in Czechoslovakia when a Mother Superior confesses in a whisper: 'I'm dying to surf on the Net.'

Enter cybernuns. Welcome to the cyber-era, the cyberworld where you can find cyberpunks, cyberphobia, cyberwork, cyberbucks, cybersluts, and if you choose, cybersex. The term 'cyber' (according to cyberwatchers) was used in publications at least 1205 times in January 1995, up by 621% over a period of two years. This cyber interest should raise questions like: Is the cyber-era hype or reality? Is cyberspace peopled by gearheads and technojunkies who oppose any intrusion by the commercial world and the government? Or, with its expansion, has cyberspace become more mainstream?

We shall explore cyberspace through its most ubiquitous manifestation: the Internet. The Internet is an international net of local networks that connects a multitude of host computers where information and knowledge reside and can be readily accessed. It is the fastest growing sector of the economy. In 1994, the Internet carried over 60 billion packets per month, starting from zero in 1988. By 1997, it is predicted that some 400 million people in North America will use the Internet at least once a week, with world-wide users approaching 80 million (Bayers, 1996: p. 128).

In 1994, the Internet saw many changes: its financing changed hands from an agency of the US government to a firm in the information provider business; there were teenagers who were seduced and ran away after receiving instructions on the Internet; Chinese dissidents flashed messages to China on the anniversary of the Tiananmen Square massacre that would be banned in China; and there was obscene and pornographic material on the Internet that prompted the US legislature to pass a bill fining and sending to jail those who put such objectionable material on the Internet. But the courts may have to decide what is and what is not objectionable without violating the freedom of speech that the US constitution guarantees. Even if the courts define what material is objectionable enough not to go on the Internet, how can the US government ensure that someone from outside the US will not put objectionable material on the Internet? And so the Internet is very much a political and social issue. The common citizenry, including computer professionals, have little say in how these disputes will be resolved. We can, however, look at the technological problems involved. This we shall do in this chapter. We examine the ways to connect to the Internet; the many uses of the Internet; the potential uses by businesses; and the problems of security on the Internet.

The Internet is part of the scene of what is called cyberspace. We therefore start with an introduction to cyberspace.


Cyberspace

We visited cyberspace implicitly when we discussed the Internet and the information highway. In this section, we will briefly discuss its evolution and its possible future. In its past,

. . . there is a ghost in the machine: the traces of the people who created them . . . where technology has been the medium for the expression of human creativity . . . working within the structure that created and maintained the technology were individual, creative human beings . . . This largely unofficial meeting ground of culture, populist politics and technology is called cyberculture . . . some humanistic values flowed back into the society of hackers . . . One thing that the hackers shared with the counterculture of the '60s and '70s was a profound distrust of society . . . In general, they cared almost nothing for the forces that moved their employers, namely money and power. (Evans, 1994: pp. 10-11)

One person who captured this thinking was William Gibson, a science fiction writer, who coined the term cyberspace. He viewed the free hacker of the future as part of a 'Bohemia with computers': counterculture outlaws surviving on the edge of the information highways . . . 'the street will find its own use of technology' while cyberpunks use technology to undermine the misuse of technology. The term cyberpunk was coined by Gardner Dozois to evoke the combination of punk rock anarchy and high tech. Gibson in Neuromancer (1984) talks about the cyberpeople he knew: 'They develop a belief that there's some kind of actual space behind the screen . . . Some place that you can't see but you know is there . . . place of unthinkable complexity . . . with lines of light ranged in the nonspace of the mind, clusters and constellations of data.'

Elmer-Dewitt talks about the shadowy space as:

great warehouses and skyscrapers of data . . . By 1989 it had been borrowed by the on-line community to describe not some science-fiction fantasy but today's increasingly interconnected computer systems, especially the millions of computers jacked into the Internet . . . Cyberspace . . . encompasses the millions of personal computers connected by modems via the telephone system to commercial on-line services, as well as the millions more with high-speed links to local area networks, office E-Mail systems and the Internet . . . wires and cables and microwaves are not really cyberspace. They are the means of conveyance, not the destination: the information superhighway. (Elmer-Dewitt, 1995: p. 4)

Internet

A network of networks is the Internet. Both public and private networks can be part of this loose confederation of networks. In 1993, it served almost 1 800 000 nodes in some 40 countries. It was then supported partially by the US government for its operating costs, while the remaining costs were borne by subscribers who paid a fee for connection to local computer hosts that direct traffic to local access providers that tie into the hosts. Still, the costs of transmission by Internet in 1993 were well below the costs of surface mail and airmail, and especially below the $2 cost of a comparable fax. Also, transmission by Internet is much faster, often a few seconds or minutes as compared to hours and sometimes more than a day by fax for large jobs.

The Internet allows the transmission of mail, documents and basic digital services including e-mail, banking, video phone calls, movies and shopping. Eventually, the Internet is expected to provide the combined integrated services of a computer and phone at the cost of a TV today.

The Internet handles queue (waiting line) control and flow control automatically through its mail protocols. Its messages encapsulate fax, sound and video, as well as many character sets of foreign languages and multiplexed messages across common links. Group news, mailing lists and other related information are delivered according to group-specific profiles to the computers of the subscribers. The Internet started as a communications service for researchers and academics of the Defense Department in the US. What was restricted to scientific and research communications was soon extended to libraries and many existing databases, and accessed by information providers in both the private and the public sector.

Connecting to the Internet
There are at least three ways to connect to the Internet. One is to go through a corporate LAN connected through a router and requiring a TCP/IP connection.


[Figure 20.1 The Internet connection alternatives: a PC reaches the Internet via a corporate LAN (TCP/IP), via an Internet service provider over a telephone line (TCP/IP plus SLIP or PPP), or via an information service provider such as CompuServe, Prodigy or AOL]

The second route is through an Internet provider, and this requires not only a TCP/IP (Transmission Control Protocol/Internet Protocol) connection but also a SLIP (Serial Line Internet Protocol) or a PPP (Point-to-Point Protocol) connection. The third way is to use the telephone and go through a public information provider such as CompuServe, Prodigy or AOL. These three alternatives are shown in Figure 20.1.

Given alternatives, you have to make a choice. What is the best alternative for you? The answer depends on your consumption patterns and what you may have to pay. If the organization you are associated with has a LAN (and access to other LANs and the Internet) and you are an occasional user, then a LAN connection may suffice. There may be some constraints on when and how much you can use the Internet and you may be monitored, but for an occasional user this may be no burden. However, if you do not have a LAN connection and need access to the Internet, then you must explore other alternatives, for which you will have to pay.

[Figure 20.2 Uses of the Internet: access by LAN, by Internet provider or by information provider, giving local interconnectivity and local and international services such as bulletin boards, chat groups, newsgroups, news and e-mail, shopping malls and global server sites]


One way is to connect to an information service where e-mail is part of the services offered. This may be quite adequate and offer other services like news and discussion groups, shopping, and even the opportunity of playing computer games. However, if the services that you want are abroad, then you need the Internet. This connection will not only give you access to the international shopping malls and international discussion groups, etc., but it will also give you access to a lot of material that is available on international sites only accessible from the Internet. An Internet connection is offered by some information service providers, but at a cost which may be worth it if you have much international correspondence and a lot of global traffic, downloading (receiving) and/or uploading (sending). There is a lot of freeware and shareware of both a technical and a general nature on foreign sites, especially if you live outside the US. Even US firms benefit: one US firm is on record as downloading $64 000 worth of software and documentation in just one year. Downloading or uploading involves a file transfer and a special protocol, FTP, the File Transfer Protocol.

The services offered by each alternative Internet connection are summarized in Figure 20.2.
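As an illustration of the downloading just described, the sketch below fetches a file from an FTP archive using Python's standard ftplib module; the host name, directory and file name are illustrative assumptions.

    # Hedged sketch of an anonymous FTP download; details are illustrative.
    from ftplib import FTP

    with FTP("ftp.example.org") as ftp:
        ftp.login()                              # anonymous login, common on public archives
        ftp.cwd("/pub/tools")                    # move to the directory holding the file
        with open("editor.zip", "wb") as out:
            ftp.retrbinary("RETR editor.zip", out.write)   # stream the file to disk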

Surfing on the Internet

Surfing (working or 'playing') on the Internet is not currently an easy or end-user friendly pursuit, but it is getting easier by the day. Browsing is difficult, but easier with excellent software like Gopher and the Web. The Web is short for WWW, the World-Wide Web, which was initially developed by CERN (the European Particle Physics Laboratory) in Switzerland. It was later made into a commercial product by NCSA (the National Center for Supercomputer Applications) at the University of Illinois.

As a navigating tool for the Internet, the Web provides access to servers that are the repository of information and documents (including computer programs) that are of interest to computer scientists. The Web protocol is a superset of other protocols and embraces multimedia-based systems that are capable of delivering data, text, voice and images. The protocols are OS (operating system) and hardware independent, allowing the Web to be used across different computer systems all around the world. The Web is based on HTTP (HyperText Transport Protocol), with 'hot links' to documents through another interface: HTML (the HyperText Markup Language). In contrast to the Web, which is hypertext oriented, Gopher has a hierarchical structure.
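The request/response exchange that HTTP defines is simple enough to show directly. The sketch below issues an HTTP/1.0 GET over a raw TCP socket and prints the response headers; the host name and path are illustrative assumptions.

    # Hedged sketch of a minimal HTTP/1.0 request over a TCP socket.
    import socket

    host = "www.example.com"
    request = (
        "GET /index.html HTTP/1.0\r\n"
        "Host: " + host + "\r\n"
        "\r\n"
    )

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request.encode("ascii"))
        response = b""
        while True:
            chunk = sock.recv(4096)      # read until the server closes the connection
            if not chunk:
                break
            response += chunk

    print(response.decode("latin-1").split("\r\n\r\n", 1)[0])   # the response headers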

The Web and Gopher are solutions for publishing information on the Internet. While many Gopher servers are limited to publishing text, Web servers can publish text and graphics, and in some cases even sound and video, as well as any combination of multimedia. Both solutions are client-server based and can link documents on the same server or on remote servers.

To access and use the Web or Gopher, one needs an interface. For the Web, a GUI (Graphical User Interface) is Mosaic or Cello. Cello is restricted to Windows systems while Mosaic is open to Windows as well as Macintosh machines and X Windows. Mosaic is a very user friendly graphical interface for the Web, allowing you to navigate freely by merely clicking the mouse.

There are other tools for the most popular Internet functions. Archie, Veronica and Jughead (named after comic strip characters) and WAIS (Wide Area Information Services) are software tools used to search archived files residing on FTP sites around the world, where FTP (File Transfer Protocol) allows large files and programs to be transferred between a remote server (computer) and your PC. There is also NetScape (an elegant and powerful interface to the Web); Telnet, which allows logins to remote Internet servers, often anonymously; and Usenet News, one of the many newsreader services, which is like a large bulletin board specializing in topics where group members can post and reply to messages. Usenet also has discussion groups (10 000 of them). All the tools and services have a home-page that gives you information on the services offered. The home-page, when well designed, has 'hot' links that allow easy access to related services that are relevant and accessible. Access to the Internet is summarized in Figure 20.3.

A lot of software on the Internet is free. This is not too surprising given the fact that much of the Internet was developed by the free contribution of time and effort of a large number of people who wanted such a system. Such free software on the Internet is called freeware (such as WSGopher, configured to work with most Gopher servers), as opposed to shareware, which requires you to pay a registration fee to the developer.


[Figure 20.3 Access to the Internet: routes include the WWW (Web) via browsers such as Mosaic, Cello and Netscape, plus Gopher, FTP, Telnet and WAIS, supported by tools such as Archie, Veronica, Jughead and MSN]

You typically would use a combination of software: one program to grab your Internet mail and another to read the postings on the Usenet newsgroups. Many of the Internet services are available through commercial on-line services like AOL (America on Line), Delphi, CompuServe and Prodigy.

One of the problems with the Internet is that it is an informal organization growing very rapidly (in 1994 it grew by 95%, adding 22 new nations to the net, making a total of 159 countries). The growth of the Internet is often very ad hoc. What is needed are standards that allow for orderly growth and yet maintain the security and privacy of data transmitted. This is partly being done by standards like S-HTTP (Secure HTTP), which will secure (largely through encryption) data transmitted to-and-fro on networks, and through protocols for the security of monetary transactions, to be discussed later in this chapter.

[Figure 20.4 Uses of the Internet in business: marketing (selling, advertising, publicity), payment with cybercash, digicash and e-money, and presence through home-pages, e-mail, e-zines, FTP and Gopher]


Internet and businesses

The use of the Internet by business was discussed earlier, in Chapter 16. The uses of the Internet in business are summarized in Figure 20.4. The Internet is of interest to all users of the old ARPANET, which was developed by scientists, academics, researchers and the defence agency in the US. It was not specifically designed for conducting business for a profit, but cyberspace is of great interest to business and commerce. In 1994, 56 000 businesses were interconnected world-wide (US News & World Report, Feb. 27, 1995, p. 14). Studies of the profile of Internet users show that it is filled with bright, well educated, upwardly mobile people, a demographic population most attractive to businesses that sell to such people.

Security on the Internet
An important consideration for businesses, and even for the communication of e-mail and all documents with confidential or proprietary data, is security. Some of the security considerations applicable to systems other than those using telecommunications, as discussed earlier in Chapter 12, apply to the Internet as well. Other considerations are specific to the Internet and we now examine them. For transactions requiring security on networks, there are at least five conditions that must be satisfied. These are:

1. Authentication, where one verifies the two trading parties involved. The authentication may be of the ID of each party, which will vary with countries but in many countries is a driving licence, a national registration card or a credit card. Authentication constitutes 'what you have', like a badge, token or card; 'what you are', such as your personal physical characteristics; 'what you know', like your PIN (Personal Identification Number); and 'where you are', such as the terminal or client identification, which can be checked for legitimate access by you.

2. Certification of the authenticity of the parties involved. A common source of authority would be a governmental agency or the credit card issuing authority. In the latter case, there may well be some specification of the issuing authority, such as the requirement that it be a VISA card, a Citibank card, Eurocard or an American Express card.

3. Non-repudiation, the documentation of the agreement between the seller and the buyer specifying the transaction in enough detail that there is no ambiguity or misinterpretation of the agreement.

4. Confirmation, where there is documentation of the seller receiving the order and the buyer receiving the goods sold.

5. Encryption, where the data on authentication, certification, non-repudiation and confirmation are all coded so that they cannot be tampered with or altered.

Satisfying all five of the above conditions should greatly ensure the safety of any transaction, though one hundred per cent security is never possible. Unfortunately, these five conditions do not come neatly packed in a protocol package. Various strategies and protocols for security are available but they satisfy only a few of the five desired conditions.

Strategies for security can be classified into two main types. Channel-based (or perimeter-based) security is where the machine ('where you are') is secured, as distinct from document-based security, where the document of the transaction is secured. In the latter approach, it is the person making the transaction who is authenticated, by what they know (name or password), what they are, or what they possess (some biometric identification like a fingerprint, hand shape, voice-print or eye retina scan).

The most common are the document-based security measures, which include passwords, badges or cards like credit cards. None of these approaches is inviolate and their presence is no guarantee of the owner, since these identifications can be lost, stolen or even forged. Some are more difficult to forge, like a driving licence with a photograph, but then passports come with photographs and elaborate seals and (some) watermarks. What is more difficult to forge is a combination of biometric features that are naturally unique. Some methods of verifying identification require special equipment seen mostly in movies and not in everyday business operations. Instead we have smart cards and special cards like the PCMCIA card that contain a set of unique information that is difficult to copy or duplicate.


[Figure 20.5 Security for the smart card: the user presents ID and PIN, the checking computer system issues a randomly generated challenge, the card calculates a response, and the system compares that response with its own internally calculated solution before accepting or rejecting the card]

The PCMCIA Type II card has a broad array of data elements (including PIN, digital signature and proof of ability to pay) that allow for privacy and non-repudiation protection. This card can be used not only for validating POS (Point-of-Sale) transactions but also for logging on to the Internet through a PCMCIA slot in many PCs.

The process of checking a smart card is shown in Figure 20.5. The steps are as follows:

1. A user enters the smart card into special equipment that initiates the process.

2. The system offers a challenge generated randomly, which makes the system dynamic rather than static, where a user could guess.

3. The user then responds to the challenge, but can only do so with knowledge of his/her PIN (Personal Identification Number). The system calculates a solution internally, knowing the PIN from the ID, and compares this internally calculated solution with the solution of the user.

4. If there is no match, then the card is rejected.

5. If there is a match between the two solutions, then the card is accepted and further transactions are performed. (A minimal sketch of this challenge-response exchange follows the list.)

Some smart cards can also recognize the owner, log user activity and do privilege mapping, i.e. map the user to specified data, processes or programs only. The costs of such smart cards in 1995 were high, around £180 each. As international standards develop and these cards follow those standards, the costs will definitely drop. The channel-security standard now being tested is SSL, the Secure Sockets Layer. It is far ahead of the other standard being developed, S-HTTP, the Secure HyperText Transport Protocol, which is designed for document-based security. What the world is waiting for is the integration of SSL and S-HTTP.

Encryption is a powerful security measure for the Internet and telecommunications in general. The two basic systems are Kerberos (developed by MIT, the Massachusetts Institute of Technology) and RSA (named after the initials of its three founders, R. Rivest, A. Shamir and L. Adleman). In the public sector, the US has developed the Clipper Chip, which codes and decodes messages including e-mail and is protected from snooping by anyone except the government itself. The government claims that it needs a 'back-door' key so that it can intercept messages from terrorists, drug dealers and mobsters. There are many who cry foul and argue that such measures violate privacy and are merely a way to catch those who avoid paying their taxes.

The US has such good systems for identifying both the sender and the receiver that the US government bars their easy export, although they are available abroad. However, people outside the US cannot use them, even though they are based on public key algorithms, without the risk of an infringement suit.


Encryption software in the US is a political tussle between governmental agencies that want access to data for security reasons and those who consider that access a violation of privacy rights. A summary of the requirements for security and some of the solutions are shown in Figure 20.6.
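To make the idea of public key encryption concrete, here is a toy RSA-style calculation with deliberately tiny primes; it is for exposition only and bears no relation to the key sizes real systems use.

    # Toy RSA illustration; real keys are hundreds of digits long.
    p, q = 61, 53                     # two (secret) primes
    n = p * q                         # public modulus: 3233
    phi = (p - 1) * (q - 1)           # 3120
    e = 17                            # public exponent, coprime with phi
    d = pow(e, -1, phi)               # private exponent: 2753 (modular inverse, Python 3.8+)

    message = 65                      # a small number standing in for the plaintext
    cipher = pow(message, e, n)       # encrypt with the public key (e, n)
    plain = pow(cipher, d, n)         # decrypt with the private key (d, n)

    print(cipher, plain)              # prints 2790 65: the message is recovered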

While waiting for internationally recognized standards, there are some channel-based security measures. One is the packet sniffer, a computer program that runs on a computer site that needs protection, watches all data passing by, and records the names and passwords used to make transactions. The packet sniffer does not stop or even mildly restrict unauthorized intruders, but it collects data which, when analysed, can identify the loophole if not the criminal.

One approach that does control access is the firewall.

[Figure 20.6 Requirements for secure transactions: authentication (what you have, e.g. key or badge; what you are, e.g. biometric characteristics; what you know, e.g. PIN; where you are, e.g. a specific client/PC/computer), certification, confirmation from seller and buyer, non-repudiation, and encryption (RSA or Kerberos, with or without PIN)]

[Figure 20.7 Firewall for security: traffic such as Telnet, FTP, e-mail, Web queries, finger and requests to corporate files, coming from PCs, workstations, minis, mainframes, supercomputers and client-server systems, is funnelled through a dual-CPU firewall where each type of traffic is passed two-way unrestricted, mediated, restricted or blocked]


There are many approaches to building a firewall. One is packet filtering and another uses gateways to isolate intruders. There is also a hybrid approach that combines the filter with the gateway.

One approach is to have a dual set of computers, as shown in Figure 20.7, that funnels all the incoming and outgoing traffic to a system. Some of the traffic, like traffic from a browser seeking information on a server site, may be restricted but allowed if the request comes from a known and 'safe' site; traffic like FTP (File Transfer Protocol) or that from finger (a service that provides data on users) or Telnet (a tool of interactive communication) is mediated, allowing only certain types of access and processing to be done; e-mail may go uninterrupted; and some traffic is blocked altogether, like attempts to enter a sensitive corporate database.
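A minimal sketch of the kind of filtering rules such a firewall applies is shown below; the packet format, service names and trusted addresses are assumptions made purely for illustration and are not drawn from any particular product.

    # Hedged sketch of packet-filtering decisions: allow, mediate or block.
    TRUSTED_NETS = ("192.168.", "10.")    # illustrative 'known and safe' sources

    def decide(packet):
        """Return 'allow', 'mediate' or 'block' for an incoming packet."""
        service = packet["service"]
        source = packet["source"]
        if service == "smtp":                        # e-mail goes through uninterrupted
            return "allow"
        if service in ("ftp", "finger", "telnet"):   # mediated services
            return "mediate"
        if service == "http":                        # web queries only from trusted sites
            return "allow" if source.startswith(TRUSTED_NETS) else "block"
        if service == "db":                          # never expose the corporate database
            return "block"
        return "block"                               # default-deny policy

    print(decide({"service": "http", "source": "10.0.0.7"}))   # allow
    print(decide({"service": "db", "source": "10.0.0.7"}))     # block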

Whether security is channel-based or document-based, there are some simple common-sense rules that can be taught to users in training for Internet security, for example rules on choosing a password. One study showed that almost 7% of passwords are the names of their users. One system protects against this by outlawing all common names and even words in a dictionary.
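A password check of that kind might look like the sketch below, assuming a plain word-list file; the file name, length rule and sample values are illustrative.

    # Hedged sketch of a password rule: no user names, no dictionary words.
    def load_dictionary(path="words.txt"):
        with open(path) as f:
            return {line.strip().lower() for line in f if line.strip()}

    def acceptable(password, username, dictionary):
        p = password.lower()
        if username.lower() in p:
            return False                  # must not contain the user's name
        if p in dictionary:
            return False                  # no plain dictionary words
        return len(password) >= 8         # and insist on a minimum length

    words = load_dictionary()
    print(acceptable("summer", "jsmith", words))      # False: a dictionary word
    print(acceptable("r7!vPq2x", "jsmith", words))    # True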

Some of the types of security on the Internet are summarized in Figure 20.8.

One approach to the security of monetary transactions on the Internet is to use EDI and an EDI VAN.

Table 20.1 EDI vs. Internet

                         EDI                   Internet
Experience:              Extensive             Little/none
Support:                 Good                  None
Standards:               Many                  None
Security:                Relatively secure     Not secure
Reliability:             Relatively reliable   No data available
Software costs:          Expensive software    Inexpensive software
Other costs:             Substantial/finite    Almost none
Response time:           High                  Very fast (real-time)
Relationship required?   Yes                   No
Open or closed?          Closed                Open
Complex?                 Yes                   No

The great disadvantage of the EDI VAN is that it does not provide instant confirmation. Transactional settlement in the EDI VAN involves store-and-forward batch processing that is complex, costly and slow. In contrast, the Internet is real-time, quick in transaction confirmation and is the ultimate in impulse shopping. The Internet is also an open system, it is free (other than the Internet connection), and it is widely used. These and other comparisons of EDI with the Internet (and the discussion of EDI in Chapter 17) are summarized in Table 20.1.

[Figure 20.8 Types of security on the Internet: perimeter- (channel-) based measures (firewall, packet sniffer, channel-based protocol such as SSL) and transaction-based measures (document security protocol such as S-HTTP, credit card, smart card, EDI), with a hybrid of SSL and S-HTTP]


Despite the many good things that can be said for Internet security, there is still great concern about its state. For one thing, there are many weaknesses (perhaps half to three-quarters of the holes in Internet security) that are known to hackers but are not yet openly acknowledged. No wonder that Wietse Venema of the Eindhoven University of Technology suspects that the most respected domains on the Internet contain computers that are effectively wide open to all comers, the equivalent of 'a car left unattended with the engine running' (Wallach, 1994: p. 94). Another commentator on security cautions:

. . . e-mail and other communications can be almost tracelessly forged; virtually no one receiving a message over the net can be sure it came from the ostensible sender. Electronic impersonators can commit slander or solicit criminal acts in someone else's name; they can even masquerade as a trusted colleague to convince someone to reveal sensitive personal or business information. (Wallach, 1994: pp. 90-1)

It can be said of Internet security, as of the security of all information systems, that no absolute security is possible. We may not be able to eliminate all intrusions on the Internet but we can try to minimize the risks involved. All security measures can be thwarted by a person who is determined and clever. The motivation may not be money but merely the demonstration of beating the system, and there is plenty of such motivation around the world. The better the security measures, the greater the challenge. What makes security on the Internet so difficult is that the Internet belongs to no organization or government that can control it. The Internet is autonomous (despite some change of financing to an information provider in 1995) and run by dedicated computer people who volunteer their time and passion. Disciplining them and imposing a regime of protocols and control on them is very difficult.

Organization of the Internet

The state of security on the Internet is partly due to the nature of its organizational structure, or lack of it. There is no central authority or command to impose a discipline or a standard. Nobody owned the Internet, at least until 1995, when the NSF in the US withdrew its no-strings-attached financial support and at least one information provider bought into it. It is still a resolutely grass-roots structure. It is open and non-proprietary. It is rabidly democratic. It is almost lawless. It crosses national borders and answers to no government. There is not even a master switch to turn it off if that were desired. Most of the work is done by dedicated volunteers who take pride in their work on the Internet, in what they have to display and share, and in providing a community with free communication.

There is a 'netiquette' on how the Internet should be used. It is self-policed by spamming (jamming angrily) the offender. In one case, users so deluged the Siegel couple that their Internet provider cut the Siegels off. The Siegels sued the company for loss of sales and vowed to keep advertising. Internet users took the threat seriously, and included a Norwegian programmer who wrote a program to keep the Siegels off the Internet. But the case illustrates that there is no organizational entity responsible for what goes on the Internet. Some call it anarchy. Some say that the organization has tenets:

1. All information should be free.
2. Access to computers should be total and absolutely free.
3. Decentralization should be promoted.

These tenets and this ethic were easy to maintain and police as long as the membership was a small and homogeneous group dedicated to the Internet. Now the membership is larger than the population of many countries in our world. And every year, when the universities open in the US, there is a new breed claiming membership: students armed with a free LAN account number and with time to show their worth. There is often a clash between these 'newbies' and the old guard, a clash of culture and a difference over how the Internet should be used and what its content should be. How much cyberporn, cybersluts and cyberculture is healthy and an expression of the freedom of speech and thought? Why is this the concern of college authorities or even the government? Why is the Internet organized the way it is?

The Internet does offer a change in the paradigm of how information is collected. Traditionally, information flowed top-down, with the editor (of a newspaper or TV programme) deciding what to include and what to exclude.


The journalists at the bottom followed the lead of the top and dished out information to the reader. For the Internet, the flow of information is not unidirectional, nor is it a one-to-one relationship. It is a two-way, many-to-many relationship.

The magic of the Net is that it thrusts people together in a strange new world, one in which they get to rub shoulders with characters they might otherwise never meet. The challenge for the citizens of cyberspace, as the battles to control the Internet are joined and waged, will be to carve out safe, pleasant places to work, play and raise their kids without losing touch with the freewheeling, untamable soul that attracted them to the Net in the first place. (Elmer-Dewitt, 1995: p. 46)

The Internet and information services

The Internet is accessed by many information service providers, but there are many differences between the two. These differences are summarized in Table 20.2.

Table 20.2 Comparing information service providers with the Internet

                     INFORMATION SERVICE PROVIDER                  INTERNET PROVIDER
Audience:            Household; national                           Researcher, educationalist; international
Services:            Information on news, shopping, weather,       Net of databases; e-mail
                     etc.; e-mail
Delivery:            Prompt                                        Can be delayed
Cost:                Fixed + variable                              No variable cost; provider is paid
Control:             By business                                   Indirectly by governments
Directory of users:  Available in most cases                       Planned

Summary and conclusions
The Internet can be viewed as an evolution in our modes of computing.

[Figure 20.9 Evolution of the Internet: shared computing (time-sharing) in the 1970s, personal computing (PCs and workstations) in the 1980s, network computing (LAN, MAN, WAN and the Internet) in the 1990s, and a projected 100 million Internet servers by 2000]


In the 1970s, we had shared computing and time-sharing; in the 1980s, we had personal computing with the ubiquitous PC; and in the 1990s we have network computing and the Internet. By the end of this decade, it is projected that the Internet may still remain the 'Greatest Show on Earth' and have over 100 million servers. This evolution is depicted in Figure 20.9.

On the Internet, informal information can also be transmitted quickly between scientists and researchers, which was the original purpose of ARPANET, the 1969 precursor to the Internet. It is in this spirit that on 30 October 1994 the mathematician Dr Nicely discovered mistakes in calculations made by the new Intel Pentium chip and checked with colleagues on the Internet. This is how the world soon got to know about the flaw. At first, Intel argued that the flaw would not affect most users and was willing to replace the chip only when it determined that this was necessary. There were delays and much paperwork and formality involved. These experiences were transmitted rapidly on the Internet, including to stockbrokers. The price of Intel's stock dropped. Under increasing pressure, Intel agreed to replace all faulty chips. The Internet in this case was the users' communication device for putting pressure on the vendor, in this case Intel, the manufacturer of the Pentium chip.
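The flaw could be demonstrated with a single division. Below is a minimal sketch, in Python, of the widely circulated check; the operand pair and the tell-tale residue of 256 come from the public test case that circulated at the time, not from this text. On a correct divider the residue is essentially zero.

    # A widely circulated check for the Pentium FDIV flaw: on a correct floating-
    # point divider the residue below is 0, while flawed chips famously gave 256.
    # (Operands are from the public test case, not from this chapter.)

    def fdiv_residue(x=4195835.0, y=3145727.0):
        """Return x - (x / y) * y, which a correct divider reduces to 0."""
        return x - (x / y) * y

    residue = fdiv_residue()
    print("residue =", residue)
    print("divider looks", "flawed" if abs(residue) > 1.0 else "correct")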

The future role of the Internet is very much a matter of speculation. Will it remain a large global on-line real-time bulletin board and interactive information service? If so, who will control the content? Could the content include pornographic material? If there is to be pornography, who is to define pornography? And how do you define the broader concept of ‘obscene’ for children, for teenagers, and for adults? Can we define allowable content on the Internet that will not offend the many national, cultural and religious interests? Currently, the Internet ‘is so fragmented, in fact, that some fear that it will ultimately serve to further divide a society that is already splintered by race, politics, and sexual prejudice. That would be an ironic fate for a system designed to enhance communication.’ (Elmer-Dewitt, 1995: p. 10).

Predicting the Internet is a common game among computer scientists as well as futurists. One set of questions relates to publishing. Will the Internet go into electronic publishing? If it does, how will the problem of royalties be resolved? There is also the problem of the protection of intellectual property, which is now perhaps being addressed by the WTO, successor to GATT. This issue was the subject of conferences in the US in the 1970s and is still being debated in international forums. Or will the Internet be a resource centre for telecommuting and assist the growth and viability of telecommuting? Or will the Internet be a channel for marketing and distribution for businesses? How can the Internet be managed so that it controls unwanted mail and blatant advertising? How can the Internet be maintained and enhanced, for example by going to ATM (Asynchronous Transfer Mode), to make it faster and cheaper to use? How can the Internet protect against violations of privacy, security and misuse? We do not know the answers to many of these questions, any more than we could have predicted the consequences of the first telephone or the first computer coming to town. We cannot predict the future of the Internet and the confluence of telecommunications and computers for the next ten or twenty years, let alone the next fifty years, except that the experience will be breathtaking and exciting.

It is difficult to predict the future of computing and much more difficult to predict the future of the Internet. It is difficult to predict the technology and its acceptance in the market-place, except to say that computing and telecommunications must be very end-user friendly, robust, tolerant of human errors, and secure from viruses and outside interference before they become as common or comfortable as a telephone. What is more difficult, if not impossible, is to anticipate correctly the response of the common person (especially the student) who has yet to accept the Internet. The common person must feel comfortable with cruising and surfing on the Internet and using a PC (or an Internet computer available for $500 in 1996); must prefer corresponding by e-mail to using the post office; and must browse Internet server sites instead of using the library. The human-computer interface boundaries will tend to disappear. This may involve a generation of change in life-style. That may, however, come. Slowly but surely.

Case 20.1: Intrusions into the Internet

The Internet is a successor of ARPANET, which was designed by the US government to help researchers connect with each other. The Internet was not designed for general and extensive use by individuals and business organizations, and hence no security measures for such use are in place. Internet gateway suppliers have maintained that anti-virus scanning is the responsibility of the end-user, though security measures are now being implemented, including ‘firewalls’, electronic tokens and the PersonaCard. Meanwhile, numerous intrusions into the Internet have taken place:

• In 1986, a worm on the Internet adversely affected 6000 host computers.

• In 1988, a self-perpetuating worm virus, appropriately called the Internet worm, found its way into the University of California campus system and wriggled out of control for many days, infecting and corrupting thousands of computers on the Internet.

• In 1993, the Lawrence Livermore National Laboratory in the US conceded that an employee had used its computers to distribute pornography on the Internet.

• In 1993, CERT (Computer Emergency Response Team) at Carnegie-Mellon University in the US warned network administrators that tens of thousands of Internet passwords had been stolen.

• CERT estimated that, in 1993, there were 1300 ‘incidents’ on the Internet compared with 50 a few years earlier.

Source: Business Week, Nov. 14, 1994, p. 88.

Case 20.2: Bits and bytes from cyberspace

Netiquette

Canter and Siegel, a law firm in Phoenix, Arizona, blatantly advertised one of its services on the Internet. It was soon ‘flamed’ with letters for breaching the ‘netiquette’ of the Internet. The rush of letters caused the server through which the firm accessed the Internet to crash, and the firm was eventually kicked off the net.

Source: IHT, No. 34,787, Jan. 2, 1995, p. 9.

Thomas site

In 1995, a Web site, Thomas, became the repository of all legislative information, available to anyone on the Internet, putting citizens on a par with legislators and lobbyists in having equally quick access to future legislation in the US.

Thomas is named after Thomas Jefferson, an early champion of democracy, as well as of freedom of speech and expression. Some commentators call the Thomas Web site the first step towards a virtual democracy and cybergovernment. Others call the Thomas site a rich potential for cyberspace anarchy.

Marriage in cyberspace

Cybermen and cyberwomen can match themselves for potential marriage by advertising themselves on the Internet using alt.personal.ads. The advantages are that it costs nothing; the ad is unrestricted in size, so that one can advertise oneself at any length; and, perhaps most important, one can get immediate responses. The one big problem is that the best matches often live on the other side of the world.

Children in cyberspace

There is much concern about children in cyberspace being exposed to pornography and violence. This was the subject of research by Leslie Shade, a mother of two children and a doctoral student at the University of Montreal. She comments: ‘Even though I might be logged in on the Internet an average of four hours a day, we would have to actively seek out and deliberately find something offensive.’

Experts advise children (and adults) on the following rules for working on the Internet: never give out any personal or family information, such as numbers or addresses, and never respond to abusive or suggestive messages.

A good primer for children in cyberspace is: Child Safety on the Information Highway. For a free copy, call +1 (800) 843 5678.

Source: US News & World Report, Jan. 23, 1995, p. 60.

Suicide hotline

‘The Samaritans, a British non-religious charity, offers emotional support to the suicidal and despairing through HELP by E-mail service . . . Callers are guaranteed absolute confidentiality and retain the right to make their own decisions, including the decision to end their life.’

Source: Internet World, Jan. 1995, p. 16.

Censorship in cyberspace

In October 1994, Martin Rimm, a research associate at Carnegie-Mellon University (CMU), informed the university administration that he was soon going to release his study on on-line pornography. The study was based on a collection of 917 000 images. The study tracked the usage (6.4 million downloads) and the frequency of retrieval of different types of images, including pictures of men and women having sex with animals. The administration sensed the delicacy of the situation because pornographic images had been declared obscene by the State Courts just a few months previously.

The CMU administration determined that the collection in question was part of the Usenet newsgroups, with functional titles like alt.sex.rec.arts.erotica and alt.binaries.pictures.erotica. The administration decided to ‘pull the plug on all major "sex" newsgroups and their subsidiary sections, more than 50 groups altogether.’

The battle lines soon got drawn over the preservation of free speech in the new cyberspace interactive media. Opponents of the university's action organized, over the LAN network, a ‘Protest for Freedom in Cyberspace’. The core issues were: ‘to what extent can the operators of interactive media be held responsible for the material that moves through their systems? . . . Publishers who venture into international networks like the Internet are particularly concerned about libel and slander . . . Unless computer users exercise some self-restraint, control could be imposed from the outside.’ Whether this control would be at the local level, the state level, or a combination of the two is still a matter to be decided.

Source: Time, Nov. 21, 1994, pp. 102-4.

Cyber weapon in takeover war

Canfor Corp. made a hostile bid to take over Slocan Forest Products, a lumber and pulp company in Canada. Slocan fought with the usual defence tactics of newspaper ads and press releases. In addition, Slocan went on the Internet and issued detailed arguments as to why shareholders should withhold stock from Canfor. Slocan also invited questions by e-mail and answered all of them. It is possible that going on the Internet gave Slocan the image of a leading-edge company, and it is also possible that the e-mail helped. It is not clear what made the difference, but Canfor conceded defeat.

Source: International Herald Tribune, Feb. 9, 1995.

Cybercafes

The first cybercafe, Cyberia, appeared in London, but cybercafes can now be found in cities like Hyderabad in India. These cybercafes differ in the food and drink that they offer, but the services always include time on the Internet, charged hourly or half-hourly at a rate of about £5 per hour. In the UK there are discounts and commissions for students, OAPs, the unwaged and the ‘scrounging journos’. For more information on cybercafes, access the home-page on the Internet using the URL: http://www.hub.co.uk/intercafe/

Source: Ed Ricketts, A Day in the Life of Cybercafe, Net, No. 10, Sept. 10, 1995, pp. 58-63.

In the US, there is no tradition of cafes like those in Europe. Internet access is available (also for a fee) in coffee houses in many US cities.

Case 20.3: Business on the Internet

• Federal Express Corp. delivers over 2 million packages each day and had a serious problem of tracking the packages. Much of this tracking is now done on the Internet, a very cost-effective way of communicating with its many customers. In May 1995 alone, it tracked 90 000 packages through its Web site.

• Hewlett Packard, a computer systems manufacturer in the US, uses its Web site to distribute revisions to its Unix operating system and additions to its printer software. Customers can even get software revisions through the Internet.

• Apple Computer makes software updates available through its ‘home-page’. It often responds to a request by downloading the necessary program to the customer.


• Wells Fargo & Co. of San Francisco gives customers access to their transaction histories as well as their current balances. In 1995 it was used by 2000 customers each day who would otherwise have called a customer service representative to discuss their accounts.

Case 20.4: Home-page for Hersheys

Hersheys, the manufacturer of sweets and chocolates, has a 50-page home-page on the Web, accessible by keying in http://www.hersheys.com, that provides marketing information on Hershey products as well as corporate information, including the history of the popular product ‘Kisses’.

The cost of the home-page for the first year was $4200, composed of the following components:

Design @ $12/hour       $2400
Internet account        $1200
Ongoing charges         $12/page

Source: Computerworld, Nov. 27, 1995, p. 24.
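These figures appear to add up to the stated $4200 only if the $12-per-page ongoing charge is applied once to all 50 pages during the first year; that reconciliation is our reading, not something stated in the source, and the small sketch below simply re-adds the components under that assumption.

    # First-year cost of the Hershey home-page, re-added from the figures above.
    # Assumption (ours): the $12/page ongoing charge is billed once for all
    # 50 pages in the first year, which yields the stated total of $4200.
    design = 2400            # design at $12/hour
    internet_account = 1200  # Internet account for the year
    pages, per_page = 50, 12 # ongoing charge of $12 per page

    print(design + internet_account + pages * per_page)   # 4200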

Case 20.5: Court in France defied in cyberspace

Soon after the death of François Mitterrand, ex-President of France, his personal physician, Dr Claude Gubler, published a book, Le Grand Secret, in 1996. Dr Gubler asserts that Mitterrand was diagnosed with cancer in 1981 and suppressed this information despite his promise to keep the nation informed of his health in national bulletins. Gubler further argues that the President of France was unfit to rule in his final months in power.

Mitterrand's family sought a ban on the publication of the book on the grounds that it violated their privacy and that it contravened the right to medical secrecy. The gag order specified a 1000 French franc fine for every copy sold illegally. By this time, however, 40 000 copies of the book had been sold and there was a hot market for it. The book was then put on the Internet by a cybercafe in Besançon, available to anyone using http://www.le.web.fr/secret/, and was soon being accessed at a rate of 1000 calls per hour.

Pascal Barbraud, owner of the cybercafe, was assured by his lawyers that the ban applied only to the printed version of the book, which has a 70-year copyright protection. Barbraud says that he had produced the book electronically because ‘the spirit of the Internet is against censorship’. He said that if there was any attempt to take legal action against him, he would immediately transfer the book to a server in the United States, where anyone, including the French, would have access to the book.

Plon, the publisher of Le Grand Secret, decided to take no action because the book was banned. Christian Hassenfratz, the public prosecutor at Besançon, acknowledged that Dr Gubler's intellectual property rights had been violated, but said that these rights had been placed in doubt by the decision to ban the book.

This case raises the important issue of the breaching of book copyrights on the Internet, and whether courts have legal jurisdiction in cyberspace over material published in their earthly jurisdiction. Some say that this case may potentially be as significant as Gutenberg's development of movable type. Meanwhile, publishers are moving cautiously. John Wiley & Sons is posting its Journal of Image-Guided Surgery on the Internet; Time Warner Electronic Publishing is running a serial novel on the Internet with no printed edition planned; while Simon & Schuster has hired a team of computer cybercops to prowl the Internet for new ideas.

Source: International Herald Tribune, Jan. 25, 1995, p. 7, and March 19, 1996, pp. 1, 7.

Case 20.6: Internet in Singapore

Mr Yao, Minister of Information and Arts in Singapore, announced that henceforth all Internet providers and operators must be licensed by the Singapore Broadcasting Authority. Under the new regulations, all operators, from main providers to outlets such as cybercafes, as well as organizations putting political and religious information on the Internet, must now register with the Broadcasting Authority.

The Broadcasting Authority will require service operators to take ‘reasonable measures’ against the broadcast of objectionable material, and will insist that providers block pornographic Web sites. ‘It's kind of an anti-pollution measure in cyberspace,’ said Yao.

Source: International Herald Tribune, March 6, 1996, p. 4.


Case 20.7: English as a lingua franca for computing?

The explosive global use of the Internet has raised the question as to which natural language should be used for global computing. By default, English is the language of computing, and even countries like Malaysia that are very nationalistic about their mother tongue are offering English as a language in order to prepare their citizens for the information age.

The Americans are developing a universal digital code known as Unicode that will allow computers to represent the letters and characters of virtually all the world's languages. However, there is some resistance to English as a universal language of computing. One Korean official states: ‘It's not only English you have to understand but the American culture, even slang. All in all, there are many people who just give up.’
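As a rough illustration of what a universal code makes possible, the short sketch below (modern Python, used purely for illustration) maps characters from several scripts to single numeric code points and to their byte encodings.

    # Unicode gives every character a numeric code point, so one scheme can carry
    # Latin, accented French, Korean and Japanese text side by side.
    samples = ["A", "é", "한", "日"]

    for ch in samples:
        print(ch, "U+%04X" % ord(ch), "UTF-8 bytes:", ch.encode("utf-8"))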

In France and the French-speaking part of Canada, people are concerned that cybernauts will not be able to use the Internet if they do not know English. In February 1996, a group of French researchers put up an all-French search engine called Locklace (http://www.iplus.fr/locklace) that enables Francophones to find information in any of the thousands of French-language sites using French only.

The Japanese, too, are concerned. They fear ‘that if the language of computers remains English, it will be more difficult for them to compete in the information industries of the future.’

Source: International Herald Tribune, March 11, 1996, p. 13.

Supplement 20.1: Growth of the Internet

Year No. of hosts

1971           23
1974           62
1976          235
1983          500
1984        1 000
1986        5 000
1987       20 000
1989      100 000
1991      617 000
1992    1 000 000
1993    2 000 000
1994    3 000 000
1995    5 000 000
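As a back-of-the-envelope reading of this table, the sketch below computes the average annual growth implied between two of the listed years; the calculation is ours, not the supplement's.

    # Implied average annual growth of Internet hosts from the table above:
    # 100 000 hosts in 1989 grew to 5 000 000 in 1995, a 50-fold rise in 6 years.
    hosts_1989, hosts_1995 = 100000, 5000000
    years = 1995 - 1989

    annual_growth = (hosts_1995 / hosts_1989) ** (1.0 / years) - 1
    print("average annual growth: %.0f%%" % (annual_growth * 100))   # about 92%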

Supplement 20.2: Growth in Internet hosts around the world

Region                    Jan. 1994     Jan. 1995

North America             1 685 715     3 372 551
Western Europe              550 593     1 039 551
Pacific Rim                 113 482       192 390
Asia                         81 355       151 773
Eastern Europe               19 867        46 125
Central & S. America          7 392          n.a.
Middle East                   6 946        13 776

Source: Internet Society.

Supplement 20.3: Computers connected to the Internet in 1994

USA          3 200 000 (rounded)
UK             241 191
Germany        207 717
Canada         186 722
Australia      161 166
Japan           96 632
France          93 041

Source: US News and World Report, Feb. 27, 1995.

Supplement 20.4: Users of the Internet in 1994

Type of user      In Europe    In U.S.A.

Education            36%          22%
Computer             33%          31%
Professional         16%          23%
Management            9%          13%
Other                  5%          12%

Source: Georgia Institute of Technology, Computer Weekly, July 13, 1995.


Supplement 20.5: Build or rent a Web site?

One can either develop a Web site in-house or rent one. The relative costs in the US in 1995 were as follows:

                            Developing in-house           Renting

One-time costs              Software: $500-1000           Design and programming:
                            Hardware: $20 000-40 000      $5000-30 000

Ongoing costs (annually)    One full-time WebMaster:      Rental fee:
                            $60 000-120 000               $2400-3600

Source: Computerworld, Oct. 16, 1995, p. 70.
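To compare the two options over time, here is a small sketch using the midpoints of the ranges quoted above; the midpoints, and the allocation of design and programming to the rented option, follow our reading of the table and are not figures from the source.

    # Cumulative cost of developing a Web site in-house versus renting one,
    # using midpoints of the ranges in Supplement 20.5 (midpoints are our choice).

    def in_house(years):
        one_time = 750 + 30000            # software and hardware (midpoints)
        return one_time + years * 90000   # one full-time WebMaster per year

    def rented(years):
        one_time = 17500                  # design and programming (midpoint)
        return one_time + years * 3000    # annual rental fee

    for y in (1, 3, 5):
        print("%d year(s): in-house $%d, rented $%d" % (y, in_house(y), rented(y)))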

Supplement 20.6: Users of the Web for business

A total of 161 businesses in the US and Canada were interviewed about their uses of the Web for business purposes. Multiple responses were allowed. The results are as follows:

Gathering information             77%
Collaborating with co-workers     54%
Researching the competition       46%
Communicating internally          44%
Providing customer support        38%
Publishing information            33%
Buying products information       23%
Selling products or services      13%

Source: CommerceNet, Menlo Park, US; and Computerworld, Nov. 6, 1995, p. 12.

Supplement 20.7: Milestones in the life of the Internet

1969  US Department of Defense commissions ARPANET for networking research, with its first node at the University of California at Los Angeles.
1974  Robert Metcalfe's Harvard Ph.D. thesis outlines the Ethernet, which then became the technology adopted by ARPANET.
1974  Vinton Cerf and Bob Kahn detail the TCP for packet network intercommunications.
1976  First use of Usenet, establishing a connection between two universities, Duke and UNC.
1982  Eunet (European UNIX Network) begins.
1983  University at Berkeley releases UNIX 4.2 incorporating TCP/IP.
1984  JUNET (Japan UNIX Network) is established.
1986  The NSF in the US establishes the supercomputing centres, which results in an explosion of network interconnections.
1988  The Internet worm burrows through the Net, affecting 6000 hosts.
1990  ARPANET ceases to exist.
1991  University of Minnesota introduces Gopher, named after its football mascot.
1992  WWW (Web) is released by CERN in Europe.
1993  Businesses and media discover the Internet.
1994  Shopping malls arrive on the Internet.
1994  Mosaic takes the Internet by storm while WWW and Gopher proliferate.
1995  Emergence of Java and Intranets.

Bibliography

Abernathy, J. (1995). The Internet. PC World, 13(1), 131-146.
Ayer, R. and Reichard, K. (1995). Web browsers: the web untangled. PC Magazine, 14(3), 176-196.
Baran, N. (1995). The Greatest Show on Earth. Byte, 20(7), 69-86.
Bayers, C. (1996). The great Web wipeout. Wired, 4(4), 126-128.
Berners-Lee, T., Cailliau, R., Luotonen, A., Nielsen, H.F. and Secret, A. (1994). The World-Wide Web. Communications of the ACM, 37(8), 76-82.
Bright, R. (1988). Smart Cards. Ellis Horwood.
Brinkley, M. and Burke, M. (1995). Information retrieval from the Internet: an evaluation of the tools. Internet Research: Electronic Networking Applications and Policy, 5(3), 3-10.
Bryan, J. (1995). Firewalls for sale. Byte, 20(4), 99-104.
Chapin, A.L. (1995). The State of the Internet. Telecommunications, 29(1), 24-27.
Data Communications, 16(1), S2-S29. Editorial supplement on The Internetwork Decade.
Elmer-Dewitt, P. (1995). Welcome to Cyberspace. Time, Spring, 4-11.
Elmer-Dewitt, P. (1995). Battle for the Internet. Time, Spring, 40-46.
Evans, J. (1994). Where the hackers meet the rockers. Computing Now!, 12(3), 10-13.
Fassett, A.M. (1995). Building a corporate Web site: advice for the hesitant. Telecommunications, 29(11), 33-40.
Hackman Jr., G. and Montgomery, J. (1995). One-click Internet. PC Computing, 8(7), 114-125.
Hurwicz, M. (1997). Netscape's bridge to the Intranet. Internet, 2(1), 103-7.
Ivine, D. (1995). Internet infrastructure. NetUser, Issue 3, September, 7-19.
James, G. (1996). Intranets. Datamation, 42(18), 38-40.
Kosiur, D. (1997). Electronic commerce: building business security. Internet, 2(2), 82-90.
Krol, E. (1992). The Whole Internet: User's Guide and Catalogue. O'Reilly.
Levy, S. (1994). E-money: that's what I want. Wired, 2(12), 174-177, 213-215.
Lipschutz, R.P. (1995). Extending e-mail. PC Magazine, 14(8), 157-175.
Marion, L. (1995). Who's guarding the till at the cyber mall? Datamation, 11(3), 38-41.
Montague, A. and Snyder, S. (1972). Man and the Computer. Auerbach.
Obraczka, K., Danzig, P.B. and Li, S.-H. (1993). Internet resource discovery services. Computer, 26(9), 8-24.
Reichard, K. (1995). A site of your own. PC Magazine, 14(17), 227-271.
Reichard, K. (1995). Mr. Postman@Internet. PC Magazine, 14(8), 111-137.
Rheingold, H. (1991). Virtual Reality: The Revolution of Computer Generated Artificial World and How it Promises and Threatens to Transform Business and Society. Summit Books.
Schmidt, R. (1994). Internetworking: future directions and evolutionary paths. Telecommunications, 28(1), 55-74.
Vetter, R.J. (1994). Mosaic and the World-Wide Web. IEEE Computer, 27(6), 49-57.
Wallach, P. (1994). Wire pirates. Scientific American, March, 90-101.


21

WHAT LIES AHEAD?

As we move from an industrial to an information society, we will use our brain power to create instead of our physical power, and the technology of the day will extend and enhance our mental ability.

Naisbitt in Megatrends 2000 (1984)

In a sense, we have automated the process of gathering information without enhancing our ability to absorb its meaning . . . Our challenge is to process data into information, refine information into knowledge, extract from knowledge understanding and then let understanding ferment into wisdom.

Al Gore (1991)

At the end of the century, the use of words and general educated opinion will have changed so much that one will be able to speak of ‘machines thinking’ without expecting to be contradicted.

A. Turing (1950)

We can expect changes and improvements in computing technology. Even if the rate of technological growth decreases somewhat, there will still be faster and more cost-effective computers. The smaller end of PCs will run at supercomputer speeds and will be simple to use. Some will be light and hand-held, battery-operated (with longer battery life), and capable of communicating across the land and even across the world with digital micro-cellular technology. But some computers will be wire-less, which means that they may face the vested interests of companies that have investments in over 200 million lines of copper wire (in America alone) currently used for transmission. International standards for such cellular technology and for mobile telephone networks such as PCS (Personal Communications Services) will be needed. However, predictions of computer technology are always dangerous. As recently as 1984, AT&T predicted that there would be 900 000 wire-less telephones by the year 2000. In 1993, with seven years to go, there were already 12 million subscribers in America alone. And the number will grow as powerful companies merge and consolidate in the US and abroad. The problem will be to find an IT platform that allows the convergence of the critical technologies in IS, such as personal computing, office and factory automation, and networking. These technologies will facilitate the overlay of information and knowledge, which will enable corporate managers to pose questions that are fundamental and relevant to them. We need a corporate infrastructure of information that is cross-functional and integrated, to provide a careful balance between centralized coordination and decentralized use while at the same time being end-user friendly.

In this chapter we look at the future of telecommunications, but only in so far as it is relevant and important to corporate managers. One concern is the growth and trends in the increasing power of computers and telecommunications. To access such power at all levels of end-users we need better network management, including the development and acceptance of international standards for telecommunications and a rational management of the Internet. These are essential to applications like home-shopping, telenews, telebanking, the automated factory, the electronic office, etc. In integrating such systems across all functions and across all levels of end-users, we may approach a telematic society. The richness of such a society will also depend on advanced applications now under development such as the digital library, virtual reality, and other multimedia services.

We will conclude this final chapter with some predictions of the future, along with some reflections and caution on why predictions in computing often go wrong and what one may do to minimize, if not eliminate, their dysfunctional effects.


The future in the context of the past

Before making any predictions one should remember the caution of Niels Bohr, the eminent Danish physicist: it is hard to predict, especially the future. But there are some things that one can predict with some confidence. One such prediction is that we will soon have a three-giga machine (where giga is a measure in billions): giga-instructions per second, a gigabyte bus and gigabytes of memory.

There will be more voice processing, biometric input (e.g. hand recognition devices) and interactive 3D graphics. We will also see the use of helmets, goggles, booms and data-gloves for achieving virtual reality, which will enable us to leave the computer screen and use our body to interact with a rich variety of virtual objects, replacing the physical world with a computer generated one.

With mass ownership of computers and with interconnected computers, there will be computers used in every place imaginable: in automobiles, in aeroplanes (already so in the Boeing 777), in games rooms, doctors' offices and waiting rooms, libraries, offices, factories, and so on. There may even be many computers that will be locationless.

Locationless selling: selling that requires no retail store. Locationless inventory: inventory that needs no warehouse. Locationless training: training without a classroom. Locationless conferences: conferences without meeting places. Locationless management: management without headquarters. (von Simson, 1993: p. 106)

Locationless implies interconnectivity of computers by networks, which will continue to grow. Between 1989 and 1993, the proportion of computers in America connected in networks rose from below 10% to over 60%. Telecommunications will also see great strides in performance, and the law of microcosm will soon merge with the law of telecosm. George Gilder states this well:

Just as the law of microcosm essentially showed that linking any number n of transistors on a single chip leads to n² gains in computer efficiency, the law of telecosm finds the same kind of exponential gains in linking computers: connect any number n of computers and their total value rises in proportion to n² . . . in a peer-to-peer computer arrangement, each new device is a resource for the system, expanding its capabilities and potential bandwidth. The larger the network grows, the more efficient and powerful are its parts. (Gilder, 1993: p. 78).
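A crude sketch of the n² scaling Gilder describes follows; the constant of proportionality is arbitrary and the code is only meant to show how value grows much faster than the node count.

    # Gilder's law of telecosm, as quoted above: connect n computers and the
    # network's total value rises roughly in proportion to n squared.
    def network_value(n, k=1.0):
        """Value of an n-node network under the n**2 rule; k is an arbitrary scale."""
        return k * n * n

    for n in (10, 100, 1000):
        # each tenfold growth in nodes gives roughly a hundredfold growth in value
        print("%5d nodes -> value %.0f" % (n, network_value(n)))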

The laws of microcosm and telecosm, when combined with end-to-end digitization, may result in what Professor Solomon of MIT predicts: ‘the public switched network will be transformed into one large processor’. The computer paradigm, with its high bandwidth in both fibre and the air, may replace the telephone, cable and TV. We may well see the merging of some telephone, cable, TV and computer companies in alliances with firms in the publishing and entertainment industry. Such mergers, joint ventures and alliances, in parallel with the development of an electronic information highway and a NII (National Information Infrastructure), will give us access to anything in computer storage and do so interactively. This advanced communication environment will not only facilitate applications like video-conferencing, teleshopping and teleconferencing, but will also provide access to knowledge on entertainment, libraries, medical consultation, and public knowledge-bases where knowledge can be selected, interacted with and manipulated at will.

As a preview of such a world, consider that in 1994 there were on-going feasibility studies of storing thousands of pages of text and thousands of films (full feature films, not 10-minute documentaries) under the control of supercomputers and accessible to a homeowner. This will reshape our current distribution channels and will give the end-user access that is not just quick and easy but also comfortable. And distribution will be much cheaper. Currently, some 30 cents of every dollar in the US goes to distribution. With the use of fibre optics, the share of the entertainment dollar going to distribution may be five cents. With the use of a centralized supercomputer the costs are expected to be lower still.

Teleprocessing has made control by remote means possible. This capability, in conjunction with visual processing, has greatly increased the possibilities of monitoring personnel. This could have the advantage of providing more information to personnel about their own performance and could be used as a coaching device. It would also enable salaries to be closely correlated to performance. This may be good for the employer but not necessarily for the employee, because of the danger of personal privacy being violated. There may well be a need for legislation to regulate monitoring.


Trends in telecommunications technology

In the 1990s, telecommunications will continue to be concerned with the distribution of information (and knowledge). This distribution will be greatly facilitated by advances in the implementation of the top layers of the ISO/OSI architecture and by digitizing information. In digitizing, we translate information (data, text, audio and video) into 0s and 1s to facilitate the efficient storage, processing and distribution of information.
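As a small illustration of digitizing, the sketch below turns a short piece of text into the 0s and 1s that would be stored or transmitted; it is an illustration only, not a description of any particular transmission standard.

    # Digitizing text: each character becomes a number, and each number a string
    # of 0s and 1s that can be stored, processed and transmitted.
    message = "ISDN"
    bits = " ".join(format(byte, "08b") for byte in message.encode("ascii"))
    print(bits)   # 01001001 01010011 01000100 01001110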

We shall see greater use of ISDN (Integrated Services Digital Network) and enhancements of ISDN such as B-ISDN (which uses a fixed cell size with asynchronous transfer mode cell switching technology). The transmission will be over high speed electronic highways. Such a highway infrastructure in some countries (like the US) will not be owned and maintained by the government but developed, owned and managed by the private sector.

The transformation of computing by telecommunications and networks is summarized in Figure 21.1. It shows trends from the past, to the present and to the future. It is difficult to slice time into neat dimensions of past, present and future: there is a long gestation period for telecommunications technology, and there is a great overlap between developments of the past, present and future. Hence, the time horizon for Figure 21.1 is very approximate, yet it can be quite revealing. What is needed is movement towards systems that have an open architecture and are digital, broadband, integrated and end-user friendly. Also needed is that the vendor providers, who may well be vertically integrated, do not unduly restrict consumer choice and flexibility, but instead allow for a network system that is efficient and effective and enables a seamless flow of network traffic. This seamless flow should not only be nation-wide, through a national highway infrastructure (also called the superhighway or the Infobahn), but also global, through a GII, a Global Information Infrastructure. For this evolution from ARPANET and through the Internet, and for the inclusion of business traffic on the GII, it will be necessary for many countries (especially the large trading partners) to change their national laws as they relate to the regulatory environment (for example, low telecommunication costs and better privacy protection) and to change their labour laws (such as those allowing for flexible time and telecommuting).

Figure 21.1 Trends from past/present to present/future: from proprietary systems, platforms, protocols and objects, an analogue world, narrow bandwidth, user-unfriendly systems and functional applications over LAN/MAN/WAN and ARPANET/Internet, through interconnectivity, standardization (ISDN, ATM), integration, and human factors and ergonomics, to open systems, platforms, protocols and objects, a digital world, broad (gigabit) bandwidth with video communications and multimedia, and end-user friendly systems over LAN/MAN/WAN/GAN and the information superhighway/Internet, leading to wired cities and the télématique society.

We need systems that support the possibility of applications including telebanking, telemedicine, digital libraries, telenews and telecommuting. This can be achieved partly by integrating existing islands of computerization. It may be necessary, however, to restructure, which may have to be a shift from a vertically integrated industry, in which a single entity provided

not only a telecommunications service but everything behind it in a network and on the customer's premises, to a richly diverse, horizontal structure . . . Customers are not locked to a relationship established with a single provider of goods and services . . . The new diversity of telecommunications markets . . . forces essentially every player to harmonize its products and services with those of other players in other layers at least to the extent of assuring that products and services from the different layers can work together. (Heilmeier, 1993: p. 31).

Restructured and integrated or not, systems could use supporting technologies. These support technologies for telecommunications include software intelligent agents (agents are programs that, when passive, will monitor; when active, will perform specific tasks; and, when acting as a master agent, will customize applications, for example summarizing information from a newspaper), intranets (private networks using Internet technology), applets (programs with specialized functions in Java), and knowbots (robots working on a knowledge-base). There are other support technologies that are not new and have been discussed earlier, but they will appear in the future with robust enhancements and greater functionality. These are all listed in Figure 21.2.

We will also see a restructuring of the telecommunications industry. In the US, the telephone, TV, cable and computer industries are no longer forbidden to compete on each other's territory. Outside the US, many national PT&T (Post, Telephone and Telegraph) companies are being privatized. Traditionally, telecommunications has been provided by public companies but, because of their high cost and low responsiveness to technology and consumer demand, firms started their own private networks. The private networks have the added advantage of strategic control over their operations, they are faster to implement because of the lack of government bureaucracy found in the public sector, and they have unique features and services to offer. (The private networks do not always belong to one firm but could include hardware manufacturers, software houses, suppliers and even public carriers.) A comparison of private and public networks is summarized in Table 21.1.

Given the history of strong PT&Ts, there is resistance to privatization by governments that are fearful of losing control.

The fearful ones will try to impose the dead hand of regulation (in the name of protecting privacy, or culture, or of clamping down on pornography or crime). Others will say that it is the job of governments to build and shape (and so, again, to control) what is variously known as the ‘information superhighway’ or ‘infrastructure’ . . . the changes are neither a big threat to culture or decency, nor a panacea for jobs . . . Apart from imposing a few familiar safeguards, the cleverest thing that governments can do about all these changes is to stand back and let them happen. (Economist, Feb. 25, 1995: p. 13)

Figure 21.2 Supporting technologies: software agents, knowbots, applets (Java), intranets, intelligent hubs, intelligent systems, digital wire-less, PCS, PDAs, virtual networks, the ATM backbone, SONET, Switched 56, SMDS and bandwidth-on-demand.


Table 21.1 Public versus private networks

                  Public                       Private

Cost:             Higher                       Lower
Funding:          Public                       By firm
Quality:          High                         Adequate
Services:         Generalized                  Customized
Security:         Good to high                 Adequate to high
Control:          By government agency         Strategic control
Implementation:   Slow to adapt and learn      Quick to adapt, learn and innovate
Standards:        Tendency to wait for         Consider if any, or else go it alone
                  standards
Features:         Generalized                  Uniquely configured for organization

There is a strong trend towards deregulation of telecommunications in industrialized countries. The US broke up the large monopoly of AT&T in telecommunications and allowed all communications companies (telephone and cable) as well as computer companies to compete with each other. It is generally agreed that the high rate of innovation and integration among such companies is largely due to their deregulation. We see deregulation of PT&Ts coming in Europe too, though at varying rates among its telecommunications-receptive nations: the UK is very committed, with Germany not far behind, and France still far behind. The value-added information-related industries will soon be open to competition, though there is still a strong feeling in many countries that their telephone and computer industries are too important to be left to the private sector. (France supported its ailing computer firm with 10 billion French francs in 1992-93.)

Privatization has added more alternatives to public, value-added, shared and even intelligent networks in the future. The firms involved use different media like fibre optics, telephone, TV, cable or satellite, or a combination of these. Each firm may have a set of suppliers, some of which are multimillion dollar firms themselves.

The two main players in this game are the telephone and the cable companies. Both hope to have one device for both the telephone and TV. Some technical experts think that this may not be possible, because a TV that can produce a quality picture may not be able to produce good quality text, which has the greater demand. What may happen is that we shall have two devices, one for text and e-mail and one for TV pictures and films.

All this is just one dimension. The other dimension is the scope of the network: a LAN, MAN, WAN or a global network. And there is also a third dimension: that of media, which includes data, voice, graphics, video and multimedia. Some of these media can be offered in one place, including access to the Internet, instead of separately, through a combination of hardware and software, which makes hardware manufacturers and software houses (some being billion dollar companies) important players in the market.

Each cell in this three-dimensional matrix is represented by one or more companies, some companies encompassing more than one cell. This matrix is shown in Figure 21.3.

There is actually a fourth dimension: content. We do not show the fourth dimension because of the difficulty of displaying it in the two-dimensional space of this book. The dimension of content brings other large oligopolies into play, including not just film companies in Hollywood, but video companies and book publishers. We all know about the bigness of Hollywood studios, but there is less visibility of the many video companies and publishing houses that are large and powerful. These companies that have the content want an alliance with a carrier to carry their content, just as much as carriers are looking around for content to carry. They are all dashing around trying to capture as many cells of the market as possible. These firms are not small dwarfs but are often giants of industry that will invest billions in just testing the market (like Bell Atlantic, which invested $11 billion in testing a fibre optic and two-way TV system in 11 million homes by the year 2000). The market, though, is too large for any one firm (even titans in large industries) and so the firms are prancing around making alliances and mergers. Their strategists are debating whether the most profits are in owning the content, or distributing it, or both. Each wants to capture the market where it has a comparative advantage, which may be a row or column in our matrix of Figure 21.3. This results in intersections of interests, and so there are strange alliances between firms that are in joint ventures in one market but in fierce competition in another.


Figure 21.3 Dimensions of networks: scope (LAN, MAN, WAN, global), implementation (public, private, other) and media (data, voice, graphics, video, multimedia).

This new mix of competition and cooperation is so important that it has been given a special name: coopetition.

In this battle for a market which is large and growing at a prodigious rate there may well be a shakeout, with the strongest alliances surviving, many most likely being oligopolies. They may then get a foothold into what may soon (around the turn of the century) become the Infobahn or superhighway of the future, offering a combination of services for different distances (local, long distance and wire-less) to a population of over 50 million in the US alone. Some firms and even industries may have to exit, like the video retailing industry, for it may soon be cheaper to store a video in computer memory or on magnetic tape or optical disk and deliver it on demand.

The liberalization of the information industry and a change in attitudes in some countries towards freedom of transfer of information across national borders (such as in the Eastern European countries that are no longer tied to Russia) will greatly increase the growth of demand for information and knowledge and their related services. This will facilitate our approaching cyberspace, an on-line computer service galaxy. The growth in demand for information services will also increase because of an information superhighway and NII (National Information Infrastructure), which in conjunction with services such as the Internet will help us towards a global information network and a telematique society.

Meanwhile, both the Internet and any telecommunications infrastructure face common problems of defining and then providing universal access; offering security and protection against invasion of privacy; controlling ‘obscene’ material; establishing standards that are globally acceptable; and metering users and usage so that any charging system is fair and equitable. The problem of protecting intellectual property rights globally is also especially difficult when the traditional paradigm of author, publisher and library may now be disrupted.

There are also the cautions of authors like Postman, in Technopoly, about the surrender of culture to technology like the Internet. Postman cautions that we may soon have a diverse and rich set of information without knowing how to handle it. He fears that we will lose our community responsibility and instead rely heavily on chat sessions and downloaded information from the Internet and the Web.

Steve Jobs, a founder of the Apple PC and later the father of the NeXT computer, correctly predicted in the early days of the PC that a PC would soon be on every office desktop. In 1996, Jobs asserted, ‘The desktop computer industry is dead.’ He then predicted that ‘people are going to stop going to a lot of stores. And they're going to buy stuff all over the Web . . . The Web's not going to capture everybody. If the Web got up to 10 per cent of the goods and services in the country, it would be phenomenal. I think it'll go much higher than that. Eventually, it will become a huge part of the economy . . . We're already in information overload . . . We live in an information economy, but I don't believe we live in an information society.’ (Jobs, 1996: pp. 103, 104-5).

Network management and the future

There is much that network management must respond to in terms of the changing environment of telecommunications and networks. We look at these developments from two points of view: the demand side, that is, the demand for more network services; and the supply side, the probable availability of new technology relevant to network management.

Networks are growing continuously and at an explosive rate. By the year 2000, it is estimated that there will be over 100 million microcomputers tied into corporate networks. Many of them will be using the Internet. In 1994, there were over 60 million users of Microsoft Windows who could use networks for electronic shopping with credit cards and could access information from public on-line information services like CompuServe and Prodigy, which in 1994 had over 2 million subscribers each. In addition, there are minis and mainframes that use networks for daily transactions like reservations, financial services, electronic publishing, EFT (Electronic Fund Transfer), and other commercial applications, including interactive ones. There is a great need for open systems in the hardware and software components of networks, allowing them to be interchangeable and interoperable. This then allows plug-and-play computing, with products from different international vendors plugged into different platforms of integrated systems.

Another demand-driven development is the trend towards a GAN, a Global Area Network. This will be a response to the need for a better architecture suited to commercial operations, the increasing globalization of our economies, and the increase in multinational corporations and multinational cooperation between national governments and regions. This does not mean the dismantling of the LAN, MAN and WAN, but rather their expansion and extension. On the way to a GAN, one will perhaps see a different architecture, more advanced network technology and a national infrastructure like the telecommunications superhighway (or information highway) in the US. The five channels of communication (telephone, TV, cable, satellite and wire-less cellular phones) are now being opened to competition by being auctioned rather than licensed by the government. The desire for competition and the ‘levelling of the playing field’ for all players in telecommunications is also being pursued in the European Union, though through a different route: the Open Network Provision, planned for initiation on 1 January 1998.

It has been argued that we already have a GAN in the form of the Internet, which is a confederation of networks, each having its own opinion on how things should work. Each net pays its own share, with the backbones being supported on a national or regional basis. Thus, in the US, the National Science Foundation was the payer; in France, it is EASInet, partly funded by IBM; and for the 18 European countries, it is CERN, the European particle physics laboratory in Geneva, Switzerland. However, there is the counterargument that the Internet is too ad hoc and informal. It is the outgrowth of ARPANET, which was designed in the US to allow researchers and educationists to ‘talk’ to each other electronically. It was never designed for commercial use and is inadequate for the business and commercial needs of our modern world. Furthermore, its sources of income are drying up, and the Internet needs reform and new management.

Management of the Internet

An important and urgent question for network management concerns the Internet. There are technological problems, like insufficient bandwidth, but improvements in the performance of transmission media and component technologies will soon overcome the bandwidth scare. The problems of the future are not technological but organizational and financial. The Internet has matured, and so it is having financial problems. Its early development was the ARPANET, designed by the US Department of Defense as a communications system for the cold war. Since the cold war has cooled off (and because much of the network architecture had stabilized), the Defense Department withdrew its support for network development, but the National Science Foundation (NSF) continued to support operational expenses in order to help the Internet build its market. Then, in 1995, the NSF said, ‘the market can stand on its own without our seed money,’ and withdrew its financial support. And so there are no longer any ‘free lunches’, free e-mail and downloading from all round the world. The Internet has to be self-supporting.

The Internet is so successful that recently there was talk of the danger of a meltdown. It almost occurred after the Hubble telescope sent tons of data and swamped many users on the Internet. A meltdown of the Internet could occur once monetary transactions on the Internet are secure and businesses start using it not just for advertising and marketing but for operational traffic.

The nature of demand is also posing unique problems. The use of telemedicine raises the problem of assigning priorities to real-time processing by, say, a doctor wanting a CAT scan of a patient who is dying. Can such messages barrel down the Internet with the highest priority, much as ambulances have priority on our roads? And what about a business wanting an important teleconferencing session that is queued behind a massive download of a computer program or a film? Who has priority? Should businesses get priority if they have paid for it? And, if so, how much should they have to pay?
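One way such priorities could in principle be handled is priority queueing at the points where traffic contends for capacity. The toy sketch below illustrates the idea only; the traffic classes and their ordering are invented for illustration and do not describe any Internet mechanism of the period.

    import heapq

    # Toy priority queue: lower priority numbers are transmitted first.
    # The classes below are invented for illustration.
    PRIORITY = {"telemedicine": 0, "teleconference": 1, "bulk download": 2}

    queue = []
    arrivals = [("bulk download", "feature film"),
                ("teleconference", "board meeting"),
                ("telemedicine", "CAT scan")]
    for order, (kind, name) in enumerate(arrivals):
        heapq.heappush(queue, (PRIORITY[kind], order, name))

    while queue:
        _, _, name = heapq.heappop(queue)
        print("transmit:", name)   # CAT scan, then board meeting, then film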

It has been said that the Internet is autonomous and self-policing. That is largely true, but there is a governing body of representatives from government as well as educational and research institutions that determines the prices that can be charged and the standards that must be followed, besides the informal netiquette. They may have to assign priorities and fix prices. And they may have the company of others: as organizations start subsidizing the Internet and replace the NSF as benefactor, these organizations (like information service providers) may demand a say in how the Internet is to be organized and who pays what. The actual prices will have to be negotiated with the providers of local, regional and backbone services as well as the access providers.

Actually, the Internet was not free for everyone in recent years. Businesses, telecommuters and others working at home paid a fee, usually a fixed fee plus a variable fee for time beyond the ‘free’ time. The fee is also based on bandwidth, and so it is lower for a home user at 9600 bps than for a business user at 56 kbps. There are other fee schedules, and this emphasizes the point that some income is collected through fees. However, the withdrawal of support by the NSF will hurt the non-profit-making organizations and perhaps even academia. How can we reduce the burden and pain for users without dampening their enthusiasm and bona fide use? Will pricing more of the Internet make it unaffordable and thereby destroy the diversity and populous nature of the Internet? How can we protect the freewheeling exploration and experimentation that takes place in the on-line society that the Internet is? How can we let researchers and libraries searching for information get what they want and yet control excesses and misuse? What is the value of public discourse and community services as opposed to commercial opportunity on the Internet? How can we prevent businesses from taking advantage of the system without limiting their right to make legitimate profits on the Internet? How can we prevent large businesses from swamping smaller ones?

Pricing is crucial to usage of the Internet. A recent increase in the price of Internet usage in Australia resulted in a sharp decline in usage by keen students. Similar price elasticity of demand is expected across a broad spectrum of users, though hard statistics are difficult to come by.

One solution to pricing may be to charge for each unit consumed, just as the utilities do. But this can raise unique problems on the Internet. Herb Brody gives the following example: ‘User A sends a 100 byte request to User B, who responds by transmitting a 1 megabyte program. A naïve billing system would charge User B 10 000 times more than User A, even though User A initiated the transaction and received all the benefits.’ (Brody, 1995: p. 29). This approach is very inequitable, and more so if the charges were to depend not just on the size of the message but also on the distance travelled. This would dissuade users from getting what they want from wherever it is best available. In any case, this may not be financially feasible. The TCP/IP protocol used on the Internet was developed for the ARPANET, switching ‘packets’ of data, and has no way of providing the detailed information on transactions needed for a charging system. The information can be collected and protocols adapted, but the cost of doing so will require passing the extra cost to the customer and may not be worth the trouble.
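Brody's point is simple arithmetic; the sketch below restates it with an arbitrary per-byte rate (the rate is ours, and only the 100-byte and 1-megabyte figures come from the quotation).

    # Naive per-byte billing applied to Brody's example: User A sends a 100-byte
    # request and User B answers with a 1 megabyte program. The per-byte rate
    # is arbitrary; only the message sizes come from the quotation.
    rate = 0.0001                      # dollars per byte (illustrative)
    bytes_a, bytes_b = 100, 1000000

    bill_a, bill_b = bytes_a * rate, bytes_b * rate
    print("A pays $%.2f, B pays $%.2f, ratio %d:1" % (bill_a, bill_b, bill_b / bill_a))
    # A pays $0.01, B pays $100.00, ratio 10000:1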

Another approach is to keep the customer fee nominal but to charge businesses for their advertisements and for setting up shop in the cybermall. A variation of this approach is to tax each business and use the tax money for operations and to subsidize libraries and community centres.

Standards

Another of the uncertainties of the future in network management concerns standards. One example is the difference in standards for EDI. In the US, the X12 standard developed by ANSI (American National Standards Institute) is in vogue. In Europe, there is EDIFACT, the EDI for Administration, Commerce and Transport. Another example is the existence of two parallel standards in messaging. We have the international X.400, adopted by many national governments in both the public and private sectors around the world, but not in the US, where the Internet standards are in vogue. Hopefully these two standards will converge over time on important considerations of addressing and naming, asynchronous access, security infrastructure, and messaging management. The danger, of course, is that there will be no merging and common standardization of messaging architectures and that the two systems will continue to develop along separate and parallel paths. We may have a similar problem with the network systems architecture: Europe, Japan and many other countries have adopted the ISO model of OSI, but the US and the large vendors of telecommunications systems support IBM's SNA and the TCP/IP used by the Internet.

The problem of international standards harmonization is a difficult one. You do not want to standardize too early in a fast-moving technology like telecommunications, or else you run the risk of freezing the technology. At the same time you need international standards accepted by the developed countries, or else vendors around the world with supporting applications will be inhibited, lest their products fail to sell and become obsolete. That is good neither for the consumer nor for world trade. Also, the absence of standards leads to uncertainty in the market, which breaks the cycle of innovation and creative productivity.

A telematique society

Telecommunications provides the glue for integrated applications, especially for dispersed computing, whether internationally or locally. At the corporate level this would include the electronic office and the automated factory. At the city level it would include e-mail, the electronic home, the electronic newspaper, home-shopping (teleshopping), home banking, on-line databases, and interactive TV and movies. At the regional level it would include the Technopolis Strategy in Japan, where a number of wired cities (technological metropolises) are connected by fibre optics. At the world level, once we have many wired countries and technopolises integrated with each other, we approach a telematique society. It is the integration of telecommunications and computers that led to the term telematique society (telematic society, in English), a concept originally coined by the French administrators Nora and Minc. They were asked by their President to assess the potential of telecommunications and computers and the danger of the American monopoly affecting their culture and turning it into a 'McDonald's society' . . .

Some of the world-wide integration may be in the far future, but integration and telecommunications make it a viable possibility. A glimpse of the future is described in the book Technopolis Strategy (Tatsuno, 1986). The subtitle of this book is worth noting: Japan, High Technology, the Control of the Twenty-First Century. The technopolis strategy in Japan is to integrate regionally, with 65% of the fibre used by corporations and intercity communications. A national 'Next Generation Communications Infrastructure' is planned for 2015 at a cost of $410 billion. The Japanese have not planned fibre to the curbs of homes. Such integration of electronic homes into wired cities and then into regional integration is currently the approach taken by the US. The hierarchy of applications that could lead to a telematique society through wired cities is summarized in Figure 21.4.

The problems of a telematique society are less technological and more social and political. The question we must also ask is: will the quality of life in a telematique society be improved?

Optimists see freedom from drudgery, intelligent management of natural resources, and the elimination of war and poverty in a telematique society. They predict a new Renaissance, since more time will be available for leisure and cultural pursuits. Access to the world's knowledge will contribute to mankind's understanding of the future, they say. Interactive communication tools will stimulate empathy with others and help bind human ties. The home will become the focus of daily life, promoting family togetherness

Page 271: Telecommunications and Networks

Figure 21.4 Hierarchy for a telematique society: a typical wired city combines the electronic home, the electronic office, the automated factory and services such as e-mail, on-line information services, on-line databases, interactive TV, telebanking and teleshopping; wired cities combine into regional integration (technopolis), and regions into a télématique (telematic) society.

and family values. National boundaries will lose importance as the world community is born.

Pessimists see illiterates glued to game shows on video screens in the world of tomorrow, waited on hand and foot by domestic robots. They see the responsibility for managing cities, running factories, growing crops, and distributing goods and services delegated to intelligent computers. Surveillance systems with little regard for personal privacy will be the norm. Individuals, they say, will sever all ties with humans in favour of computer companionship, which will indulge every selfish individual wish. With self-selective media, people will filter out unwelcome news, preferring to live with virtual realities, that is, with imaginary constructions of the world rather than the physical realities of everyday life.

In The Third Wave, Alvin Toffler sees a middle course for society, a 'practopia' that is neither the best nor the worst of all possible worlds. He sees:

A civilization no longer required to put its best energies into marketization. A civilization capable of directing great passion into art. A civilization facing unprecedented historical choices about genetics and evolution, to choose a single example, and inventing new ethical or moral standards to deal with such complex issues.

A civilization, finally, that is at least potentially democratic and humane, in better balance with the biosphere and no longer dangerously dependent on exploitive subsidies from the rest of the world. (Toffler, 1980: p. 375).

Clearly, advances and innovations in information technology itself will not decide our future. It is the way technology is used that will decide the quality of life in the years ahead. We must begin now to construct a society that can meet the revolutionary changes of advanced technology. We have a destiny to create in the telematique society.

A telematique society, from the viewpoint of technology, represents no great revolutionary trends on the immediate time horizon. There will be more of the same in many network technologies, but in more enhanced versions. For example, we heard of ATM (Asynchronous Transfer Mode) and ISDN (Integrated Services Digital Network) throughout the 1980s, with implementations in the late 1980s and early 1990s. In the future we will see advanced versions of both ATM and ISDN, and likely in the open-system 'plug and play' mode. ISDN in a historical perspective may be revolutionary in the sense that it will transform an analogue world of telephony into a digital world of computers and telecommunications.

Page 272: Telecommunications and Networks

The replacement of telephones will be accelerated by ATM. Greater use of ATM and ATM-LAN switches will change the narrowband world into a broadband world (with gigabits per second), provide wide-area connections across the enterprise, reduce delays in telecommunications, and increase applications such as interactive video and multimedia. Such multimedia will improve applications such as those using 3D graphics in CAD (Computer Aided Design), medical applications, video-conferencing and entertainment.

Standardization will also help bring more and better applications, more LANs, more users of network computing and more integrated systems. It will also help in the shift from proprietary platforms to standard platforms, and from proprietary protocols and objects to standard protocols and objects. This is part of the shift from proprietary to open systems.

Advanced applications

Many of the applications shown in Figure 21.4 will survive in the future and may well be enhanced. There will be other applications that are innovative and advanced. These may include the digital library, which is not new in the conceptual sense but new in that it has never been fully implemented. Adoption of new technology often takes long. Witness the case of the fax: the first fax message was sent from Lyon to Paris, and it took over 125 years for the fax to be universally accepted. (During 1987 to 1994, in the US alone, there were over 10 million fax messages delivered. This was in addition to the 11.9 billion voice messages in 1994.)

Back to the digital library. One view of the organization of a digital library is shown in Figure 21.5. It shows that the author no longer sends the publisher a typed manuscript but sends it electronically (Path 1-2 in Figure 21.5). The review management process in the publishing house may not change much, though the roles of the author, the publisher, the librarian and the marketing department of a publisher will change. The process of publishing will also change and will no longer be a mechanical process. Instead, once a manuscript is accepted, the publishable material can be available within days (Path 2-4), greatly compressing the time now required for publishing (including a possible integration of multimedia material) and distribution, which in the mid-1990s was anywhere between 6 and 18 months.
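As a rough illustration of Path 1-2, the following sketch uses Python's standard ftplib to upload a manuscript to a publisher's drop-box; the server name, account, directory and file name are all hypothetical.

```python
# A minimal sketch of an author submitting a manuscript electronically
# (Path 1-2 in Figure 21.5). Server, account and file names are hypothetical.
from ftplib import FTP

def submit_manuscript(filename: str) -> None:
    with FTP("ftp.publisher.example") as session:        # placeholder host
        session.login(user="author42", passwd="secret")  # placeholder account
        session.cwd("incoming")                          # publisher's drop-box
        with open(filename, "rb") as manuscript:
            session.storbinary(f"STOR {filename}", manuscript)

if __name__ == "__main__":
    submit_manuscript("chapter21.ps")   # hypothetical manuscript file
```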

The publishing house will be spared many of the publication headaches but will have new ones, such as royalty management (Box 5 in Figure 21.5). This involves not just assessing royalties, but collecting them and distributing them. Collecting royalties will be a problem if the collection has to be done across national borders and the receiving nation is not willing to pay royalties. This is to be expected and is related to the problems of copyright management. Even in 1996, the US had problems with the acknowledgement of copyrights

Figure 21.5 A view of a digital library: the author sends the manuscript electronically (FTP) to the publishing house, whose editors and reviewers handle review management and issue either a rejection notice or an acceptance; accepted material, together with existing documents converted by a scanner, goes into a digital repository and copying facility, which handles copyright and royalty management (with royalty accounting information returned to the author) and responds to readers' access requests and queries. The stages are linked by numbered paths (1 to 6) referred to in the text.

Page 273: Telecommunications and Networks

and the collection of royalties for software and music created by US citizens. The problem is not technological but legal and political. It may well have to be decided through international treaties or through international organizations like the WTO (World Trade Organization).

Meanwhile, there are technical problems with the digital representation of materials available in the library. This will be no problem for future publications, because most of the input of new materials will be in machine-readable form. The problem lies with existing materials that are not in digital form. They have to be converted through scanning (Box 6), which can be a manual process and hence slow and expensive. But libraries are converting their holdings into machine-readable form (Path 1-2). Even some producers of Hollywood films are storing all their new films in digital form so that they can be easily manipulated and transmitted electronically.

Meanwhile, there are technological problems to be resolved at the reader's end. Reading off a screen is not as good as reading from a printed page. The screen is slower to read (20-30% slower), has flickering that may be uncomfortable or injurious to the eyesight, and certainly poses constraints on the location and posture of the reader.

The reader may wish to project the pages of the book from a digital TV onto the wall (or ceiling) and read while lying in bed! What if the reader wants a 600 page book and does not have a fast laser printer at home? That reader may well have to go to the local distribution centre that may replace what was once a local library. Or maybe there is a flat-screen display that is lighter than a heavy book and can be read while lying in bed.

These alternatives for the representation of material are relevant not only to the person at home but also to the student in school, college and university. Students, teachers and reference librarians will have access through the Internet to materials and people (through chat sessions) around the world. Video monitors will replace blackboards. This empowers the student with control over what can be accessed, which may increase motivation for self-directed learning. Such access will also 'level the playing field' between the rich urban students and the poorer rural population, offering equal access to all educational resources (materials and teachers). There will, however, be the problem of teachers having to learn about the potential of the new technologies made possible through telecommunications and having easy access to such technologies. Also, technologies may well change and become cheaper. The popularity of the Internet has spurred the computer industry to produce computers that will allow easy access to the Internet (including e-mail) in addition to word processing. Such computer systems could cost half of what they cost in the mid-1990s, thereby greatly increasing access to networks.

Many of the media currently available, their uses and their users (customers) are listed in Figure 21.6. Other services (and media) not listed may well emerge through the creativity of the

Figure 21.6 Consumers, networks, media and services: consumers (home PC users, businesses, students, researchers, office and government workers, teleworkers) reach services (educational institutions, museum access, the digital library, religious groups, on-line information, the Internet, electronic shopping, electronic news, financial, medical and other services) over networks (LAN, MAN, WAN) and through media such as the computer, television, digital TV, the cellular phone, the digital notebook, video-on-demand, movies-on-demand, e-mail and fax, hypermedia, collaborative work and intelligent agents.

Page 274: Telecommunications and Networks

vendor, the user or the entertainment industry. But there seems to be a distinct trend that future services will be more robust, digital, user-friendly and multimedia. We will see computer technology and telecommunications merging information into a digital stream of sound, images and words. In addition, there will be use of animation. This can be very useful in both industrial and educational institutions, as for example in the simulation of environmental conditions or of a model.

The mapping of the media to the end-user and its use will be determined by the end-user, the industries involved (especially the computer and entertainment industries) and the national governments that have to provide the telecommunications infrastructure. For the exchange of information across national borders we need the telecommunications infrastructure in other countries. Some countries will try to mix economic and information openness with authoritarian policies of restriction and control. Such policies may enjoy short-term success, but in the long run free-flowing information nurtures democracy, as in Taiwan and Chile. Sometimes a country may restrict information, as was the case in 1995 when the German government prohibited CompuServe from transmitting information with pornographic content. CompuServe was then (December 1995) unable to control access for selected countries and so had to pull some 200 Internet sites from its approximately two million users. This made users in the US very irate, for they felt that their freedom of expression was being controlled by another country. Such issues of freedom of speech and expression across networks (national and global) are important and remain controversial, as are the issues of security and privacy that remain unresolved.

Future systems will be more OLRT (On-Line Real-Time) and interactive. Systems will be intelligent, using techniques of AI (Artificial Intelligence). Systems will be more integrated, not just within an organization and corporation, but also within a city, a geographic region and maybe even an entire society.

Perhaps the most exciting and controversial development in the future of telecommunications and networking is the information superhighway, which in the US is designed to connect every home, office and school with high-speed data and multimedia links. Notice that it is an infrastructure that will bring the links to the nodes rather than the nodes having to go looking for the link. The linking may well be done by cable or telephone carriers and paid for by liberal depreciation allowances, other tax breaks and payment by advertisers. The corporate network manager can assemble a network at bulk rates and offer internal services at a lower rate than the public provider, but within the cross-subsidies allowed by governments. This should drop the cost of communications and increase not only corporate communications, but also make outsourcing of computing services more economically feasible. In the home, there will be two-way video terminals with multimedia capabilities and interactive services including movies and video-on-demand.

The superhighway will not raise many technological problems, but it will raise social, moral and ethical ones. A taste of such problems arose in 1994 at Carnegie-Mellon University, where there were over 6.4 million downloads of material that had sexual content, including pictures of humans having sex with animals. This type of problem raises many questions: How does one balance openness with good taste? If pornography is allowed, can we also allow, say, a manual on suicide? Who is responsible for slander and libel being transmitted on the network? Who is responsible for the content of what is transmitted? If a bookshop is not responsible for the content of all the books sold, and the telephone carrier is not responsible for the content of the telephone message, is the carrier of transmission responsible for what (pornography, libel or slander) is transmitted? Or is the sender responsible? Or is the receiver responsible? Also, what if the message were to cross national borders, say from the liberal Netherlands to conservative Britain? Should governments intervene by legislation, or subsidize technology that gives parents and managers better control of the 'content' being transmitted? These are questions that will eventually have to be addressed if telecommunications is to be free and unfettered. One attempt at this was the Telecommunications Act of 1996 in the US, which is being contested in court. Even if the Act is held to be legal, the question arises: can the US enforce its laws over the international Internet?

Predictions often go wrong!

In evaluating the predictions made above, the reader should remember that IT is notorious for its overoptimistic predictions, especially the macro predictions of 'megatrends', 'future shocks', 'third waves', a 'leisure society' and an 'information

Page 275: Telecommunications and Networks

society’. Computers have permeated social andbusiness life but have not transformed it as greatlyas predicted. We do not have the ‘electronic cot-tages’, the ‘cashless society’, or even the ‘automatedfactories’ and ‘electronic offices’ that many hadpredicted for the 1990s. Some predictions are toolong range to be evaluated today but the dates ofsome predictions have already passed without thepredictions being achieved. For example, the 32hour work week in the US resulting from com-puterization and automation by 1985 and a retire-ment age of 68 with more time for leisure predictedin 1967 is far from being a reality. In the 1990s,the average work-week was above 40 hours a week(including travel time) and the average worker inthe US (and in many other countries) appeared tobe working longer (and harder) than ever.

In the computer applications area we have also been overoptimistic in our predictions. Thus in manufacturing there were predictions that in the US there would be 250 000 robots or more by 1990, when a survey shows that in 1990 there were only 37 000 robots. Also, only 11% of machine tools in the metalworking industry were NC (Numerically Controlled), and 53% of the factories surveyed did not have even one automated machine. FMS (Flexible Manufacturing Systems) and MAP (Manufacturing Automation Protocol) have been slow to catch on, and CIM (Computer Integrated Manufacturing) is still only a dream.

Given that many IT predictions have gone wrong, one should forgive those who are sceptical of the latest in IT predictions: the coming of the information highway. Is this more hype, or is it the coming of a multimedia revolution and the integration of computers and communications? Will the information highway offer interactive programming of over 500 channels and facilitate home-shopping? Who will take the risks involved and pay for all the telecommunications infrastructure necessary? Who will assure the consumer that there will be no open season for viruses, software glitches, hardware 'crashes', system incompatibilities, computer theft and the invasion of privacy? Will there be consumer acceptance and a willingness to pay for all the new additional services offered? The proponents of the information highway argue that all problems and obstacles will be overcome in good time. They argue that the real question is not whether we will have an information highway but when we will have it. They remind us that most revolutionary innovations took years before they

gained consumer acceptance. As examples, they recall that the radio took 11 years, colour TV took 20 years and cable television took 39 years before they became part of our daily lives.

Predictions of a ‘cashless’ society and a ‘paper-less’ office have been Utopian rather than realistic.Despite the increase in the use of EFT (Elec-tronic Fund Transfer), ATM (Automated TellerMachines) and the use of credit cards, there hasbeen a rise in the use of paper. In the US, paper con-sumption rose 320% over the last three decades. Astudy Business Week (June 3, 1991) estimated that95% of the information in business enterprises isstill in paper form and that only 1% of all informa-tion in the world is stored on computers.

Why have our predictions for IT been so wrong, and are there any guidelines to help us prevent such errors in prediction? One problem is that we have often relied heavily on estimates made by the vested interests of vendors and inventors, and on simple straight-line trend extrapolation, and have not paid adequate attention to human resistance, to customer acceptance of small marginal changes in known products rather than large changes in new products, and to the long time that it takes for IT to diffuse among customers and society.

Another problem is that we do not always realistically estimate the complexity of information systems. In the UK, the cost of failed software in the early 1990s was conservatively estimated at $900 million per year (Forester, 1992: p. 7). In the US, at General Motors, millions of dollars were lost in premature automation and robotization of the production line. The Bank of America abandoned one computer system after an investment of 5 years and $60 million. Allstate Insurance had the cost of its information system increase from $8 million to $100 million and its completion date extend from 1987 to 1993.

It is also difficult to predict the unintended and unplanned consequences of computing, such as the high level of computer piracy, the large number of successful 'hackers', and the extensive misuse of and unauthorized access to electronic databases, resulting in computer theft and the invasion of privacy.

One business strategy for avoiding, or at least minimizing, the danger of unexpected impact is to have an IT watcher. This may be a full-time person or someone assigned the duties of reading the literature, attending professional conferences and evaluating IT as it may impact the organization. In doing so, the technology watcher must

Page 276: Telecommunications and Networks

distinguish between technological forecasts and market trends, evaluate the sources of information, identify opportunities for potentially successful applications in the organization, give innovation time to diffuse and gain consumer acceptance, anticipate problems of implementation and the availability of adequate infrastructure, and, finally, assess (with the help of other analysts, management and end-users) consumer and organizational acceptance of innovation. Only by evaluating innovation in terms of both its technological potential and its human implications can an organization exploit IT and eliminate, or at least reduce, the consequences of incorrect predictions.

Summary and conclusions

Networking in telecommunications can be seen as a recent stage in the evolution of computer technology. This is shown in Figure 21.7. Another view of the evolution is to look at it as another stage in computing (Figure 21.8). We started with centralized computing in the 1960s, went to shared and distributed computing in the 1970s, to personal computing in the 1980s, and to plug-and-play, cooperative processing and network computing in the 1990s. This evolution is shown in Figure 21.8 with the corresponding hardware in use at the time. In either the technology view or the computing evolution view, we have not reached the end. We can expect more innovations in technologies and their application in collaborative computing and plug-and-play computing, where we are no longer concerned about compatibility of hardware and operating software or even programming languages. Whether the message be in French or Arabic, whether it be in text or pictures, as long as it is enveloped in a simple language like HTML it will be recognized by all computer systems around the world, as long as they are connected to the Internet.
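As a small illustration of this 'enveloping', the sketch below (Python standard library only; the file name, title and text are invented) wraps a message in a minimal HTML page that any connected system with a browser can render, whatever language the text is in.

```python
# Wrapping arbitrary text in a minimal HTML envelope. Any system with a
# web browser can render the result, regardless of the language of the text.
import html

def envelope(title: str, body_text: str) -> str:
    """Return a minimal HTML page containing the given title and text."""
    return (
        "<html><head>"
        '<meta charset="utf-8">'
        f"<title>{html.escape(title)}</title>"
        "</head><body>"
        f"<p>{html.escape(body_text)}</p>"
        "</body></html>"
    )

if __name__ == "__main__":
    page = envelope("Greetings", "Bonjour le monde")  # French text as an example
    with open("message.html", "w", encoding="utf-8") as f:
        f.write(page)
```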

The Internet in recent years has been called the 'throbbing new center of the computing universe . . . fostering the illusion that all of the Net's computers have been stitched together into one' through hyperlinks, making related information all around the world available at the click of a mouse. Businesses around the world are waiting to use the 'universal portal' and the Internet platform, if only it were secure for the monetary transactions of international commerce. But there is a bigger problem, that of financing it, managing

Figure 21.7 Evolution of computing technology, from the 18th century to the 20th: the abacus and the calculator (mechanization of arithmetic operations), program-controlled electronic equipment, stored-program von Neumann machines, and non-von Neumann fifth-generation computer systems (parallel processing, AI technology, knowledge processing), accompanied by LANs, multimedia and networking.

Page 277: Telecommunications and Networks

Figure 21.8 Evolution in computing: centralized computing on mainframes in the 1960s; shared and distributed computing on minis in the 1970s; personal computing on PCs and workstations in the 1980s; and network computing, plug-and-play and collaborative computing in the 1990s on client-server systems, the Internet, intranets and Java.

it and controlling it once the demand increases and the financial support from America dries up. In the resulting reorganization, the Internet may lose some of its character of being an electronic democracy, a freewheeling on-line public discussion place, a workplace, playground and social club all at once. This may be the time when we say good-byte to the freedom of the flat fee rate and the downloading of free software and unrestricted and fast e-mail. The use of computers in many offices and homes around the world may never be the same.

What is currently in use cautiously or experimentally may soon become necessary and a way of life. From a technological point of view, this will include the common use of hand-held computers connected by international networks to anyone around the world, high-resolution flat screens, continuous speech recognition, speech synthesis, machine translation, use of natural language for computer interfaces, intelligent peripherals and secure user identification, as well as efficient and friendly end-user computing. In terms of applications, the future may well see our shopping, banking and reading the newspaper from our computer at home; replacing library visits by using interactive videotext; substituting distance learning and digital libraries for education and training at fixed sites; using e-mail instead of visiting friends; using EFT (electronic fund transfer) instead of cash; and even travelling to far-off places by virtual reality. And doing much of this in cyberspace.

The future of IT will see much integration. The integration will be partly technological: the integration of text, sound, video, graphics and animation. This deployment of multimedia technology will be on a platform of interactive processing. The other dimension of integration is largely organizational and may come from strategic partnerships, joint ventures, acquisitions, or hostile takeovers. Such conglomerates are designed to bring within one firm's jurisdiction the necessary informational infrastructure, which includes technologies (like digital telecommunications); a very large client base, such as television subscribers (including cable in the US, which is one-way and high capacity) and telephone subscribers (two-way but low capacity); content programming (including education and movies); software for program navigation; and capacity (including satellite and fibre optics transmission) and switching abilities to deliver full information services to all customers.

Communications will take place in cyberspace on an electronic highway, initially with an NII (National Information Infrastructure) and later an III (International Information Infrastructure). Access to information will be available to businesses, individuals, libraries, educational and training programs, and databases. Furthermore, this access may soon become global. Access to the convergence of computer, communication structure and

Page 278: Telecommunications and Networks

programming media will no longer depend on location but can take place anywhere, from one of many devices, and maybe even in a visual environment and in virtual reality. Computing will not only be democratized but will offer the consumer more choices and will be end-user friendly. The speed of technological and organizational change will require adjustment of consumer behaviour and may still be pro-competitive. If there are important restraints (actual or perceived) on competition, then there will most likely be regulation by governments. In any event, our telematique future will require a redefinition of work, leisure, home and community. Along the way, we may well edge towards a cashless and paperless society. We may well see a blend of national, regional (like the government agencies' technopolis strategy) and private networks, with governments agreeing to the interoperability of national networks, providing open access to corporate players and universal service to consumers, and promoting cultural and linguistic diversity in traffic. Also desirable would be for national governments to provide ramps to the information superhighway and to enforce international standards for the protection of intellectual property and data, as well as to enhance the transborder flow of data and information.

Not all the predictions made in this chapter will come to pass. Some of the goals of the fifth and sixth generation computers may prove to be technologically unfeasible, or too expensive to implement. Limited resources, government regulatory schemes or interference (for example, unfavourable tax policies), the high risk of capital investments, product development priorities in other fields, lack of markets, legal restrictions and user resistance are factors that may affect the speed and direction of change.

The high rate of change in our information age, with its many causes and dismaying consequences, confounds our ability to predict the future. However, we must be aware of the caution by the Librarian Emeritus of the US Library of Congress, Daniel Boorstin: 'We have created and mastered machines before realizing how they may master us.'

Since business and IT managers often serve in a leadership role in their communities as well as at work, they will have a prominent role in shaping our telematic society. Planning must begin now for the computerized society to come. As stated by Montague and Snyder:

Many deplore the computer and some even fear it as more monster than machine. Whatever we think of it, however, we must adjust to it. This does not imply resignation, but rather that we must understand the true nature of this latest of man's inventions and learn how its powers can be combined with our own abilities to be used to the best advantage for humanity. (Montague and Snyder, 1972, pp. 1-2).

The author hopes that this book will help you to prepare for the challenges and opportunities ahead.

Case 21.1: Amsterdam’s digitalcity

The digital city of Amsterdam is a city within a city, replete with cafes, kiosks, town squares, billboards, house rentals, an information centre and offices. Founded in 1993, 'the digital city has become a model virtual community for the digitally and politically active, inspiring analogous projects across the Netherlands and Europe' and has a 'population' of 30 000.

The city’s manifesto includes the simple philos-ophy: ‘Every plugged-in Amsterdammer should beable to click into the political domain to receiveand exchange information on the latest govern-mental developments, be it about local or nationalelections, political party platforms, or interest-group agendas. The idea was to shed some lighton the minutes of city council meetings and offi-cial policy papers, which up until then had beenburied in the cabinets.’

Source: International Herald Tribune, Oct. 9, 1995, p. 15.

Case 21.2: Spamming on the Internet

Spamming is the flooding of e-mail on the Internet with undesired and unauthorized information. Early spamming occurred when advertisers sent thousands of advertisements to addresses that did not want those messages. This type of spamming was controlled by the informal 'netiquette' of the Internet and was policed by members who spammed the advertiser to the point that the

Page 279: Telecommunications and Networks

Internet provider for the advertiser would cut the advertiser off the Internet.

A new twist to spamming is where people are targeted, largely for political purposes, by being put on mailing lists which then spam the targeted person with unexpected and unwanted messages. This crowds out legitimate e-mail messages and is a great inconvenience. One such targeted person was Elmer-Dewitt, Senior Editor of Time magazine. He found himself put on 106 mailing lists that generated about 50 messages a day each. Mr Elmer-Dewitt painstakingly unsubscribed himself from these 106 mailing lists, only to find that the next day he was on 1700 more lists.

The Time editor is not the only one spammed. The President of the US was spammed at his White House e-mail address. The White House called in the Secret Service, which managed to reduce the flow to 1200 messages a day.

Surely there will be a solution to such spamming problems, but just as surely there may well be other twists to spamming, the interruption of a valuable e-mail service rendered on the Internet. Every innovative application of telecommunications seems to pose a problem that calls for an innovative response.
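One possible innovative response, sketched below in Python with invented message data and list names, is a client-side filter that keeps personal mail and mail only from lists the recipient actually subscribed to.

```python
# A minimal sketch of client-side spam control: keep personal messages and
# messages from lists the user actually joined; discard the rest.
# The message data and list names are invented for illustration.

subscribed_lists = {"networks-digest", "library-news"}

inbox = [
    {"from": "colleague@example.edu", "list": None,               "subject": "Meeting"},
    {"from": "noreply@list.example",  "list": "mega-politics-42",  "subject": "Act now!"},
    {"from": "digest@list.example",   "list": "networks-digest",   "subject": "Weekly digest"},
]

def is_wanted(message: dict) -> bool:
    """Accept personal mail and mail from lists the user subscribed to."""
    return message["list"] is None or message["list"] in subscribed_lists

kept = [m for m in inbox if is_wanted(m)]
for m in kept:
    print(m["from"], "-", m["subject"])
```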

Source: Time, March 18, 1996, p. 77.

Case 21.3: Minitel, its past and its future

Around 1980, France introduced Minitel terminals in Paris and other selected regions. The Minitel was connected to a message-switching network called Transpac, which was integrated with the telephone system. Analysts in Paris calculated that it would be cheaper to give every household a free computer terminal with real-time access to the telephone directory than to annually update the printed telephone directory. Over the years, the price of the terminal went up and other related investments came to over 56 billion francs. This allowed over 6.5 million homes with Minitel terminals to be connected to the nationwide teletext system. There were 25 000 providers of on-line services, with 2 billion calls logging over 110 million hours of connect-time. France seemed poised to take the lead in the race to develop the Information Society but never capitalized on its technology. It was unable to sell its system to other countries. 'We tried', said the France Telecom spokesman. 'But we could never find the same ingredients in other countries that we had here.' These ingredients were a centralized government with the political will to introduce a conceptually advanced technology, as well as a monopolistic PT&T for all telecommunications.

France has concentrated on local services, which included a credit-card reader that makes teleshopping easy and safe because the verification system is at the terminal and does not have to be sent down a telecommunications line. Besides teleshopping, users were able to access stockmarket prices and perform banking transactions both nationally and internationally. The system also has the 'Minitel Rose', which is the equivalent of 'alt.sex' on the Internet. But meanwhile the Internet was growing beyond anyone's expectations and France was being left behind. Between 1992 and 1995, the Internet grew 50 000 times faster in the US than in France. Minitel's technology design was rooted in the 1980s, though Minitel's concept of letting the program and data reside in cyberspace and offering end-users a cheap terminal interface was conceptually advanced compared to other approaches at the time. Henri Gourald, Digital Corporation's Internet Technology Manager, reminds us: 'The Minitel is both a brake and a boon. Don't forget that it has installed an electronic commerce ethos in 25% of French households.'

Today, however, Minitel’s hardware is slow andmonochrome text-based with rudimentary graph-ics as compared to the faster coloured multimediaability to click around the world afforded by theInternet.

France has now recognized the potential of the Internet and is planning to catch up. According to Francois Fillon, Minister of Technology, the principal objective is 'to make access to the Internet possible for all French citizens, at a price which is attractive and the same anywhere in France.' It is expected that France Telecom will be able to meet the objective by routing Internet traffic along the same Transpac network developed for the Minitel. Terminals will still come free or at a nominal rent, and access to databases, such as a newspaper database, will be charged at around 9 francs a minute, with the bill simply added to the telephone account, retaining the simplicity and reliability of the Minitel system.

Source: International Herald Tribune, Jan. 6, 1996, pp. 1, 8; and Information Week, October 2, 1995, p. 56.

Page 280: Telecommunications and Networks

Supplement 21.1: Growth in the percentage of phone lines connected to digital exchanges in Europe in the 1990s

Country            End of 1994    End of 1999 (predicted)
France                  88              100
Germany                 40               73
Italy                   64               93
Netherlands             62               84
Spain                   50               74
Sweden                  63               96
UK                      81               88
Rest of Europe          53               74
Western Europe          62               86

Source: Dataquest, May 1995.

Supplement 21.2: World-wide predictions for 2010 compared to 1994

                               1994           2010
Wired telephone lines:         607 million    1.4 billion
Wireless telephone lines:      34 million     1.3 billion
Number of PCs:                 150 million    278 million
Number of desktop computers:   132 million    230 million
Number of mobile computers:    18 million     47 million

Source: Business Week, special issue on 21st Century Capitalism, 1994, p. 194.

Bibliography

Brusseis, J.O.J. (1995). It's a wired, wired world. Time, Spring, pp. 80ff.
Brody, H. (1995). Internet@Crossroads.$$$. Technology Review, 98(3), 24-31.
Carlyle, R.M. (1987). Towards 2017. Datamation, 33(18), 142-154.
Coleman, K. (1993). The AI market in the year 2000. AI Expert, 8(1), 34-44.
Evans, J. (1994). Where the hackers meet the rockers. Computing Now!, 12(3), 10-13.
Forester, T. (1992). Megatrends or megamistakes? Whatever happened to the information society? Computers and Society, 22(1-4), 2-11.
Fortune, 128(7), 1-162. Special issue on '1994 Information Technology Guide'.
Galliers, R. (1992). Key information systems management issues for the 1990s. The Journal of Information Systems, 1(4), 178-180.
Gilder, G. (1993). The death of telephony. Economist, 328(7828), 75-78.
Hansson, A. (1995). Evolution of intelligent network concepts. Computer Communications, 18(11), 793-809.
Heilmeier, G.H. (1993). Strategic technology for the next ten years and beyond. IEEE Communications Magazine, 31(3), 30-34.
Huws, U. (1991). Telework: projections. Futures, 23(1), 135-157.
Jobs, S. (1996). The next insanely great thing. Wired, 4(2), 102-107, 158-163.
Keyes, J. (1991). AI on a chip. AI Expert, 6(4), 33-38.
Knorr, E. (1990). Software's nest ware: putting the user first. PC World, 8(1), 134-143.
Levy, S. (1994). E-money: that's what I want. Wired, 2(12), 174-177, 213-215.
Lillis, N. and Herman, J. (eds.) (1991). Supplement on 'The internetwork decade'. Data Communications, 16(1), S2-S29.
Lowry, M.R. (1992). Software engineering in the twenty-first century. AI Magazine, 13(9), 71-87.
Malhotra, Y. (1994). Controlling copyright infringements of intellectual property: Part 2. Journal of Systems Management, 45(7), 12-17.
Montague, A. and Snyder, S. (1972). Man and the Computer. Auerbach.
Motiwala, J. (1991). Artificial intelligence in management: future challenges. Transactions on Knowledge and Data Engineering, 3(2), 125-159.
Mujjender, D.D. (1989). Fifth generation computer systems. Telematics India, November, pp. 25-31.
Nash, J. (1993). State of the market, art, union and technology. AI Expert, 8(1), 45-51.
Niederman, F., Brancheau, J.C. and Wetherbe, J. (1991). Information systems management issues for the 1990s. MIS Quarterly, 15(4), 475-502.
Nora, S. and Minc, A. (1980). The Computerization of Society. MIT Press.

Page 281: Telecommunications and Networks

Reed, S.R. (1990). Technologies in the 1990s. Personal Computing, 14(1), 66-90.
Rheingold, H. (1991). Virtual Reality: The Revolution of Computer Generated Artificial World and How it Promises and Threatens to Transform Business and Society. Summit Books.
Rockhart, J.F. and Short, J.E. (1989). IT in the 1990s: managing organizational independence. Management Review, 30(2), 7-12.
Rowe, A.J. and Watkins, P.R. (1992). Beyond expert systems: reasoning, judgement, and wisdom. Expert Systems with Applications, 4(1), 1-10.
Postman, N. (1993). Technopoly: Technology and the Surrender of Culture. Knopf.
Schroth, R. and Mui, C. (1995). Ten major trends in strategic networking. Telecommunications, 29(10), 33-42.
Schnaars, S. (1989). Megamistakes: Forecasting and the Myth of Rapid Change. Collier Macmillan.
Seitz, K. (1991). Creating a winning culture. Siemens Review, 58(2), 37-39.
Spectrum, 32(1) (1995), 26-51. Special issue on 'Technology, 1995'.
Spence, M.D. (1990). A look into the 21st century: people, business and computer. Information Age, 12(2), 91-99.
von Simpson, E. (1993). Customers will be the innovators. Fortune, 128(7), 105-107.
Toffler, A. (1980). The Third Wave. William Morrow.
Tatsuno, S. (1986). Technopolis Strategy: Japan, High Technology, the Control of the Twenty-First Century. Prentice-Hall.
Wilkes, M.V. (1996). Computers then and now: Part 2. Proceedings of the 1996 Computer Conference, pp. 115-119.
Yoon, Y. and Peterson, L.L. (1992). Artificial neural networks: an emerging new technique. Database, 23(1), 55-58.

Page 282: Telecommunications and Networks

GLOSSARY OF ACRONYMS AND TERMS IN TELECOMMUNICATIONS AND NETWORKING

Computing is notorious for its jargon and acronyms, and telecommunications and networking should take more than their share of the blame. They have introduced many new terms and acronyms to the computing vocabulary. Some of these are innocent-looking words used in daily life that mean something different in networking, like backbone, bonding, bridge, bus, cloud, flag, host, Java, layer, open, master-slave, packet, peer, platform, robust, token, transparent and virtual.

There are also terms that are downright confusing because they have already been used in the computing vocabulary. Examples are SDLC, which here does not mean Systems Development Life Cycle but Synchronous Data Link Control, and ATM, which does not mean Automatic Teller Machine but Asynchronous Transfer Mode. Some terms are duplicates even within telecommunications, like MHS, which stands for Message Handling Service and is also a product by Novell, a telecommunications product vendor. Some words have different meanings in computer science, like encapsulation in OO (object-oriented) methodology and in networking. Similar terms that are confusing are T-1 and T1. To avoid confusion, most of these terms are defined in the text. All are defined in this glossary.

There are a number of books and dictionaries on computing which include telecommunications and networking terms. There are also books on acronyms alone, like the 1986 oversized book Computer & Telecommunication Acronyms by Julie E. Towell and Helen E. Sheppard.

Page 283: Telecommunications and Networks

ACRONYMS IN TELEPROCESSING AND NETWORKING

ANSI American National Standards Institute.
API Applications Programming Interface.
APPN Advanced Peer-to-Peer Networking.
ASCII American Standard Code for Information Interchange.
ATM Asynchronous Transfer Mode.
B channel Bearer Channel.
B-ISDN see BISDN.
BBS Bulletin Board System.
BISDN Broadband ISDN.
BRI Basic Rate Interface.
BSI British Standards Institute.
CAT Computerized Axial Tomography.
CBDS Connectionless Broadband Data Service (SMDS in the US).
CCIR International Radio Consultative Committee.
CCITT International Telegraph and Telephone Consultative Committee, now ITU.
CD-ROM Compact Disk Read-Only Memory.
CDDI Copper Distributed Data Interface.
CDMA Code Division Multiple Access.
CDPD Cellular Digital Packet Data.
CEN European Committee on Standards.
CENELEC European Committee for Electrotechnical Standardization.
CEPT Conférence Européenne des Administrations des Postes et des Télécommunications.
CIX Commercial Internet Exchange.
CNMA Communications Network Manufacturing Association.
CSMA Carrier-Sense Multiple Access.
CT-2 Cordless Telephone, 2nd generation.
DIN Name of the National Standards Organization in Germany.
E-mail Electronic mail.
ECMA European Computer Manufacturers Association.
EDH Electronic Data Handling.
EDI Electronic Data Interchange.
EDIFACT EDI for Administration, Commerce and Transportation.
EIUF European ISDN Users Forum.
ESPRIT European Strategic Programme for Research and Development in Information Technology.
ETSI European Telecommunications Standards Institute.
EUF European ISDN Users Forum.
EuroCAIRN European Cooperation for Academic and Industrial Research Networking.
FAQ Frequently Asked Questions.
FDDI Fibre Distributed Data Interface.
FLOPS Floating-Point Operations per Second.
FTP File Transfer Protocol.
GUI Graphical User Interface.
HTML HyperText Markup Language.
HTTP HyperText Transfer Protocol.
Hz Hertz, cycles per second.
ICMP Internet Control Message Protocol.
IEC International Electrotechnical Commission.
IEEE Institute of Electrical and Electronics Engineers.
IP Internet Protocol.
IPX Internetwork Packet Exchange; a protocol for transmitting and moving information over a network.
ISDN Integrated Services Digital Network.
ISO International Organization for Standardization (in Switzerland).
ISP Internet Service Provider.
ITU International Telecommunications Union.
ITU-T ITU-Telecommunications (standards).
IXC Inter-eXchange Carriers.

Page 284: Telecommunications and Networks

JTC1 Joint Technical Committee 1.
Kbps Kilobits per second.
LAN Local Area Network.
LEC Local Exchange Carriers.
LINX London INternet eXchange.
LEO Low Earth Orbit satellite.
MAN Metropolitan Area Network.
Mbps Megabits per second.
MHS Message Handling Service.
MIME Multipurpose Internet Mail Extensions.
MIPS Millions of Instructions Per Second.
MPEG Motion Picture Experts Group.
MRI Magnetic Resonance Imaging.
NII National Information Infrastructure (in the US).
NNTP Network News Transfer Protocol.
NOS Network Operating System.
OSI model Open Systems Interconnection model developed by the International Organization for Standardization (ISO).
PAL Phase Alternating Line.
PBX Private Branch Exchange.
PC Personal Computer, also referred to as a 'home computer'.
PCN Personal Communication Network.
PCS Personal Communications Service.
PIPEX Public IP Exchange.
PoP Point of Presence.
PnP Plug-and-Play.
PPP Point-to-Point Protocol.
PRI Primary Rate Interface.
PTM Packet Transfer Mode (by IBM).
PTT Post, Telegraph and Telephone administration.
PVC Permanent Virtual Connection.
RACE R&D in Advanced Communications Technology (in Europe).
RBOC Regional Bell Operating Companies.
RPOA Regional Private Operating Agency.
SDH Synchronous Digital Hierarchy; a framing format chosen for B-ISDN, and the European counterpart of SONET in America.
SDLC Synchronous Data Link Control.
SGML Standard Generalized Markup Language.
SIO Scientific and Industrial Organizations.
SLIP Serial Line Internet Protocol.
SMDS Switched Multimegabit Data Service (same as CBDS).
SMR Specialized Mobile Radio.
SMTP Simple Mail Transfer Protocol.
SNA Systems Network Architecture.
SNMP Simple Network Management Protocol.
SONET Synchronous Optical Network.
STM Synchronous Transfer Mode.
SVC Switched Virtual Connection.
T-1 Transmission link for distances up to 50 miles at 9.6 kbps to 1.544 Mbps.
T1 Committee on Standards in the US.
T3 Transmission link for distances up to 500 miles at speeds up to 44.736 Mbps.
TCP/IP Transmission Control Protocol/Internet Protocol.
TERENA Trans-European Research and Education Networking Association.
TTC Telecommunications Technology Committee (in Japan).
URL Uniform Resource Locator.
V.xx Designation for standards for data transmission over the telephone network (e.g. modems). The xx are numerals, each for a different standard.
VERONICA Very Easy Rodent-Oriented Net-wide Index to Computer Archives.
WAIS Wide Area Information Service.
WAN Wide Area Network.
Web Short for WWW.
WWW World Wide Web.
X.11 The dominant windowing system on the Internet.
X.25 A packet switching protocol defined by CCITT.
X.400 A common protocol for standard mail messaging.

Page 285: Telecommunications and Networks

GLOSSARY

access provider a company that sells access to the Internet.
access time the interval of time between the instant at which a call for data is initiated and the instant at which it is delivered.
agent programming code designed to handle background tasks and perform actions when a specific event occurs; see daemon.
analogue a transfer method that uses continuously variable physical quantities for transmitting data and voice signals over conventional lines.
asynchronous not derived from the same clock, therefore not having a fixed timing relationship.
asynchronous transfer mode a packet-oriented transfer protocol that is asynchronous.
Archie a system on the Internet that allows searching of files on public servers by anonymous FTP.
B-channel Bearer channel. A circuit-switched digital channel that sends and receives voice and data signals at speeds of 64 kbps.
backbone the top level in a hierarchical network. Stub and transit networks which connect to the same backbone are guaranteed to be interconnected.
backbone network a central network to which other networks connect.
backplane a pathway in which electrical signals travel between devices, conceptually similar to a bus.
bandwidth the information-carrying capacity of a telecommunications medium; also the upper and lower limits of a frequency range available for transmission (e.g. the bandwidth would be 4000 Hz for a range of 400 to 4400 Hz).
bandwidth-on-demand a contract for bandwidth as it is needed, not a contract for fixed capacity (see SMDS in the US and CBDS in Europe).
baud a measure of the speed with which a modem transmits data; named after Emile Baudot, a telecommunications pioneer.
baud rate the speed of a data channel expressed in bits/second.
beta as in a beta test, the preliminary testing stage.
Bitnet is a subset of the Internet.
broadband a method of transmitting large amounts of data.
brouter a bridge and a router.
browser a program that allows you to download and display documents from the World Wide Web.
bulletin board a system that allows users to post messages and receive replies electronically, as on the Internet or an information service provider.
bus a configuration in which all nodes are connected to one main connection line.
byte 8 bits.
cable TV a broadcasting system that uses giant antennae.
cable a flexible metal or glass wire, or group of wires.
carrier a public transmission system in the US and Canada, corresponding to the PT&T in Europe.
cell term used in switches for a fixed-length unit of 8 octets.
channel a path between sender and receiver that carries a stream of data; also a pathway between two computers, or between a computer and a control unit or devices.
client a user system accessing services, as in a 'client-server system'.
cloud the boundary of a packet switching service. Often used as a symbol for a network.
cluster addresses represent a domain as a single address rather than a set of individual addresses.
codec short for coder/decoder in multimedia; the equivalent of a modem for non-multimedia transmission.

Page 286: Telecommunications and Networks

compatibility the degree to which devices or programs understand the same commands, formats and languages.
configuration shape or arrangement of parts.
connection-less service a type of service in which no particular path is established for the transfer of information.
connection-oriented service a type of service in which, for any given call or session, information traverses only one path from sender to receiver.
contention arises when two or more devices attempt to use a single resource at any one time.
connection type specifies whether an application has a long-term relationship (permanent connection), a bounded short-term relationship (switched connection), or a boundary-less connection (connection-less).
cyber see cyberspace and cyberpunk, and then deduce the meaning of cyberculture, cyberworld, cybersex, etc.
cyberpunk coined by Gardner Dozois to evoke the combination of anarchy and high tech.
cyberspace a term coined by William Gibson, a science fiction writer, to represent the counterculture outlaws who survived on the edge of the information highway.
D channel a digital channel that carries control signals and customer data in a packet-switched mode.
daemon a program that runs in the background on a Unix workstation waiting to handle requests. It is usually an unattended process initiated at startup. See agent.
dark fibre part of bandwidth (fibre capacity) held in reserve.
Dante an organization working towards a seamless integration of all Europe's networks. Dante's services complement those provided by the national research networks in Europe.
dedicated line same as leased line.
Demon a UK-based Internet access provider.
digital the representation of data in on/off signals of 0's and 1's. Digital transmission lines offer faster speeds and more flexibility in transmission than analogue.
domain a part of the naming hierarchy on the Internet; an organizational area.
dot looks like a period but is used in a telecommunications address as a separator of the address subfields.
download transfer of data, often from larger to smaller computers.
end-to-end delay the time span between the generation of a data unit at its origin and its presentation at the destination.
e-mail electronically transmitted messages.
encapsulation inserting a frame header and data from a higher level protocol into a data frame of a lower level protocol.
engine the portion of a program that determines how the program manages and manipulates data; also called a processor.
Ethernet a network cabling and signalling scheme using a bus architecture.
filtering elimination of unwanted network traffic.
finger a service that provides data about users logged on the local system or on a remote system; used for security purposes.
firewall software that controls network traffic through a node.
frame relay a networking technology that exploits high-quality fibre optics to deliver data up to 10 times faster than today's packet switching.
gateway a communication program (or device) which passes data between networks having similar functions but dissimilar implementations.
gigabyte a billion bytes.
Gopher a tool on the Internet that searches files of a hierarchical nature from a menu.
hacker describes a skilled programmer who has a mischievous bent and is likely to intrude into computer files without authority to do so.
home-page an explanation of a database on the Internet; it may include a description of the content and an explanation of how to access and use the database on the server.
host a system that has at least one Internet address associated with it; a large computer that serves other computers or peripherals.
hub a central switching device for communication in a store-and-forward mechanism. It can regenerate signals as well as monitor them.
hypermedia video and text files transmitted by way of the Web.
hypertext a link between one document and other, related documents elsewhere in a collection.
icons graphical symbols that represent a function or a subroutine.

Page 287: Telecommunications and Networks

infobahn the European version of the‘information highway’.

information highway a term used in the US for communications across telecommunications networks to transfer information; also called the information superhighway.

information infrastructure a publicly owned facility, like roads or electricity utilities, to be shared and used by others.

interface connection and interaction between hardware, software and user.

intranet a private network using the protocols and infrastructure of the Internet.

isochronous time-dependent, e.g. real-time video or telemetry data.

Java is a programming language designed for network computing. Any program written in Java can be executed on any digital device, ranging from a machine tool to a computer.

jitter the variation in delay that packets may experience as they pass through a network service.
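A small worked example (the delay figures are hypothetical):

    delays_ms = [20, 24, 19, 25]               # one-way delays of four packets
    jitter_ms = max(delays_ms) - min(delays_ms)
    print(jitter_ms)                           # 6 -> delays vary by up to 6 ms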

JPEG a way of compressing still images and video which is widely used on the Internet.

killer app short for killer application, which is a very successful computer application.

kluge a clumsy but serviceable solution.

Local Area Network (LAN) a network offering connection services within a very limited area, typically the same building or campus.

latency the time interval between the instant at which an instruction control unit initiates a call for data and the instant at which the actual transfer of the data starts. See access time.

layer a group of services that is complete from a conceptual point of view, that is, one of a set of hierarchically arranged groups, and extends across all systems that conform to the architecture.

leased line a private line for dedicated access to a network; also referred to as private line or dedicated line.

link a line, channel or circuit over which data is transmitted.

looping a condition in packet switching networks when packets are travelling around in a circle.

LINX is an organization set up to provide interconnectivity for UK Internet service providers and also to further the cause of the UK within Europe.

mail gateway a machine that connects two or more electronic mail systems and transfers messages between them.

master primary and controlling unit.

master-slave a communication in which one unit (the master) initiates and controls the session. The slave responds to the master's commands.
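A minimal sketch of the relationship in Python (the class and command names are invented for illustration):

    class Slave:
        def respond(self, command):
            return "ACK " + command             # the slave only ever answers

    class Master:
        def __init__(self, slave):
            self.slave = slave
        def poll(self, command):
            return self.slave.respond(command)  # the master initiates every exchange

    link = Master(Slave())
    print(link.poll("READ STATUS"))             # ACK READ STATUS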

megabit just over 1 million bits (binary digits, 0 or 1).

message in information theory it is the ordered series of characters intended to convey information; in telecommunications it is a unit of data transmitted.

Metropolitan Area Network (MAN) a network that connects computers within a metropolitan city area.

modem a device that allows a computer to transmit information over a telephone line. A modem performs the function of a modulator and demodulator.

Mosaic a hypermedia browser on the Internet that allows searching using hypertext and a GUI (Graphical User Interface).

multilink a software technique of adding channels to networks.

multimedia information that may be in one or more forms including data, text, audio, graphics, animated graphics or full-motion video.

mung to destroy data, usually accidentally.

network an arrangement of nodes and connecting branches.

newsgroup a bulletin board on the Internet.

octet 8 bits.

open protocols protocols that do not purposefully favour any single manufacturer.

open systems the ability to connect any two systems that conform to a reference model and its associated standards.

packet a unit of information travelling as a whole from one device to another on a network.

packet switches small computers linked to form a network.

peer a functional unit that is on the same protocol layer as another.

peer-to-peer a network architecture where a user's PC doubles as a server rather than accessing centralized file or print servers.

peer-to-peer communication communication in which both sides have equal responsibility for initiating a session, compared to a master-and-slave relationship.

Page 288: Telecommunications and Networks

peer-to-peer networking is where files can be exchanged and terminal sessions established on a non-hierarchical basis.

pel short for pixel.

photonic switch a device that switches optically rather than converting signals to an electronic path as in conventional semiconductor technology.

pipeline allows for simultaneous or parallel processing within a computer.

pixel the smallest element on a video display screen. It could be a dot and is sometimes called a pel. Pixel is short for picture element (pix = picture).

platform the principles on which an operating system is based.

plug-and-play usually refers to hardware that can be attached (plugged in) almost anywhere and starts operating (playing) without the need for special interfaces.

portal a meeting point between local and long-distance services.

posting a message on the bulletin board.

private line same as leased line.

propagation delay a delay in the transmission of information from source to destination.

processor see engine.

protocol sets of rules and agreements (on format and procedures).

real-time processing with an up-to-date database. All real-time processing is on-line, but not all on-line processing is real-time.

resource allocation a mechanism to allocate resources according to a promised resource reservation.

roamer a subscriber who uses a telecommunications service in locations remote from the home-service area.

robust refers to a solid program that works properly under all normal but not abnormal conditions.

router a system responsible for making decisions about which of several paths traffic will follow. In OSI terminology, a router is a network layer intermediate system; a system responsible for selecting the path for the flow of data from among many alternative paths.

scalability the capability of being changed in size and configuration.

seamless smooth, without awkward transitions.

seamlessly blending smoothly.

server a central computer which makes services and data available.

service provider an on-line service that lets users connect to the Internet and which in turn is a gateway to the Internet.

bonding a hardware technique of adding channels to networks (same as bandwidth-on-demand).

sniffer synonymous with network analyser and used for diagnostics.

standard describes how things should be.

surfing exploring the Internet.

switching means of relaying information from one path to another.

switched 56 a digital service at 56 kbps provided by local telephone companies and long-distance carriers.

switching system consists of hardware and software; its primary purpose is to form dynamic connections between channels.

synchronous derived from the same clock, therefore having a fixed timing relationship.

T-1 carrier this system uses time-division multiplexing to carry voice channels.
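As a point of reference (standard figures for the North American T-1, not given in this entry): a T-1 carries 24 voice channels of 64 kbps each, so 24 × 64 = 1536 kbps of traffic, and an extra 8 kbps of framing overhead brings the line rate to the familiar 1.544 Mbps.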

teleprocessing is a computer-supported technique for providing a number of remote users access to a computer system.

Telnet a tool for interactive communication with remote computers.

terrestrial earthbound.

token a series of bits which, when captured by a user, allows that user to send packets across the network.

token ring a signalling scheme in which a special message (the token), passed from node to node, gives a node permission to enter a message or frame into the ring.
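A toy token-passing loop in Python (illustrative only; this is not the actual IEEE 802.5 procedure):

    nodes = ["A", "B", "C", "D"]          # stations arranged in a ring
    waiting = {"C": "C->A: hello"}        # node C has a frame queued to send

    token_at = 0
    for _ in range(len(nodes)):           # pass the token once around the ring
        node = nodes[token_at]
        if node in waiting:               # only the current token holder may transmit
            print(node, "sends", waiting.pop(node))
        token_at = (token_at + 1) % len(nodes)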

transceiver a physical device that connects a host to a LAN.

transparent virtually 'invisible'. Of no significance.

Universal Resource Locator (URL) the technical name for a World Wide Web address.
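For example, Python's standard urlparse function picks a Web address apart (the URL below is a made-up example):

    from urllib.parse import urlparse

    url = urlparse("http://www.example.com/docs/index.html")
    print(url.scheme)   # http              - the access method
    print(url.netloc)   # www.example.com   - the host (domain) part
    print(url.path)     # /docs/index.html  - the file on that server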

upload opposite of download.

Veronica an index to computerized archives.

video-on-demand an interactive system that allows you to point your remote control at the screen and select a desired program.

virtual pertaining to a functional unit that appears to be real but whose functions are accomplished by other means.


Page 289: Telecommunications and Networks


voice line a communications link usually limited to transmitting data at the bandwidth of the human voice.

voice mail a service that stores voice messages for users and enables them to retrieve and hear their messages in various ways.

Web short for World Wide Web.

Web site a collection of files on the Web built around a common subject or theme.

Wide Area Network (WAN) a network that serves a geographic area larger than a city or metropolitan area.


Page 290: Telecommunications and Networks

INDEX

Accounting management [of telecommunications and networks] 143-4

Addressing 41-2
Addresses on the internet 222
Agents 252
ANSI 117
Applets 252
Applications 6-10, 187-248
APPN 82-5
ARPANET 11, 55, 242, 251, 255-6
ATM 62-7, 251-3, 259-60
Authorization controls 126-8, 133-4

B-ISDN 72-3, 121
Bandwidth management 61-2
Bandwidth-on-demand 62, 252
Biometric systems 125
Bridge 38-9, 75

CCITT 41, 117-18, 120, 146, 205, 225
Cellular phones 29
Cellular radio 29
CERN 255
Channels of transmission 17-9
Circuit switching 254
Client 102-3, 106, 203-4
Client server paradigm 100-12
Client server system's impact 106-8
CMIP 146
Comm software 156
Communications security 128-30
Compression 38, 40-1
Configuration management 143
Cooperative processing 194-5
Cordless 29-30
Cyberspace 232

Digital library 208-9
Digital money 179-80
Digital world 251
Distance learning 208-9
Distributed data processing 15-6
Distributed multimedia 199-200, 210-11

E-mail 221-4
EDI 188-9, 196, 224, 239
EFT 190-4, 196-7
Electronic money 179-80
Electronic publishing 213, 259
Encryption 128-9
End-user friendly systems 251
Ethernet 55
ETSI 118-20
European Standards Organisation 115-20

Fault management 141-2, 151
FDDI 46, 48, 56
Film-on-demand 206
Firewall 238
Frame relay 55
Frames 39
FTP 239
Future of telecommunications 21-3, 241-68

GAN 211, 255
Gateway 38
Glass fibre 26, 34
Global network in developing countries 174-5
Global network infrastructure 172
Global network's impact 173-5, 180-1
Global networks 21, 171-84
Global networks and businesses 179-80
Global outsourcing 175-6

Hardware for network manager 147-8, 153-4
Home banking 193
Home-page 234
HTML 234
HTTP 234-5
Hubs 45-6, 251
Hypermedia 211
Hypertext 211, 237

Information service providers 225
Information services on the internet 241
Infrastructure 161-6, 172
Intelligent devices 43-5, 251


Page 291: Telecommunications and Networks


Interconnectivity 20-1, 48-50, 251
Interfaces 19-20
Internet 218, 230-48
Internet connections 232-4
Internet organization 240-1
Internet security 230-40
Intranet 11, 252
IP 45
ISDN 21-2, 69-77, 251, 257-8
ISO 113-8, 203

Java 11, 252

Knowbots 252

LAN 48-57, 65-6, 251
Levels of organization in network management 91-4
Logic bomb 128

MAN 59
Message handling systems 165-95
Messaging 185-97
MHS 195-6
Microwave transmission 28
Middleware 157-8
Modem 42-3
Monitor on traffic 141
MPEG 41, 204
MPR 40
Multimedia 187-97
Multiple protocols 84-5

National infrastructure 161-70
Network administration 93-5
Network characteristics 50
Network management 91-5, 140-52
Network security planning 132-3
Network systems architecture 78-88
Networks, overview of 15-24
NII in the US 163-6
NII issues 167-70
NIIs around the world 162-3
NOS 142, 155-6

Open systems 80-7, 116-7
Organization for networking 91-9
Organizational levels for network management 91-4
OSI model 80-7, 116-7

Packet switching 254
Packet-radio 29-30
Packets 39, 56
PAD 30-1

Parallel processing 153-5
Passwords 125-6
PCS 31-2, 252
PDA 30, 251
Performance management 142, 151
Personnel for network management 91-4, 148-51
Planning of telecomm. and networks 95-8
Plug-and-play 45
Poller on traffic 141
Predictions go wrong 261-3
Private networks 252
Problem management 141-2, 151
Processing in client-server systems 105-6
Protection of intellectual property 178-9
Protocols 45, 54
Public networks 253

Regulations 252
Remote processing 16
Repeater 39
Resources required for teleprocessing 153-60
Roamer 29
Router 21, 39, 41, 67

Satellite 28-9
Security 123-39
Security management 142-3, 151
Security on the internet 236-40
Servers 100-1, 104-5, 202-3
Smart cards 193-4
Smart devices 43-5
SMDS 252
SNA 11, 78-80
SNMP 146
Software for network management 144, 147
Software for telecomm. 153-8
Software piracy 178-9
SONET 56, 252
Standards organizations 111-5, 117-20
Standards organisations, American 111
Standards organisation, European 118-20
Standards organisation, Japanese 120
Standardization 189-90, 251
Standards 112-22, 173, 204, 252, 257
Surfing on the internet 234-5
Switches 38
Switching 38-47, 53-6
Switching management 62

TCP 45
TCP/IP 45, 82-6, 146
Telecommunications, overview of 15-24
Telecommuters 216-21, 225-6


Page 292: Telecommunications and Networks


Teleconferencing 188
Telematique society 251, 257
Telemedicine 205-7
Teleworkers 220-1
Telematic society 251, 257
Terminal use controls 124-5
Threats and counter-threats to secure telecomm. 136
Token ring 55
Topologies of networks 16, 52-5
Transborder flow 176-7
Transceiver 29
Transmission 26-37, 32-3
Transmission media 165-6
Trojan horse 128

Video-on-demand 206
Videoconferencing 204-6
Viruses 128-32

WAN 59-61
Wired society 258
Wireless 29-30, 56-7
Wiring 26-8
Worm 130
WTO 260

X.25 54, 56, 224
