The Future of Digital Communications: Technical Perspectives


Transcript of The Future of Digital Communications: Technical Perspectives


The Future of Digital Communications: Technical Perspectives

Edited by Fernando Laguarda Vice President, External Affairs and Policy Counselor

Time Warner Cable

www.twcresearchprogram.com

ESSAY SERIES

FALL 2010


For more information:

Fernando Laguarda
901 F Street, NW, Suite 800
Washington, DC 20004
Phone: (202) 370-4245


TABLE OF CONTENTS

Introduction .......................................................... 3
  By Fernando R. Laguarda

The Challenge of Increasing Broadband Capacity ........................ 6
  By Dale N. Hatfield
  Understanding Critical Relationships ................................ 6
  Opportunities and Challenges: Looking Ahead ......................... 11
  Summary and Conclusions ............................................. 18

The Challenge of New Patterns in Internet Usage ....................... 22
  By Christopher S. Yoo
  Introduction ........................................................ 22
  Internet Protocol Video ............................................. 22
  Wireless Broadband .................................................. 27
  Cloud Computing ..................................................... 30
  Programmable Networking ............................................. 33
  Pervasive Computing and Sensor Networks ............................. 33
  Conclusion .......................................................... 34

Author Contact Information ............................................ 38



Introduction

By Fernando R. Laguarda, Time Warner Cable

Stimulating Discussion: The Role of the Time Warner Cable Research Program on Digital Communications

This is the first publication of the Time Warner Cable Research Program on Digital Communications. We launched the Research Program with the goal of encouraging debate and discussion on ideas of importance to the future of our industry and its role in the communities we serve. We hope to do so by providing a new forum for scholars to engage with the community of stakeholders who make and influence policy. We want to encourage increased dialogue and generate new ideas that bring us closer to solving the challenges we face. The Research Program will award stipends to scholars to produce 25- to 35-page papers that increase understanding of the benefits and challenges facing the future of digital technologies in the home, office, classroom, and community.

In addition to these papers that will be published later this year, we invited five noted scholars to write an essay discussing a major challenge they anticipate arising as we debate and set policy in digital communications during the next decade. Their essays are published in two volumes in our Fall 2010 Essay Series, the first focusing on policy issues and the second on the intersection of policy and technology. While each author chose a different challenge, they all raise interesting questions that deserve further discussion and debate.

Overview—The Future of Digital Communications: Policy Perspectives

In the first volume, we present essays by a law professor, a sociologist, and an economist. John Palfrey is a Professor of Law at Harvard Law School and Co-Director of the Berkman Center for Internet & Society. Palfrey writes about the connection between law and social science and the challenge of incorporating current research into policymaking. He uses the example of youth media policy, specifically "privacy regulation," to frame a challenge to policymakers: learn how young people actually use digital communications or risk public policy that is irrelevant to (or poorly meets the needs of) the digital generation. Palfrey recommends establishing "mechanisms that enable collaboration between those who set policy … and those who best understand youth media practices."

Next in the volume is an essay by Dr. Nicol Turner-Lee, Vice President and Director of the Media and Technology Institute at the Joint Center for Political and Economic Studies. Turner-Lee discusses the Internet as a platform for civic engagement. She explains that digital communications tools present both opportunities and perils for the next decade of social activism and political discourse. In particular, "unequal access to the Internet affects civic engagement when groups are underrepresented, or on the periphery of online activity." She offers specific strategies for ensuring the Internet and social media tools provide a constructive forum for deliberative exchange.

Federal Communications Law Journal

The publication of these essays is forthcoming in Volume 63 of the Federal Communications Law Journal. The FCLJ is published jointly by the Indiana University Maurer School of Law and the Federal Communications Bar Association.


Turner-Lee's challenge to policymakers is to take steps to ensure that broadband adoption does not create or further solidify existing social stratification and alienation.

The final essay in the first volume, by Dr. Scott Wallsten, Vice President for Research and Senior Fellow at the Technology Policy Institute, reviews the approach policymakers have taken to broadband technology and challenges several key assumptions driving recommendations for the next decade. While policymakers "hope that home broadband access will [quickly] spur economic growth," Wallsten suggests that this narrow focus may be misguided. Specifically, he questions whether residential broadband adoption can have the transformative economic impact many assume it will have. Instead, he writes, the focus should be on "how new communications technologies affect business" because these are the impacts on productivity that will determine whether broadband "radically reshape[s] the economy." Wallsten calls for a deeper research agenda into the long-term impact of broadband on the business sector.

The three authors all raise questions about the status quo and challenge the reader to think outside the box. Palfrey seeks to reconceptualize the process of making public policy as a dialogue with social science research, especially when it comes to the way young people are interacting with digital media. Turner-Lee challenges the reader to rethink policy towards digital social media platforms so as to engage communities that may otherwise be alienated from broadband. And Wallsten questions important assumptions about how broadband affects the economy. We hope their essays inform as well as stimulate discussion.

Fernando R. Laguarda is Vice President, External Affairs and Policy Counselor for Time Warner Cable. In this position, Laguarda helps Time Warner Cable develop and advance its policy positions, focusing on consumer protection, competition issues, intellectual property and telecommunications regulation. Laguarda has primary responsibility for working with nongovernmental policy stakeholders such as civil rights and human rights organizations, think tanks, foundations, academic institutions and public interest groups. He also serves as Director of the Time Warner Cable Research Program on Digital Communications, which seeks to expand scholarship in fields of interest to the cable and telecommunications industries.

Prior to joining Time Warner Cable, Laguarda was a partner at Harris, Wiltshire & Grannis and at Mintz Levin. In private practice, he represented a wide range of communications, entertainment industry, health care and technology clients in proceedings before trial and appellate courts and regulatory agencies.

Laguarda received his A.B. with honors from Harvard College and his J.D. with honors from Georgetown University Law Center, where he was Notes and Comments Editor of the Georgetown Law Journal. Currently, he serves on the Board of Directors of Women Empowered Against Violence Inc., the Harvard Alumni Association, the National Network to End Domestic Violence (past Chair), the Family Online Safety Institute, the Friends of the National Museum of the American Latino, and on the Corporate Advisory Board of the League of United Latin American Citizens (Vice President for Policy).


Overview—The Future of Digital Communications: Technical Perspectives

In our second volume of essays, two authors review trends in technology and their impact on policy. Dale N. Hatfield, Executive Director of the Silicon Flatirons Center for Law, Technology and Entrepreneurship at the University of Colorado, is a noted expert on telecommunications technology and former chief technologist for the Federal Communications Commission (FCC). His essay addresses the challenge of encouraging broadband deployment, as set forth in the FCC's National Broadband Plan. Hatfield urges policymakers to seek a deeper understanding of the technology underlying different types of broadband networks, and the implications for policy arising from those differences. His analysis of the opportunities and limitations of wireless and wireline technologies leads him to challenge policymakers to consider strategies that would bring fiber significantly closer to end users.

In a companion piece, Christopher S. Yoo, Professor of Law at the University of Pennsylvania, reviews the impact of rapidly evolving broadband networks and notes that “change is inevitable” when it comes to the architecture of the Internet. Yoo notes that as demand on the network becomes heterogeneous, “different portions of the network will respond in different ways to meet this demand.” Rather than looking to the past, he challenges policymakers to allow the network to evolve in new directions. The future success of the Internet does not depend on “preserving the architecture that has made it successful in the past.” Since change is inevitable, policy should be flexible. Incorporating the insights of engineers and network architects into policy would be of great benefit, as Yoo demonstrates.

The intersection of technology and policymaking is a significant component of our Research Program's agenda for the coming year. Consumers have increasing choices when it comes to digital communications services, applications, and devices. Many questions are raised as a result of rules and policies enacted without regard to the rapid evolution of digital communications technology. Yoo and Hatfield each make a contribution to the policy debate by explaining not only "how" digital technologies work but "why" policy choices should be informed by the best possible understanding of technology.

We are pleased that the five essays present a wide range of perspectives on the future of digital communications and the challenges that must be confronted in the next decade. Perhaps the singular theme that arises from looking at the challenges of the next decade is the need to be flexible and pragmatic, not only in terms of policy but also in terms of the research that can help inform policymaking. We believe a research agenda that is multidisciplinary and collaborative can play a major role in the coming policy discussions on telecommunications, and we hope to stimulate that discussion and encourage scholars to contribute to it.


The Challenge of Increasing Broadband Capacity

By Dale N. Hatfield, University of Colorado at Boulder

The recent release of the National Broadband Plan by the Federal Communications Commission (FCC) has focused the attention of policymakers, industry leaders, academics, and ordinary citizens on the importance of having sufficient bandwidth available anytime and anyplace to support a growing array of broadband services.

Broadband services include both wireline and wireless access to the Internet and the delivery of high-definition, even 3-D, television. As popular as these two terms—bandwidth and broadband—have become, and as important as they are to our future as a nation, they are not always well understood. The purpose of this essay is to explain these terms in more technical detail and then to relate the explanations to the opportunities and challenges that are associated with increasing fixed and mobile broadband capacity as envisioned in the National Broadband Plan.

This essay is divided into three sections. The first section discusses the critical relationship between the digital transmission capacity of a communications channel (as expressed in binary digits or "bits" per second) and the amount of bandwidth associated with that channel (as expressed in analog terms). The second section, in turn, builds upon that discussion to explore the opportunities and challenges associated with increasing the capacity of the four primary transmission technologies used in the critical access portion of the network: namely, twisted-pair copper cable, coaxial cable, wireless links, and fiber optic cable. The third section provides a summary and offers some concluding thoughts.

Understanding Critical Relationships

As discussed in the accompanying "Understanding Bandwidth" box, both analog bandwidth as traditionally defined and digital bandwidth expressed as a bit rate determine how much information can be sent over a communications channel in a given amount of time. The two are related to one another by Shannon's Law. Shannon's Law is named after Claude Shannon, who is credited with being the founder of information theory, the basis of modern electronic communications. Expressed in words rather than in a mathematical formula, Shannon's Law states that the maximum amount of information that a circuit or channel can carry per unit of time (as measured in bits per second) depends upon the (analog) bandwidth and the strength of the desired signal relative to the strength of the accompanying undesired noise and interference, as measured at the receiving device.1 For example, if the bandwidth of the channel is 1 MHz and the received power of the desired signal is 15 times as strong as the accompanying noise, then the maximum digital capacity of the 1 MHz channel would be 4 Mbps, or 4 bps/Hz of bandwidth. Shannon's Law suggests two fundamental ways of increasing the digital capacity of a channel:

• Increasing the amount of bandwidth devoted to the channel, or
• Increasing the received signal level relative to the accompanying noise and interference.
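For reference, Shannon's Law (often called the Shannon-Hartley theorem) can be written compactly as follows; the worked line simply restates the 1 MHz, 15-to-1 example above.

    C = B \log_2\left(1 + \frac{S}{N}\right)

    C = (10^{6}\ \mathrm{Hz}) \times \log_2(1 + 15) = 4 \times 10^{6}\ \mathrm{bps} = 4\ \mathrm{Mbps} \quad (\text{i.e., } 4\ \mathrm{bps/Hz})

Here C is the channel capacity in bits per second, B is the analog bandwidth in Hz, and S/N is the ratio of received signal power to the accompanying noise-and-interference power.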

Bandwidth increases, however, are often constrained by the technical characteristics of the transmission medium or, as in the case of wireless communications, by government regulation.



Increasing the digital capacity of a channel by increasing the transmitted power suffers from diminishing returns and from practical constraints. For instance, in the example given above, increasing the received power to 31 times as strong as the accompanying noise only increases the capacity to 5 bps/Hz. Moreover, such power increases are often impractical in the real world because of the increased interference that would be caused to other users of the same radio spectrum (i.e., the same channel), in the case of wireless communications. In addition, many wireless devices today are battery-powered, and increasing their transmitted power can significantly decrease the length of time that the device can be operated without recharging the battery. While the base station to which the portable device communicates may be able to operate at a higher transmitter power, the power received at the base station is often limited by practical battery life considerations associated with the portable device.2
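A few lines of arithmetic make the diminishing returns concrete. The sketch below (illustrative only; the 15x and 31x figures come from the text, the larger ratios are extrapolations) evaluates the Shannon limit at successively doubled signal-to-noise ratios:

    import math

    def shannon_efficiency(snr_linear: float) -> float:
        """Maximum spectral efficiency in bps/Hz for a given linear signal-to-noise ratio."""
        return math.log2(1 + snr_linear)

    # Each step roughly doubles the received signal power relative to the noise,
    # yet the achievable efficiency grows by only about 1 bps/Hz per step.
    for snr in (15, 31, 63, 127):
        print(f"S/N = {snr:>3}x  ->  {shannon_efficiency(snr):.1f} bps/Hz")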

In the real world today, radio systems typically achieve efficiencies ranging from less than 1 bps/Hz to 7 bps/Hz or so, depending upon the quality of the channel—for example, in terms of the received signal-to-noise/interference ratio. For example, today's comparatively high-powered digital television (DTV) systems achieve a transmission rate of almost 20 Mbps in a 6 MHz channel, for an efficiency of approximately 3.3 bps/Hz. In its recent report to Congress on the National Broadband Plan, the FCC reported that, over the years, the efficiencies of digital cellular radio systems have increased from much less than 0.1 bps/Hz at their inception to approximately 1.4 bps/Hz today. Modern cable systems using the DOCSIS 3.0 specification achieve a transmission rate of approximately 43 Mbps in a 6 MHz channel in the downstream direction (i.e., from the cable system "headend" to the subscriber's premises), for an efficiency of a little over 7 bps/Hz.

Dale N. Hatfield is the Executive Director of the Silicon Flatirons Center for Law, Technology, and Entrepreneurship and an Adjunct Professor in the Interdisciplinary Telecommunications Program—both at the University of Colorado at Boulder.

Prior to joining the University of Colorado, Hatfield was the chief of the Office of Engineering and Technology at the Federal Communications Commission (FCC) and, immediately before that, was chief technologist at the FCC. Mr. Hatfield retired from the Commission and government service in December 2000.

Before joining the Commission in December 1997, he was chief executive officer of Hatfield Associates, Inc., a Boulder, Colorado-based multidisciplinary telecommunications consulting firm. Before founding the consulting firm in 1982, Hatfield was acting assistant secretary of commerce for communications and information and acting administrator of the National Telecommunications and Information Administration (NTIA). Before moving to NTIA, Hatfield was chief of the Office of Plans and Policy at the FCC.

Mr. Hatfield has over four decades of experience in telecommunications policy and regulation, spectrum management, and related areas. Hatfield holds a B.S. in electrical engineering from the Case Institute of Technology and an M.S. in industrial management from Purdue University. Hatfield is currently serving as co-chairman of the Commerce Spectrum Management Advisory Committee.


Understanding Bandwidth

The term "bandwidth" originated in the analog world, where it is defined as a range ("band") of frequencies measured in cycles-per-second or Hertz (Hz). As a width, it represents the numerical difference between the upper and lower frequency limits of a channel of communications. In this context, a channel is a path used for the transmission of communications signals between two geographically separate points. For example, an ordinary telephone channel may have an upper frequency limit of 3.5 kHz and a lower frequency limit of 0.3 kHz and, hence, an audio frequency (AF) bandwidth of 3.2 kHz.3 A high-fidelity audio amplifier, on the other hand, may have an upper frequency limit of 20 kHz and a lower frequency limit of 20 Hz—and thus a bandwidth of 19.98 kHz. In radio frequency (RF) communications, an ordinary television channel in the U.S. has a bandwidth of 6 MHz. For example, television Channel 2 occupies an RF range of 54–60 MHz and Channel 3 occupies an RF range of 60–66 MHz. In contrast, a single FM radio channel occupies an RF range of just 200 kHz.

In the analog world, signals and the channels they occupy are typically classified loosely and somewhat fluidly as narrowband, wideband, or broadband. A channel that is classified as broadband because of its greater width (say a television channel) can carry more information per unit of time than a channel classified as narrowband or wideband. A television signal made up of both video and sound (i.e., visual and aural) information contains more information than a simple audio signal associated with a telephone call. Hence, it requires more bandwidth to transmit in a given amount of time. Stated another way, a narrowband channel may be adequate to transmit an ordinary voice call but totally inadequate to transmit a television signal. In short, the more information it is desired to send in a given amount of time, the greater the analog bandwidth required.

Even though it is, strictly speaking, an analog expression, the term bandwidth has been carried over into the digital world. In the digital world, where information is carried as bits or "ones and zeros," the term bandwidth is also used to indicate how much information a channel can transmit in a given amount of time. However, in the digital world, bandwidth is measured in bits-per-second ("bps"). It is important to note that, in a digital network, the bandwidth is expressed as a rate—that is, how many events happen per unit of time. Other examples of rates include a pump that can discharge water at a rate of 10 gallons per minute or a bridge that can carry one thousand vehicles per hour. Stated again for emphasis, in the digital world, bandwidth refers to a transmission rate expressed in bits-per-second.

When digital networks are used to convey analog information, the analog signal is first converted to a digital signal at the originating end through a process known as analog-to-digital conversion and then, at the terminating end, the digital signal is converted back to an analog signal through a reverse process of digital-to-analog conversion. As is the case in the analog world, in the digital world the digital signals and the channels they occupy are classified as narrowband, wideband, or broadband.

After the analog-to-digital conversion process described above, an ordinary voice signal requires a transmission rate on the order of a few tens of kilobits per second (kbps), while the transmission of a high quality still image in a reasonable amount of time may require a transmission rate of several hundreds of kbps. A high-quality television signal at the other extreme may require on the order of several million bps (Mbps) for successful transmission in real time. Transmission rates in the tens-of-kbps range are typically categorized as narrowband, rates in the hundreds of kbps range are typically categorized as wideband, while rates in the millions of bps range are typically classified as broadband. So, to summarize in today's terms, in a digital network the term broadband is associated with a transmission rate of several Mbps or more.4

As emphasized earlier, in the world of digital communications, bandwidth is associated with a transmission rate but, confusingly, such rates are sometimes referred to as "speeds." That is, people often speak of "high-speed modems" or "high-speed networks" when they really mean high-bit-rate modems or high-rate networks. "Speed" properly refers to the time it takes for an object—or, in the case of electronic communications, a signal—to travel from one point to another across an intervening space. In electronic communications, electromagnetic waves (e.g., RF signals) travel through space at the speed of light and via copper wires, coaxial cable or fiber optic cables, or other physical media at velocities that approach the speed of light.


In the digital world, some characteristic (or combination of the characteristics) of the transmitted electromagnetic/RF signal is rapidly changed to reflect whether the bit being sent is a one or a zero. The simplest digital transmission system to envision is one that sends a burst of electromagnetic/RF energy if the bit is a one and does not send a burst when the bit is a zero. In other words, the bursts of energy (or lack thereof) occur at regular intervals, representing a sequence of ones and zeros that correspond to the information being sent. In order to send information at a higher rate the intervals are shortened in time—that is, the ones and zeros are closer together in time and space—such that the transmission rate increases but the speed of transmitting individual bits remains the same, since the speed involved cannot exceed the speed of light. While the actual techniques used to transmit digital information are often much more sophisticated, the same basic principles apply.
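As an illustration of the burst/no-burst idea, the sketch below encodes a short bit sequence in the simplest way described above; the function name and parameters are hypothetical, and halving samples_per_bit shortens the intervals (doubling the bit rate) without changing how fast any individual burst propagates.

    def on_off_keying(bits, samples_per_bit):
        """Map each bit to a burst of energy (1s) or silence (0s) of fixed duration.

        Shortening samples_per_bit packs the ones and zeros closer together in
        time, raising the bit rate; the propagation speed of the signal itself
        is unchanged.
        """
        waveform = []
        for bit in bits:
            waveform.extend([1 if bit == "1" else 0] * samples_per_bit)
        return waveform

    message = "10110"
    print(on_off_keying(message, samples_per_bit=4))  # slower rate, longer bursts
    print(on_off_keying(message, samples_per_bit=2))  # same bits, twice the rate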

Perhaps the distinction between speed and bit rate can be made clearer through an example drawn from the physical world. Consider, for example, a stream of semi-trailer trucks traveling down a highway at the speed limit of, say, 60 mph and assume that the trucks are physically spaced at intervals of one truck length. That would produce a "transmission" rate of some number of vehicles per hour. Now consider the same scenario, except that the trucks are replaced by Mini Cooper automobiles traveling at the same 60 mph and also spaced one vehicle-length apart. Because the vehicles, in this case the small Mini Coopers, are spaced much closer together than the semi-trailer trucks, the number of vehicles per hour would be much greater even though the speed has not changed. This is analogous to the digital transmission situation, where the bit rate—the number of bits-per-second or bps—is increased by spacing the ones and zeros closer together in time and space. The relationship between the maximum bit rate that a channel will support and the bandwidth of that channel (as that term is used in the analog world) will be explored later.
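The highway analogy can be put into rough numbers. The vehicle lengths below are illustrative assumptions (only the 60 mph speed comes from the text):

    def vehicles_per_hour(speed_mph: float, vehicle_length_ft: float) -> float:
        """Throughput of a lane when vehicles are spaced one vehicle length apart."""
        speed_ft_per_hour = speed_mph * 5280        # 1 mile = 5,280 ft
        headway_ft = 2 * vehicle_length_ft          # vehicle plus a one-length gap
        return speed_ft_per_hour / headway_ft

    # Assumed lengths: ~70 ft semi-trailer truck, ~12 ft Mini Cooper.
    print(vehicles_per_hour(60, 70))   # roughly 2,300 trucks per hour
    print(vehicles_per_hour(60, 12))   # roughly 13,200 Minis per hour, same speed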

In the "network of networks" making up the Internet, the bits of information being transmitted are organized into sequences or units of bits called packets. In addition to the bits associated with the information being transmitted—say, a portion of an e-mail message or a voice telephone call—the packets include sequences of bits identifying the Internet address of the destination. These individually addressed packets of information are then routed from node to node between the origin and the destination in "store and forward" fashion. The time it takes for the packet to travel from the origin to the destination is known as latency. The irreducible or absolute minimum amount of time it takes for a packet to travel from the origin to the destination is constrained by the speed of light.

In addition to this irreducible minimum, the packets themselves may be processed several times at intermediary nodes along the way, thus adding to the latency. For example, an intermediary node, such as a packet switch or router, may store the packet briefly in order to read the address associated with the packet before forwarding it on to another node that is closer to the ultimate destination. This intermediary increases the latency. In addition, just as vehicles sometimes encounter congestion on a highway during peak travel periods, the packets from different users and applications may encounter congestion delays as well, thus adding further to the total latency. To return to the highway analogy, a vehicle traveling at the speed limit may have to stop at a tollbooth and pay a toll. Moreover, there may not be enough individual tollbooths to handle all of the arriving vehicles, in which case additional delays may occur. The processing delay at the tollbooth and any congestion delays that occur upon arrival or departure from the tollbooth add to the total time it takes the truck to reach its destination.

Latency is an important measure of performance in some Internet applications, for example, in real-time voice applications and highly interactive games, and less important in other applications, e.g., e-mail or simple web browsing. The important point is that the minimum amount of latency cannot be decreased by increasing the bit rate, that is, by increasing the bandwidth. Stated another way, the minimum latency or delay between source and destination is constrained by the speed of light, and the transmission rate merely determines how fast the packet can be "unloaded" once it arrives. Thus two major performance measures associated with the Internet are speed/latency and bandwidth/data rate.5
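A rough calculation illustrates why a higher bit rate does not reduce the minimum latency. The 4,000 km path and 1,500-byte packet below are illustrative assumptions, not figures from the essay:

    SPEED_OF_LIGHT_KM_S = 300_000        # propagation delay can only approach this limit

    def one_way_delay_ms(distance_km: float, packet_bits: int, bit_rate_bps: float):
        """Split delivery time into propagation delay and serialization time."""
        propagation_ms = distance_km / SPEED_OF_LIGHT_KM_S * 1000
        serialization_ms = packet_bits / bit_rate_bps * 1000
        return propagation_ms, serialization_ms

    # A 12,000-bit (1,500-byte) packet over an assumed 4,000 km path:
    for rate in (1e6, 100e6):            # 1 Mbps vs. 100 Mbps link
        prop, serial = one_way_delay_ms(4000, 12_000, rate)
        print(f"{rate/1e6:>5.0f} Mbps: propagation {prop:.1f} ms, serialization {serial:.3f} ms")

Raising the bit rate a hundredfold shrinks only the serialization term; the propagation term, set by distance and the speed of light, is untouched.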


The techniques used to improve efficiencies as measured in bps/Hz tend to increase the amount of transmitter energy that is emitted near the edges of the channel compared to the portion of the energy emitted near the center of the channel. Thus, just as higher transmitter power can cause increased interference to other (distant) systems operating on the same channel, increasing the capacity of a channel by increasing the efficiency (again, as measured in bps/Hz) can cause increased interference to other systems operating on adjacent channels. The key point is that the efficiencies and hence the total capacities of all of these communications systems are ultimately constrained by the Shannon limit.6

Before discussing the opportunities and challenges associated with increasing fixed and mobile broadband capacity, it may be useful to say a few words about the noise and interference that inevitably compete with the desired signal in a receiver and constrain the capacity in accordance with the Shannon limit just described. RF devices themselves internally generate a certain amount of electrical noise. This noise is produced by thermal agitation of electrons in a conductor. In an FM radio, this random noise would be heard as a hissing sound in the receiver, and since it is produced in the receiver itself, it would even be heard in a remote area where there were no broadcast stations operating on the channel.7 This noise can be reduced by numerous techniques but it cannot be completely eliminated. In addition to this internal noise, the desired signal must compete with external sources of noise that typically enter the receiver through the same path (e.g., the antenna connection) as the desired signal. There are two broad classes of external noise—natural and man-made. Natural noise sources include, for example, electrical storms (lightning), which produce static in the AM band. Man-made noise in nearby radio channels can be produced by a host of sources including automobile ignition systems and rotating electrical machinery, and by devices like fluorescent light fixtures. In effect, these devices act as miniature transmitters that produce unintended RF energy. In wireless systems, this external noise and interference generally decreases in importance at higher frequency bands in the radio spectrum.

In addition to this unintentional radiation, the receiver often must contend with the intentional radiation produced by other RF systems. In wireless systems, such interference may be produced by another transmitter operating on the same channel at some distance away or by “spillover” effects from transmitters operating on adjacent bands/channels as explained above. Also, from a practical point of view, receivers are not perfect in filtering out signals operating on adjacent bands/channels, thus allowing in additional interference from other systems. These unintended forms of man-made radiation can be looked at as a form of environmental pollution, and the FCC has adopted rules that seek to reduce the interference from such sources. Note that the policy objective should not be to eliminate all interference; indeed, that is impossible. Nor should the objective always be to minimize the probability of interference because that may be prohibitively expensive. Rather the objective, in principle at least, should be to choose the level of interference that optimizes the economic value of the resource.8

To summarize, then, there are two primary "scarce" resources associated with the design of any digital transmission link:

• Received power
• "Analog" bandwidth

The underlying capacity of such a digital transmission link as measured in bps depends upon the analog bandwidth of the particular transmission media involved—twisted-pair copper cable, coaxial cable, wireless link, or fiber optic cable—and upon the strength of the desired signal relative to the noise and interference at the receiving device. The following section will review some of the opportunities and challenges associated with increasing fixed and mobile broadband capacity.


Opportunities and Challenges: Looking Ahead

It is generally agreed that, in many cases at least, the primary technical and economic bottleneck hindering greater availability of broadband facilities and services is in the access portion of the network—i.e., the portion of the network between the customer and a node in the network, which typically represents an aggregation point or point of traffic concentration. For example, in a wireless cellular system, this point might be a radio access node or base station; in the case of the telephone network, it might be a neighborhood terminal where the individual twisted-pair cables are terminated; and in the case of the cable network, it might be the location where the coaxial cables serving an individual neighborhood are terminated. Except in more remote, less-populated or other hard-to-serve areas, the challenges of connecting this access node to the balance of the network (i.e., to the core network) are typically less because upgrades in capacity are easier and there are greater opportunities to achieve economies of scale. For this reason, the rest of this essay will focus on this "last-mile" portion of the network—the portion between the access node just described and the customer premises.

The principal transmission media used to serve this portion of the network are those mentioned above:

• Twisted-pair copper cable
• Coaxial cable
• Wireless links
• Fiber optic cable

Since the underlying capacity of a digital transmission link depends upon the analog bandwidth of the particular transmission media and the signal-to-noise/interference ratio at the receiving device, it may be instructive to begin by focusing on these two fundamental factors in terms of each of the four types of transmission media just identified.

Twisted-Pair Cable

The original transmission media for both telegraph and telephone systems was open wirelines consisting of pairs of bare conductors that were tied to insulators attached to cross-arms on wooden poles. In the access part of the telephone network, these open wirelines eventually gave way to multipair cables consisting of individually insulated pairs of copper wires enclosed in a common protective sheath. Each wire pair is twisted together for technical reasons and, in a modern installation, these individual wire pairs are connected to the neighborhood access node mentioned above. Often a fiber optic cable is then used as the transmission media from the access node to the telephone company central office and the balance of the network. Because of the enormous bandwidth of the fiber optic cable (as explained in more detail below), it is the characteristics of the individual wire pairs in terms of their analog bandwidth and their noise/interference susceptibility that determine the digital bandwidth that can be delivered over the access portion of the network to the individual subscriber.

In terms of analog bandwidth, the individual wire pairs are capable of handling electrical signals in the AF range up to an RF frequency of perhaps one MHz or more. Stated another way, standard telephone wires have the bandwidth necessary to carry an ordinary analog voice conversation in one frequency range plus digital signals in the higher frequency range of the total available bandwidth. This sharing of the available bandwidth in the access portion of the network is the basis for the Digital Subscriber Line (DSL) service offerings of local telephone companies. The maximum bandwidth available on an individual wire pair depends upon a number of factors (e.g., the gauge of the wire) and is inversely proportional to distance. That is, the available analog bandwidth decreases with distance.


In terms of noise and interference, the individual wire pairs in an ordinary multipair telephone cable are not electrically shielded and thus are susceptible to picking up noise and interference from other nearby wire pairs ("cross-talk"), from other sources of man-made electrical noise, and from RF sources such as nearby AM radio stations. While engineers and scientists have developed ingenious ways of dealing with these impairments, the combination of relatively limited analog bandwidth and a relatively hostile noise and interference environment has tended to limit DSL digital bandwidths to a few Mbps. Much higher data rates on the order of tens of Mbps are possible at shorter distances and by utilizing more than one wire pair from the neighborhood node to the individual subscriber.

Coaxial Cable

Early Community Antenna Television (CATV) systems used "twin lead" or "ladder lines" as the transmission medium for getting television signals from a central location (the "headend") to groups of subscribers. Early in the development of the industry, however, the transmission medium was changed to coaxial cable. A coaxial cable consists of a single wire conductor centered within a cylindrical, metallic outer conductor which serves as a shield. The two conductors are insulated from one another using various materials. As explained before, in a modern cable television network, coaxial cable is used as the transmission medium between a group of subscribers and a neighborhood access node. The connection between the access node and the headend typically utilizes a fiber optic cable, creating what is known as a Hybrid Fiber Coaxial (HFC) network.

In terms of analog bandwidth, the individual coaxial cables used in cable television networks are capable of handling electrical signals with an RF frequency of up to roughly one GHz. Thus one important advantage of coaxial cable as a transmission medium is its large analog bandwidth. However, the signals transmitted over the coaxial cables steadily weaken with distance, and broadband analog amplifiers are required at regular intervals to boost the signals back up to suitable levels.

In terms of noise and interference, the shielded construction of the coaxial cable significantly reduces the susceptibility of the medium to noise and interference from external sources, including "over the air" wireless signals that utilize the same RF frequency range. While the shielded construction of the coaxial cable provides protection against external sources of noise and interference, the amplifiers that are required at regular intervals do produce some electrical noise and forms of self-interference (e.g., a form of interference known as "intermodulation") that must be dealt with by receivers at either end of the path. Because, to a great extent, only internal noise and interference must be dealt with, the environment is much more predictable and thus easier to mitigate. Moreover, the modern HFC architecture reduces the number of amplifiers required and, hence, the amount of internally generated noise and interference. Along with increasing internal noise and interference, the amplifiers employed also limit the available bandwidth. However, a modern HFC network typically has an upper RF frequency limit of 750 or 850 MHz.

In terms of digital bandwidth, it was noted above that modern (i.e., HFC) cable systems using the DOCSIS 3.0 specification are able to achieve an efficiency of slightly more than 7 bps/Hz. Thus, in gross terms, a modern cable system has an ultimate capacity of several Gbps. While the ultimate capacity of the coaxial cable is in the multiple-Gbps range even when constrained by the bandwidth of the associated amplifiers, two caveats are in order. First, reflecting the origins of the industry in retransmitting over-the-air broadcast signals, the bandwidth or spectrum associated with the cable is divided into scores of 6 MHz-wide channels, and the division of those channels between the downstream (i.e., between the access node and the subscriber) and upstream channels (i.e., between the subscriber and the access node) is asymmetric. That is, the number of channels (and hence, the analog bandwidth) available in the upstream direction is considerably lower than the number of channels available in the downstream direction. Moreover, in the traditional architecture, a substantial fraction of the downstream channels is devoted to the delivery of entertainment television signals although, with advanced digital signal compression techniques, the number of channels required to deliver such programming is significantly reduced. Over time, of course, these residual artifacts stemming from the origins of the industry can be removed and closer to the full capacity of the cable itself can be recovered.
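A back-of-the-envelope sketch shows where the "several Gbps" figure comes from; the 700 MHz of usable downstream spectrum is a rough assumption, and the calculation ignores the upstream split, guard bands, and protocol overhead:

    DOWNSTREAM_MBPS_PER_CHANNEL = 43      # DOCSIS 3.0 downstream figure cited in the text
    CHANNEL_WIDTH_MHZ = 6

    efficiency_bps_per_hz = DOWNSTREAM_MBPS_PER_CHANNEL / CHANNEL_WIDTH_MHZ
    print(f"Spectral efficiency: about {efficiency_bps_per_hz:.1f} bps/Hz")

    # Assume roughly 700 MHz of the plant's 750-850 MHz upper limit is usable.
    usable_mhz = 700
    gross_capacity_gbps = efficiency_bps_per_hz * usable_mhz / 1000
    print(f"Gross downstream capacity: roughly {gross_capacity_gbps:.0f} Gbps")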

Second, again reflecting the origins of the industry, the available capacity on the multiple coaxial cables emanating from a particular neighborhood node is shared among the various households served by that node. Sharing capacity made sense when a large fraction of the subscribers all wanted to view the same television program, say a championship sporting event like the Super Bowl game, at the same time. Said another way, it made sense to send a single copy of the program and have multiple subscribers tap into it rather than sending multiple but identical copies of the same program individually on separate facilities to each subscriber. When the HFC network is used to supply Internet access, however, the situation is much different. With the HFC architecture using DOCSIS modems, one or more of the shared 6 MHz channels is set aside for the provision of downstream Internet access. On this shared 6 MHz channel, different subscribers are independently receiving different digital content—aural, data, image, or video or combinations thereof—in individually addressed packets of information. The protocols or "rules of the road" prevent contending transmissions from interfering with one another. Since the transmission capacity is shared among multiple households in this manner, congestion and packet loss may result as traffic peaks and individual consumers contend for the available capacity.9

One fundamental way of reducing congestion in an HFC network is to decrease the number of households or customers served by each of the shared stretches of coaxial cable. This can be accomplished in two fundamental ways:

• First, imagine an access node that connects to three runs of coaxial cable collectively sharing the same capacity or channel and connected to a fiber optic cable link back to the headend. The capacity per household in this case can be increased by separating each of the three coaxial cable runs and connecting them to different fiber optic links running to the headend. As explained in more detail below, a fiber optic cable typically consists of multiple optical fibers, so such an approach may only require adding the necessary electronics to "light up" the additional individual fibers. In cable network parlance, this method of increasing capacity is known as "node splitting" (see the sketch after this list).

• Second, in new construction where it is easier to modify the physical layout of the network, the capacity of the HFC architecture can be increased by increasing the density of nodes and, hence, decreasing the number of households or customers served per coaxial cable run and access node. Note that this process moves the broadband fiber optic cable connection closer to the end users and allows more intense reuse of the RF bandwidth or spectrum carried within the individual coaxial cable runs.
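The arithmetic behind node splitting can be sketched as follows; the 900 households and 160 Mbps of shared downstream capacity are purely illustrative assumptions, and the point is only that each split divides the population sharing a given capacity:

    def average_share_mbps(shared_capacity_mbps: float, households: int, nodes: int) -> float:
        """Average downstream capacity per household when the node's shared
        capacity is replicated on each of `nodes` fiber-fed coaxial segments."""
        return shared_capacity_mbps * nodes / households

    # Illustrative assumptions: 900 households behind one node, 160 Mbps of
    # shared downstream capacity per segment (e.g., several bonded channels).
    print(average_share_mbps(160, 900, nodes=1))  # about 0.18 Mbps average per household
    print(average_share_mbps(160, 900, nodes=3))  # after a three-way split: about 0.53 Mbps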

Interestingly, each of these two techniques for reducing congestion and the associated latency has analogies in the wireless space and, to a lesser extent, the DSL space as well.

Wireless Links

It is generally agreed that the preferred frequency range for wireless RF communications used in the access portion of the network lies in the span of roughly 300 MHz to 3,000 MHz (3 GHz). Frequencies below this range are sometimes subject to long-range, highly variable interference due to atmospheric and ionospheric effects; require comparatively large antennas; and suffer to a greater extent from natural and man-made sources of radio noise and interference as explained earlier. Frequencies above this range tend to be blocked or attenuated more by intervening terrain and natural and man-made clutter such as foliage and buildings. At extremely high frequencies, above 10 GHz or so, additional attenuation due to rain and snow becomes an increasingly limiting factor in terms of readily achievable transmission distances.10 Although gross generalizations about the suitability of various ranges of the spectrum for different uses are often suspect, it is widely agreed that, in terms of providing fixed and mobile access, the range from roughly 300 MHz to 3 GHz is the most desirable as indicated immediately above. Indeed, it is so desirable that it is sometimes referred to as "beachfront property."

Of course this entire range of spectrum from 300 MHz to 3 GHz is not available for providing wireless broadband access. Large blocks of this spectrum range are already allocated to other important services and applications such as over-the-air radio and television broadcasting, the Global Positioning System (GPS), private and public safety mobile radio systems, weather radars, air navigation systems used to provide communications between pilots and air-traffic controllers on the ground, Digital Audio Radio Satellite (DARS) broadcasting, and Mobile Satellite Services (MSS), to name just a few. The net result is that only a fraction of the spectrum in the desirable 300 MHz to 3 GHz range is currently available for the provision of fixed and mobile broadband wireless access. In fact, in the recently released National Broadband Plan, the FCC indicates that only 547 MHz of spectrum is available for the provision of mobile broadband services in the desirable frequency range below 3.7 GHz.11

The Obama administration has proposed, over the next 10 years, to nearly double the amount of spectrum available for commercial use—including for the provision of broadband wireless access—by reallocating some 500 MHz of spectrum currently held by the federal government and private companies. This proposal is consistent with recommendations contained in the National Broadband Plan and, presumably, the spectrum involved lies within the desirable range of 300 MHz to 3 GHz or so. If this proposal comes to fruition, it means that by the end of the decade there will be approximately 1 GHz of bandwidth available for the commercial provision of broadband wireless access within this desirable range. It is interesting and instructive to note that this net amount of spectrum is roughly comparable to the amount of analog bandwidth that is currently available on modern cable television systems that use the HFC architecture.12

Even though the net amount of analog bandwidth is roughly comparable (or will be if the Obama administration's proposal is implemented), there are important differences between the two methods of providing broadband access—coaxial cable and over-the-air wireless. Perhaps the foremost (and obvious) difference is the fact that a wireless network is able to offer mobility to its customers. However, there are significant other differences in terms of the quality and "usability" of the respective analog bandwidth and, as explained below, mobility increases the challenges associated with using the over-the-air spectrum.

Among the other differences is the fundamental one that the over-the-air spectrum available for the provision of wireless broadband access is scattered throughout different frequency ranges or bands and among different providers. That is, in contrast to the coaxial cable case, the spectrum available for broadband wireless access is neither contiguous (in the frequency dimension) nor controlled by a single entity. This, in turn, produces two related technical issues:

• First, as explained before, practical transmitters operating in one channel/band inevitably produce spillover interference into adjacent channels/bands, and practical receivers are unable to totally reject radio signals that are emitted in adjacent channels/bands, even if the energy from those signals is contained entirely within those channels or bands. The lack of perfect transmitters and receivers inevitably results in the loss of some capacity between bands due to the need to, for example, provide a buffer or guard band between the two bands to supply the necessary isolation and thereby reduce the associated interference to an acceptable level. Excessively fragmenting the spectrum among different bands reduces the overall technical efficiency of spectrum utilization for this reason.

• Second, the problem is further compounded in the over-the-air case by what is known as the near-far problem. The problem arises when a user is attempting to receive a weak signal from a distant transmitter while in the immediate proximity of a transmitter operating on an adjacent (in frequency) channel. In such a situation, the signal from the nearby station may be so strong that it overwhelms the ability of the user’s receiver to reject that signal and successfully receive the weaker, desired signal from the distant transmitter.

These types of adjacent channel/band interference problems are considerably reduced in the case of cable television systems because, being under the control of a single operator, adjacent channel signals can be adjusted by the cable operator to arrive at the receiver at close to the same signal strength; there is no near-far interference problem with which to deal.

In addition to the more benign adjacent channel/band interference environment associated with the spectrum on a cable television system, the shielding associated with the coaxial cable also protects desired signals internal to the cable from external sources of natural and man-made RF noise as well as interference from other users of the same frequency ranges that are external to the cable system. Such noise and interference can be a major factor in limiting the performance (i.e., the digital transmission capacity) of a given amount of spectrum.

Moreover, RF signals transmitted over-the-air—wireless signals—suffer from other impairments, the most fundamental of which is the steady weakening of the signal as the receiver is physically moved further away from the transmitter. Under ideal conditions (i.e., situations in which there is a totally clear, unobstructed path between the transmitter and the receiver), the strength of the transmitted signal decreases following what is known as the "inverse square rule." This rule, based upon the physics involved, predicts that doubling the distance decreases the received signal power by a factor of four. Under real-world conditions (i.e., in the presence of physical obstacles such as hills and buildings and clutter such as trees), the strength of the transmitted signal typically decreases with distance at a much faster rate. For example, in an urban area, doubling the distance may reduce the received signal power by a factor of 16 rather than four. The strength of the signals on a coaxial cable network also decreases with distance but, in a cable television network, the signals are boosted by broadband amplifiers at regular intervals so that the signal levels remain comparatively strong from the transmitter to the receiver.
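In symbols, this distance dependence is commonly written with a path-loss exponent n; this general form is a standard propagation model rather than something specific to the essay:

    P_r \propto \frac{P_t}{d^{\,n}}, \qquad n = 2 \ \text{(free space, the inverse square rule)}, \qquad n \approx 4 \ \text{(dense urban clutter)}

Doubling the distance d therefore reduces the received power P_r by a factor of 2^n: four in free space and roughly 16 in the urban example above.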

This rapid falloff of signal strength with distance in the over-the-air environment has important network architecture implications. Recall once again that, according to Shannon's Law, the maximum data rate (digital bandwidth) that a channel will support depends upon the strength of the desired signal relative to the received noise/interference and the (analog) bandwidth of the channel. This means that, as the over-the-air signal rapidly weakens with distance, the maximum digital bandwidth that is achievable over the path decreases as well. If the transmitter is at a fixed location where primary power from the fixed electrical grid is readily available, the rapid falloff of signal strength can be compensated for, up to a point at least, by an increase in transmitter power.13 In mobile applications with portable handsets, however, the transmitter power is typically constrained by battery size and life and other considerations such as limits on human exposure to nearby RF signals. Thus, at the edge of a coverage area, it is not the amount of bandwidth that limits the maximum digital transmission rate, but rather the strength of the transmitted signal from the handset as received at the base station receiver. This suggests that, in order to achieve broadband data rates while efficiently using the available bandwidth with portable handsets such as smartphones, the coverage areas must be kept small.

In addition to coping with adjacent channel/band interference problems, sources of external noise and interference, and rapid attenuation of signals with distance, over-the-air wireless systems must also contend with other RF transmission impairments found in an uncontrolled environment. These impairments include what is known as multipath (and its attendant consequences). Simply stated, multipath is a propagation phenomenon that results in RF signals arriving at a receiver by two or more paths.14 One common cause of multipath is when the transmitted signal travels to the receiving antenna directly over what is known as a "line of sight" path and also via a reflection from a terrestrial object such as a mountain or building. Since the latter path is longer than the direct path, there is an additional delay in the amount of time it takes the reflected signal to arrive at the receiver. Depending upon the time spread between the direct and reflected signals, the copies of the transmitted signal arriving at the receiver may combine in a constructive or destructive fashion. As the two terms imply, when the combination is constructive, the total received signal gets stronger, and when the combination is destructive the total received signal gets weaker. If the location of the transmitter and/or the receiver is changed, the geometry of the paths and, hence, the relative delay changes as well. This means that the combined signal at the receiver will vary over the immediate geographic coverage area. This variation in signal strength is known as fading and is familiar to users of wireless handsets when it manifests itself in the form of dropped calls.15

It should be noted that all of these problems tend to be exacerbated in a wireless mobile environment when, as is often the case, the end-user terminal or handset is in motion. For example, a user may go around the corner of a building and, in the process, move from a location where the signal from a base station is very weak (because the received signal is in the "shadow" of the building) to a location where the signal is very strong (because it is within the line of sight of the base station). Similarly, the co-channel and adjacent channel interference environment may change rapidly because of such movement and because of the changing multipath conditions as explained above. In short, mobility substantially increases the challenges associated with wireless communications and, while the effects of the associated impairments can be reduced (or, in the case of multipath, even exploited), there are typically penalties in terms of the added processing power required in the handset (with associated impact on battery life), added latency associated with the time it takes for the additional processing, and in advanced solutions requiring multiple antennas, added physical size.

So, to briefly summarize, over-the-air wireless systems face a significantly harsher signal environment compared to a "closed" coaxial cable-based system carrying RF signals and, in turn, over-the-air wireless systems serving mobile as opposed to fixed terminals face a still harsher signal environment. The effects of this progression can be seen in the relative spectrum efficiencies (as measured in bps/Hz) achieved by cable television systems, over-the-air television broadcasting systems, and mobile wireless (cellular) systems reported earlier in this essay.

As noted, if the Obama administration's proposal to free up 500 MHz of additional spectrum for wireless broadband uses is adopted, there will be approximately 1,000 MHz of analog wireless spectrum available for broadband wireless access in the desirable frequency range below 3 GHz or so. While, at first glance at least, this may seem like an abundance of spectrum, it may not be when viewed against the backdrop of the recent exponential increases in broadband wireless demand. To put this in perspective, assume for a moment that each wireless broadband user is consuming 1 Mbps of digital bandwidth and that the spectrum efficiency achieved in the mobile environment is 1 bps/Hz. Under these not unreasonable conditions, 1,000 MHz of bandwidth would only support 1,000 simultaneous users if, because of interference considerations, the spectrum could only be used once over a large geographic area. Clearly, this would be a woefully inadequate amount of capacity for a major metropolitan area like New York or Los Angeles, where there may be literally millions of subscribers. Indeed, in such heavily populated areas, the capacity would be inadequate even if the users were communicating using narrowband digital voice. While higher levels of spectrum efficiency can be achieved, the increases are constrained by Shannon's Law (the "laws of physics") and, in the upstream direction, even more so by the previously noted practical limits on handset transmitter power imposed by battery life and human RF exposure considerations.
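The back-of-the-envelope arithmetic above can be written out as a short sketch. The figures (1,000 MHz of spectrum, 1 bps/Hz, 1 Mbps per user) are simply the illustrative assumptions stated in the text, and the reuse factor anticipates the frequency reuse discussion that follows.

    # A minimal sketch of the capacity arithmetic in the text (illustrative assumptions only).

    spectrum_hz = 1_000e6          # 1,000 MHz of spectrum
    efficiency_bps_per_hz = 1.0    # assumed mobile spectral efficiency
    per_user_bps = 1e6             # 1 Mbps consumed by each active user

    def simultaneous_users(reuse_factor=1):
        """Users supported if the spectrum is reused 'reuse_factor' times across the area."""
        total_bps = spectrum_hz * efficiency_bps_per_hz * reuse_factor
        return int(total_bps // per_user_bps)

    print(simultaneous_users())                   # 1,000 users if the spectrum is used only once
    print(simultaneous_users(reuse_factor=100))   # 100,000 users with 100-fold frequency reuse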

Unlike traditional high-power, high-antenna-height wireless systems, which only allowed spectrum to be used once in a large geographic area, the more modern cellular mobile radio systems that emerged in the 1980s employ frequency reuse in a "cellular" configuration or architecture. Cellular mobile radio systems get their name from the notion of dividing the large geographic area into a series of small, geographically contiguous coverage areas called cells. Relatively low-power base station transmitters and receivers with relatively low-height antennas (towers) are placed in each cell and connected by wireline or microwave facilities to the balance of the network and, through interconnection arrangements, to other fixed and mobile networks as well. The relatively low-power/low-antenna heights of the base stations match the radio coverage to the area of these cells and the base stations communicate with the mobile units (e.g., cellular handsets or "smartphones") in their respective cells. Because of the low-power/low-antenna heights involved, co-channel interference is minimized and, hence, the same channel can be used simultaneously (i.e., reused) for different conversations or data sessions in different cells within the larger geographic area.

In contrast to the traditional systems of the 1970s and earlier, frequency reuse and the cellular architecture enable the scarce spectrum resource to be utilized in a much more intense manner. That is, rather than being used just once in a large geographic area, the assigned spectrum is reused many times, thus multiplying the capacity in that area. Another important feature of the cellular architecture is that, as demand develops, a larger number of still smaller cells (i.e., cells covering even smaller geographic areas) can be employed to increase the total capacity of the system. Or, stated another way, increased frequency reuse (and hence, increased efficiency) through "cell division" can be employed to increase capacity over time.

Such increases in capacity through increased frequency reuse are not insignificant; indeed, the opposite is true because the increases in capacity are exponential. For example, early cell sites (and those cell sites in more rural areas today where spectrum capacity is not yet a consideration) might have had a radius of ten miles. Decreasing the radius to one mile increases the capacity 100-fold, and decreasing it to one-tenth of a mile increases it 10,000-fold.16 As explained before, reducing the distance between the handset and the base station also allows much higher signal strengths at the receiver, thereby increasing the digital bandwidth available at the edge of the coverage area and/or reducing the power consumed by the handset. Thus, while further increases in broadband wireless capacity can be partially met by (a) doubling the amount of available bandwidth allocated to such services, as recently proposed, and (b) increasing the efficiency with which the available bandwidth is used by more closely approaching the Shannon limit as a practical matter, continued dramatic increases in broadband wireless demand will have to be met through the exponential increases in capacity associated with smaller cell sizes. Evidence suggests that the marketplace is already responding in exactly this way. Broadband wireless providers are not only seeking more spectrum and, with their vendors, further improving spectrum efficiency within the Shannon limits, but also turning to microcells, picocells, and femtocells, as well as to "smart antennas" and outdoor Distributed Antenna Systems—all of which permit much more intense frequency reuse.
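A minimal sketch of the cell-division arithmetic, assuming that reuse scales with the number of cells needed to cover a fixed area (that is, with the inverse square of the cell radius); the radii are the illustrative values used above.

    # A minimal sketch of how shrinking the cell radius multiplies frequency reuse.
    # Reuse scales with the number of cells covering a fixed area, i.e., with 1/r^2.

    def reuse_gain(old_radius_miles, new_radius_miles):
        """Factor by which capacity grows when cell radius shrinks (same area, same spectrum)."""
        return (old_radius_miles / new_radius_miles) ** 2

    print(reuse_gain(10, 1))    # 100.0   -> 100-fold capacity increase
    print(reuse_gain(10, 0.1))  # 10000.0 -> 10,000-fold capacity increase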


It is interesting to note that enhancing wireless capacity through increased frequency reuse is analogous to how the cable television industry increases its capacity by node splitting or increasing the geographic density of the access nodes as explained earlier. In the wireless case and in the coaxial cable case, increasing the number of access nodes allows, respectively, the available over-the-air spectrum and the spectrum within the shielded coaxial cable to be used more intensely. In both the wireless and cable cases, shortening the access portion of the network—the distance between end user and the access node—implies the need for a denser fiber network to carry the traffic between the access nodes and the balance of the network.17 Additionally, shortening the access portion of the network in the case of DSL would facilitate greater bandwidths for the reasons given earlier.

Fiber Optic Cable

In the descriptions of the three access technologies addressed so far, it was implicitly assumed that a fiber optic cable would be used to carry the broadband digital signals from the neighborhood node to the balance of the network.18 In each case, the technical challenges associated with the three technologies in providing the connection between the neighborhood node and the individual subscriber were addressed. Although there are economic challenges in doing so, fiber optic cables can be used in this portion of the network in what is typically referred to as the Fiber-to-the-Home (FTTH) configuration or architecture.

Optical fibers consist of a very fine cylinder of glass, called the "core," surrounded by a concentric layer of glass, called the "cladding," and, as in the case of twisted-pair copper cable, multiple individual fibers are often grouped together to form a fiber optic cable. In terms of analog bandwidth, the individual fibers making up the cable are, depending upon the type of optic fiber deployed, capable of handling optical signals with a total analog bandwidth in the terahertz (THz)19 or even tens of THz range. Although these rates are primarily associated with long-haul or core networks, the basic technology permits extremely high transmission rates in the local access network as well.

In addition to their enormous advantage in terms of analog bandwidth, optical fibers also have the benefit that, since they use optical signals rather than electrical signals, they do not suffer from the natural and man-made forms of electrical noise or interference described earlier. This immunity to electrical noise and interference results from the fact that the individual optical fibers are made of glass and do not conduct electricity. In terms of digital bandwidth, the combination of substantial analog bandwidth coupled with the immunity to electrical noise and interference means that, even in residential FTTH applications, an individual optical fiber is capable of transmission rates on the order of several Gbps. The actual transmission rate delivered to the subscriber depends upon, among other things, the details of the architecture. One popular architecture, called a Passive Optical Network or "PON," divides the available transmission capacity among multiple subscribers, while another architecture, known as Point-to-Point (PtP), is capable of delivering the entire transmission capacity of the fiber to an individual subscriber. The former provides digital bandwidths in the tens of Mbps range while the latter provides digital bandwidths in the hundreds of Mbps range.
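The sharing tradeoff can be illustrated with a short sketch. The 2.5 Gbps line rate and 1:32 split are assumed, illustrative values rather than figures taken from this essay.

    # A minimal sketch of why a shared PON delivers tens of Mbps per home while a
    # point-to-point fiber can deliver the full line rate to a single subscriber.

    def per_subscriber_mbps(line_rate_gbps, split_ratio):
        """Average downstream capacity per home when one fiber serves 'split_ratio' homes."""
        return line_rate_gbps * 1000 / split_ratio

    print(per_subscriber_mbps(2.5, 32))  # ~78 Mbps: an assumed 2.5 Gbps PON shared 1:32
    print(per_subscriber_mbps(1.0, 1))   # 1000 Mbps: point-to-point, capacity unshared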

Summary and Conclusions

The recent release of the National Broadband Plan by the FCC has focused attention on the importance of having sufficient bandwidth available anytime and anyplace to support a growing array of broadband services that are critical to the nation's economic and social well-being and to public safety and homeland security as well. This essay has described the relationship between the maximum digital transmission capacity of a channel and the two fundamental factors that determine that capacity. These fundamental factors are the analog bandwidth of the channel and the strength of the received signal relative to the noise. Building upon this understanding, this essay explored technical opportunities and challenges associated with increasing the capacity of four transmission technologies used in the critical access portion of the network: namely, twisted-pair copper cable, coaxial cable, wireless links, and fiber optic cable.

The opportunities and challenges associated with increasing the capacity of each of the four technologies described above extend beyond the pure technical characteristics of the transmission media, associated electronics, and overall architecture to include not only cost but also operational factors such as the ease with which the provider can gain access to the necessary public and private rights-of-way or access to added base station antenna sites in the case of wireless systems. It is beyond the scope of this essay to address these latter factors in any detail. Rather, in the balance of this section, the opportunities and challenges associated with the technical aspects of the four digital transmission technologies will be summarized and characterized.

Of the four technologies discussed in this essay, ordinary multipair copper cable used in the traditional telephone network is highly constrained in terms of analog bandwidth at longer distances and, because the wire pairs are unshielded, is susceptible to both internally and externally generated electrical noise and interference. As Shannon's Law suggests, the resulting digital transmission bandwidth using DSL technology is limited to a few Mbps. While higher data rates are possible at shorter distances and by utilizing more than one wire pair to the customer, further increases in capacity are apt to be incremental, and dramatic increases problematic, because of the limited bandwidth and difficult noise and interference environment. An advantage of the DSL architecture, though, is that the entire capacity on a wire pair within the cable is available to a single customer—that is, it is unshared.

The second technology, coaxial cable, in contrast, has a much larger analog bandwidth—on the order of 1 GHz—and its shielded construction reduces its susceptibility to externally generated electrical noise and interference. While the inherent bandwidth of the coaxial cable employed is large, in a traditional cable television network, amplifiers are required at regular intervals to maintain the transmitted signals at acceptable levels. These amplifiers generate electrical noise and forms of self-interference but, unlike the situation with wireless over-the-air transmission, the noise and interference environment internal to the cable is much more predictable and, hence, easier to mitigate. Again, as Shannon's Law would suggest, with an analog bandwidth on the order of 1 GHz and a comparatively benign and controlled noise and interference environment, the resulting digital transmission bandwidth of a cable television network, optimally configured, is several Gbps. Artifacts stemming from the origins of the industry in delivering over-the-air broadcast television signals—e.g., the placement of the signals or channels within the available analog bandwidth and the need to continue to carry such signals in the analog format—reduce the current digital capacity. However, in the longer term it should be possible to exploit the full digital transmission capacity of the cable itself. A disadvantage of the cable television architecture is that numerous stretches of the coaxial cable—and hence, the digital transmission capacity—are shared among multiple customers. However, as explained in more detail above, the number of customers sharing the available capacity can be decreased and, accordingly, the digital transmission capacity per customer increased through node-splitting and shortening the distance spanned by any single stretch of cable.
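Because this summary leans repeatedly on Shannon's Law, a minimal sketch of the formula C = B log2(1 + S/N) may be helpful. The bandwidth and signal-to-noise figures below are illustrative assumptions, not measurements, and Shannon's Law gives only an upper bound that deployed systems approach but do not reach.

    import math

    # A minimal sketch of Shannon's Law, C = B * log2(1 + S/N), with purely illustrative
    # bandwidth and signal-to-noise figures for the media discussed in this section.

    def shannon_capacity_bps(bandwidth_hz, snr_db):
        snr_linear = 10 ** (snr_db / 10)
        return bandwidth_hz * math.log2(1 + snr_linear)

    examples = {
        "long DSL loop (~1 MHz, noisy)":      (1e6,  20),
        "coaxial cable (~1 GHz, controlled)": (1e9,  30),
        "mobile wireless (20 MHz, harsh)":    (20e6, 10),
    }
    for name, (bw, snr) in examples.items():
        # Shannon gives an upper bound; real systems achieve only a fraction of it.
        print(f"{name}: {shannon_capacity_bps(bw, snr)/1e6:,.0f} Mbps")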

The analog bandwidth available to the third technology, wireless links, depends upon the amount of spectrum allocated by the government for the provision of fixed and mobile wireless broadband access. In the United States, only 547 MHz of spectrum is currently available for such purposes.


Recently, though, the Obama administration proposed the reallocation of 500 MHz of additional spectrum, which, if the reallocation is fully carried out, would bring the total analog bandwidth available to approximately 1 GHz. While wireless systems have the obvious advantage of offering mobility, their use of the over-the-air spectrum presents a myriad of challenges as described earlier. These challenges include the scattering of the available bandwidth across different bands within the desirable range and coping with adjacent channel/band interference problems, sources of external noise and interference, rapid attenuation of signals with distance, and over-the-air propagation impairments such as multipath. In short, wireless systems face a much harsher signal environment compared to "closed" coaxial cable-based and fiber optic cable-based systems. Over-the-air wireless systems serving mobile as opposed to fixed terminals face a still harsher signal environment because of the additional variability of the signal propagation conditions, and the net result is still lower spectrum efficiencies (as measured in bps/Hz). Because of the harsher signal environment, the rapid falloff of signal strength with distance, and the constraints on up-link transmitter power associated with mobile handsets, the overall digital bandwidth achievable in a mobile environment is significantly reduced compared to closed coaxial cable-based and fiber optic cable-based systems even when the available analog bandwidth is roughly the same.

The fourth technology, fiber optic cable, is generally regarded as the "gold standard" in terms of increasing broadband digital access capacity because of its enormous analog bandwidth and its immunity to natural and man-made forms of electrical noise and interference. The actual digital transmission rate delivered to or from a customer depends upon the details of the architecture employed, but the ultimate capacity is limited more by economic factors than by the inherent technical constraints on the underlying technology imposed by Shannon's Law. In this regard, fiber optic cable is often referred to as being "future-proof" because the maximum digital transmission rates are governed more by the electronic equipment attached to the cable than by the actual fiber itself. It is future-proof in the sense that the capacity can be increased by upgrading the associated electronic equipment rather than by taking the more expensive step of replacing the fiber itself.

With this background on the four technologies as summarized and characterized above, it seems apparent, from a technical standpoint at least, that (a) the exploding demand for wireless broadband capacity and (b) the constraints on the availability of the over-the-air wireless spectrum and the challenges associated with achieving dramatic improvements in spectrum efficiencies in the mobile wireless environment suggest a compelling need to meet a potentially large portion of that demand through even more intense frequency reuse in local geographic markets. More intense frequency reuse and greater edge-of-coverage area capacity can be accomplished with smaller cells and, in part, with more directive antennas at the base station or access node. If seamless coverage is to be maintained, smaller cells translate directly into a requirement for a much higher density of access nodes. Moreover, both DSL and cable modem technology benefit from the shorter distances that are associated with a denser deployment of their access nodes. This suggests the growing need to extend fiber optic cable capacity closer to the customer—either fixed or mobile—to minimize the distance between the customer and the access nodes.

This, in turn, further suggests that policymakers need to focus not only on the oft-stated long-term goal of encouraging FTTH but also on the more immediate need to bring fiber significantly closer to the customer to support a vastly increased number of access nodes. This is particularly important in the wireless case, where the capacity added through frequency reuse is critical to facilitating wireless competition with the two major suppliers of fixed broadband capacity—the incumbent telephone and cable television companies.


Endnotes

1. Certain types of man-made noise and interference may exhibit some regularity or predictability that permits them to be dealt with more effectively at the receiver. Strictly speaking, Shannon's Law assumes that the undesired signal is random rather than having some degree of predictability.

2. As discussed in more detail later, the lower transmitted power often associated with a wireless handset reduces the maximum transmission rate achievable when the device is at the edge of its coverage area and the signal received at the base station is the weakest.

3. Note that 1 kHz = 1,000 Hz, 1 MHz = 1,000,000 Hz, and 1 GHz = 1,000,000,000 Hz.

4. To put this transmission rate into perspective, the National Broadband Plan suggests a goal of having at least 100 million U.S. homes with affordable access to actual download speeds of at least 100 Mbps and actual upload speeds of at least 50 Mbps by the end of the decade.

5. A third important measure of performance in the Internet is packet loss. Just as vehicles sometimes get lost on their way to their destination, packets are sometimes lost as well. For example, a packet may be discarded at an intermediary node if it has already been delayed excessively or the amount of temporary storage available at the node is insufficient.

6. Another technique that can be used to increase the capacity of a link is digital signal compression. Compression is a digital signal processing technique that can reduce the transmission rate required to convey a signal in a given amount of time without unacceptable loss of quality. Digital compression techniques work, for example, by removing redundant information that may be present in the signal to be transmitted.

7. In a television receiver, such noise manifests itself as "speckles" or "snow" in the picture.

8. See Ronald H. Coase, The Federal Communications Commission, 2 J. L. & ECON. 1, 27 (1959) ("It is sometimes implied that the aim of regulation in the radio industry should be to minimize interference. But this would be wrong. The aim should be to maximize output.").

9. Although this discussion deals with possible congestion/latency in the downstream direction, similar considerations apply in the upstream direction as well.

10. The higher-frequency spectrum is useful for many purposes, especially when there is a line-of-sight path between the two ends of a communications link. It is generally less suitable for longer-range communications, especially when the link is close to the ground and mobile devices are involved.

11. The FCC's National Broadband Plan recommended that 500 MHz of spectrum be made available for broadband within 10 years, of which 300 MHz should be made available for mobile use within five years.

12. As explained before, it is important to realize that, while both cable television systems and broadband wireless access systems use RF spectrum, interference between the two uses of the spectrum is largely prevented by the metallic shielding associated with the coaxial cable transmission media used in cable television systems.

13. Directive antennas that concentrate the available transmitter power in the direction of the receiver can also be used to increase the received signal strength.

14. This definition of multipath is based upon Federal Standard 1037C: Glossary of Telecommunications Terms (available at http://www.its.bldrdoc.gov/fs-1037/).

15. The variation of signal strength or fading produces the familiar phenomenon of cell phone users moving around trying to find a good spot to make or maintain a cellular call.

16. This exponential increase is due to the fact that the area of a circle increases with the square of the radius. Thus, for example, decreasing the average cell radius by one-half decreases the coverage area of each cell to one-fourth and increases the amount of frequency reuse obtained by a factor of four. This increase may come at the expense of additional infrastructure investment, of course.

17. A recent Aspen Institute report reached a similar conclusion. See Rethinking Spectrum Policy: A Fiber Intensive Wireless Architecture, Mark MacCarthy, Rapporteur, The Aspen Institute, Washington, DC (2010) ("A fiber intensive wireless network architecture should be considered, in parallel with the allocation of additional spectrum, as complementary long-term solutions to the problem of exploding demand for wireless services.").

18. In some situations, especially in more remote areas, broadband point-to-point microwave systems may continue to play an important role in carrying traffic from the access node to the balance of the network.

19. A terahertz is equal to 1,000 GHz or 1,000,000 MHz.


The Challenge of New Patterns in Internet Usage

By Christopher S. Yoo,1 University of Pennsylvania

Introduction

The Internet unquestionably represents one of the most important technological developments in recent history. It has revolutionized the way people communicate with one another and obtain information and has created an unimaginable variety of commercial and leisure activities. Many policy advocates believe that the Internet's past success depended in no small part on its architecture and have argued that its continued success depends on preserving that architecture in the future.2

Interestingly, many members of the engineering community see the Internet in starkly different terms. For them, the Internet began as a military network, which caused it to reflect tradeoffs that would have been made quite differently had the Internet been designed as a commercial network from the outset.3 Moreover, engineers often observe that the current network is ill-suited to handle the demands that end users are placing on it.4 Indeed, engineering researchers often describe the network as ossified and impervious to significant architectural change.5 As a result, the U.S. government has launched a series of initiatives to support research into alternative network architectures.6 The European Commission has followed a similar course,7 and university-based researchers in both the U.S. and Europe are pursuing a variety of "clean slate" projects studying how the Internet might be different if it were designed from scratch today.8

This essay explores some emerging trends that are transforming the way end users are using the Internet and examines their implications both for network architecture and public policy. Identifying future trends is inherently speculative, and any projections will doubtless turn out, in retrospect, to be mistaken in a number of important respects. Still, I hope that these ruminations and projections will yield some insights into the range of possible evolutionary paths that the future Internet may take.

Internet Protocol Video

The development that has generated the most attention from policymakers and the technical community is the use of Internet-based technologies to distribute video programming. "Over-the-top" services (such as YouTube and Hulu) rely on the public Internet to distribute video. Other services, such as AT&T's U-verse, use the protocols developed for the Internet to distribute video through proprietary networks. Verizon's fiber-based FiOS service and many cable television providers already rely on these protocols to provide video on demand and are making preparations to begin using Internet-based technologies to distribute their regular video channels as well. Because these services are often carried in whole or in part by private networks instead of the public Internet, they are called "Internet Protocol (IP) Video" or IPTV. Industry observers have long predicted that video will represent an increasing proportion of total network traffic.

The growing use of IP-based protocols to distribute video has raised a number of technical and policy challenges. Not only will the growth of IPTV require more bandwidth, it may also require more basic changes in the architecture and regulatory regimes governing the network.


Bandwidth and Quality of Service

Industry observers have long disputed how large the video-induced spike in network demand will actually be. A recent industry report estimates that Internet video now represents over one-third of all consumer Internet traffic and will grow to more than 90 percent of all consumer traffic by 2014.9 Experts disagree about what the future holds. Some industry observers have long predicted the coming of a video-induced "exaflood" that would require a sharp increase in capital spending.10 The Minnesota Internet Traffic Studies (also known as MINTS and headed by Andrew Odlyzko) disagrees, pointing to the lack of any sign of such an upsurge in traffic.11 Other observers challenge MINTS's conclusions, arguing that in focusing solely on traffic patterns at public peering points, MINTS fails to take into account the sizable proportion of the overall traffic that now bypasses the public backbone.12 Moreover, even if the shift to IP-based video distribution has not yet by itself created a spike in the demand for bandwidth, the wide-scale deployment of high definition (and the looming emergence of ultra-high definition), 3D, and multiscreen technologies may cause the rate of traffic growth to increase in the future.

Aside from increased bandwidth, video requires network services that are qualitatively different in many ways from those required by the applications that formed the bulk of first-generation Internet usage. On the one hand, video is more tolerant of packet loss than web browsing and e-mail. On the other hand, unlike the performance of e-mail and web browsing, which depends solely on when the last packet is delivered, video quality depends on the timing with which every intermediate packet is delivered.

Specifically, video is more sensitive to jitter, which is variation in the spacing between intermediate packets in the same stream and which typically arises when a stream of packets traverses routers that are congested. Jitter can cause video playback to freeze temporarily, which degrades the quality of the viewers' experience.

Christopher S. Yoo is Professor of Law, Communication, and Computer and Information Science, and Director of the Center for Technology, Innovation, and Competition at the University of Pennsylvania. His research focuses on exploring the insights that the principles of network engineering and the economics of imperfect competition can provide into the regulation of the Internet and other forms of electronic communications. He has been a leading voice in the "network neutrality" debate that has dominated Internet policy over the past several years. He is also pursuing research on copyright theory as well as the history of presidential power. He is the author (with Daniel F. Spulber) of Networks in Telecommunications: Economics and Law (Cambridge, 2009) and (with Steven G. Calabresi) of The Unitary Executive: Presidential Power from Washington to Bush (Yale, 2008). Yoo testifies frequently before Congress, the Federal Communications Commission, and the Federal Trade Commission. He received his J.D. from Northwestern University, his M.B.A. from UCLA, and his A.B. from Harvard University.

The usual solution to jitter is to delay playback of the video until the receiver can buffer enough packets to ensure that playback proceeds smoothly. This solution has the drawback of exacerbating another dimension of quality of service that is relevant for video, which is delay or latency, defined as the amount of time that it takes for playback to commence after it has been requested. Interestingly, viewers' tolerance of latency varies with the type of content being transmitted. While viewers of static video typically do not mind waiting five to ten seconds for playback to begin, such delays are not acceptable for interactive video applications, such as video conferencing.13 Some content providers reduce latency by using data centers or content delivery networks to position their content in multiple locations, thereby shortening the distance between the content and end users. Storing content in multiple locations only works for static content that does not change. It cannot work for interactive content, such as videoconferencing or online gaming, which changes dynamically.14
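A minimal sketch of the buffering tradeoff just described: a larger startup buffer absorbs more jitter but delays the start of playback. The frame rate and the jitter distribution are assumed, illustrative values.

    import random

    # A minimal sketch of the playout-buffer tradeoff: startup delay versus frozen frames.

    random.seed(1)
    FRAME_INTERVAL_MS = 33                     # roughly 30 frames per second
    arrival_jitter_ms = [random.uniform(0, 120) for _ in range(300)]  # assumed network jitter

    def late_frames(buffer_ms):
        """Count frames that miss their playback deadline for a given startup buffer."""
        missed = 0
        for i, jitter in enumerate(arrival_jitter_ms):
            arrival = i * FRAME_INTERVAL_MS + jitter       # when the frame reaches the receiver
            deadline = buffer_ms + i * FRAME_INTERVAL_MS   # when playback needs it
            if arrival > deadline:
                missed += 1
        return missed

    for buffer_ms in (0, 50, 150):
        print(f"startup delay {buffer_ms:3d} ms -> {late_frames(buffer_ms):3d} late frames out of 300")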

For interactive applications, the engineering community has focused on two other means for providing higher levels of quality of service. One solution is for network owners to overprovision bandwidth and switching capacity. When combined with distributed architectures for content delivery (such as caching and content delivery networks), this surplus capacity can give networks the headroom they need to handle any transient bursts in traffic without any congestion-related delays.15 Overprovisioning is subject to a number of limitations, however. Wireless networks cannot simply add capacity to meet demand. Moreover, even networks that can increase bandwidth cannot do so instantaneously. Forecasting errors are inevitable, and in those instances where a network provider has failed to anticipate a key demographic shift or the emergence of a key application, device, or other complementary technology, it may sometimes find itself unable to expand capacity quickly enough to meet this increase in demand.16 Overprovisioning also only increases the probability that particular traffic will pass through the network without delay. It does not guarantee the quality of service that any particular traffic will receive.17 Finally, overprovisioning inherently requires networks to guarantee quality of service through capital expenditures (CapEx) rather than through operating expenditures (OpEx). As the difficulty in raising capital in the current economic downturn eloquently demonstrates, the relative costs of CapEx and OpEx solutions typically vary across time. Simple economics thus militate against locking network providers into one or the other option.18

The other alternative for providing higher-quality video service is to engage in increasingly sophisticated forms of network management that either reduce congestion or provide some means of delivering higher levels of quality of service. Over the past two decades, the engineering community has developed a wide range of potential solutions, including Integrated Services (IntServ),19 Differentiated Services (DiffServ),20 MultiProtocol Label Switching (MPLS),21 and Explicit Congestion Notification (ECN).22 Other new initiatives, such as Low Extra Delay Background Transport (LEDBAT), also show promise.23 Other engineers disagree with this approach, complaining that adding quality of service to the network would require devoting processing power in routers in ways that would make the network too expensive and too slow.24
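As one concrete illustration of the quality-of-service tools listed above, the sketch below marks outgoing packets with a Differentiated Services code point. Whether routers honor the marking is entirely up to the network operator, and the destination address is a placeholder; this is an illustration of the DiffServ idea, not a recommendation of any particular approach.

    import socket

    # A minimal sketch: an application marks its packets with a DSCP value so that routers
    # implementing DiffServ can place them in a higher-priority class.

    EF_DSCP = 46                  # "Expedited Forwarding" per RFC 3246
    TOS_BYTE = EF_DSCP << 2       # DSCP occupies the top six bits of the former TOS byte

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_BYTE)   # mark outgoing packets
    sock.sendto(b"latency-sensitive payload", ("198.51.100.20", 5006))  # placeholder address
    sock.close()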

Leading engineering textbooks recognize that the engineering community is split over which solution—overprovisioning or network management—would be better.25 The fact that the engineering community has yet to reach consensus counsels against regulatory intervention mandating either approach.

Congestion Management

The advent of IPTV may also require fundamental changes to the way the network deals with congestion. The current approach to congestion management was developed in the late 1980s, shortly after the Internet underwent a series of congestion collapses. Because congestion is a network-level problem, in many ways the logical solution would have been to address it through a network-level solution, as was done in the original ARPANET, networks running asynchronous transfer mode (ATM), and many other early corporate networks. However, the router hardware of the early 1980s made implementing solutions at the network level prohibitively expensive. On the other hand, although edge-based congestion management is feasible, the hosts operating at the edge of the network typically lack the information to know when the network is congested.

Van Jacobson devised an ingenious mechanism that allows hosts operating at the edge of the network to infer when the core of the network has become congested.26 This solution takes advantage of a particular feature of the Transmission Control Protocol (TCP). TCP ensures reliability by requiring the receiving host to send an acknowledgment every time it receives a packet. If the sending host does not receive an acknowledgment within the expected timeframe, it presumes that the packet was lost and resends it. Jacobson noted that packet loss typically occurs for one of two reasons: (1) transmission errors, or (2) discard by a router where congestion caused its buffer to become full. Because wireline networks rarely dropped packets due to transmission errors, hosts operating at the edge of the network could take the failure to receive an acknowledgment within the expected time as a sign of congestion and a signal to slow down their sending rates exponentially.27
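The sketch below illustrates the edge-based inference in greatly simplified form (it is not Jacobson's actual algorithm): a missing acknowledgment is treated as a congestion signal and the sending window is cut back, while acknowledged round trips let the window grow again.

    # A minimal, simplified sketch of edge-based congestion inference: back off when an
    # acknowledgment fails to arrive; probe for more bandwidth while acknowledgments keep coming.

    def next_window(current_window, ack_received, minimum=1, maximum=64):
        """Congestion window, in packets, after one round trip."""
        if ack_received:
            return min(current_window + 1, maximum)   # additive increase while the path looks clear
        return max(current_window // 2, minimum)      # sharp decrease on inferred congestion

    window = 16
    for ack in (True, True, True, False, True, False, False, True):
        window = next_window(window, ack)
        print(f"ack={ack!s:5}  window={window}")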

This edge-based approach is now required of every computer attached to the Internet and continues to serve as the primary mechanism for managing congestion today. The problem is that TCP does not represent the only transport protocol commonly used on the network. In particular, by resending every packet that fails to receive an acknowledgment within the expected timeframe, TCP implicitly prioritizes reliability over delay. The DARPA protocol architects recognized from the Internet's earliest years that applications such as packet voice cannot tolerate such delays and would prefer to avoid them even if it meant sacrificing reliability altogether. To support these applications, the DARPA protocol architects created the User Datagram Protocol (UDP), which forgoes the use of acknowledgments altogether. UDP has now become the primary transport protocol for transmitting the data traffic associated with Voice over Internet Protocol (VoIP). Because IPTV makes the same tradeoff, UDP has become the primary protocol for IPTV as well.
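A minimal sketch of the tradeoff UDP embodies: the sender simply transmits datagrams and never waits for acknowledgments, accepting loss in exchange for lower delay. The frame size and destination address are placeholders.

    import socket

    # A minimal sketch of fire-and-forget UDP transmission, as a voice or video sender might use.

    VOICE_FRAME = b"\x00" * 160            # e.g., 20 ms of 8 kHz audio (illustrative payload)
    DESTINATION = ("198.51.100.10", 5004)  # documentation address; a placeholder endpoint

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)   # UDP: connectionless, no ACKs
    for _ in range(50):                    # one second of audio frames
        sock.sendto(VOICE_FRAME, DESTINATION)   # send and move on; lost frames are never resent
    sock.close()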

Because the mechanism for managing congestion described above depends on acknowledgments to signal when the network is congested, it does not work for protocols like UDP that do not use acknowledgments. While this was not a problem when UDP represented only a small proportion of bandwidth demand, the growing importance of VoIP and IPTV has caused UDP to become an increasingly significant component of network traffic. Consequently, engineers have sought to ensure that UDP acts in a way that is "TCP-friendly," measured in terms of whether a UDP-based application consumes more network resources than would a similar TCP-based application.28 Some of these solutions require the receiving hosts to send acknowledgments in a manner somewhat reminiscent of TCP, which threatens to force UDP to run unacceptably slowly.29 Others would require reconfiguring routers to send information about congestion to sending hosts, which had historically been rejected because of cost.30 More recently, other engineers have organized a more fundamental attack on TCP-friendliness as the benchmark for evaluating bandwidth allocation, arguing that it allocates more bandwidth to users running applications that steadily generate small amounts of traffic than to users running applications that generate traffic in short bursts, even when the total amount of bandwidth consumed by both types of applications is exactly the same. It also tolerates allowing end users to seize more of the bandwidth simply by initiating multiple TCP sessions.31
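One way to make the idea of a "TCP-friendly" rate concrete is the widely cited square-root approximation of TCP throughput, sketched below; it is offered only for illustration, since real TCP-friendly rate control (and the critiques noted above) are considerably more elaborate.

    import math

    # A minimal sketch: cap a UDP application's rate at roughly what one TCP flow would get on
    # the same path, using the square-root approximation rate ~ 1.22 * MSS / (RTT * sqrt(loss)).

    def tcp_equivalent_bps(mss_bytes=1460, rtt_s=0.1, loss_rate=0.01):
        return 1.22 * mss_bytes * 8 / (rtt_s * math.sqrt(loss_rate))

    print(f"{tcp_equivalent_bps() / 1e6:.2f} Mbps")                  # ~1.4 Mbps at 100 ms RTT, 1% loss
    print(f"{tcp_equivalent_bps(loss_rate=0.001) / 1e6:.2f} Mbps")   # higher ceiling when loss is rare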


Simply put, because video relies on UDP, the growth of video is putting pressure on the way the network manages congestion. Considerable disagreement remains over the best means for addressing this problem and also over the basis for evaluating the fairness or optimality of any particular solution. As a result, it is likely that different actors will pursue different solutions. Under these circumstances, policymakers must be careful to avoid the temptation to intervene to establish a uniform solution and should instead allow this debate to run its course.

Multicasting

TCP and UDP are unicast protocols, in that they transmit data between a single sender and a single receiver, with each destination receiving a separate stream of packets. While such an approach makes sense for person-to-person communications like e-mail or file transfer, it makes less sense for mass communications. Consider, for example, what occurs if an IPTV provider uses UDP to transmit video to one million viewers. Unicast technologies require that the provider transmit one million duplicate packets to its first hop router. The first hop router must in turn pass those packets on to downstream routers that serve multiple customers, even though many of those packets are duplicates as well.

Providers can avoid the inefficiency of distributing mass communications in this manner by using a multicast protocol. Instead of sending multiple copies of duplicate packets to the first hop router, multicasting sends a single stream of packets and depends on each downstream router to create duplicates as necessary.32

Although more efficient in terms of bandwidth usage, multicasting presents a number of challenges.33 Multicasting requires the deployment of special routers in the core of the network that are capable of processing group information and duplicating packets as necessary. It also requires group management processes to inform routers when individual hosts tune in and out of the multicast stream. Effective group management also requires security mechanisms to ensure that multicasting is not used by unauthorized senders or receivers. Multicast flows are also typically not TCP-friendly, so widespread use of multicasting may degrade unicast traffic and may even contribute to congestion collapse. Multicasting also presents routing challenges that are quite distinct from and more complicated than routing for unicast protocols. The lack of management tools has inhibited the deployment of multicasting that spans multiple domains. That said, many companies employ multicasting within their proprietary networks. Most notably for our purposes, AT&T relies on multicasting to distribute video through its U-verse network.
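A minimal sketch of the group-management step described above: a receiver "tunes in" by joining a multicast group, which prompts the operating system to send the membership report that tells the local router to begin forwarding the stream. The group address and port are illustrative placeholders, and the receive call will simply time out if no such stream is present.

    import socket
    import struct

    # A minimal sketch of a receiver joining a multicast group and waiting for the stream.

    GROUP = "239.1.1.1"   # assumed administratively scoped multicast group (placeholder)
    PORT = 5004

    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.bind(("", PORT))
    sock.settimeout(5.0)

    # Ask the kernel to join the group on the default interface; the kernel handles the
    # membership signaling that informs the local router.
    membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
    sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

    try:
        data, sender = sock.recvfrom(2048)
        print(f"received {len(data)} bytes from {sender}")
    except socket.timeout:
        print("no multicast traffic received (expected if no stream is active)")
    finally:
        sock.close()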

Multicasting is likely to play an increasingly important role as other video providers migrate their distribution systems to IP-based technologies. The need to deploy different hardware in the core of the network, group management tools, and routing algorithms creates challenges that represent a significant change to the network's architecture.

Regulatory Classifications

More widespread use of IPTV is also likely to raise questions about its proper regulatory classification. Traditional multichannel video program distribution systems, such as cable television, are regulated as "cable services."34 As such, they are required to pay franchising fees, to provide leased access and PEG access, and to provide open access to their set-top boxes, among other requirements. Internet-based services have traditionally been classified as "information services" that have largely been exempt from such regulation.

What is the proper regulatory classification for IP-based video distribution systems? New services provided by telephone companies that use Internet technologies to distribute video over their own proprietary networks, such as AT&T's U-verse and Verizon's FiOS services, are classified as cable services. Other video distribution platforms, such as YouTube and Hulu, do not own any access networks of their own. Instead, they distribute content over the public backbone and whatever last-mile connectivity individual end users have obtained. To date, these so-called "over-the-top" services have been exempt from regulation as cable services.

The increasing variety of IP video distribution platforms is starting to raise difficult definitional questions. For example, Internet-enabled gaming systems now support multiplayer gaming as well as direct interaction through video chat features. In addition, gaming systems are now important sources of over-the-top video services, such as Netflix. The convergence of gaming into the Internet ecosystem has raised the question of whether carrying over-the-top video turns these gaming platforms into cable services.

Wireless Broadband

Another emerging trend that is transforming U.S. Internet policy is the rise of wireless as a technological platform for broadband service. The most recent data released by the FCC indicates that wireless had already captured nearly 25 percent of the market for high-speed lines as of the end of 2008, as compared with just over 40 percent for cable modem and just under 30 percent for ADSL.35 The expansion of the U.S. wireless broadband market since 2008 and the emergence of wireless as the leading broadband platform in other countries both suggest that wireless broadband will become increasingly important in the years to come.

Policymakers sometimes suggest that the same principles applying to other broadband technologies should simply be extended to wireless. These suggestions overlook key technological differences between wireless and wireline technologies that policymakers must take into account.

Bandwidth Limits and Local Congestion

Wireless technologies face limitations that are quite different from those faced by wireline technologies. As noted earlier, wireless broadband is subject to bandwidth constraints that are much stricter than those confronted by wireline technologies. While wireless providers can increase capacity by relying on smaller cell sites operating at lower power, they cannot add capacity in the same manner as wireline providers.

In addition, because wireless technologies share bandwidth locally, they are more susceptible to local congestion than many fixed-line services, such as ADSL.36 These problems are exacerbated by the fact that, in wireless networks, data and voice traffic typically share bandwidth, in contrast with telephone and cable companies, which typically place data traffic in a separate channel. Thus, excess data traffic can degrade wireless providers’ core business to an extent not possible for other broadband technologies.

The Physics of Wave Propagation

Those who took physics in high school will recall that waves have some unique characteristics. They can reinforce each other in unexpected ways, as demonstrated by unusual echoes audible in some locations in a room, but not others, and by whispering corners, where the particular shape of the room allows sound to travel from one corner to the other even though a person speaks no louder than a whisper. As noise-reducing headphones marketed by Bose and other companies demonstrate, waves can also cancel each other out. Waves also vary in the extent to which they can bend around objects and pass through small openings, depending on their wavelength.


The unique features of waves can cause wireless technologies to face interference problems with which wireline technologies do not have to contend. For example, wireless signals attenuate much more rapidly with distance than do wireline signals. Moreover, in contrast to wireline technologies, there is an absolute limit to the density of wireless users that can operate in any particular area. Shannon's Law dictates that the maximum rate with which information can be transmitted given limited bandwidth is a function of the signal-to-noise ratio.37 Unlike wireline transmissions, which travel in a narrow physical channel, wireless signals propagate in all directions and are perceived as noise by other receivers. At some point, the noise becomes so significant that adding any more wireless radios becomes infeasible.
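A minimal sketch of the density limit described above, using purely illustrative numbers: each additional co-channel radio raises the noise floor seen by the others, and Shannon's Law shrinks the achievable rate accordingly.

    import math

    # A minimal sketch: rising noise floor from co-channel radios versus Shannon capacity.

    BANDWIDTH_HZ = 20e6        # assumed 20 MHz channel
    SIGNAL_MW = 1.0            # received power of the desired signal (arbitrary units)
    NOISE_FLOOR_MW = 0.01      # thermal noise
    INTERFERER_MW = 0.05       # assumed noise contributed by each additional co-channel radio

    for radios in (0, 5, 20, 100):
        noise = NOISE_FLOOR_MW + radios * INTERFERER_MW
        capacity = BANDWIDTH_HZ * math.log2(1 + SIGNAL_MW / noise)
        print(f"{radios:3d} co-channel radios -> {capacity / 1e6:6.1f} Mbps")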

Wireless transmissions also suffer from what are known as multipath problems resulting from the fact that terrain and other physical features can create reflections that can cause the same signal to arrive at the same location multiple times. Unless the receiver is able to detect that it is receiving the same signal multiple times, it will perceive multipathing as an increase in the noise floor, which in turn reduces the available bandwidth. If the signal arrives 180° out of phase, it can even cancel out the original signal completely. Although smart receivers can avoid these problems if they know the exact location of each source, they cannot do so if the receiver or the other sources are mobile devices whose locations are constantly changing.

For these reasons, many wireless providers implement protocols that give priority to time-sensitive applications during times when subscribers are in areas of low bandwidth (such as by holding back e-mail while continuing to provide voice service). Other wireless providers rate limit or ban video or peer-to-peer downloads in order to prevent a small number of users from rendering the service completely unusable.38

Congestion Management

Wireless technologies also require a significantly different approach to congestion management. As noted earlier, the Internet's primary mechanism for managing congestion is based on the inference that, because wireline networks rarely drop packets due to transmission errors, any observed packet loss is likely to be due to congestion. The problem is that this inference is invalid for wireless networks, which drop packets due to transmission error quite frequently, either because of a bad handoff as a mobile user changes cells or because of the interference problems discussed above. When a packet is dropped due to transmission error, reducing the sending rate exponentially is precisely the wrong response. Instead, the sending host should resend the dropped packet as quickly as possible without slowing down. In other words, the optimal response for wireless networks may well be the exact opposite of the optimal response for wireline networks.

These differences have caused wireless networks to manage congestion and packet loss in different ways. Some solutions place a "snoop module" at the base station that serves as the gateway used by wireless hosts to connect to the Internet and that keeps copies of all packets that are transmitted and monitors acknowledgments passing in the other direction. When the base station detects that a packet has failed to reach a wireless host, it resends the packet locally instead of having the sending host do so.39 Other solutions call for the sending host to be aware of when its transmission is carried in part by a wireless link and to distinguish between losses due to congestion and losses due to transmission errors. Still other solutions call for a split connection, in which the sending host establishes one TCP connection with an IP gateway in the middle of the network where the transmission shifts to wireless and a separate TCP connection between the IP gateway and the receiving host.40 Some of these solutions violate the semantics of IP. All of them require introducing traffic management functions into the core of the network to a greater extent than originally envisioned by the Internet's designers.
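The sketch below is a heavily simplified toy version of the "snoop module" idea (not a real implementation): the base station caches packets headed for the wireless host and retransmits locally when the wireless hop loses one, so the distant sender never misreads the loss as congestion.

    # A minimal, toy sketch of a base-station snoop cache with local retransmission.

    class SnoopCache:
        def __init__(self):
            self.unacked = {}                      # sequence number -> cached packet

        def on_packet_from_sender(self, seq, packet, send_over_wireless):
            self.unacked[seq] = packet             # remember it before forwarding
            send_over_wireless(packet)

        def on_ack_from_wireless_host(self, acked_seq, send_over_wireless):
            if acked_seq in self.unacked:
                del self.unacked[acked_seq]        # delivered; drop our copy
            else:
                # A duplicate ACK suggests the wireless hop lost the next packet.
                next_seq = acked_seq + 1
                if next_seq in self.unacked:
                    send_over_wireless(self.unacked[next_seq])   # local retransmission only

    sent = []
    cache = SnoopCache()
    cache.on_packet_from_sender(1, "pkt-1", sent.append)
    cache.on_packet_from_sender(2, "pkt-2", sent.append)
    cache.on_ack_from_wireless_host(1, sent.append)   # packet 1 delivered
    cache.on_ack_from_wireless_host(1, sent.append)   # duplicate ACK -> resend packet 2 locally
    print(sent)   # ['pkt-1', 'pkt-2', 'pkt-2']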


The Heterogeneity of Devices

Wireless technologies do not vary only in terms of transmission technologies. They also vary in terms of end-user devices. Instead of relying on a personal computer, wireless broadband subscribers connect to the network through a wide variety of smartphones. These devices are much more sensitive to power consumption than are PCs, which sometimes leads wireless network providers to disable certain functions that shorten battery life to unacceptable levels. In addition, wireless devices have much less processing capacity and employ less robust operating systems than do the laptop and personal computers typically connected to wireline services. As a result, they are more sensitive to conflicts generated by multiple applications, which can cause providers to be much more careful about which applications to permit to run on them.

Wireless devices also tend to be much more heterogeneous in terms of operating systems and input interfaces (including keyboards and touch screens). As a result, the dimensions and levels of functionality offered by particular wireless devices vary widely. It seems too early to predict with any confidence which platform or platforms will prevail. Furthermore, as noted earlier, many wireless networks address bandwidth scarcity by giving a higher priority to time-sensitive applications, which typically requires close integration between network and device. These features underscore the extent to which variations in particular devices are often an inextricable part of the functionality of the network.41

Even more fundamentally, wireless devices interconnect with the network in a manner that is quite different from devices connected to wireline networks. Devices connected to wireline networks have IP addresses that are visible to all other Internet-connected hosts. Wireless devices, in contrast, do not have IP addresses. Instead, Internet connectivity is provided by an IP gateway located in the middle of the network that connects to individual wireless devices using a telephone-based technology rather than IP. Stated in technical terms, wireless broadband devices operate at Layer 2 rather than Layer 3 of the Internet protocol stack. Wireless devices will eventually connect through the Internet protocol once fourth-generation wireless technologies such as LTE are deployed. Until that time, wireless devices necessarily will connect to the Internet on different and less open terms than devices connected through wireline networks.

Routing and Addressing

Another problem confronting wireless broadband results from the fact that the Internet developed at a time when computers did not move. As a result, the architecture could use a single address to specify both the identity of a particular machine and where that machine was connected to the network. The advent of mobility has caused the unity of identity and location to break down. A single mobile device may now connect to the network through any number of locations. Although the network could constantly update the routing table to reflect the host's current location, doing so would require propagating the updated information to every router in the network as well as to an unacceptably large number of programs and databases.

Instead, a mobile device typically designates a router on its home network that has a fixed, permanent IP address as a "home agent" that serves as the initial contact point for all IP-based communications. Anyone seeking to contact a mobile device would first send the packets to the home agent, which would then encapsulate the packets in another packet and forward them to wherever the mobile host is currently located. Managing mobile communications in this manner is surprisingly complex and requires protocols that allow home agents to learn the mobile host's current location and to encapsulate traffic bound for it, and that allow mobile hosts to register and deregister their current location with their home agents, notify the foreign network that they are currently attached to it, and decapsulate the packets as they arrive. Sending communications via the home agent also suffers from the inefficiency of what is sometimes called "triangle routing," because, instead of passing directly from the sending host to the receiving host, traffic must travel first from the sending host to the home agent and then from the home agent to the receiving host. In the extreme case, a communication between two mobile hosts located next to one another in a conference room on the West Coast might have to travel back and forth across the country if one of them has a home agent located on the East Coast. The home agent can eliminate triangle routing by passing the mobile host's current location on to the sender so that the sender may forward subsequent packets to it directly. The initial communications must still bear the inefficiency of triangle routing. Moreover, such solutions become much more difficult to implement if the mobile host is constantly on the move.42
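A minimal sketch (a toy model, not the Mobile IP protocol itself) of the home-agent indirection described above: traffic addressed to the mobile host's permanent home address is tunneled by the home agent to wherever the host last registered. The addresses are documentation placeholders.

    # A minimal, toy sketch of home-agent registration, encapsulation, and forwarding.

    class HomeAgent:
        def __init__(self, home_address):
            self.home_address = home_address
            self.care_of_address = None            # where the mobile host is currently reachable

        def register(self, care_of_address):       # mobile host updates its location
            self.care_of_address = care_of_address

        def forward(self, packet):
            if self.care_of_address is None:
                return None                        # host unreachable until it registers
            # Encapsulate: the original packet rides inside a new packet to the care-of address.
            return {"outer_dst": self.care_of_address, "inner": packet}

    agent = HomeAgent(home_address="203.0.113.7")
    agent.register(care_of_address="198.51.100.42")     # the host is visiting a foreign network
    tunneled = agent.forward({"dst": "203.0.113.7", "payload": "hello"})
    print(tunneled)   # the sender only needed the home address; the agent handled the rest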

Cloud Computing

Cloud computing represents one of the hottest topics in today's information technology (IT) community. Under the traditional paradigm, end users run applications on data stored locally on the host computer's hard disk. Under cloud computing, data resides in the network and is accessed on demand.

An Example of Route Architecture

WirelesstechnologiesarealsocausingpressureontheamountofresourcesthatthenetworkmustspendonkeepingtrackofInternetaddresses.Tier1ISPsnecessarilymustmaintaincompleteroutingtablesthatcontainroutestotheIPaddressforeveryhostconnectedtotheInternet.Thecurrentsystemreliesonrouteaggregationtokeeproutingtablesfromgrowingtoolarge.Thismechanismcanbeillustratedbyananalogytothetelephonesystem.ConsiderapartyinLosAngeleswhoisattemptingtocallthemaintelephonenumberfortheUniversityofPennsylvania,whichis(215)898-5000.Solongasallcallstothe215areacodepassthroughthesameoutboundlink,aphoneswitchinLosAngelescouldrepresentalltelephonenumbersinthatareacodewithasingleentryinitsroutingtable.Similarly,solongasalltelephonenumbersinthe898directoryareconnectedtothesamecentraloffice,switcheswithinPhiladelphianeednotmaintainseparateentriesforeachphonenumberinthatdirectory.Instead,theycanrepresentalltelephonenumberslocatedin(215)898-xxxxwithasingleentry.

The Internet employs a similar system known as Classless Inter-Domain Routing (CIDR) to aggregate routes. CIDR is even more flexible. It can aggregate routes at any number of digits rather than being limited in the manner of area codes and directories, with the digits of course being represented in binary.

This strategy depends on the address space remaining compact. In other words, this approach will fail if the 215 area code includes phone numbers that are not located in Philadelphia. If that is the case, the routing table will have to use separate entries to keep track of every single address. The problem is that true mobile addressing fragments the geographic compactness of the address space.

Another problem is somewhat more subtle. The current architecture is built on the implicit assumption that Internet addresses change on a slower time scale than do communication sessions. So long as the address architecture changes at a slower time scale, any particular Internet-based communication may take the address architecture as given. Mobility, however, increases the rate at which the address architecture changes. In addition, because addressing is handled on a decentralized basis, information about changes in the address architecture takes time to spread across the Internet. Increases in the rate with which the address space changes can cause communications sessions to fail and create the need for a new way to manage addresses.

Others have proposed radical changes in the addressing and routing architecture. One approach would replace the single address now employed in the network with two addresses: one to identify the particular machine and the other to identify its location. Whatever solution is adopted would represent a fundamental change in the network layer that unifies the entire Internet.
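The aggregation logic sketched in the box can be illustrated in a few lines of Python using the standard ipaddress module: contiguous prefixes that share a next hop collapse into a single routing-table entry, while a block that is reachable through a different path must be carried as its own more-specific entry, which longest-prefix matching then prefers. The prefixes and link names are invented for illustration.

import ipaddress

# Four contiguous /26 blocks, all reachable through the same next hop...
compact = [ipaddress.ip_network(p) for p in
           ["198.51.100.0/26", "198.51.100.64/26",
            "198.51.100.128/26", "198.51.100.192/26"]]
print(list(ipaddress.collapse_addresses(compact)))
# -> [IPv4Network('198.51.100.0/24')]: one table entry instead of four

# ...but if one block is reached via a different path (the mobility case),
# it must be carried as its own, more-specific entry.
table = {
    ipaddress.ip_network("198.51.100.0/24"): "link-A",
    ipaddress.ip_network("198.51.100.64/26"): "link-B",   # the "moved" block
}

def lookup(dst):
    """Longest-prefix match: the most specific matching entry wins."""
    dst = ipaddress.ip_address(dst)
    matches = [net for net in table if dst in net]
    return table[max(matches, key=lambda net: net.prefixlen)]

print(lookup("198.51.100.70"))   # link-B (more-specific route)
print(lookup("198.51.100.10"))   # link-A (aggregate route)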


The National Institute of Standards and Technology divides the types of services offered by cloud computing providers into three categories:43

• Software as a Service (SaaS) providers offer finished applications that end users can access through a thin client (typically a web browser). Prominent examples of SaaS include Gmail, Google Docs, and Salesforce.com. The only computing power that an end user needs to access SaaS is a netbook capable of running a web browser. The end user has limited control over the design of the application, such as minor customization and configuration, but has no control over the servers, networking, and storage infrastructure.

• Platform as a Service (PaaS) providers offer suites of programming languages and software development tools that customers can use to develop their own applications. Prominent examples include Microsoft Windows Azure and Google App Engine. PaaS gives end users control over application design, but does not give them control over the physical infrastructure.

• Infrastructure as a Service (IaaS) providers offer end users direct access to processing, storage, and other computing resources and allow them to deploy their own operating systems and to configure those resources as they see fit. Examples of IaaS include Amazon Elastic Compute Cloud (EC2), Rackspace, and IBM Computing on Demand. (The short sketch following this list summarizes how the three models divide control between customer and provider.)
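As a rough illustration of the taxonomy above, the following sketch records which layers the customer controls under each service model. The layer names and labels are mine, chosen to summarize the descriptions in the list, not NIST's own wording, and the snippet implements nothing beyond that summary.

# Which layers does the customer manage under each NIST service model?
CONTROL = {
    "SaaS": {"application": "provider", "platform": "provider", "infrastructure": "provider"},
    "PaaS": {"application": "customer", "platform": "provider", "infrastructure": "provider"},
    "IaaS": {"application": "customer", "platform": "customer", "infrastructure": "provider"},
}

def customer_controls(model):
    """Return the layers a customer manages under a given service model."""
    return [layer for layer, owner in CONTROL[model].items() if owner == "customer"]

for model in ("SaaS", "PaaS", "IaaS"):
    layers = customer_controls(model) or ["(minor configuration only)"]
    print(f"{model}: customer controls {', '.join(layers)}")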

Cloud computing can also be understood in terms of the business needs motivating its adoption. Customers are often reluctant to abandon their entire corporate intranets and to rely exclusively on cloud computing. One business case that stops short of fully embracing cloud computing is disaster recovery, in which customers back up their data remotely on the network; it may also include the ability to access that data on a short-term basis should the customer's internal network fail. Another classic scenario is called cloud bursting, in which the customer relies on cloud computing to provide overflow capacity to cover spikes in demand.
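The cloud-bursting pattern reduces to a simple allocation rule, sketched below with an invented capacity figure: demand is served from in-house capacity first, and only the overflow spills to rented cloud capacity.

# Toy cloud-bursting rule (threshold invented for illustration).
IN_HOUSE_CAPACITY = 1000   # requests/sec the corporate data center can handle

def allocate(demand):
    """Split demand between on-premises capacity and burst (cloud) capacity."""
    on_prem = min(demand, IN_HOUSE_CAPACITY)
    burst = max(0, demand - IN_HOUSE_CAPACITY)
    return on_prem, burst

for demand in (600, 950, 1400, 2500):
    on_prem, burst = allocate(demand)
    print(f"demand {demand:>4}: on-premises {on_prem:>4}, cloud burst {burst:>4}")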

Proponents of cloud computing predict that it will yield substantial benefits.44 Assuming that data centers allow multiple customers to share the same hardware, cloud computing should allow smaller companies to take advantage of scale economies that they could not realize on their own. Even companies that are large enough to achieve minimum efficient scale by themselves may see advantages. The fact that hardware represents discrete (often significant) investments that must typically be provisioned in advance means that companies risk running out of capacity should demand grow more rapidly than anticipated. Conversely, they may face the burden of underutilized resources should demand grow unexpectedly slowly. The fact that companies must provision hardware for peak demand also means that cloud computing is particularly helpful when demand is highly variable, since aggregating demand lowers variability.45 The greater dispersion made possible by virtualization can reduce latency and increase reliability.
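The statistical point in the last two sentences can be checked with a small simulation using synthetic demand figures (not data from this essay): because individual firms' peaks rarely coincide, the peak of the aggregated load is lower than the sum of the individual peaks, so a shared provider can provision less total capacity than the firms would need separately.

import random

random.seed(1)
HOURS = 24
# Hourly demand for 20 hypothetical firms, each varying independently.
firms = [[random.randint(10, 100) for _ in range(HOURS)] for _ in range(20)]

sum_of_peaks = sum(max(f) for f in firms)                    # each firm provisions alone
aggregate = [sum(f[h] for f in firms) for h in range(HOURS)]
peak_of_sum = max(aggregate)                                 # a shared provider's requirement

print("sum of individual peaks:", sum_of_peaks)
print("peak of aggregate load: ", peak_of_sum)
print("capacity saved by sharing:", sum_of_peaks - peak_of_sum)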

Predictions about the future of cloud computing run the gamut, with some forecasting that all IT will eventually migrate into the cloud46 and others arguing that it is nothing more than overhyped repackaging of existing technologies.47 What is even more poorly understood is what increasing use of cloud computing would mean for the network architecture.

End User Connectivity

Cloud computing customers need different services from the network that provides the physical connectivity to end users (often called the "last mile"). Since the software and data needed to run applications no longer reside on end users' hard disks, cloud computing needs more ubiquitous connectivity and more substantial uptime guarantees than previously required. Because data processing no longer occurs locally, reliance on cloud computing also increases demand for the quantity of bandwidth as well as its ubiquity.


Moreover, because cloud computing provides services that used to be delivered by corporate intranets, cloud computing users may well demand higher levels of quality of service from their last-mile networks. These demands will likely vary from company to company. For example, financial services companies typically require perfect transactions with latency guarantees measured in microseconds. In addition, the provider must be able to verify the delivery time of each and every transaction after the fact. The fact that information that used to reside exclusively within an end user's hard disk and processor must now be transmitted over the network also means that cloud computing customers are likely to demand higher levels of security from their last-mile networks.

Data Center Connectivity

The advent of cloud computing also requires improvements in data center connectivity. As an initial matter, customers establishing new cloud computing instances must provision their data to the data center. Because datasets in the terabyte range would take weeks to upload, many cloud computing providers recommend that customers copy their data onto physical storage media and send them via an overnight mail service, such as FedEx.48
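The back-of-the-envelope arithmetic behind that advice is easy to reproduce; the link speeds and dataset sizes below are assumptions chosen for illustration, not figures from the essay.

# Rough transfer-time arithmetic for bulk data provisioning.
def transfer_days(terabytes, megabits_per_second):
    bits = terabytes * 8e12                      # 1 TB = 8e12 bits (decimal terabytes)
    seconds = bits / (megabits_per_second * 1e6)
    return seconds / 86_400                      # seconds per day

for tb, mbps in [(1, 10), (10, 10), (10, 100), (50, 100)]:
    print(f"{tb:>3} TB at {mbps:>3} Mbps ~ {transfer_days(tb, mbps):6.1f} days")
# Even at 100 Mbps, tens of terabytes take weeks; hence the overnight-courier option.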

The agility and virtualization demanded by cloud computing also require the flexibility to move large amounts of data between data centers very quickly. The best-efforts architecture of the current Internet cannot offer the guaranteed levels of quality of service that these functions require. For this reason, many cloud computing providers interconnect their data centers through dedicated private lines. Others have begun outsourcing these services to other networks, partially to gain the economies of sharing resources with other firms and partially out of concern that operating these networks will lead them to be classified as common carriers.

Cloud computing is also placing new demands on the network's approach to routing. The BGP-based system responsible for routing traffic on the current Internet employs an algorithm that by default sends traffic along the path that traverses the fewest autonomous systems. Most cloud computing providers need greater control over the paths taken by key traffic. As a result, many rely on MPLS or some other protocol to manage routing. On a more radical level, some industry observers argue that the identity/locator split discussed above, under which separate addresses would identify each individual machine and the location where it is currently connected to the network, should be augmented still further with a third address to mark the location where the application that the machine is accessing resides.49
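A highly simplified sketch of the default tie-breaker mentioned above: absent other policy, a BGP speaker prefers the route whose AS_PATH lists the fewest autonomous systems. Real BGP consults several attributes (such as local preference) before AS-path length, and the peer names and AS numbers below are invented.

# Candidate routes to the same prefix, keyed by next hop.
routes_to_prefix = {
    "peer-1": [64500, 64511, 64520],            # AS_PATH: three autonomous systems
    "peer-2": [64501, 64520],                   # AS_PATH: two autonomous systems
    "peer-3": [64502, 64530, 64540, 64520],     # AS_PATH: four autonomous systems
}

# Default preference: the shortest AS_PATH wins.
best_next_hop = min(routes_to_prefix, key=lambda nh: len(routes_to_prefix[nh]))
print("default choice:", best_next_hop)   # peer-2

# A provider needing a low-latency or high-assurance path for key traffic may
# have to override this default (for example, with MPLS traffic engineering).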

Privacy and Security

Finally, cloud computing has fairly significant implications for privacy and security. As an initial matter, cloud computing often requires large amounts of data that previously did not leave a corporate campus to be shifted from one data center to another. In addition, virtualization necessarily envisions that this data will reside on the same servers as other companies' data. As a result, the hardware located in these data centers and the networks interconnecting them require a higher level of security than previously necessary. Industry participants are also often very protective of information about the volume and pattern of their transactions. They are thus likely to impose stringent requirements on what data can be collected about their operations and how that data is used.

The fact that data may be shifted from one data center to another also potentially makes that data subject to another jurisdiction's privacy laws. Because customers are ultimately responsible for any such violations, they are likely to insist on a significant degree of control over where their data resides at any particular moment.


Programmable Networking

One of the primary architectural principles underlying the Internet is that routers should operate on a pure store-and-forward basis without having to keep track of what happens to packets after they have been passed on. This commitment is reflected in the Internet's general hostility toward virtual circuits and the belief that routers should not maintain per-flow state. Opponents of network management often point to the Senate testimony offered by officials of Internet2 (a nonprofit partnership of universities, corporations, and other organizations devoted to advancing the state of the Internet) noting that, although their network designers initially assumed that ensuring quality of service required building intelligence into the network, "all of [their] research and practical experience supported the conclusion that it was far more cost-effective to simply provide more bandwidth."50

To a certain extent, this long-standing hostility toward virtual circuits is an artifact of the Internet's military origins that has less relevance for the Internet of today. DARPA protocol architect David Clark has pointed out that the belief that routers operating in the core of the network should not maintain per-flow state derived largely from the high priority that military planners placed on survivability.51 As noted earlier, survivability does not represent a significant concern for the modern Internet.

Moreover, technologies such as IntServ and MPLS, both of which are governed by accepted IETF standards, use virtual circuits to simplify packet forwarding and to support a fairer and more efficient allocation of traffic. Although IntServ has not achieved widespread acceptance, interest in MPLS appears to be growing.
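A toy example of the label-switching idea that MPLS builds on may help: once a label-switched path has been set up, each router along it forwards on a short label lookup and swap rather than a full destination-based lookup. The routers, labels, and interfaces below are invented; this is a sketch of the concept, not of the MPLS protocol machinery itself.

# Per-router label tables for one pre-established label-switched path.
LABEL_TABLE = {
    # router: {incoming label: (outgoing interface, outgoing label)}
    "R1": {17: ("if-2", 24)},
    "R2": {24: ("if-1", 31)},
    "R3": {31: ("if-3", None)},   # None -> pop the label, deliver as plain IP
}
PATH = ["R1", "R2", "R3"]

def forward_along_lsp(label, payload):
    """Swap labels hop by hop instead of consulting a full routing table."""
    for router in PATH:
        interface, label = LABEL_TABLE[router][label]
        print(f"{router}: out {interface}, next label -> {label}")
        if label is None:
            return payload        # egress router hands the packet back to IP
    return payload

forward_along_lsp(17, b"video frame")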

These developments can be seen as part of a broader move away from viewing routers as static devices that always operate in a particular way and toward looking at the network as a programmable switching fabric that can be reconfigured from store-and-forward routers into virtual circuits as needed. For example, Internet2 (which, as noted earlier, is often held out as proof of the engineering community's conviction that network management is unnecessary) offers a service that it calls its Interoperable On-demand Network (ION) that allows researchers to establish dedicated point-to-point optical circuits to support large data transfers and other bandwidth-intensive applications. Internet2 notes that the "advanced science and engineering communities … are already straining the limits of today's network capabilities—and capacities" and that advanced media and telepresence applications often need the type of dedicated circuits previously regarded as anathema.52

Given the greater flexibility and functionality of today’s routers and the increasingly intense demands being placed on them, there seems little reason to require that they always operate in a single, pre-determined manner. That said, effective utilization of these new capabilities will doubtlessly require the development of new technical and institutional arrangements. Such innovations and changes may be inevitable if end users are to enjoy the full range of the network’s technical capabilities.

Pervasive Computing and Sensor Networks

The last development that I will discuss that promises to effect some fundamental changes to the network architecture is the deployment of pervasive computing and sensor networks.53 Computer chips are being incorporated into an ever-widening array of devices. In addition, the growth of what the ITU calls "the Internet of things" means that more and more objects are being outfitted with radio frequency identification (RFID) chips, which combine a wireless antenna with a tiny amount of memory.54 Although RFID chips do not have their own power sources, the wireless signal from a sensor can provide enough power to activate them.


Most of the literature on sensor networks has focused on the privacy implications. What has been largely overlooked is the extent to which sensor networks and pervasive computing will require different services from the network. As an initial matter, pervasive computing and RFID chips may require a greater degree of last-mile connectivity than the network currently provides. Sensor networks also necessarily involve machine-to-machine communications, which are typically more intensive and follow patterns that are quite different from those that occur when human beings initiate the communications. These developments also represent a significant increase in the number of devices that will require network visibility, which will increase the pressure on the network to migrate to IPv6. In addition, the mobility of many of these endpoints may accelerate the rate of change within the address space, which may cause changes in routing and addressing systems.

Equally importantly, these developments represent a fairly significant increase in the heterogeneity of devices attached to the network. The current network model implicitly assumes that the network interconnects a series of general purpose devices. Pervasive computing and sensor networks involve more specialized devices that perform a narrower range of functions. As such, they may require a different approach to networking. For example, these devices may not be able to send acknowledgements in the manner envisioned by TCP. Unleashing the functionality of these stripped-down devices may also require a much tighter integration with the network.

Consequently, these devices may not have individual IP addresses. Instead, they may reside behind an IP gateway and communicate with one another through a lower-layer protocol. If so, they may require more wide-scale deployment of the middlebox architecture that has proven so controversial. That said, it is probably too early to offer any reliable predictions of the impact that deployment of these technologies will have on the architecture of the network.
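A minimal sketch of the gateway pattern described in this paragraph: constrained sensors hand raw readings to a local gateway over a low-power, non-IP link, and the gateway batches them and relays them upstream over IP on the sensors' behalf, here as a single fire-and-forget UDP datagram with no TCP-style acknowledgement. The device names, the upstream address, and the framing are invented for illustration.

import json
import socket

class SensorGateway:
    """Relays readings from non-IP sensors to an IP-based collection point."""
    def __init__(self, upstream=("127.0.0.1", 9999)):   # stand-in for a cloud collector
        self.upstream = upstream
        self.buffer = []

    def on_sensor_frame(self, device_id, reading):
        # Called when a raw frame arrives over the local low-power (non-IP) link.
        self.buffer.append({"device": device_id, "value": reading})

    def flush(self):
        # Batch buffered readings into one UDP datagram: fire-and-forget, with
        # no TCP-style acknowledgement required from the constrained devices.
        if not self.buffer:
            return
        datagram = json.dumps(self.buffer).encode()
        with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
            sock.sendto(datagram, self.upstream)
        self.buffer.clear()

gw = SensorGateway()
gw.on_sensor_frame("thermo-12", 21.5)
gw.on_sensor_frame("door-03", 1)
gw.flush()   # the sensors themselves never needed individual IP addresses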

Conclusion

One recurrent theme in the debates over Internet policy is the claim that the Internet's future success depends on preserving the architecture that has made it successful in the past. This claim has always struck me as inherently conservative and potentially Panglossian.55 Policymakers must be open to the possibility that fundamental changes in the way people are using the network may require the network to evolve in new directions. Indeed, a significant portion of the engineering community believes that the time is ripe for a clean-slate approach aimed at creating a network that is starkly different from the one we have today. It is also possible that the network may not have a single response to these developments. Instead, as what people want from the network becomes increasingly heterogeneous, different portions of the network will respond in different ways to meet this demand.

Exactly what architectural changes will be required to meet these new challenges is difficult to foresee. Instead of creating regulations that lock in any particular vision of the network's architecture, policymakers should create regulatory structures that give industry actors the latitude they need to experiment with different solutions. In so doing, policymakers would do well to recognize that, while disruptive, change is inevitable and to keep in mind the aphorism that in a technologically dynamic environment, businesses are either moving forward or moving backward; there is no standing still.


Endnotes

1. Professor of Law, Communication, and Computer and Information Science and Founding Director of the Center for Technology, Innovation, and Competition, University of Pennsylvania.
2. See, e.g., Preserving the Open Internet, Notice of Proposed Rulemaking, 24 F.C.C.R. 13064, 13065-67 ¶¶ 3-8 (2009); Net Neutrality: Hearing Before the S. Comm. on Commerce, Science, & Transportation, 109th Cong. 54-56 (2006) (statement of Lawrence Lessig), available at http://commerce.senate.gov/public/?a=Files.Serve&File_id=c5bf9e54-b51f-4162-ab92-d8a6958a33f8.
3. For example, David Clark's seminal description of the priorities animating the Internet's initial design (and which represents one of the most frequently cited articles in the literature) notes that the Internet's origins as a Defense Department initiative led the protocol architects to place a high priority on certain concerns that would be relatively unimportant to the commercial Internet (such as survivability in a hostile environment) and to downplay other priorities that would prove critical once the Internet became a mainstream phenomenon (such as efficiency and cost allocation). See David D. Clark, The Design Philosophy of the DARPA Internet Protocols, 18 ACM Computer & Comm. Rev. 106, 107, 110 (1988).
4. These lists typically include such major concerns as security, mobility, quality of service, multicasting, and multihoming. See, e.g., Mark Handley, The Internet Only Just Works, 24 BT Tech. J. 119, 123, 126-27 (2006); Jon Crowcroft, Net Neutrality: The Technical Side of the Debate: A White Paper, Computer & Comm. Rev., Jan. 2007, at 49, 50-51, 52.
5. See, e.g., Paul Laskowski & John Chuang, A Leap of Faith? From Large-Scale Testbed to the Global Internet 2 (unpublished manuscript presented at TPRC's 37th Research Conference on Communication, Information, and Internet Policy, Sept. 27, 2009), available at http://www.tprcweb.com/images/stories/papers/Laskowski_2009.pdf (collecting sources); see also Olivier Martin, State of the Internet & Challenges Ahead 1, 29 (2007), available at http://www.ictconsulting.ch/reports/NEC2007-OHMartin.doc (noting that "there appears to be a wide consensus about the fact that the Internet has stalled or ossified").
6. One example is DARPA's New Arch initiative. See David Clark et al., New Arch: Future Generation Internet Architecture 4, 13 (Final Technical Report Dec. 31, 2003), http://www.isi.edu/newarch/iDOCS/final.finalreport.pdf. The National Science Foundation is pursuing similar initiatives. One is known as the Global Environment for Network Innovations (GENI). See National Science Foundation, Global Environment for Network Innovation (GENI), http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=501055. Another was originally known as the Future Internet Design (FIND) project. See Vint Cerf et al., FIND Observer Panel Report (Apr. 9, 2009), http://www.nets-find.net/FIND_report_final.pdf. FIND was subsequently folded into the NSF's Networking Technology and Systems (NeTS) program. See National Science Foundation, Networking Technology and Systems (NeTS), http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503307. The NSF's major current initiative is the Future Internet Architectures program. See National Science Foundation, Future Internet Architectures (FIA), http://www.nsf.gov/funding/pgm_summ.jsp?pims_id=503476.
7. See, e.g., European Commission Directorate-General for the Information Society and Media Project SMART 2008/0049, Internet Development Across the Ages (Jan. 2010), http://cordis.europa.eu/fp7/ict/fire/docs/executive-summary_en.pdf; European Commission, FIRE – Future Internet and Experimentation (July 6, 2010), http://cordis.europa.eu/fp7/ict/fire/home_en.html.
8. See, e.g., Jon Crowcroft & Peter Key, Report from the Clean Slate Network Research Post-SIGCOMM 2006 Workshop, Computer & Comm. Rev., Jan. 2007, at 75; Anja Feldmann, Internet Clean-Slate Design: What and Why?, Computer & Comm. Rev., July 2007, at 59; 100x100 Clean Slate Project, http://100x100network.org/; Stanford University Clean Slate Project, Program Goals, http://cleanslate.stanford.edu/index.php.
9. Cisco Systems, Inc., Cisco Visual Networking Index: Forecast and Methodology, 2009-2014, at 2 (June 2, 2010), http://www.cisco.com/en/US/solutions/collateral/ns341/ns525/ns537/ns705/ns827/white_paper_c11-481360.pdf.
10. See, e.g., Brett Swanson & George Gilder, Estimating the Exaflood: The Impact of Video and Rich Media on the Internet (Jan. 2008), http://www.discovery.org/a/4428.
11. See University of Minnesota, Minnesota Internet Traffic Studies, http://www.dtc.umn.edu/mints/home.php (estimating that Internet traffic was continuing to grow at the previous annual rate of 40 percent to 50 percent as of the end of 2009).


12. Nemertes Research, Internet Interrupted: Why Architectural Limitations Will Fracture the ’Net 34-35 (Nov. 2008). For example, regional ISPs that are too small to peer with backbone providers are now peering with each other in a practice known as "secondary peering," which allows them to exchange traffic without employing the public backbone. Content delivery networks such as Akamai and Limelight now use "content delivery networks" to store information at thousands of locations around the world, often in places where they can deliver traffic without traversing the backbone. Lastly, large content and application providers are building large server farms that similarly allow them to distribute their content without touching the public backbone. See Christopher S. Yoo, Innovations in the Internet's Architecture that Challenge the Status Quo, 8 J. on Telecomm. & High Tech. L. 79, 84-90 (2010).
13. Andrew S. Tanenbaum, Computer Networks 395-98 (4th ed. 2003); James F. Kurose & Keith W. Ross, Computer Networking: A Top-Down Approach 618-19, 622 (5th ed. 2010).
14. Kurose & Ross, supra note 13, at 626-29.
15. Id. at 603.
16. Christopher S. Yoo, Beyond Network Neutrality, 19 Harv. J.L. & Tech. 1, 22-23, 70-71 (2005).
17. Kurose & Ross, supra note 13, at 664.
18. Yoo, supra note 16, at 23, 71.
19. See Bob Braden et al., Integrated Services in the Internet Architecture: An Overview (1994) (RFC 1633).
20. See Steven Blake et al., An Architecture for Differentiated Services (1998) (RFC 2475).
21. See Eric C. Rosen et al., Multiprotocol Label Switching Architecture (2001) (RFC 3031).
22. See K.K. Ramakrishnan, The Addition of Explicit Congestion Notification (ECN) to IP (2001) (RFC 3168).
23. See Internet Engineering Task Force, Low Extra Delay Background Transport (LEDBAT) Working Group Charter (2009), http://www.ietf.org/html.charters/ledbat-charter.html.
24. Douglas E. Comer, Internetworking with TCP/IP 510 (5th ed. 2006).
25. The leading engineering textbook on TCP/IP notes the continuing existence of a "major controversy" over whether quality of service is necessary and feasible. Id. at 510, 515. Another textbook describes the "continuing debate" between those who would use network management to provide quality of service guarantees and those who believe that increases in bandwidth and the use of content distribution networks can obviate the need for network management. Kurose & Ross, supra note 13, at 602-04.
26. Van Jacobson, Congestion Avoidance and Control, 18 Computer & Comm. Rev. 314 (1988).
27. Id. at 319.
28. Jamshid Madhavi & Sally Floyd, TCP-Friendly Unicast Rate-Based Flow Control (Jan. 1997) (unpublished manuscript), available at http://www.psc.edu/networking/papers/tcp_friendly.html.
29. See Randall R. Stewart, Stream Control Transmission Protocol (2000) (RFC 2960).
30. Sally Floyd & Kevin Fall, Promoting the Use of End-to-End Congestion Control in the Internet, 6 IEEE/ACM Transactions on Networking 458 (Oct. 1998).
31. See Bob Briscoe, A Fairer, Faster Protocol, IEEE Spectrum, Dec. 2008, at 38; Jon Crowcroft, TCP Friendly Considered Unfriendly (Dec. 6, 2001), http://www.cl.cam.ac.uk/~jac22/otalks/TCP_Too_Friendly_files/frame.htm.
32. In addition, broadcast protocols exist that transmit packets to every host connected to the network. Broadcasting is inefficient if only a fraction of the hosts are interested in the message.
33. See Ian Brown et al., Internet Multicast Tomorrow, Internet Protocol J., Dec. 2002, at 2; Christophe Diot et al., Deployment Issues for the IP Multicast Service and Architecture, IEEE Network, Jan./Feb. 2000, at 78.
34. 47 U.S.C. § 522(6).
35. FCC Indus. Analysis & Tech. Div., Wireline Competition Bur., High-Speed Services for Internet Access: Status as of December 31, 2008, at 9 chart 2 (Feb. 2010), http://hraunfoss.fcc.gov/edocs_public/attachmatch/DOC-296239A1.pdf.
36. Because cable modem systems also share bandwidth locally, they are similarly susceptible to local congestion. See Christopher S. Yoo, Network Neutrality, Consumers, and Innovation, 2008 U. Chi. Legal F. 179, 199-202.
37. C.E. Shannon, A Mathematical Theory of Communication (pt. 1), 27 Bell Sys. Tech. J. 379 (1948); C.E. Shannon, A Mathematical Theory of Communication (pt. 2), 27 Bell Sys. Tech. J. 623 (1948).
38. A recent, eloquent demonstration of this strategy is the placards aboard the Amtrak Acela express trains asking passengers to refrain from using the WiFi service to download video.
39. Hari Balakrishnan et al., Improving Reliable Transport and Handoff Performance in Cellular Wireless Networks, 1 Wireless Networks 469 (1995).


40. Kurose & Ross, supra note 13, at 585-86; Tanenbaum, supra note 13, at 553-54.
41. Charles L. Jackson, Wireless Handsets Are Part of the Network (Apr. 30, 2007), http://files.ctia.org/pdf/Comments_CTIA_SkypeOpposition_AppendixC_43007.pdf.
42. Comer, supra note 24, at 339-46; Kurose & Ross, supra note 13, at 566-77; Tanenbaum, supra note 13, at 372-75, 462-64.
43. Peter Mell & Tim Grance, The NIST Definition of Cloud Computing 2 (version 15, Oct. 7, 2009), http://csrc.nist.gov/groups/SNS/cloud-computing/cloud-def-v15.doc.
44. See Joe Weinman, The 10 Laws of Cloudonomics, Bloomberg Bus. Week, Sept. 6, 2008, http://www.businessweek.com/technology/content/sep2008/tc2008095_942690.htm.
45. In short, aggregating flows that are not perfectly correlated reduces variability. Christopher S. Yoo, Rethinking the Commitment to Free, Local Television, 52 Emory L.J. 1579, 1707-08 & n.471 (2003). As one industry commentator observes, "The peak of the sum is never greater than the sum of the peaks." Weinman, supra note 44.
46. Nick Carr, The Big Switch (2007).
47. Oracle CEO Larry Ellison Bashes "Cloud Computing" Hype, http://www.youtube.com/watch?v=UOEFXaWHppE.
48. Jon Brodkin, Amazon Cloud Uses FedEx Instead of the Internet to Ship Data, Network World, June 10, 2010, http://www.networkworld.com/news/2010/061010-amazon-cloud-fedex.html.
49. Nemertes Research, supra note 12, at 53-54.
50. Net Neutrality: Hearing Before the S. Comm. on Commerce, Science, and Transportation, 109th Cong. 64, 66 (2006) (statement of Gary R. Bachula, Vice President, External Affairs, Internet2), available at http://www.gpo.gov/fdsys/pkg/CHRG-109shrg605/pdf/CHRG-109shrg605.pdf.
51. Clark, supra note 3, at 107-08.
52. Internet2, Internet2 ION (Sept. 2009), http://www.internet2.edu/pubs/200909-IS-ION.pdf.
53. For the seminal work, see Mark Weiser, The Computer for the 21st Century, Sci. Am., Sept. 1991, at 94.
54. Int'l Telecomm. Union, ITU Internet Reports 2005: The Internet of Things (Nov. 2005).
55. Pangloss was the teacher in Voltaire's Candide who often remarked that "we live in the best of all possible worlds." Voltaire, Candide and Other Stories 4, 14, 88 (Roger Pearson trans., 2006) (1759).


Author Contact Information

Fernando Laguarda
Vice President, External Affairs and Policy Counselor
Time Warner Cable
901 F Street, NW, Suite 800
Washington, DC 20004
Phone: (202) 370-4245
E-mail: [email protected]

Dale N. Hatfield
Executive Director
Silicon Flatirons Center for Law, Technology, and Entrepreneurship
404 Wolf Law Building, 401 UCB
University of Colorado
Boulder, CO 80309-0401
Phone: (303) 492-6648
E-mail: [email protected]

Christopher S. Yoo
Professor of Law, Communication, Computer and Information Science; Director, Center for Technology, Innovation, and Competition
University of Pennsylvania Law School
3400 Chestnut Street, Silverman 131
Philadelphia, PA 19104
Phone: (215) 746-8772
E-mail: [email protected]


About the Time Warner Cable Research Program on Digital Communications

The Time Warner Cable Research Program on Digital Communications will be dedicated to increasing public understanding of the benefits and challenges facing the future of digital technologies in the home, office, classroom, and community.

The Research Program will focus on the following areas:
• Increasing knowledge about the marketplace and the consumer
• Increasing knowledge about digital technologies
• Increasing knowledge about communications policy
• Increasing knowledge about innovation in digital communications

About the Research Stipends

Individuals receiving a stipend should produce a 25- to 35-page report. The report should be submitted no later than six months after the start of the project.

Proposals from any discipline with research interest in digital communications will be considered. Multidisciplinary research teams, consisting of two or more authors from different fields, are encouraged.

Size of Stipend: $20,000

Application Deadlines for 2010–2011 Awards: November 1, 2010 and April 1, 2011

Submitting Applications: Applications should be submitted online at www.twcresearchprogram.com. Applicants should submit:
• A three-page description of the proposed project
• A resumé (no more than three pages per author)

Applicants will be notified when their application is received and when the proposal review process is completed.

About Time Warner Cable

Time Warner Cable is the second-largest cable operator in the U.S. and a major provider of communications and entertainment services to homes and businesses. Our technologically advanced, well-clustered operations serve about 14.6 million customers in 28 states, with our largest operations located primarily in five geographic areas: New York State (including New York City), Texas, Ohio, the Carolinas, and Southern California (including Los Angeles). Time Warner Cable is built on a foundation of technological innovation and commitment to the communities we serve.
