
White Paper

Gigabit Ethernet and ATM

A Technology Perspective

Bursty, high-bandwidth applications are driving the need for similarly high-bandwidth campus backbone infrastructures. Today, there are two choices for the high-speed campus backbone: ATM or Gigabit Ethernet. For many reasons, business and technical, Gigabit Ethernet is selected as the technology of choice. This paper briefly presents, from a technical perspective, why Gigabit Ethernet is favored for most enterprise LANs.

In the past, most campuses used shared-media backbones — such as 16/32 Mbps Token-Ring and 100 Mbps FDDI — that are only slightly higher in speed than the LANs and end stations they interconnect. This has caused severe congestion in campus backbones that interconnect a number of access LANs.


A high capacity, high performance, and highly resilient backbone is needed: one that can be scaled as end stations grow in number or demand more bandwidth. Also needed is the ability to support differentiated service levels (Quality of Service or QoS), so that high priority, time-sensitive, and mission-critical applications can share the same network infrastructure as those that require only best-effort service.


Until recently, Asynchronous Transfer Mode (ATM) was the only switching technology able to deliver high capacity and scalable bandwidth, with the promise of end-to-end Quality of Service. ATM offered seamless integration from the desktop, across the campus, and over the metropolitan/wide area network. It was thought that users would massively deploy connection-oriented, cell-based ATM to the desktop to enable new native ATM applications to leverage ATM's rich functionality (such as QoS). However, this did not come to pass. The Internet Protocol (IP), aided and abetted by the exploding growth of the Internet, rode roughshod over ATM deployment and marched relentlessly to world dominance.

When no other gigabit technology existed, ATM provided much needed relief as a high bandwidth backbone to interconnect numerous connectionless, frame-based campus LANs. But with the massive proliferation of IP applications, new native ATM applications did not appear. Even 25 Mbps and 155 Mbps ATM to the desktop did not appeal to the vast majority of users, because of their complexity, small bandwidth increase, and high costs when compared with the very simple and inexpensive 100 Mbps Fast Ethernet.

On the other hand, Fast Ethernet, with its auto-sensing, auto-negotiation capabilities, integrated seamlessly with the millions of installed 10 Mbps Ethernet clients and servers. Although relatively simple and elegant in concept, the actual implementation of ATM is complicated by a multitude of protocol standards and specifications (for instance, LAN Emulation, Private Network Node Interface, and Multiprotocol over ATM). This additional complexity is required in order to adapt ATM to the connectionless, frame-based world of the campus LAN.

Meanwhile, the very successful Fast Ethernet experience spurred the development of Gigabit Ethernet standards. Within two years of their conception (June 1996), Gigabit Ethernet over fiber (1000BASE-X) and copper (1000BASE-T) standards were approved, developed, and in operation. Gigabit Ethernet not only provides a massive scaling of bandwidth to 1000 Mbps (1 Gbps), but also shares a natural affinity with the vast installed base of Ethernet and Fast Ethernet campus LANs running IP applications.

Enhanced by additional protocols already common to Ethernet (such as IEEE 802.1Q Virtual LAN tagging, IEEE 802.1p prioritization, IETF Differentiated Services, and Common Open Policy Services), Gigabit Ethernet is now able to provide the differential qualities of service that previously only ATM could provide. One key difference with Gigabit Ethernet is that additional functionality can be incrementally added in a non-disruptive way as required, compared with the rather revolutionary approach of ATM. Further developments in bandwidth and distance scalability will see 10 Gbps Ethernet over local (10G-BASE-T) and wide area (10G-BASE-WX) networks. Thus, the promise of end-to-end seamless integration, once only the province of ATM, will be possible with Ethernet and all its derivations.

Today, there are two technology choices for the high-speed campus backbone: ATM and Gigabit Ethernet. While both seek to provide high bandwidth and differentiated QoS within enterprise LANs, these are very different technologies.

Which is a "better" technology is no longer a subject of heated industry debate — Gigabit Ethernet is an appropriate choice for most campus backbones. Many business users have chosen Gigabit Ethernet as the backbone technology for their campus networks. An Infonetics Research survey (March 1999) records that 91 percent of respondents believe that Gigabit Ethernet is suitable for LAN backbone connection, compared with 66 percent for ATM. ATM continues to be a good option where its unique, rich, and complex functionality can be exploited by its deployment, most commonly in metropolitan and wide area networks.

Whether Gigabit Ethernet or ATM is deployed as the campus backbone technology of choice, the ultimate decision is one of economics and sound business sense, rather than pure technical considerations.

The next two sections provide a brief description of each technology.

Asynchronous Transfer Mode (ATM)

Asynchronous Transfer Mode (ATM) has been used as a campus backbone technology since its introduction in the early 1990s. ATM is specifically designed to transport multiple traffic types — data, voice and video, real-time or non-real-time — with inherent QoS for each traffic category.

To enable this and other capabilities, additional functions and protocols are added to the basic ATM technology. Private Network Node Interface (PNNI) provides OSPF-like functions to signal and route QoS requests through a hierarchical ATM network. Multiprotocol over ATM (MPOA) allows the establishment of short-cut routes between communicating end systems on different subnets, bypassing the performance bottlenecks of intervening routers. There have been and continue to be enhancements in the areas of physical connectivity, bandwidth scalability, signaling, routing and addressing, security, and management.

While rich in features, this functionality has come with a fairly heavy price tag in complexity and cost. To provide backbone connectivity for today's legacy access networks, ATM — a connection-oriented technology — has to emulate capabilities inherently available in the predominantly connectionless Ethernet LANs, including broadcast, multicast, and unicast transmissions. ATM must also manipulate the predominantly frame-based traffic on these LANs, segmenting all frames into cells prior to transport, and then reassembling cells into frames prior to final delivery. Many of the complexity and interoperability issues are the result of this LAN Emulation, as well as the need to provide resiliency in these emulated LANs. There are many components required to make this workable; these include the LAN Emulation Configuration Server(s), LAN Emulation Servers, Broadcast and Unknown Servers, Selective Multicast Servers, Server Cache Synchronization Protocol, LAN Emulation User Network Interface, LAN Emulation Network-Network Interface, and a multitude of additional protocols, signaling controls, and connections (point-to-point, point-to-multipoint, multipoint-to-point, and multipoint-to-multipoint).

Until recently, ATM was the only technology able to promise the benefits of QoS from the desktop, across the LAN and campus, and right across the world. However, the deployment of ATM to the desktop, or even in the campus backbone LANs, has not been as widespread as predicted. Nor have there been many native applications available or able to benefit from the inherent QoS capabilities provided by an end-to-end ATM solution. Thus, the benefits of end-to-end QoS have been more imagined than realized.

Gigabit Ethernet as the campus backbone technology of choice is now surpassing ATM. This is due to the complexity and the much higher pricing of ATM components such as network interface cards, switches, system software, management software, troubleshooting tools, and staff skill sets. There are also interoperability issues, and a lack of suitable exploiters of ATM technology.

Gigabit Ethernet

Today, Gigabit Ethernet is a very viable and attractive solution as a campus backbone LAN infrastructure. Although relatively new, Gigabit Ethernet is derived from a simple technology, and a large and well-tested Ethernet and Fast Ethernet installed base. Since its introduction, Gigabit Ethernet has been vigorously adopted as a campus backbone technology, with possible use as a high-capacity connection for high-performance servers and workstations to the backbone switches.

The main reason for this success is that Gigabit Ethernet provides the functionality that meets today's immediate needs at an affordable price, without undue complexity and cost. Gigabit Ethernet is complemented by a superset of functions and capabilities that can be added as needed, with the promise of further functional enhancements and bandwidth scalability (for example, IEEE 802.3ad Link Aggregation, and 10 Gbps Ethernet) in the near future. Thus, Gigabit Ethernet provides a simple scaling-up in bandwidth from the 10/100 Mbps Ethernet and Fast Ethernet LANs that are already massively deployed.

Simply put, Gigabit Ethernet is Ethernet, but 100 times faster!

Since Gigabit Ethernet uses the same frame format as today's legacy installed LANs, it does not need the segmentation and reassembly function that ATM requires to provide cell-to-frame and frame-to-cell transitions. As a connectionless technology, Gigabit Ethernet does not require the added complexity of signaling and control protocols and connections that ATM requires. Finally, because QoS-capable desktops are not readily available, Gigabit Ethernet is at no practical disadvantage to ATM in providing QoS. New methods have been developed to incrementally deliver QoS and other needed capabilities that lend themselves to much more pragmatic and cost-effective adoption and deployment.

To complement the high-bandwidth capacity of Gigabit Ethernet as a campus backbone technology, higher-layer functions and protocols are available, or are being defined by standards bodies such as the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF). Many of these capabilities recognize the desire for convergence upon the ubiquitous Internet Protocol (IP). IP applications and transport protocols are being enhanced or developed to address the needs of high speed, multimedia networking that benefit Gigabit Ethernet. The Differentiated Services (DiffServ) standard provides differential QoS that can be deployed from the Ethernet and Fast Ethernet desktops across the Gigabit Ethernet campus backbones. The use of IEEE 802.1Q VLAN Tagging and 802.1p User Priority settings allows different traffic types to be accorded the appropriate forwarding priority and service.
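As a concrete illustration of how the 802.1p priority rides inside the 802.1Q tag, the following sketch builds the 4-byte tag by hand. It is a minimal example for illustration only; the field layout (TPID 0x8100, then 3 bits of priority, 1 CFI bit, and a 12-bit VLAN ID) follows 802.1Q, while the MAC addresses, VLAN ID, and priority value are arbitrary placeholders.

```python
import struct

def build_8021q_header(dst_mac: bytes, src_mac: bytes, priority: int,
                       vlan_id: int, ethertype: int) -> bytes:
    """Build an Ethernet header carrying an IEEE 802.1Q tag.

    The Tag Control Information (TCI) packs the 802.1p priority (3 bits),
    the CFI bit (0 for Ethernet), and the 12-bit VLAN ID.
    """
    tpid = 0x8100                                    # 802.1Q Tag Protocol Identifier
    tci = ((priority & 0x7) << 13) | (vlan_id & 0xFFF)
    return dst_mac + src_mac + struct.pack("!HHH", tpid, tci, ethertype)

# Example: mark a frame with priority 6 (voice-like traffic) on VLAN 10.
hdr = build_8021q_header(b"\x01\x02\x03\x04\x05\x06",
                         b"\x0a\x0b\x0c\x0d\x0e\x0f",
                         priority=6, vlan_id=10, ethertype=0x0800)
print(hdr.hex())
```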

When combined with policy-enabled networks, DiffServ provides powerful, secure, and flexible QoS capabilities for Gigabit Ethernet campus LANs by using protocols such as Common Open Policy Services (COPS), Lightweight Directory Access Protocol (LDAP), Dynamic Host Configuration Protocol (DHCP), and Domain Name System (DNS). Further developments, such as Resource Reservation Protocol, multicasting, real-time multimedia, audio and video transport, and IP telephony, will add functionality to a Gigabit Ethernet campus, using a gradual and manageable approach when users need these functions.

There are major technical differences between Gigabit Ethernet and ATM. A companion white paper, Gigabit Ethernet and ATM: A Business Perspective, provides a comparative view of the two technologies from a managerial perspective.


Technological Aspects

Aspects of a technology are important because they must meet some minimum requirements to be acceptable to users. Value-added capabilities will be used where desirable or affordable. If these additional capabilities are not used, whether for reasons of complexity or lack of "exploiters" of those capabilities, then users are paying for them for no reason (a common example is that many of the advanced features of a VCR are rarely exploited by most users). If features are too expensive, relative to the benefits that can be derived, then the technology is not likely to find widespread acceptance. Technology choices are ultimately business decisions.

The fundamental requirements for LAN campus networks are very much different from those of the WAN. It is thus necessary to identify the minimum requirements of a network, as well as the value-added capabilities that are "nice to have".

In the sections that follow, various terms are used with the following meanings:

• "Ethernet" is used to refer to all current variations of the Ethernet technology: traditional 10 Mbps Ethernet, 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet.

• "Frame" and "packet" are used interchangeably, although this is not absolutely correct from a technical purist point of view.

Quality of Service

Until recently, Quality of Service (QoS) was a key differentiator between ATM and Gigabit Ethernet. ATM was the only technology that promised QoS for voice, video, and data traffic. The Internet Engineering Task Force (IETF) and various vendors have since developed protocol specifications and standards that enhance the frame-switched world with QoS and QoS-like capabilities. These efforts are accelerating and, in certain cases, have evolved for use in both the ATM and frame-based worlds.

The difference between ATM and Gigabit Ethernet in the delivery of QoS is that ATM is connection-oriented, whereas Ethernet is connectionless. With ATM, QoS is requested via signaling before communication can begin. The connection is only accepted if it is without detriment to existing connections (especially for reserved bandwidth applications). Network resources are then reserved as required, and the accepted QoS service is guaranteed to be delivered "end-to-end." By contrast, QoS for Ethernet is mainly delivered hop-by-hop, with standards in progress for signaling, connection admission control, and resource reservation.

ATM QoS

From its inception, ATM has been designed with QoS for voice, video and data applications. Each of these has different timing bounds, delay, delay variation sensitivities (jitter), and bandwidth requirements.

In ATM, QoS has very specific meanings that are the subject of ATM Forum and other standards specifications. Defined at the ATM layer (OSI Layer 2), the service architecture provides five categories of services that relate traffic characteristics and QoS requirements to network behavior:

• CBR: Constant Bit Rate, for applications that are sensitive to delay and delay variations, and need a fixed but continuously available amount of bandwidth for the duration of a connection. The amount of bandwidth required is characterized by the Peak Cell Rate. An example of this is circuit emulation.

• rt-VBR: Real-time Variable Bit Rate, for applications that need varying amounts of bandwidth with tightly regulated delay and delay variation, and whose traffic is bursty in nature. The amount of bandwidth is characterized by the Peak Cell Rate and Sustainable Cell Rate; burstiness is defined by the Maximum Burst Size. Example applications include real-time voice and video conferencing.

• nrt-VBR: Non-real-time Variable Bit Rate, for applications with similar needs as rt-VBR, requiring low cell loss, varying amounts of bandwidth, and with no critical delay and delay variation requirements. Example applications include non-real-time voice and video.

• ABR: Available Bit Rate, for applications requiring low cell loss, guaranteed minimum and maximum bandwidths, and with no critical delay or delay variation requirements. The minimum and maximum bandwidths are characterized by the Minimum Cell Rate and Peak Cell Rate respectively.

• UBR: Unspecified Bit Rate, for applications that can use the network on a best-effort basis, with no service guarantees for cell loss, delay and delay variations. Example applications are e-mail and file transfer.

Depending on the QoS requested, ATM provides a specific level of service. At one extreme, ATM provides a best-effort service for the lowest QoS (UBR), with no bandwidth reserved for the traffic. At the other extreme, ATM provides a guaranteed level of service for the higher QoS (that is, CBR and VBR) traffic. Between these extremes, ABR is able to use whatever bandwidth is available with proper traffic management and controls.

Because ATM is connection-oriented, requests for a particular QoS, admission control, and resource allocation are an integral part of the call signaling and connection setup process. The call is admitted and the connection established between communicating end systems only if the resources exist to meet the requested QoS without jeopardizing services to already established connections. Once established, traffic from the end systems is policed and shaped for conformance with the agreed traffic contract. Flow and congestion are managed in order to ensure the proper QoS delivery.

Gigabit Ethernet QoS

One simple strategy for solving the backbone congestion problem is to over-provision bandwidth in the backbone. This is especially attractive if the initial investment is relatively inexpensive and the ongoing maintenance is virtually 'costless' during its operational life.

Gigabit Ethernet is an enabler of just such a strategy in the LAN. Gigabit Ethernet, and soon 10-Gigabit Ethernet, will provide all the bandwidth that is ever needed for many application types, eliminating the need for complex QoS schemes in many environments. However, some applications are bursty in nature and will consume all available bandwidth, to the detriment of other applications that may have time-critical requirements. The solution is to provide a priority mechanism that ensures bandwidth, buffer space, and processor power are allocated to the different types of traffic.

With Gigabit Ethernet, QoS has a broader interpretation than with ATM. But it is just as able — albeit with different mechanisms — to meet the requirements of voice, video and data applications.

In general, Ethernet QoS is delivered at a high layer of the OSI model. Frames are typically classified individually by a filtering scheme. Different priorities are assigned to each class of traffic, either explicitly by means of priority bit settings in the frame header, or implicitly in the priority level of the queue or VLAN to which they are assigned. Resources are then provided in a preferentially prioritized (unequal or unfair) way to service the queues. In this manner, QoS is delivered by providing differential services to the differentiated traffic through this mechanism of classification, priority setting, prioritized queue assignment, and prioritized queue servicing. (For further information on QoS in Frame-Switched Networks, see WP3510-A/5-99, a Nortel Networks white paper available on the Web at www.nortelnetworks.com.)

Figure 1: Differentiated Services Field (RFC 2474). [Layout of the leading IPv4 header bytes: byte 1 carries IP Version and IP Header Length; byte 2 is the DS Field, with the 6-bit Differentiated Services Code Point (DSCP) followed by 2 currently unused bits; bytes 3-20 hold the remainder of the IP header.]
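To make the classify-then-service mechanism just described concrete, here is a small sketch of strict-priority queue servicing. It is illustrative only and not taken from the paper; the traffic class names and the classification step (the caller supplies the class) are invented for the example.

```python
from collections import deque

# One queue per traffic class, listed from highest to lowest priority.
queues = {"voice": deque(), "video": deque(), "best_effort": deque()}

def enqueue(frame: bytes, traffic_class: str) -> None:
    """Place a classified frame into the queue for its traffic class."""
    queues[traffic_class].append(frame)

def dequeue() -> bytes | None:
    """Strict-priority servicing: always drain higher-priority queues first."""
    for name in ("voice", "video", "best_effort"):
        if queues[name]:
            return queues[name].popleft()
    return None  # nothing waiting

enqueue(b"bulk transfer", "best_effort")
enqueue(b"voice sample", "voice")
print(dequeue())  # b'voice sample' is forwarded before the bulk frame
```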

Differentiated Services

Chief among the mechanisms available for Ethernet QoS is Differentiated Services (DiffServ). The IETF DiffServ Working Group proposed DiffServ as a simple means to provide scalable differentiated services in an IP network. DiffServ redefines the IP Precedence/Type of Service field in the IPv4 header and the Traffic Class field in the IPv6 header as the new DS Field (see Figure 1). An IP packet's DS Field is then marked with a specific bit pattern, so the packet will receive the desired differentiated service (that is, the desired forwarding priority), also known as per-hop behavior (PHB), at each network node along the path from source to destination.

To provide a common use and interpretation of the possible DSCP bit patterns, RFC 2474 and RFC 2475 define the architecture, format, and general use of these bits within the DS Field.

These definitions are required in order to guarantee the consistency of expected service when a packet crosses from one network's administrative domain to another, or for multi-vendor interoperability. The Working Group also standardized the following specific per-hop behaviors and recommended bit patterns (also known as code points or DSCPs) of the DS Field for each PHB:

• Expedited Forwarding (EF-PHB), sometimes described as Premium Service, uses a DSCP of b'101110'. The EF-PHB provides the equivalent service of a low loss, low latency, low jitter, assured bandwidth point-to-point connection (a virtual leased line). EF-PHB frames are assigned to a high priority queue where the arrival rate of frames at a node is shaped to be always less than the configured departure rate at that node.

• Assured Forwarding (AF-PHB) uses 12 DSCPs to identify four forwarding classes, each with three levels of drop precedence (12 PHBs). Frames are assigned by the user to the different classes and drop precedence depending on the desired degree of assured — but not guaranteed — delivery. When allocated resources (buffers and bandwidth) are insufficient to meet demand, frames with the high drop precedence are discarded first. If resources are still restricted, medium precedence frames are discarded next, and low precedence frames are dropped only in the most extreme lack of resource conditions.

• A recommended Default PHB with a DSCP of b'000000' (six zeros) that equates to today's best-effort service when no explicit DS marking exists.

In essence, DiffServ operates as follows:

• Each frame entering a network is analyzed and classified to determine the appropriate service desired by the application.

• Once classified, the frame is marked in the DS field with the assigned DSCP value to indicate the appropriate PHB. Within the core of the network, frames are forwarded according to the PHB indicated.

• Analysis, classification, marking, policing, and shaping operations need only be carried out at the host or network boundary node. Intervening nodes need only examine the short fixed length DS Field to determine the appropriate PHB to be given to the frame. This architecture is the key to DiffServ scalability. In contrast, other models such as RSVP/Integrated Services are severely limited by signaling, application flow, and forwarding state maintenance at each and every node along the path.


• Policies govern how frames are marked and traffic conditioned upon entry to the network; they also govern the allocation of network resources to the traffic streams, and how the traffic is forwarded within that network.

DiffServ allows nodes that are not DS-capable, or even DS-aware, to continue to use the network in the same way as they have previously by simply using the Default PHB, which is best-effort forwarding. Thus, without requiring end-to-end deployment, DiffServ provides Gigabit Ethernet with a powerful, yet simple and scalable, means to provide differential QoS services to support various types of application traffic.
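As a practical illustration of DSCP marking at a host, the sketch below sets the DS Field on outgoing IPv4 packets through the standard socket interface on platforms that honor IP_TOS. It is a minimal, hedged example: the Expedited Forwarding code point b'101110' (decimal 46) comes from the text above, while the destination address and port are placeholders.

```python
import socket

EF_DSCP = 0b101110            # Expedited Forwarding code point (RFC 2474)
TOS_VALUE = EF_DSCP << 2      # DSCP occupies the upper 6 bits of the old ToS byte

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
# Ask the IP stack to write the EF code point into the DS Field of each packet.
sock.setsockopt(socket.IPPROTO_IP, socket.IP_TOS, TOS_VALUE)

# Placeholder destination; any downstream DiffServ-capable switch or router
# can map this marking to its high-priority per-hop behavior.
sock.sendto(b"voice payload", ("192.0.2.10", 5004))
```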

Common Open Policy Services

To enable a Policy Based Networking capability, the Common Open Policy Services (COPS) protocol can be used to complement DiffServ-capable devices. COPS provides an architecture and a request-response protocol for communicating admission control requests, policy-based decisions, and policy information between a network policy server and the set of clients it serves.

With Gigabit Ethernet, the switches at the network ingress may act as COPS clients. COPS clients examine frames as they enter the network, communicate with a central COPS server to decide if the traffic should be admitted to the network, and enforce the policies. These policies include any QoS forwarding treatment to be applied during transport. Once this is determined, the DiffServ-capable Gigabit Ethernet switches can mark the frames using the selected DSCP bit pattern, apply the appropriate PHB, and forward the frames to the next node. The next node need only examine the DiffServ markings to apply the appropriate PHB. Thus, frames are forwarded hop-by-hop through a Gigabit Ethernet campus with the desired QoS.

In Nortel Networks' Passport* Campus Solution, COPS will be used by Optivity* Policy Services (COPS server) and the Passport Enterprise and Routing Switches (COPS clients) to communicate QoS policies defined at the policy server to the switches for enforcement (see Figure 2).

Figure 2: Passport Campus Solution and Optivity Policy Services. [Diagram: an end station can set the 802.1p or DSCP field; a Passport 1000/8000 Routing Switch validates the marking against the policy server, sets or resets the DSCP using Express Classification, then polices, shapes, and forwards the classified frames; Optivity Policy Services and Management communicates filter and queuing rules to the switches using Common Open Policy Services; a Passport 700 Server Switch ensures the most appropriate server in the server farm is used, depending on loads and response times.]

Connection-oriented vs. Connectionless

ATM is a connection-oriented protocol. Most enterprise LAN networks are connectionless Ethernet networks, whether Ethernet, Fast Ethernet or Gigabit Ethernet.

Note: Because of Ethernet's predominance, it greatly simplifies the discussion to not refer to the comparatively sparse Token-Ring technology; this avoids complicating the comparison with qualifications for Token-Ring LANs and ELANs, Route Descriptors instead of MAC addresses as LAN destinations, and so forth.

An ATM network may be used as a high-speed backbone to connect Ethernet LAN switches and end stations together. However, a connection-oriented ATM backbone requires ATM Forum LAN Emulation (LANE) protocols to emulate the operation of connectionless legacy LANs. In contrast with simple Gigabit Ethernet backbones, much of the complexity of ATM backbones arises from the need for LANE.

ATM LAN Emulation v1

LANE version 1 was approved in January 1995. Whereas a Gigabit Ethernet backbone is very simple to implement, each ATM emulated LAN (ELAN) needs several logical components and protocols that add to ATM's complexity. These components are:

• LAN Emulation Configuration Server(s) (LECS) to, among other duties, provide configuration data to an end system, and assign it to an ELAN (although the same LECS may serve more than one ELAN).

• Only one LAN Emulation Server (LES) per ELAN to resolve 6-byte LAN MAC addresses to 20-byte ATM addresses and vice versa.

• Only one Broadcast and Unknown Server (BUS) per ELAN to forward broadcast frames, multicast frames, and frames for destinations whose LAN or ATM address is as yet unknown.

• One or more LAN Emulation Clients (LEC) to represent the end systems. This is further complicated by whether the end system is a LAN switch to which other Ethernet end stations are attached, or whether it is an ATM-directly attached end station. A LAN switch requires a proxy LEC, whereas an ATM-attached end station requires a non-proxy LEC.

Collectively, the LECS, LES, and BUS are known as the LAN Emulation Services. Each LEC (proxy or non-proxy) communicates with the LAN Emulation Services using different virtual channel connections (VCCs) and LAN Emulation User Network Interface (LUNI) protocols. Figure 3 shows the VCCs used in LANE v1.

Figure 3: LAN Emulation v1 Connections and Functions.

Connection Name            Uni- or Bi-directional   Point-to-point or Point-to-multipoint   Used for communication
Configuration Direct VCC   Bi-directional           Point-to-point                          Between an LECS and an LEC
Control Direct VCC         Bi-directional           Point-to-point                          Between an LES and its LECs**
Control Distribute VCC     Uni-directional          Point-to-multipoint                     From an LES to its LECs
Multicast Send VCC         Bi-directional           Point-to-point                          Between a BUS and an LEC
Multicast Forward VCC      Uni-directional          Point-to-multipoint                     From a BUS to its LECs
Data Direct VCC            Bi-directional           Point-to-point                          Between an LEC and another LEC

**Note: There is a difference between LECS with an uppercase "S" (meaning LAN Emulation Configuration Server) and LECs with a lowercase "s" (meaning LAN Emulation Clients, or more than one LEC) at the end of the acronym.

Some VCCs are mandatory — once established, they must be maintained if the LEC is to participate in the ELAN. Other VCCs are optional — they may or may not be established and, if established, they may or may not be released thereafter. Unintended release of a required VCC may trigger the setup process. In certain circumstances, this can lead to instability in the network.

The most critical components of the LAN Emulation Service are the LES and BUS, without which an ELAN cannot function. Because each ELAN can only be served by a single LES and BUS, these components need to be backed up by other LESs and BUSs to prevent any single point of failure stopping communication between the possibly hundreds or even thousands of end stations attached to an ELAN. In addition, the single LES or BUS represents a potential performance bottleneck.

Thus, it became necessary for the LAN Emulation Service components to be replicated for redundancy and elimination of single points of failure, and distributed for performance.

ATM LAN Emulation v2

To enable communication between the redundant and distributed LAN Emulation Service components, as well as other functional enhancements, LANE v1 was re-specified as LANE v2; it now comprises two separate protocols:

• LUNI: LAN Emulation User Network Interface (approved July 1997)

• LNNI: LAN Emulation Network-Network Interface (approved February 1999).

LUNI, among other enhancements, added the Selective Multicast Server (SMS), to provide a more efficient means of forwarding multicast traffic, which was previously performed by the BUS. SMS thus offloads much of the multicast processing from the BUS, allowing the BUS to focus more on the forwarding of broadcast traffic and traffic with yet-to-be-resolved LAN destinations.

LNNI provides for the exchange of configuration, status, control coordination, and database synchronization between redundant and distributed components of the LAN Emulation Service.

However, each improvement adds new complexity. Additional protocols are required and additional VCCs need to be established, maintained, and monitored for communication between the new LAN Emulation Service components and LECs. For example, all LESs serving an ELAN communicate control messages to each other through a full mesh of Control Coordinate VCCs. These LESs must also synchronize their LAN-ATM address databases, using the Server Cache Synchronization Protocol (SCSP — RFC 2334), across the Cache Synchronization VCC. Similarly, all BUSs serving an ELAN must be fully connected by a mesh of Multicast Forward VCCs used to forward data.

Unicast traffic from a sending LEC is initially forwarded to a receiving LEC via the BUS. When a Data Direct VCC has been established between the two LECs, the unicast traffic is then forwarded via the direct path. During the switchover from the initial to the direct path, it is possible for frames to be delivered out of order. To prevent this possibility, LANE requires an LEC to either implement the Flush protocol, or for the sending LEC to delay transmission at some latency cost.

The forwarding of multicast traffic from an LEC depends on the availability of an SMS:

• If an SMS is not available, the LEC establishes the Default Multicast Send VCC to the BUS that, in turn, will add the LEC as a leaf to its Default Multicast Forward VCC. The BUS is then used for the forwarding of multicast traffic.

• If an SMS is available, the LEC can establish, in addition to the Default Multicast Send VCC to the BUS, a Selective Multicast Send VCC to the SMS. In this case, the BUS will add the LEC as a leaf to its Default Multicast Forward VCC and the SMS will add the LEC as a leaf to its Selective Multicast Forward VCC. The BUS is then used initially to forward multicast traffic until the multicast destination is resolved to an ATM address, at which time the SMS is used. The SMS also synchronizes its LAN-ATM multicast address database with its LES using SCSP across Cache Synchronization VCCs.

Figure 4 shows the additional connections required by LANE v2.

This multitude of control and coordination connections, as well as the exchange of control frames, consumes memory, processing power, and bandwidth, just so that a Data Direct VCC can finally be established for persistent communication between two end systems. The complexity can be seen in Figure 5.

Figure 4: LAN Emulation v2 Additional Connections and/or Functions.

Connection Name                   Uni- or Bi-directional   Point-to-point or Point-to-multipoint   Used for communication
LECS Synchronization VCC          Bi-directional           Point-to-point                          Between LECSs
Configuration Direct VCC          Bi-directional           Point-to-point                          Between an LECS and an LEC, LES or BUS
Control Coordinate VCC            Bi-directional           Point-to-point                          Between LESs
Cache Synchronization VCC         Bi-directional           Point-to-point                          Between an LES and its SMSs
Default Multicast Send VCC        Bi-directional           Point-to-point                          Between a BUS and an LEC (as in v1)
Default Multicast Forward VCC     Uni-directional          Point-to-multipoint                     From a BUS to its LECs and other BUSs
Selective Multicast Send VCC      Bi-directional           Point-to-point                          Between an SMS and an LEC
Selective Multicast Forward VCC   Uni-directional          Point-to-multipoint                     From an SMS to its LECs

Figure 5: Complexity of ATM LAN Emulation. [Diagram: redundant LECSs, LESs, BUSs, and SMSs interconnected with multiple LECs through the full set of LANE connections — Configuration Direct, Control Direct, Control Distribute, Default Multicast Send, Default Multicast Forward, Data Direct, Selective Multicast Send, Selective Multicast Forward, Cache Synchronization, Control Coordinate, and LECS Synchronization VCCs. Cache Synchronization and Control Coordinate VCCs may be combined into one dual-function VCC between two neighbor LESs.]


Figure 6: AAL-5 CPCS-PDU.

AAL-5 Encapsulation

In addition to the complexity of connections and protocols, the data carried over LANE uses ATM Adaptation Layer-5 (AAL-5) encapsulation, which adds overhead to the Ethernet frame. The Ethernet frame is stripped of its Frame Check Sequence (FCS); the remaining fields are copied to the payload portion of the CPCS-PDU, and a 2-byte LANE header (LEH) is added to the front, with an 8-byte trailer at the end. Up to 47 pad bytes may be added, to produce a CPCS-PDU that is a multiple of 48, the size of an ATM cell payload.

The CPCS-PDU also has to be segmented into 53-byte ATM cells before being transmitted onto the network. At the receiving end, the 53-byte ATM cells have to be decapsulated and reassembled into the original Ethernet frame.

Figure 6 shows the CPCS-PDU that is used to transport Ethernet frames over LANE.
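To make the encapsulation and segmentation overhead tangible, here is a rough sketch of the cell-count arithmetic the text describes. It is a simplified illustration under stated assumptions (2-byte LEH, 8-byte AAL-5 trailer, 48-byte cell payloads), not a conforming AAL-5 implementation.

```python
import math

CELL_PAYLOAD = 48   # bytes of payload per 53-byte ATM cell
TRAILER = 8         # CPCS-UU (1) + CPI (1) + Length (2) + CRC-32 (4)
LEH = 2             # LAN Emulation header prepended to the frame

def lane_cells_for_frame(ethernet_frame_len: int) -> int:
    """Number of ATM cells needed to carry one Ethernet frame over LANE.

    The frame's 4-byte FCS is stripped; the LEH and AAL-5 trailer are added,
    and the result is padded up to a multiple of 48 bytes.
    """
    payload = ethernet_frame_len - 4 + LEH          # strip FCS, add LEH
    return math.ceil((payload + TRAILER) / CELL_PAYLOAD)

# A maximum-size 1518-byte frame needs 32 cells; a minimum 64-byte frame needs 2.
print(lane_cells_for_frame(1518), lane_cells_for_frame(64))
```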

Gigabit Ethernet LAN

In contrast, a Gigabit Ethernet LAN backbone does not have the complexity and overhead of control functions, data encapsulation and decapsulation, segmentation and reassembly, and control and data connections required by an ATM backbone.

As originally intended, at least for initial deployment in the LAN environment, Gigabit Ethernet uses full-duplex transmission between switches, or between a switch and a server in a server farm — in other words, in the LAN backbone. Full-duplex Gigabit Ethernet is much simpler, and does not suffer from the complexities and deficiencies of half-duplex Gigabit Ethernet, which uses the CSMA/CD protocol, Carrier Extension, and frame bursting.

[Figure 6 (AAL-5 CPCS-PDU) layout: LEH (2) | CPCS-PDU Payload (1-65535 bytes) | Pad (0-47) | CPCS-UU (1) | CPI (1) | Length (2) | CRC (4); the last four fields form the 8-byte CPCS-PDU trailer.]

Figure 7: Full-Duplex Gigabit Ethernet Frame Format (no Carrier Extension). [Layout: Preamble/SFD (8) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data and Pad (46 to 1500) | FCS (4); 64 bytes minimum to 1518 bytes maximum, excluding the Preamble/SFD.]

Frame Format (Full-Duplex)

Full-duplex Gigabit Ethernet uses the same frame format as Ethernet and Fast Ethernet, with a minimum frame length of 64 bytes and a maximum of 1518 bytes (including the FCS but excluding the Preamble/SFD). If the data portion is less than 46 bytes, pad bytes are added to produce a minimum frame size of 64 bytes.

Figure 7 shows the same frame format for Ethernet, Fast Ethernet and full-duplex Gigabit Ethernet that enables the seamless integration of Gigabit Ethernet campus backbones with the Ethernet and Fast Ethernet desktops and servers they interconnect.

Frame Format (Half-Duplex)

Because of the greatly increased speed of propagation and the need to support practical network distances, half-duplex Gigabit Ethernet requires the use of the Carrier Extension. The Carrier Extension provides a minimum transmission length of 512 bytes. This allows collisions to be detected without increasing the minimum frame length of 64 bytes; thus, no changes are required to higher layer software, such as network interface card (NIC) drivers and protocol stacks.

With half-duplex transmission, if the data portion is less than 46 bytes, pad bytes are added in the Pad field to increase the minimum (non-extended) frame to 64 bytes. In addition, bytes are added in the Carrier Extension field so that a minimum of 512 bytes for transmission is generated. For example, with 46 bytes of data, no bytes are needed in the Pad field, and 448 bytes are added to the Carrier Extension field. On the other hand, with 494 or more (up to 1500) bytes of data, no pad or Carrier Extension is needed.
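The padding and extension arithmetic in the preceding paragraphs can be captured in a few lines. The sketch below assumes, as the text does, an 18-byte header-plus-FCS overhead on top of the data, a 64-byte minimum frame, and a 512-byte minimum half-duplex transmission; it is illustrative rather than a quotation from any standard.

```python
HEADER_AND_FCS = 18         # DA (6) + SA (6) + Length/Type (2) + FCS (4)
MIN_FRAME = 64              # minimum Ethernet frame size
MIN_TRANSMISSION = 512      # minimum half-duplex Gigabit Ethernet transmission

def half_duplex_overhead(data_len: int) -> tuple[int, int]:
    """Return (pad_bytes, carrier_extension_bytes) for a given data length."""
    pad = max(0, (MIN_FRAME - HEADER_AND_FCS) - data_len)   # pad data up to 46 bytes
    frame = HEADER_AND_FCS + data_len + pad                 # 64..1518 bytes
    extension = max(0, MIN_TRANSMISSION - frame)            # extend transmission to 512
    return pad, extension

print(half_duplex_overhead(46))    # (0, 448): matches the example in the text
print(half_duplex_overhead(494))   # (0, 0): long frames need no extension
print(half_duplex_overhead(10))    # (36, 448): tiny payloads are padded, then extended
```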

"Goodput" Efficiency

With full-duplex Gigabit Ethernet, the good throughput ("goodput") in a predominantly 64-byte frame size environment, where no Carrier Extension is needed, is calculated as follows (where SFD = start frame delimiter, and IFG = interframe gap):

  64 bytes (frame) / [64 bytes (frame) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 76%

This goodput translates to a forwarding rate of 1.488 million packets per second (Mpps), known as the wirespeed rate.

With Carrier Extension, the resulting goodput is very much reduced:

  64 bytes (frame) / [512 bytes (frame with CE) + 8 bytes (SFD) + 12 bytes (IFG)] = approx. 12%

In ATM and Gigabit Ethernet comparisons, this 12 percent figure is sometimes quoted as evidence of Gigabit Ethernet's inefficiency. However, this calculation is only applicable to half-duplex (as opposed to full-duplex) Gigabit Ethernet. In the backbone and server-farm connections, the vast majority (if not all) of the Gigabit Ethernet deployed will be full-duplex.
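The two percentages and the 1.488 Mpps wirespeed figure follow directly from the arithmetic above. The short sketch below simply reproduces that arithmetic; the only assumption is the 1 Gbps line rate.

```python
LINE_RATE_BPS = 1_000_000_000   # Gigabit Ethernet line rate
PREAMBLE_SFD = 8
IFG = 12

def goodput(frame_bytes: int, on_wire_bytes: int) -> float:
    """Fraction of the channel carrying frame bytes rather than overhead."""
    return frame_bytes / (on_wire_bytes + PREAMBLE_SFD + IFG)

def frames_per_second(on_wire_bytes: int) -> float:
    """Back-to-back frame rate for a given on-wire transmission size."""
    return LINE_RATE_BPS / ((on_wire_bytes + PREAMBLE_SFD + IFG) * 8)

print(f"{goodput(64, 64):.0%}")                   # ~76% for full-duplex 64-byte frames
print(f"{frames_per_second(64) / 1e6:.3f} Mpps")  # ~1.488 Mpps wirespeed rate
print(f"{goodput(64, 512):.0%}")                  # ~12% with Carrier Extension to 512 bytes
```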

Mapping Ethernet Frames into ATM LANE Cells

As mentioned previously, using ATM LAN Emulation as the campus backbone for Ethernet desktops requires AAL-5 encapsulation and subsequent segmentation and reassembly.

Figure 9 shows a maximum-sized 1518-byte Ethernet frame mapped into a CPCS-PDU and segmented into 32 53-byte ATM cells, using AAL-5; this translates into a goodput efficiency of:

  1514 bytes (frame without FCS) / [32 ATM cells x 53 bytes per ATM cell] = approx. 89%

For a minimum size 64-byte Ethernet frame, two ATM cells will be required; this translates into a goodput efficiency of:

  60 bytes (frame without FCS) / [2 ATM cells x 53 bytes per ATM cell] = approx. 57%
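The same arithmetic can be expressed in a few lines, with the cell count computed exactly as in the AAL-5 sketch earlier. The efficiency definition mirrors the text: frame bytes without the FCS, divided by the total bytes of the ATM cells that carry them.

```python
import math

def lane_efficiency(frame_len: int) -> float:
    """Goodput efficiency of one Ethernet frame carried over ATM LANE/AAL-5."""
    payload = frame_len - 4 + 2            # strip 4-byte FCS, add 2-byte LEH
    cells = math.ceil((payload + 8) / 48)  # 8-byte AAL-5 trailer, 48-byte cell payloads
    return (frame_len - 4) / (cells * 53)

print(f"{lane_efficiency(1518):.0%}")   # ~89% for a maximum-size frame (32 cells)
print(f"{lane_efficiency(64):.0%}")     # ~57% for a minimum-size frame (2 cells)
```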


Figure 9: Mapping Ethernet Frame into ATM Cells. [A 1518-byte Ethernet frame — Preamble/SFD (8), Destination Address (6), Source Address (6), Length/Type (2), Data and Pad (1500), FCS (4) — is carried in a CPCS-PDU of LEH (2) | Ethernet frame without FCS (1514) | Pad (12) | CPCS-UU (1) | CPI (1) | Length (2) | CRC (4), segmented into 32 ATM cells totalling 1696 bytes.]

Figure 8: Half-Duplex Gigabit Ethernet Frame Format (with Carrier Extension). [Layout: Preamble/SFD (8) | Destination Address (6) | Source Address (6) | Length/Type (2) | Data (46 to 493) and Pad | FCS (4) | Carrier Extension (448 to 1); 64 bytes minimum non-extended frame, 512 bytes minimum transmission.]


Frame Bursting

The Carrier Extension is an overhead, especially if short frames are the predominant traffic size. To enhance goodput, half-duplex Gigabit Ethernet allows frame bursting. Frame bursting allows an end station to send multiple frames in one access (that is, without contending for channel access for each frame) up to the burstLength parameter. If a frame is being transmitted when the burstLength threshold is exceeded, the sender is allowed to complete the transmission. Thus, the maximum duration of a frame burst is 9710 bytes; this is the burstLength (8192 bytes) plus the maximum frame size (1518 bytes). Only the first frame is extended if required. Each frame is spaced from the previous by a 96-bit interframe gap. Both sender and receiver must be able to process frame bursting.

Figure 10: Frame Bursting. [Layout of one frame burst: Preamble/SFD (8) | MAC Frame-1 (52-1518, with Extension if needed) | IFG (12) | Preamble/SFD (8) | MAC Frame-2 (64-1518) | IFG (12) | ... | Preamble/SFD (8) | MAC Frame-n (64-1518) | IFG (12).]
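A small sketch of the burst accounting described above: a sender keeps transmitting frames back to back, separated by the 96-bit (12-byte) interframe gap, as long as the burstLength threshold of 8192 bytes has not yet been reached when a new frame starts. The frame sizes in the example are arbitrary, and the preamble bytes are ignored for simplicity.

```python
BURST_LENGTH = 8192   # bytes; threshold checked before starting each new frame
IFG = 12              # 96-bit interframe gap, expressed in bytes

def burst_fit(frame_sizes: list[int]) -> list[int]:
    """Return the frames (in order) that fit into a single frame burst."""
    sent, transmitted = [], 0
    for size in frame_sizes:
        if sent and transmitted >= BURST_LENGTH:
            break                      # threshold already exceeded: burst ends
        sent.append(size)
        transmitted += size + IFG      # frame plus the gap that follows it
    return sent

# Six 1518-byte frames fit: the sixth starts below the 8192-byte threshold and is
# completed, so a burst can extend up to roughly 8192 + 1518 bytes on the wire.
print(burst_fit([1518] * 10))
```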

CSMA/CD Protocol

Full-duplex Gigabit Ethernet does not use or need the CSMA/CD protocol. Because of the dedicated, simultaneous, and separate send and receive channels, it is very much simplified without the need for carrier sensing, collision detection, backoff and retry, carrier extension, and frame bursting.

Flow Control and Congestion Management

In both ATM and Gigabit Ethernet, flow control and congestion management are necessary to ensure that the network elements, individually and collectively, are able to meet the QoS objectives required by applications using that network. Sustained congestion in a switch, whether ATM or Gigabit Ethernet, will eventually result in frames being discarded. Various techniques are employed to minimize or prevent buffer overflows, especially under transient overload conditions. The difference between ATM and Gigabit Ethernet is in the availability, reach, and complexity (functionality and granularity) of these techniques.

ATM Traffic and Congestion Management

In an ATM network, the means employed to manage traffic flow and congestion are based on the traffic contract: the ATM Service Category and the traffic descriptor parameters agreed upon for a connection. These means may include:

• Connection Admission Control (CAC): accepting or rejecting connections being requested at the call setup stage, depending upon availability of network resources (this is the first point of control and takes into account connections already established).

• Traffic Policing: monitoring and controlling the stream of cells entering the network for connections accepted, and marking out-of-profile traffic for possible discard using Usage Parameter Control (UPC) and the Generic Cell Rate Algorithm (GCRA).

• Backpressure: exerting on the source to decrease cell transmission rate when congestion appears likely or imminent.

• Congestion Notification: notifying the source and intervening nodes of current or impending congestion by setting the Explicit Forward Congestion Indication (EFCI) bit in the cell header (Payload Type Indicator) or using Relative Rate (RR) or Explicit Rate (ER) bits in Resource Management (RM) cells to provide feedback both in the forward and backward directions, so that remedial action can be taken.

• Cell Discard: employing various discard strategies to avoid or relieve congestion:

  • Selective Cell Discard: dropping cells that are non-compliant with traffic contracts or have their Cell Loss Priority (CLP) bit marked for possible discard if necessary

  • Early Packet Discard (EPD): dropping all the cells belonging to a frame that is queued, but for which transmission has not been started

  • Partial Packet Discard (PPD): dropping all the cells belonging to a frame that is being transmitted (a more drastic action than EPD)

  • Random Early Detection (RED): dropping all the cells of randomly selected frames (from different sources) when traffic arrival algorithms indicate impending congestion (thus avoiding congestion), and preventing waves of synchronized re-transmission precipitating congestion collapse. A further refinement is offered using Weighted RED (WRED).

• Traffic Shaping: modifying the stream of cells leaving a switch (to enter or transit a network) so as to ensure conformance with contracted profiles and services. Shaping may include reducing the Peak Cell Rate, limiting the duration of bursting traffic, and spacing cells more uniformly to reduce the Cell Delay Variation.
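Random Early Detection is easy to sketch in a few lines. The fragment below shows the standard idea of a drop probability that ramps up linearly between a minimum and a maximum queue threshold; the threshold values and the use of the instantaneous queue length (rather than a weighted average) are simplifications chosen for the example.

```python
import random

MIN_THRESHOLD = 50      # queue length below which nothing is dropped
MAX_THRESHOLD = 200     # queue length above which everything is dropped
MAX_DROP_PROB = 0.1     # drop probability reached at MAX_THRESHOLD

def red_should_drop(queue_length: int) -> bool:
    """Decide whether to randomly discard an arriving frame (simplified RED)."""
    if queue_length <= MIN_THRESHOLD:
        return False
    if queue_length >= MAX_THRESHOLD:
        return True
    fraction = (queue_length - MIN_THRESHOLD) / (MAX_THRESHOLD - MIN_THRESHOLD)
    return random.random() < fraction * MAX_DROP_PROB

# As the queue grows toward the maximum threshold, drops become more likely,
# thinning traffic from random sources before the buffer overflows.
print([red_should_drop(q) for q in (40, 120, 250)])
```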

Gigabit Ethernet Flow Control

For half-duplex operation, Gigabit Ethernet uses the CSMA/CD protocol to provide implicit flow control by "backpressuring" the sender from transmitting in two simple ways:

• Forcing collisions with the incoming traffic, which forces the sender to back off and retry as a result of the collision, in conformance with the CSMA/CD protocol.

• Asserting carrier sense to provide a "channel busy" signal, which prevents the sender from accessing the medium to transmit, again in conformance with the protocol.

With full-duplex operation, Gigabit Ethernet uses explicit flow control to throttle the sender. The IEEE 802.3x Task Force defined a MAC Control architecture, which adds an optional MAC Control sub-layer above the MAC sub-layer, and uses MAC Control frames to control the flow. To date, only one MAC Control frame has been defined; this is for the PAUSE operation.

A switch or an end station can send a PAUSE frame to stop a sender from transmitting data frames for a specified length of time. Upon expiration of the period indicated, the sender may resume transmission. The sender may also resume transmission when it receives a PAUSE frame with a zero time specified, indicating the waiting period has been cancelled. On the other hand, the waiting period may be extended if the sender receives a PAUSE frame with a longer period than previously received.

Using this simple start-stop mechanism, Gigabit Ethernet prevents frame discards when input buffers are temporarily depleted by transient overloads. It is only effective when used on a single full-duplex link between two switches, or between a switch and an end station (server). Because of its simplicity, the PAUSE function does not provide flow control across multiple links, or from end-to-end across (or through) intervening switches. It also requires both ends of a link (the sending and receiving partners) to be MAC Control-capable.
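For illustration, the sketch below assembles the bytes of an 802.3x PAUSE frame as described above. The reserved multicast destination address 01-80-C2-00-00-01, the MAC Control EtherType 0x8808, and the PAUSE opcode 0x0001 are standard values; the source address and pause time are placeholders, and the padding to the minimum payload (before the FCS, which the MAC normally appends) is shown explicitly.

```python
import struct

PAUSE_DST = bytes.fromhex("0180c2000001")   # reserved MAC Control multicast address
MAC_CONTROL_ETHERTYPE = 0x8808
PAUSE_OPCODE = 0x0001

def build_pause_frame(src_mac: bytes, pause_quanta: int) -> bytes:
    """Build an 802.3x PAUSE frame (without FCS, which the MAC appends).

    pause_quanta is expressed in units of 512 bit times; 0 cancels the pause.
    """
    payload = struct.pack("!HH", PAUSE_OPCODE, pause_quanta)
    payload += b"\x00" * (46 - len(payload))   # pad to the 46-byte minimum payload
    return PAUSE_DST + src_mac + struct.pack("!H", MAC_CONTROL_ETHERTYPE) + payload

frame = build_pause_frame(b"\x0a\x0b\x0c\x0d\x0e\x0f", pause_quanta=0xFFFF)
print(len(frame), frame.hex())   # 60 bytes; 64 on the wire once the FCS is added
```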

Bandwidth Scalability

Advances in computing technology have fueled the explosion of visually and aurally exciting applications for e-commerce, whether Internet, intranet or extranet. These applications require exponential increases in bandwidth. As a business grows, increases in bandwidth are also required to meet the greater number of users without degrading performance. Therefore, bandwidth scalability in the network infrastructure is critical to supporting incremental or quantum increases in bandwidth capacity, which is frequently required by many businesses.

ATM and Gigabit Ethernet both provide bandwidth scalability. Whereas ATM's bandwidth scalability is more granular and extends from the desktop and over the MAN/WAN, Gigabit Ethernet has focused on scalability in campus networking from the desktop to the MAN/WAN edge. Therefore, Gigabit Ethernet provides quantum leaps in bandwidth from 10 Mbps, through 100 Mbps, 1000 Mbps (1 Gbps), and even 10,000 Mbps (10 Gbps) without a corresponding quantum leap in costs.


ATM Bandwidth

ATM is scalable from 1.544 Mbps through to 2.4 Gbps and even higher speeds. Approved ATM Forum specifications for the physical layer include the following bandwidths:

• 1.544 Mbps DS1

• 2.048 Mbps E1

• 25.6 Mbps over shielded and unshielded twisted pair copper cabling (the bandwidth that was originally envisioned for ATM to the desktop)

• 34.368 Mbps E3

• 44.736 Mbps DS3

• 100 Mbps over multimode fiber cabling

• 155.52 Mbps SONET/SDH over UTP and single and multimode fiber cabling

• 622.08 Mbps SONET/SDH over single and multimode fiber cabling

• 622.08 Mbps and 2.4 Gbps cell-based physical layer (without any frame structure).

Work is also in progress (as of October 1999) on 1 Gbps cell-based physical layer, 2.4 Gbps SONET/SDH, and 10 Gbps SONET/SDH interfaces.

Inverse Multiplexing over ATM

In addition, the ATM Forum's Inverse Multiplexing over ATM (IMA) standard allows several lower-speed DS1/E1 physical links to be grouped together as a single higher speed logical link, over which cells from an ATM cell stream are individually multiplexed. The original cell stream is recovered in correct sequence from the multiple physical links at the receiving end. Loss and recovery of individual links in an IMA group are transparent to the users. This capability allows users to:

• Interconnect ATM campus networks over the WAN, where ATM WAN facilities are not available, by using existing DS1/E1 facilities

• Incrementally subscribe to more DS1/E1 physical links as needed

• Protect against single link failures when interconnecting ATM campus networks across the WAN

• Use multiple DS1/E1 links that are typically lower cost than a single DS3/E3 (or higher speed) ATM WAN link for normal operation or as backup links.

Gigabit Ethernet Bandwidth

Ethernet is scalable from the traditional 10 Mbps Ethernet, through 100 Mbps Fast Ethernet, and 1000 Mbps Gigabit Ethernet. Now that the Gigabit Ethernet standards have been completed, the next evolutionary step is 10 Gbps Ethernet. The IEEE P802.3 Higher Speed Study Group has been created to work on 10 Gbps Ethernet, with Project Authorization Request and formation of a Task Force targeted for November 1999 and a standard expected by 2002.

Bandwidth scalability is also possible through link aggregation (that is, grouping multiple Gigabit Ethernet links together) to provide greater bandwidth and resiliency. Work in this area of standardization is proceeding through the IEEE 802.3ad Link Aggregation Task Force (see the Trunking and Link Aggregation section of this paper).

Distance Scalability

Distance scalability is important because of the need to extend the network across widely dispersed campuses, and within large multi-storied buildings, while making use of existing UTP-5 copper cabling and common single and multimode fiber cabling, and without the need for additional devices such as repeaters, extenders, and amplifiers.

Both ATM and Gigabit Ethernet (IEEE 802.3ab) can operate easily within the limit of 100 meters from a wiring closet switch to the desktop using UTP-5 copper cabling. Longer distances are typically achieved using multimode (50/125 or 62.5/125 µm) or single mode (9-10/125 µm) fiber cabling.


Figure 11: Ethernet and Fast Ethernet Supported Distances.

                             10BASE-T    10BASE-FL   100BASE-TX   100BASE-FX
IEEE Standard                802.3       802.3       802.3u       802.3u
Data Rate                    10 Mbps     10 Mbps     100 Mbps     100 Mbps
Multimode Fiber distance     N/A         2 km        N/A          412 m (half duplex), 2 km (full duplex)
Singlemode Fiber distance    N/A         25 km       N/A          20 km
Cat 5 UTP distance           100 m       N/A         100 m        N/A
STP/Coax distance            500 m       N/A         100 m        N/A

Gigabit Ethernet Distances

Figure 11 shows the maximum distances supported by Ethernet and Fast Ethernet, using various media.

IEEE 802.3z Gigabit Ethernet – Fiber Cabling

IEEE 802.3u-1995 (Fast Ethernet) extended the operating speed of CSMA/CD networks to 100 Mbps over both UTP-5 copper and fiber cabling.

The IEEE P802.3z Gigabit Ethernet Task Force was formed in July 1996 to develop a Gigabit Ethernet standard. This work was completed in July 1998 when the IEEE Standards Board approved the IEEE 802.3z-1998 standard.

The IEEE 802.3z standard specifies the operation of Gigabit Ethernet over existing single and multimode fiber cabling. It also supports short (up to 25 m) copper jumper cables for interconnecting switches, routers, or other devices (servers) in a single computer room or wiring closet. Collectively, the three designations — 1000BASE-SX, 1000BASE-LX and 1000BASE-CX — are referred to as 1000BASE-X.

Figure 12 shows the maximum distances supported by Gigabit Ethernet, using various media.

1000BASE-X Gigabit Ethernet is capable of auto-negotiation for half- and full-duplex operation. For full-duplex operation, auto-negotiation of flow control includes both the direction and symmetry of operation — symmetrical and asymmetrical.

IEEE 802.3ab Gigabit Ethernet – Copper Cabling

For Gigabit Ethernet over copper cabling, an IEEE Task Force started developing a specification in 1997. A very stable draft specification, with no significant technical changes, had been available since July 1998. This specification, known as IEEE 802.3ab, is now approved (as of June 1999) as an IEEE standard by the IEEE Standards Board.

The IEEE 802.3ab standard specifies the operation of Gigabit Ethernet over distances up to 100 m using 4-pair 100 ohm Category 5 balanced unshielded twisted pair copper cabling. This standard is also known as the 1000BASE-T specification; it allows deployment of Gigabit Ethernet in the wiring closets, and even to the desktops if needed, without change to the UTP-5 copper cabling that is installed in many buildings today.

Trunking and Link Aggregation

Trunking provides switch-to-switch connectivity for ATM and Gigabit Ethernet. Link Aggregation allows multiple parallel links between switches, or between a switch and a server, to provide greater resiliency and bandwidth. While switch-to-switch connectivity for ATM is well-defined through the NNI and PNNI specifications, several vendor-specific protocols are used for Gigabit Ethernet, with standards-based connectivity to be provided once the IEEE 802.3ad Link Aggregation standard is complete.

Nortel Networks is actively involved in this standards effort, while providing highly resilient and higher bandwidth Multi-Link Trunking (MLT) and Gigabit LinkSafe technology in the interim.

ATM PNNI

ATM trunking is provided through NNI (Network Node Interface or Network-to-Network Interface) using the Private NNI (PNNI) v1.0 protocols, an ATM Forum specification approved in March 1996.

To provide resiliency, load distribution and balancing, and scalability in bandwidth, multiple PNNI links may be installed between a pair of ATM switches. Depending on the implementation, these parallel links may be treated for Connection Admission Control (CAC) procedures as a single logical aggregated link. The individual links within a set of paralleled links may be any combination of the supported ATM speeds. As more bandwidth is needed, more PNNI links may be added between switches as necessary without concern for the possibility of loops in the traffic path.

By using source routing to establish a path (VCC) between any source and destination end systems, PNNI automatically eliminates the forming of loops. The end-to-end path, computed at the ingress ATM switch using Generic Connection Admission Control (GCAC) procedures, is specified by a list of ATM nodes known as a Designated Transit List (DTL). Computation based on default parameters will result in the shortest path meeting the requirements, although preference may be given to certain paths by assigning lower Administrative Weight to preferred links. This DTL is then validated by local CAC procedures at each ATM node in the list. If an intervening node finds the path is invalid, maybe as a result of topology or link state changes in the meantime, that node is able to automatically "crank" the list back to the ingress switch for recomputation of a new path. An ATM switch may perform path computation as a background task before calls are received (to reduce latency during call setups), or when a call request is received (for a real-time optimized path at the cost of some setup delay), or both (for certain QoS categories), depending on user configuration.
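To illustrate the path-computation step, here is a tiny sketch of how an ingress switch might derive a node list as the lowest-cost path using administrative weights as link costs. It is a generic shortest-path example over an invented topology, not the actual PNNI or GCAC algorithm, which additionally prunes links that cannot satisfy the requested traffic contract.

```python
import heapq

def compute_dtl(links: dict[str, list[tuple[str, int]]], src: str, dst: str) -> list[str]:
    """Lowest-cost node list from src to dst, using administrative weight as cost."""
    best = {src: 0}
    heap = [(0, src, [src])]
    while heap:
        cost, node, path = heapq.heappop(heap)
        if node == dst:
            return path
        for neighbor, weight in links.get(node, []):
            new_cost = cost + weight
            if neighbor not in best or new_cost < best[neighbor]:
                best[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor, path + [neighbor]))
    return []

# Invented topology: lower administrative weight steers traffic via switch B.
links = {"A": [("B", 5), ("C", 20)], "B": [("D", 5)], "C": [("D", 5)]}
print(compute_dtl(links, "A", "D"))   # ['A', 'B', 'D']
```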

PNNI also provides performance scalability when routing traffic through an ATM network, using the hierarchical structure of ATM addresses. An individual ATM end system in a PNNI peer group can be reached using the summary address for that peer group, similar to using the network and subnet ID portions of an IP address. A node whose address does not match the summary address (the non-matching address is known as a foreign address) can be explicitly set to be reachable and advertised.

Figure 12: Gigabit Ethernet Supported Distances.

                                     1000BASE-SX          1000BASE-LX          1000BASE-CX               1000BASE-T
IEEE Standard                        802.3z               802.3z               802.3z                    802.3ab
Data Rate                            1000 Mbps            1000 Mbps            1000 Mbps                 1000 Mbps
Optical Wavelength (nominal)         850 nm (shortwave)   1300 nm (longwave)   N/A                       N/A
Multimode Fiber (50 µm) distance     525 m                550 m                N/A                       N/A
Multimode Fiber (62.5 µm) distance   260 m                550 m                N/A                       N/A
Singlemode Fiber (10 µm) distance    N/A                  3 km                 N/A                       N/A
UTP-5 100 ohm distance               N/A                  N/A                  N/A                       100 m
STP 150 ohm distance                 N/A                  N/A                  25 m                      N/A
Number of Wire Pairs/Fibers          2 fibers             2 fibers             2 pairs                   4 pairs
Connector Type                       Duplex SC            Duplex SC            Fibre Channel-2 or DB-9   RJ-45

Note: Distances are for full duplex, the expected mode of operation in most cases.

A Peer Group Leader (PGL) may represent the nodes in the peer group at a higher level. These PGLs are logical group nodes (LGNs) that form higher-level peer groups, which allow even shorter summary addresses. These higher-level peer groups can be represented in even higher peer groups, thus forming a hierarchy. By using this multi-level hierarchical routing, less address, topology, and link state information needs to be advertised across an ATM network, allowing scalability as the number of nodes grows.
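The effect of summarization can be sketched in a few lines of Python. The prefixes below are shortened, made-up stand-ins for real 20-byte ATM NSAP addresses; the point is simply that one advertised prefix covers every end system in the peer group, while configured foreign addresses are advertised individually.

```python
# Shortened, made-up prefixes stand in for 20-byte ATM NSAP addresses.
PEER_GROUP_SUMMARY = "39.840f.80"      # one prefix advertised for the whole group
FOREIGN_ADDRESSES = {"47.0091.aa"}     # explicitly configured foreign prefixes

def advertised_as_reachable(atm_address: str) -> bool:
    """A destination is advertised by this peer group if it matches the
    summary prefix or one of the configured foreign addresses."""
    prefixes = {PEER_GROUP_SUMMARY} | FOREIGN_ADDRESSES
    return any(atm_address.startswith(p) for p in prefixes)

print(advertised_as_reachable("39.840f.80.0001.17"))   # True: covered by the summary
print(advertised_as_reachable("47.0091.aa.0003.42"))   # True: configured foreign address
print(advertised_as_reachable("45.0000.11.0000.01"))   # False: not reachable here
```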

However, this rich functionality comes with a price. PNNI requires memory, processing power, and bandwidth from the ATM switches for maintaining state information, topology and link state update exchanges, and path computation. PNNI also results in greater complexity in hardware design, software algorithms, switch configuration, deployment, and operational support, and ultimately much higher costs.

ATM UNI Uplinks versus NNI Risers

PNNI provides many benefits with regard to resiliency and scalability when connecting ATM switches in the campus backbone. However, these advantages are not available in most ATM installations, where the LAN switches in the wiring closets are connected to the backbone switches using ATM UNI uplinks. In such connections, the end stations attached to the LAN switch are associated, directly or indirectly (through VLANs), with specific proxy LECs located in the uplinks. An end station cannot be associated with more than one proxy LEC active in separate uplinks at any one time. Hence, no redundant path is available if the proxy LEC (meaning the uplink or uplink path) representing the end stations should fail.

While it is possible to have one uplink active and another on standby, connected to the backbone via a different path and ready to take over in case of failure, very few ATM installations have implemented this design, for reasons of cost, complexity, and lack of this capability from the switch vendor.

One solution is provided by the Nortel Networks Centillion* 50/100 and System 5000BH/BHC LAN-ATM Switches. These switches provide Token-Ring and Ethernet end station connectivity on the one (desktop) side and "NNI riser uplinks" to the core ATM switches on the other (backbone) side. Because these "NNI risers" are PNNI uplinks, the LAN-to-ATM connectivity enjoys all the benefits of PNNI.

Gigabit Ethernet Link Aggregation

With Gigabit Ethernet, multiple physical links may be installed between two switches, or between a switch and a server, to provide greater bandwidth and resiliency. Typically, the IEEE 802.1d Spanning Tree Protocol (STP) is used to prevent loops forming between these parallel links, by blocking certain ports and forwarding on others so that there is only one path between any pair of source and destination end stations. In doing so, STP incurs some performance penalty when converging to a new spanning tree structure after a network topology change.

Although most switches are plug-and-play with default STP parameters, erroneous configuration of these parameters can lead to looping, which is difficult to resolve. In addition, by blocking certain ports, STP allows only one of several parallel links between a pair of switches to carry traffic. Hence, bandwidth between switches cannot be scaled by adding more parallel links as required, although resiliency is thus improved.

To overcome the deficiencies of STP, various vendor-specific capabilities are offered to increase the resiliency, load distribution and balancing, and scalability in bandwidth for parallel links between Gigabit Ethernet switches.

For example, the Nortel Networks Passport Campus Solution offers Multi-Link Trunking and Gigabit Ethernet LinkSafe:

• Multi-Link Trunking (MLT), which allows up to four physical connections between two Passport 1000 Routing Switches, or between a BayStack* 450 Ethernet Switch and a Passport 1000 Routing Switch, to be grouped together as a single logical link with much greater resiliency and bandwidth than is possible with several individual connections.

Each MLT group may be made up of Ethernet, Fast Ethernet, or Gigabit Ethernet physical interfaces; all links within a group must be of the same media type (copper or fiber), have the same speed and half- or full-duplex settings, and belong to the same Spanning Tree group, although they need not be from the same interface module within a switch. Loads are automatically balanced across the MLT links, based on source and destination MAC addresses (bridged traffic) or source and destination IP addresses (routed traffic), as sketched in the example following this list. Up to eight MLT groups may be configured in a Passport 1000 Routing Switch.

• Gigabit Ethernet LinkSafe, which provides two Gigabit Ethernet ports on a Passport 1000 Routing Switch interface module to connect to another similar module on another switch, with one port active and the other on standby, ready to take over automatically should the active port or link fail. LinkSafe is used for riser and backbone connections, with each link routed through separate physical paths to provide a high degree of resiliency protection against a port or link failure.
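The sketch below illustrates the kind of address-based distribution described for MLT. The paper does not specify the actual hash used by the Passport switches, so the CRC-based selection here is purely illustrative; what matters is that all frames of a given address pair map to the same link, preserving frame order while spreading different flows across the group.

```python
import zlib

MLT_LINKS = ["port 1/1", "port 1/2", "port 2/1", "port 2/2"]   # up to four links per group

def mlt_link(src: str, dst: str, links=MLT_LINKS) -> str:
    """Select the physical link for a frame from its address pair.

    Bridged traffic would key on the MAC pair and routed traffic on the
    IP pair; every frame of a given pair lands on the same link, which
    keeps frames in order while spreading different flows across the group."""
    return links[zlib.crc32(f"{src}>{dst}".encode()) % len(links)]

print(mlt_link("00:80:2d:aa:01:01", "00:80:2d:bb:02:02"))   # bridged: MAC pair
print(mlt_link("10.1.1.5", "10.2.7.9"))                     # routed: IP pair
```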

An important capability is that virtual LANs (VLANs) distributed across multiple switches can be interconnected, with or without IEEE 802.1Q VLAN Tagging, using MLT and Gigabit Ethernet trunks.

With MLT and Gigabit Ethernet LinkSafe redundant trunking and link aggregation, the BayStack 450 Ethernet Switch and Passport 1000 Routing Switch provide a solution that is comparable to ATM PNNI in its resilience and incremental scalability, and is superior in its simplicity.

IEEE P802.3ad Link Aggregation

In recognition of the need for open standards and interoperability, Nortel Networks actively leads in the IEEE P802.3ad Link Aggregation Task Force, authorized by the IEEE 802.3 Trunking Study Group in June 1998, to define a link aggregation standard for use on switch-to-switch and switch-to-server parallel connections. This standard is currently targeted for availability in early 2000.

The IEEE P802.3ad Link Aggregation is an important full-duplex, point-to-point technology for the core LAN infrastructure and provides several benefits:

• Greater bandwidth capacity, allowing parallel links between two switches, or a switch and a server, to be aggregated together as a single logical pipe with multi-Gigabit capacity (if necessary); traffic is automatically distributed and balanced over this pipe for high performance.

• Incremental bandwidth scalability, allowing more links to be added between two switches, or a switch and a server, only when needed for greater performance, from a minimal initial hardware investment, and with minimal disruption to the network.

• Greater resiliency and fault tolerance, where traffic is automatically reassigned to the remaining operative links, thus maintaining communication if individual links between two switches, or a switch and a server, fail.

• Flexible and simple migration vehicle, where Ethernet and Fast Ethernet switches at the LAN edges can have multiple lower-speed links aggregated to provide higher-bandwidth transport into the Gigabit Ethernet core.

A brief description of the IEEE P802.3ad Link Aggregation standard (which may change, as it is still fairly early in the standards process) follows.

A physical connection between two switches, or a switch and a server, is known as a link segment. Individual link segments of the same medium type and speed may make up a Link Aggregation Group (LAG), with a link segment belonging to only one LAG at any one time. Each LAG is associated with a single MAC address.

Frames that belong logically together (for example, to an application being used at a given instance, flowing in sequence between a pair of end stations) are treated as a conversation (similar to the concept of a "flow"). Individual conversations are aggregated together to form an Aggregated Conversation, according to user-specified Conversation Aggregation Rules, which may specify aggregation, for example, on the basis of source/destination address pairs, VLAN ID, IP subnet, or protocol type. Frames that are part of a given conversation are transmitted on a single link segment within a LAG to ensure in-sequence delivery.

A Link Aggregation Control Protocol is used to exchange link configuration, capability, and state information between adjacent switches, with the objective of forming LAGs dynamically. A Flush protocol, similar to that in ATM LAN Emulation, is used to flush frames in transit when links are added to or removed from a LAG.

Among the objectives of the IEEE P802.3ad standard are automatic configuration, low protocol overheads, rapid and deterministic convergence when link states change, and accommodation of aggregation-unaware links.

Technology Complexity and Cost

Two of the most critical criteria in the technology decision are the complexity and cost of that technology. In both aspects, simple and inexpensive Gigabit Ethernet wins hands down over complex and expensive ATM, at least in enterprise networks.

ATM is fairly complex because it is a connection-oriented technology that has to emulate the operation of connectionless LANs. As a result, additional physical and logical components, connections, and protocols have to be added, with the attendant need for understanding, configuration, and operational support. Unlike Gigabit Ethernet (which is largely plug-and-play), there is a steep learning curve associated with ATM, in product development as well as product usage. ATM also suffers from a greater number of interoperability and compatibility issues than does Gigabit Ethernet, because of the different options vendors implement in their ATM products. Although interoperability testing does improve the situation, it also adds time and cost to ATM product development.

Because of the greater complexity, the result is also greater costs in:

• Education and training

• Implementation and deployment

• Problem determination and resolution

• Ongoing operational support

• Test and analysis equipment, and other management tools.

Integration of Layer 3 and Above Functions

Both ATM and Gigabit Ethernet provide the underlying internetwork over which IP packets are transported. Although initially a Layer 2 technology, ATM functionality is creeping upwards in the OSI Reference Model. ATM Private Network Node Interface (PNNI) provides signaling and OSPF-like best route determination when setting up the path from a source to a destination end system. Multiprotocol Over ATM (MPOA) allows short-cut routes to be established between two communicating ATM end systems located in different IP subnets, completely bypassing intervening routers along the path.

In contrast, Gigabit Ethernet is strictly a Layer 2 technology, with much of the other needed functionality added above it. To a large extent, this separation of functions is an advantage, because changes to one function do not disrupt another if there is clear modularity of functions. This decoupling was a key motivation in the original development of the 7-layer OSI Reference Model. In fact, the complexity of ATM may be due to the rich functionality all provided "in one hit," unlike the relative simplicity of Gigabit Ethernet, where higher layer functionality is kept separate from, and added "one at a time" to, the basic Physical and Data Link functions.


MPOA and NHRP

A traditional router provides two basic Layer 3 functions: determining the best possible path to a destination using routing control protocols such as RIP and OSPF (this is known as the routing function), and then forwarding the frames over that path (this is known as the forwarding function).

Multi-Protocol Over ATM (MPOA) enhances Layer 3 functionality over ATM in three ways:

• MPOA uses a Virtual Router model to provide greater performance scalability by allowing the typically centralized routing control function to be divorced from the data frame forwarding function, and distributing the data forwarding function to access switches on the periphery of the network. This "separation of powers" allows routing capability and forwarding capability to be distributed to where each is most effective, and allows each to be scaled when needed without interference from the other.

• MPOA enables paths (known as short-cut VCCs) to be directly established between a source and its destination, without the hop-by-hop, frame-by-frame processing and forwarding that is necessary in traditional router networks. Intervening routers, which are potentially performance bottlenecks, are completely bypassed, thereby enhancing forwarding performance.

• MPOA uses fewer resources in the form of VCCs. When traditional routers are used in an ATM network, one Data Direct VCC (DDVCC) must be established between a source end station and its gateway router, one DDVCC between a destination end station and its gateway router, and several DDVCCs between intervening routers along the path. With MPOA, only one DDVCC is needed between the source and destination end stations.

Gigabit Ethernet can also leverage a similar capability for IP traffic using the Next Hop Resolution Protocol (NHRP). In fact, MPOA uses NHRP as part of the process to resolve MPOA destination addresses. MPOA Resolution Requests are converted to NHRP Resolution Requests by the ingress MPOA server before forwarding the requests towards the intended destination. NHRP Resolution Responses received by the ingress MPOA server are converted to MPOA Resolution Responses before being forwarded to the requesting source. Just as MPOA shortcuts can be established for ATM networks, NHRP shortcuts can also be established to provide the performance enhancement in a frame-switched network.
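The relay performed by the ingress MPOA server can be pictured with the toy Python sketch below. The message classes and field names are hypothetical stand-ins rather than the real MPOA or NHRP PDU formats, which carry many more fields; the sketch mirrors only the conversion flow described above.

```python
from dataclasses import dataclass

# Hypothetical message shapes for illustration only; real MPOA and NHRP
# PDUs carry many more fields than the two or three shown here.
@dataclass
class MpoaResolutionRequest:
    src_ip: str
    dst_ip: str

@dataclass
class NhrpResolutionRequest:
    src_ip: str
    dst_ip: str

@dataclass
class NhrpResolutionReply:
    dst_ip: str
    next_hop_atm: str

@dataclass
class MpoaResolutionReply:
    dst_ip: str
    shortcut_atm: str

def mps_forward(request: MpoaResolutionRequest) -> NhrpResolutionRequest:
    """The ingress MPOA server re-issues the client's request as NHRP."""
    return NhrpResolutionRequest(request.src_ip, request.dst_ip)

def mps_return(reply: NhrpResolutionReply) -> MpoaResolutionReply:
    """...and converts the NHRP answer back into an MPOA reply that gives
    the client the ATM address to which a shortcut VCC can be opened."""
    return MpoaResolutionReply(reply.dst_ip, reply.next_hop_atm)

print(mps_forward(MpoaResolutionRequest("10.1.1.5", "10.2.7.9")))
print(mps_return(NhrpResolutionReply("10.2.7.9", "39.840f.80.0002.aa")))
```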

Gateway Redundancy

For routing between subnets in an ATM or Gigabit Ethernet network, end stations typically are configured with the static IP address of a Layer 3 default gateway router. Because this default gateway is a single point of failure, sometimes with catastrophic consequences, various techniques have been deployed to ensure that an alternate backs it up should it fail.

With ATM, redundant and distributed Layer 3 gateways are currently vendor-specific. Even if a standard should emerge, it is likely that more logical components, protocols, and connections will need to be implemented to provide redundant and/or distributed gateway functionality.

Virtual Router Redundancy Protocol

For Gigabit Ethernet, the IETF Virtual Router Redundancy Protocol (VRRP, RFC 2338) is available for deploying interoperable and highly resilient default gateway routers. VRRP allows a group of routers to provide redundant and distributed gateway functions to end stations through the mechanism of a virtual IP address: the address that is configured in end stations as the default gateway router.

At any one time, the virtual IP address is mapped to a physical router, known as the Master. Should the Master fail, another router within the group is elected as the new Master with the same virtual IP address. The new Master automatically takes over as the new default gateway, without requiring configuration changes in the end stations. In addition, each router may be Master for a set of end stations in one subnet while providing backup functions for another, thus distributing the load across multiple routers.
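A minimal sketch of this behavior, with invented router names, priorities, and virtual addresses, is shown below: the highest-priority live router owns each virtual gateway address, and a surviving router takes over automatically when the Master fails.

```python
# Two VRRP groups: R1 has the higher priority for one virtual gateway
# address and R2 for the other, so each backs the other up while the
# load is spread across both. Addresses and priorities are illustrative.
GROUPS = {
    "10.0.10.1": {"R1": 200, "R2": 100},
    "10.0.20.1": {"R1": 100, "R2": 200},
}
failed = set()

def master(virtual_ip: str):
    """The highest-priority live router owns the virtual IP address."""
    live = {r: p for r, p in GROUPS[virtual_ip].items() if r not in failed}
    return max(live, key=live.get) if live else None

for vip in GROUPS:
    print(vip, "->", master(vip))      # R1 serves 10.0.10.1, R2 serves 10.0.20.1

failed.add("R1")                       # R1 fails; R2 takes over both addresses
for vip in GROUPS:
    print(vip, "->", master(vip))
```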


LAN Integration

The requirements of the LAN are very different from those of the WAN. In the LAN, bandwidth is practically "free" once installed, as there are no ongoing usage costs. As long as sufficient bandwidth capacity is provisioned (or even over-provisioned) to meet all demand, complex traffic management and congestion control schemes may not be needed at all. For the user, other issues assume greater importance; these include ease of integration, manageability, flexibility (moves, adds, and changes), simplicity, scalability, and performance.

Seamless Integration

ATM has often been touted as the technology that provides seamless integration from the desktop, over the campus and enterprise, right through to the WAN and across the world. The same technology and protocols are used throughout. Deployment and ongoing operational support are much easier because of the opportunity to "learn once, do many." One important assumption in this scenario is that ATM would be widely deployed at the desktops. This assumption does not match reality.

ATM deployment at the desktop is almost negligible, while Ethernet and Fast Ethernet are very widely installed in millions of desktop workstations and servers. In fact, many PC vendors include Ethernet, Fast Ethernet, and (increasingly) Gigabit Ethernet NICs on the motherboards of their workstation or server offerings. Given this huge installed base and the common technology from which it evolved, Gigabit Ethernet provides seamless integration from the desktops to the campus and enterprise backbone networks.

If ATM were to be deployed as the campus backbone for all the Ethernet desktops, then there would be a need for frame-to-cell and cell-to-frame conversion, the Segmentation and Reassembly (SAR) overhead.

With Gigabit Ethernet in the campus backbone and Ethernet to the desktops, no cell-to-frame or frame-to-cell conversion is needed. Not even frame-to-frame conversion is required from one form of Ethernet to another. Hence, Gigabit Ethernet provides a more seamless integration in the LAN environment.

Broadcast and Multicast

Broadcasts and multicasts are very natural means to send traffic from one source to multiple recipients in a connectionless LAN. Gigabit Ethernet is designed for just such an environment. The higher-layer IP multicast address is easily mapped to a hardware MAC address. Using the Internet Group Management Protocol (IGMP), receiving end stations report group membership to (and respond to queries from) a multicast router, so as to receive multicast traffic from networks beyond the local attachment. Source end stations need not belong to a multicast group in order to send to members of that group.

By contrast, broadcasts and multicasts in an ATM LAN present a few challenges because of the connection-oriented nature of ATM.

In each emulated LAN (ELAN), ATM needs the services of a LAN Emulation Server (LES) and a Broadcast and Unknown Server (BUS) to translate from MAC addresses to ATM addresses. These additional components require additional resources, as well as the complexity needed to signal, set up, maintain, and tear down Control Direct, Control Distribute, Multicast Send, and Multicast Forward VCCs. Complexity is further increased because an ELAN can have only a single LES/BUS, which must be backed up by another LES/BUS to eliminate any single point of failure. Communication between active and backup LES/BUS nodes requires more virtual connections and protocols for synchronization, failure detection, and takeover (SCSP and LNNI).

With all broadcast traffic going through the BUS, the BUS poses a potential bottleneck.

For IP multicasting in a LANE network, ATM needs the services of the BUS and, if available (with LUNI v2), an SMS. For IP multicasting in a Classical IP ATM network, ATM needs the services of a Multicast Address Resolution Server (MARS), a Multicast Connection Server (MCS), and the Cluster Control VCCs. These components require additional resources and complexity for connection signaling, setup, maintenance, and teardown.

With UNI 3.0/3.1, the source must first resolve the target multicast address to the ATM addresses of the group members, and then construct a point-to-multipoint tree, with the source itself as the root to the multiple destinations, before multicast traffic may be distributed. With UNI 4.0, end stations may join as leaves to a point-to-multipoint distribution tree, with or without intervention from the root. Issues of interoperability between the different UNI versions are raised in either case.
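By way of contrast with the ATM machinery above, the Ethernet-side mapping from an IPv4 group address to a multicast MAC address is a one-line computation, defined in RFC 1112 as placing the low-order 23 bits of the group address into the IEEE prefix 01-00-5E; a short Python illustration follows.

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group address to its Ethernet MAC address.

    Per RFC 1112, the low-order 23 bits of the group address are copied
    into the low-order 23 bits of the IEEE multicast prefix 01-00-5E."""
    address = ipaddress.IPv4Address(group)
    if not address.is_multicast:
        raise ValueError(f"{group} is not an IPv4 multicast address")
    mac = (0x01005E << 24) | (int(address) & 0x7FFFFF)
    return ":".join(f"{(mac >> shift) & 0xFF:02x}" for shift in range(40, -1, -8))

print(multicast_mac("224.10.8.5"))     # 01:00:5e:0a:08:05
```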

Multi-LAN Integration

As a backbone technology, ATM can interconnect physical LAN segments using Ethernet, Fast Ethernet, Gigabit Ethernet, and Token Ring. These are the main MAC layer protocols in use on campus networks today. Using ATM as the common uplink technology and with translational bridging functionality, the Ethernet and Token-Ring LANs can interoperate relatively easily.

With Gigabit Ethernet, interoperation between Ethernet and Token-Ring LANs requires translational bridges that transform the frame format of one type to the other.

MAN/WAN Integration

It is relatively easy to interconnect ATM campus backbones across the MAN or WAN. Most ATM switches are offered with DS1/E1, DS3/E3, SONET OC-3c/SDH STM-1, and SONET OC-12c/SDH STM-4 ATM interfaces that connect directly to the ATM MAN or WAN facilities. Some switches are offered with DS1/E1 Circuit Emulation, DS1/E1 Inverse Multiplexing over ATM, and Frame Relay Network and Service Interworking capabilities that connect to existing non-ATM MAN or WAN facilities. All these interfaces allow ATM campus switches direct connections to the MAN or WAN, without the need for additional devices at the LAN-WAN edge.

At this time, many Gigabit Ethernet switches do not offer MAN/WAN interfaces. Connecting Gigabit Ethernet campus networks across the MAN or WAN typically requires the use of additional devices to access MAN/WAN facilities, such as Frame Relay, leased lines, and even ATM networks. These interconnect devices are typically routers or other multiservice switches that add to the total complexity and cost. With the rapid acceptance of Gigabit Ethernet as the campus backbone of choice, however, many vendors are now offering MAN/WAN interfaces, such as ATM over SONET OC-3c/SDH STM-1 and SONET OC-12c/SDH STM-4, and Packet-over-SONET/SDH, in their Gigabit Ethernet switches.

While an ATM LAN does offer seamless integration with the ATM MAN or WAN through direct connectivity, the MAN/WAN for the most part will continue to be heterogeneous, not homogeneous ATM. This is due to the installed non-ATM equipment, geographical coverage, and the time needed to change. This situation will persist more than in the LAN, where the enterprise has greater control and, therefore, greater ease of convergence. Even in the LAN, the convergence is towards Ethernet, and not ATM, as the underlying technology. Technologies other than ATM will be needed for interconnecting between locations, and even over entire regions, because of difficult geographical terrain or uneconomic reach. Thus, there will continue to be a need for technology conversion from the LAN to the WAN, except where ATM has been implemented.

Another development, the widespread deployment of fiber optic technology, may enable the LAN to be extended over the WAN using the seemingly boundless optical bandwidth for LAN traffic. This means that Gigabit Ethernet campuses can be extended across the WAN just as easily, perhaps even more easily and at lower cost, than ATM over the WAN. Among the possibilities are access to Dark Fiber with long-haul, extended-distance Gigabit Ethernet (50 km or more), Packet-over-SONET/SDH, and IP over Optical Dense Wave Division Multiplexing.

One simple yet powerful way for extending high performance Gigabit Ethernet campus networks across the WAN, especially in the metropolitan area, is the use of Packet-over-SONET/SDH (POS, also known as IP over SONET/SDH). SONET is emerging as a competitive service to ATM over the MAN/WAN. With POS, IP packets are directly encapsulated into SONET frames, thereby eliminating the additional overhead of the ATM layer (see column "C" in Figure 13).

To extend this a step further, IP packets can be transported over raw fiber without the overhead of SONET/SDH framing; this is called IP over Optical (see column "D" in Figure 13). Optical Networking can transport very high volumes of data, voice, and video traffic over different light wavelengths.

Figure 13: Interconnection Technologies over the MAN/WAN.
(A) B-ISDN: IP over ATM over SONET/SDH over Optical
(B) IP over ATM: IP over ATM over Optical
(C) IP over SONET/SDH: IP over SONET/SDH over Optical
(D) IP over Optical: IP directly over Optical
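The ATM-layer overhead that POS removes can be approximated with a small back-of-the-envelope calculation, sketched in Python below. The packet sizes are arbitrary examples; the AAL5 arithmetic (an 8-byte trailer, padding to a multiple of 48 bytes, and a 5-byte header on every 53-byte cell) is standard, while the few bytes assumed for PPP/HDLC framing on the POS side vary with the exact options used, and SONET section and path overhead, common to both, is ignored.

```python
import math

def atm_wire_bytes(ip_len: int) -> int:
    """Bytes on the wire when an IP packet is carried in AAL5 over ATM:
    payload plus 8-byte AAL5 trailer, padded to a whole number of
    48-byte cell payloads, with each 53-byte cell carrying a 5-byte header."""
    cells = math.ceil((ip_len + 8) / 48)
    return cells * 53

def pos_wire_bytes(ip_len: int, framing: int = 9) -> int:
    """Bytes on the wire for Packet-over-SONET/SDH; 'framing' approximates
    the PPP/HDLC encapsulation overhead and is an assumption here."""
    return ip_len + framing

for ip_len in (64, 576, 1500):
    atm, pos = atm_wire_bytes(ip_len), pos_wire_bytes(ip_len)
    print(f"{ip_len:5d}-byte packet: ATM {atm} bytes "
          f"({100 * (atm - ip_len) / ip_len:.0f}% overhead), "
          f"POS {pos} bytes ({100 * (pos - ip_len) / ip_len:.0f}% overhead)")
```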

The pattern of traffic has also been rapidly changing, with more than 80 percent of the network traffic expected to traverse the MAN/WAN, versus only 20 percent remaining on the local campus. Given the changing pattern of traffic, and the emergence of IP as the dominant network protocol, the elimination of layers of communication for IP over the MAN/WAN means reduced bandwidth usage costs and greater application performance for the users.

While all these technologies are evolving, businesses seek to minimize risks by investing in the lower-cost Gigabit Ethernet, rather than the higher-cost ATM.

Management Aspects

Because businesses need to be increasingly dynamic to respond to opportunities and challenges, the campus networking environment is constantly in a state of flux. There are continual moves, adds, and changes; users and workstations form and re-form workgroups; road warriors take the battle to the streets; and highly mobile users work from homes and hotels to increase productivity.

With all these constant changes, manageability of the campus network is a very important selection criterion. The more homogeneous and simpler the network elements are, the easier they are to manage. Given the ubiquity of Ethernet and Fast Ethernet, Gigabit Ethernet presents a more seamless integration with existing network elements than ATM. Therefore, Gigabit Ethernet is easier to manage. Gigabit Ethernet is also easier to manage because of its innate simplicity and the wealth of experience and tools available with its predecessor technologies.


By contrast, ATM is significantly different from the predominant Ethernet desktops it interconnects. Because of this difference and its relative newness, there are few tools and skills available to manage ATM network elements. ATM is also more difficult to manage because of the complexity of logical components and connections, and the multitude of protocols needed to make ATM workable. On top of the physical network topology lie a number of logical layers, such as PNNI, LUNI, LNNI, MPOA, QoS, signaling, SVCs, PVCs, and soft PVCs. Logical components are more difficult to troubleshoot than physical elements when problems do occur.

Standards and Interoperability

Like all technologies, ATM and Gigabit Ethernet standards and functions mature and stabilize over time. Evolved from a common technology, frame-based Gigabit Ethernet backbones interoperate seamlessly with the millions of connectionless, frame-based Ethernet and Fast Ethernet desktops and servers in today's enterprise campus networks. By contrast, connection-oriented, cell-based ATM backbones need additional functions and capabilities that require standardization, and can easily lead to interoperability issues.

ATM Standards

Although ATM is relatively new, its standards have been in development since 1984 as part of B-ISDN, designed to support private and public networks. Since the formation of the ATM Forum in 1991, many ATM specifications have been completed, especially between 1993 and 1996.

Because of the fast pace of development efforts during this period, a stable environment was felt to be needed for consolidation, implementation, and interoperability. In April 1996, the Anchorage Accord agreed on a collection of some 60 ATM Forum specifications that provided a basis for stable implementation. Besides designating a set of foundational and expanded feature specifications, the Accord also established criteria to ensure interoperability of ATM products and services between current and future specifications. This Accord provided the assurance needed for the adoption of ATM and a checkpoint for further standards development. As of July 1999, more than 40 ATM Forum specifications are in various stages of development.

To promote interoperability, the ATM Consortium was formed in October 1993, one of several consortiums at the University of New Hampshire InterOperability Lab (IOL). The ATM Consortium is a grouping of ATM product vendors interested in testing interoperability and conformance of their ATM products in a cooperative atmosphere, without adverse competitive publicity.

Gigabit Ethernet Standards

By contrast, Gigabit Ethernet has evolved from the tried and trusted Ethernet and Fast Ethernet technologies, which have been in use for more than 20 years. Being relatively simple compared to ATM, much of the development was completed within a relatively short time. The Gigabit Ethernet Alliance, a group of networking vendors including Nortel Networks, promotes the development, demonstration, and interoperability of Gigabit Ethernet standards. Since its formation in 1996, the Alliance has been very successful in helping to introduce the IEEE 802.3z 1000BASE-X and the IEEE 802.3ab 1000BASE-T Gigabit Ethernet standards.

Similar to the ATM Consortium, the Gigabit Ethernet Consortium was formed in April 1997 at the University of New Hampshire InterOperability Lab as a cooperative effort among Gigabit Ethernet product vendors. The objective of the Gigabit Ethernet Consortium is the ongoing testing of Gigabit Ethernet products and software from both an interoperability and a conformance perspective.

Passport Campus Solution

In response to the market requirements and demand for Ethernet, Fast Ethernet, and Gigabit Ethernet, Nortel Networks offers the Passport Campus Solution as the best-of-breed technology for campus access and backbone LANs. The Passport Campus Solution (see Figure 14) comprises the Passport 8000 Enterprise Switch, with its edge switching and routing capabilities, the Passport 1000 Routing Switch family, the Passport 700 Server Switch family, and the BayStack 450 Stackable Switch, complemented by Optivity Policy Services for policy-enabled networking.

The following highlights key features of the Passport 8000 Enterprise Switch, the winner of the Best Network Hardware award from the 1999 Annual SI Impact Awards, sponsored by IDG's Solutions Integrator magazine:

• High port density, scalability, and performance

• Switch capacity of 50 Gbps, scalable to 256 Gbps

• Aggregate throughput of 3 million packets per second

• Less than 9 microseconds of latency

• Up to 372 Ethernet 10/100BASE-T auto-sensing, auto-negotiating ports

• Up to 160 Fast Ethernet 100BASE-FX ports

• Up to 64 Gigabit Ethernet 1000BASE-SX or -LX ports

• Wirespeed switching for Ethernet, Fast Ethernet and Gigabit Ethernet

• High resiliency through Gigabit LinkSafe and Multi-Link Trunking

• High availability through fully distributed switching and management architectures, redundant and load-sharing power supplies and cooling fans, and the ability to hot-swap all modules

• Rich functionality through support of:

• Port- and protocol-based VLANs for broadcast containment, logical workgroups, and easy moves, adds, and changes

• IEEE 802.1Q VLAN Tagging for carrying traffic from multiple VLANs over a single trunk

• IEEE 802.1p traffic prioritization for key business applications

• IGMP, broadcast and multicast rate limiting for efficient broadcast containment

• Spanning Tree Protocol FastStart for faster network convergence and recovery

• Remote Network Monitoring (RMON), port mirroring, and Remote Traffic Monitoring (RTM) for network management and problem determination.


Figure 14: Passport Campus Solution and Optivity Policy Services. The figure depicts a campus carrying data, voice, and video traffic, built from Passport 8000 Enterprise Switches, Passport 1000 Routing Switches, Centillion 100 and System 5000BH Multi-LAN Switches, and BayStack 450 Ethernet Switches with 10/100/Gigabit Ethernet MLT resiliency and Gigabit Ethernet LinkSafe risers; a Passport 700 Server Switch provides server redundancy and load balancing for the server farm and a System 390 mainframe server, BN Routers act as redundant gateways to the WAN (OSPF, EMP), and Optivity Policy Services provides policy-enabled management using Common Open Policy Services, Differentiated Services, IP Precedence/Type of Service, IEEE 802.1Q VLAN tags, IEEE 802.1p user priority, and Express Classification.



For users with investments in Centillion 50/100 and System 5000BH LAN-ATM Switches, evolution to a Gigabit Ethernet environment will be possible once Gigabit Ethernet switch modules are offered in the future.

Information on the other award-winning members of the Passport Campus Solution is available on the Nortel Networks website: http://www.nortelnetworks.com

Conclusion and Recommendation

In enterprise networks, either ATM or Gigabit Ethernet may be deployed in the campus backbone. The key difference is in the complexity and much higher cost of ATM, versus the simplicity and much lower cost of Gigabit Ethernet. While it may be argued that ATM is richer in functionality, pure technical consideration is only one of the decision criteria, albeit a very important one.

Of utmost importance is functionality that meets today's immediate needs at a realistic price. There is no point in paying for functionality and complexity that may or may not ever be needed, and may even be obsolete in the future. The rate of technology change and competitive pressures demand that the solution be available today, before the next paradigm shift, and before new solutions introduce another set of completely new challenges.

Gigabit Ethernet provides a pragmatic, viable, and relatively inexpensive (and therefore lower risk) campus backbone solution that meets today's needs and integrates seamlessly with the omnipresence of connectionless, frame-based Ethernet and Fast Ethernet LANs. Enhanced by routing switch technology such as the Nortel Networks Passport 8000 Enterprise Switches, and policy-enabled networking capabilities in the Nortel Networks Optivity Policy Services, Gigabit Ethernet provides enterprise businesses with the bandwidth, functionality, scalability, and performance they need, at a much lower cost than ATM.

By contrast, ATM provides a campus backbone solution that has the disadvantages of undue complexity, unused functionality, and much higher cost of ownership in the enterprise LAN. Much of the complexity results from the multitude of additional components, protocols, control, and data connections required by connection-oriented, cell-based ATM to emulate broadcast-centric, connectionless, frame-based LANs. While Quality of Service (QoS) is an increasingly important requirement in enterprise networks, there are other solutions to the problem that are simpler, incremental, and less expensive.

For these reasons, Nortel Networks recommends Gigabit Ethernet as the technology of choice for most campus backbone LANs. ATM was, and continues to be, a good option where its unique and complex functionality can be exploited, for example, in metropolitan and wide area network deployments. This recommendation is supported by many market research surveys that show users overwhelmingly favor Gigabit Ethernet over ATM, including User Plans for High Performance LANs by Infonetics Research Inc. (March 1999) and Hub and Switch 5-Year Forecast by the Dell'Oro Group (July 1999).

*Nortel Networks, the Nortel Networks logo, the Globemark, How the World Shares Ideas, Unified Networks, BayStack, Centillion, Optivity, and Passport are trademarks of Nortel Networks. All other trademarks are the property of their owners.

© 2000 Nortel Networks. All rights reserved. Information in this document is subject to change without notice. Nortel Networks assumes no responsibility for any errors that may appear in this document. Printed in USA.

WP3740-B/04-00

United States: Nortel Networks, 4401 Great America Parkway, Santa Clara, CA 95054, 1-800-822-9638

Canada: Nortel Networks, 8200 Dixie Road, Brampton, Ontario L6T 5P6, Canada, 1-800-466-7835

Europe, Middle East, and Africa: Nortel Networks, Les Cyclades - Immeuble Naxos, 25 Allée Pierre Ziller, 06560 Valbonne, France, 33-4-92-96-69-66

Asia Pacific: Nortel Networks, 151 Lorong Chuan #02-01, New Tech Park, Singapore 556741, 65-287-2877

Caribbean and Latin America: Nortel Networks, 1500 Concord Terrace, Sunrise, Florida 33323-2815, U.S.A., 954-851-8000

For more sales and product information, please call 1-800-822-9638.

Author: Tony Tan, Portfolio Marketing, Commercial Marketing