Agenda (Rev 1)
Week 1: Internet History and Basic Concepts
Week 2: Routing vs. Switching
Week 3: Architecture and Topology Trends
Week 4: Performance, Congestion Control
Week 5: Multimedia Support, ATM vs. IP
Week 6: Routing part 1 (Intro, RIP, OSPF)
Week 7: Routing part 2 (BGP, state of the Internet)
Week 8: Guest lectures: Greg Minshall, and ??
Week 9: Failure Modes and Fault Diagnosis
Week 10: Product evaluation criteria
Loose Ends...
• RTP vs. UDP
• Enet framing: postamble byte
• Token Ring vs. Ethernet Reliability
• Repeaters = Hubs = Layer 1 or 2?
Week 3: Architecture & Topology Trends
• Focus on Campus/Enterprise networks
• Use UW network as case study
• Introduce DNS and DHCP
• Continue to examine design issues/choices
Technology/Usage Trends
• TCP/IP
• Switching & point-to-point links
• Multimedia
• Desktop web servers
• Push publishing
• Web caching
• Non-locality of reference
Backbone Design Issues
• Link Technology & Topology
• Routers vs. Switches vs. ATM
• Single vs. Multiprotocol
• Central vs. In-building Routers
• Low-Density vs. High-Density Routers
• Large vs. Small Subnets
• Address Management
• Redundancy
Core Network Elements: everything except the end-systems
• Name Resolution
• Host Configuration
• Multimedia Support
• Data Transport
• Data Caching??
• Management
Name Resolution
• DNS = Domain Name System
• Distributed, hierarchical directory service
• Maps host/service names to IP addresses
• Resiliency requires client failover
• Susceptible to bad data in root servers
• Growth of .com domain triggered crisis
• Need: security and dynamic update
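A minimal sketch of the client-failover point above, in Python: getaddrinfo returns every address DNS publishes for a name, and a resilient client tries each in turn instead of giving up on the first failure. The host name below is a placeholder.

import socket

def connect_with_failover(name, port):
    """Resolve a name, then try each published address until one works."""
    last_err = None
    for family, stype, proto, _, sockaddr in socket.getaddrinfo(
            name, port, type=socket.SOCK_STREAM):
        try:
            s = socket.socket(family, stype, proto)
            s.connect(sockaddr)
            return s          # first reachable address wins
        except OSError as e:
            last_err = e      # remember the failure, try the next address
    raise last_err or OSError("no usable address for " + name)

# sock = connect_with_failover("www.example.edu", 80)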
Host Configuration
• RARP
• BOOTP (and variants)
• DHCP
Problems with DHCP
• Client bugs leading to duplicate addresses
• Scaling
• Redundancy
• Conflict with desktop server trend
• Conflict with network management needs
• How long should the leases be?
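One way to reason about lease length is through the standard renewal clock (RFC 2131 defaults): the client unicasts a RENEW to its server at T1 and broadcasts a REBIND to any server at T2. Short leases reclaim addresses quickly but generate constant renewal traffic; long leases do the reverse.

def lease_timers(lease_seconds):
    """RFC 2131 default timers: T1 = 0.5 and T2 = 0.875 of the lease."""
    t1 = 0.5 * lease_seconds      # begin unicast RENEW to original server
    t2 = 0.875 * lease_seconds    # begin broadcast REBIND to any server
    return t1, t2

for lease in (3600, 86400, 7 * 86400):   # 1 hour, 1 day, 1 week
    t1, t2 = lease_timers(lease)
    print(f"lease={lease:>7}s  renew at {t1:>8.0f}s  rebind at {t2:>8.0f}s")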
Data (Web) Caching
• Important for improving web performance
• Resiliency requires client failover
• Scalability requires server-server protocol
• ICP = Internet Cache Protocol (sketched below)
• Legal & economic issues: copying & click-throughs
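The server-server piece can be sketched schematically: ask all sibling caches over UDP whether they hold a URL, fetch from the first that answers HIT, and go to the origin otherwise. The one-line message format below is a simplification, not the real ICP wire format of RFC 2186; the sibling hosts are placeholders, and 3130 is the conventional ICP port.

import socket

SIBLINGS = [("cache1.example.edu", 3130),
            ("cache2.example.edu", 3130)]

def query_siblings(url, timeout=0.2):
    """Return the first sibling reporting a HIT, or None on all misses."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    sock.settimeout(timeout)
    for peer in SIBLINGS:                 # fan the query out to all siblings
        sock.sendto(b"QUERY " + url.encode(), peer)
    try:
        while True:
            reply, peer = sock.recvfrom(2048)
            if reply.startswith(b"HIT"):
                return peer               # fetch the object from this sibling
    except socket.timeout:
        return None                       # silence or MISS: fetch from origin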
Data Transport
• Getting bits from A to B
• But how fast? How well?
• Not just unicast
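"How fast?" splits into latency and bandwidth: total transfer time is roughly propagation delay plus serialization time. A back-of-the-envelope model with illustrative numbers, not measurements:

def transfer_time(size_bytes, bandwidth_bps, latency_s):
    """Total time = propagation latency + size / bandwidth."""
    return latency_s + 8 * size_bytes / bandwidth_bps

for mbps in (10, 100, 1000):
    t = transfer_time(1_000_000, mbps * 1e6, 0.005)  # 1 MB, 5 ms latency
    print(f"{mbps:>4} Mbps: {t * 1000:6.1f} ms")
# At 1 Gbps the fixed 5 ms dominates: past a point, raw speed stops helping.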
Multimedia Support
• Multicast
• QoS = Quality of Service
• Performance = Speed + QoS
• Is QoS important if you have enough bandwidth?
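Multicast is the piece an end-system can already touch: joining a group is just a socket option, and IGMP informs the routers. A minimal receiver sketch; the group address and port are arbitrary examples from the administratively scoped range.

import socket, struct

GROUP, PORT = "239.1.2.3", 5004

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))
# Join the group: 4-byte group address + 4-byte local interface (any).
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)   # one send upstream reaches every member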
Performance Elements
• Client Machine/Software
• Network Core:
– Computer-to-Closet
– Closet-to-BDF
– BDF-to-Router
– Router-to-Router
• Server Machine/Software
(End-to-end performance spans client, network core, and server.)
High-Speed Technologies
• 100 Mbps
– FDDI (MTU=4500)
– 100VG (MTU=1500)
– 100BaseT (MTU=1500)
• 155 Mbps
– PPP over SONET OC3c
– ATM over SONET OC3c
• 1000 Mbps
– PPP over SONET OC24 or 48
– ATM over SONET OC24 or 48
– Gigabit Ethernet
– HIPPI, Fibre Channel
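Why the MTUs above matter: every packet carries roughly 40 bytes of TCP/IP header, so a larger frame wastes less of the wire, and fewer packets mean fewer per-packet interrupts at the end-systems. A rough calculation, ignoring link-layer framing:

HEADERS = 40  # 20-byte IP header + 20-byte TCP header, no options

for name, mtu in (("FDDI", 4500), ("Ethernet", 1500)):
    payload_fraction = (mtu - HEADERS) / mtu
    print(f"{name:8} MTU={mtu:>4}: {payload_fraction:.1%} payload")
# FDDI ~99.1% vs. Ethernet ~97.3% of each full-size frame is user data.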
Ethernet Performance Levels
• 10 Mbps
– Shared
– Dedicated (= Switched)
– Dedicated Full-Duplex
• 100 Mbps
– Shared
– Dedicated (= Switched)
– Dedicated Full-Duplex
• 1000 Mbps
– Shared
– Dedicated (= Switched)
– Dedicated Full-Duplex
ATM Performance Levels
• 25 Mbps
• 155 Mbps (OC3)
• 622 Mbps (OC12)
• 1244 Mbps (OC24)
• 2488 Mbps (OC48)
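Those raw line rates overstate what IP traffic actually sees: SONET framing takes a cut, and every 53-byte cell spends 5 bytes on its header. A back-of-the-envelope for OC-3c, using the standard published figures:

line_rate     = 155.52   # Mbps, OC-3c line rate
sonet_payload = 149.76   # Mbps left after SONET section/line/path overhead
cell_payload  = sonet_payload * 48 / 53   # 48 payload bytes per 53-byte cell
print(f"usable ATM payload ~ {cell_payload:.1f} Mbps")   # about 135.6 Mbps
# AAL5 encapsulation and cell padding shave off a bit more per packet.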
Next-step Desktop Connectivity
• Switched 10 (Half Duplex)
• Shared 100 (Half Duplex)
• Switched 100
• Would you rather have switched 10 or shared 100? (see the sketch below)
• What are the implications of each on the backbone?
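One way to frame the switched-10 vs. shared-100 question: dedicated 10 Mbps is yours regardless of neighbors, while shared 100 Mbps divides, at best, among the active hosts on the segment. Illustrative only; real shared Ethernet degrades further under collisions.

for active_hosts in (1, 5, 10, 20):
    share = 100 / active_hosts            # best-case Mbps per active host
    better = "shared 100" if share > 10 else "switched 10"
    print(f"{active_hosts:>2} active hosts: {share:5.1f} Mbps each -> {better}")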
Case Study: UW’s Campus Network
• The Problem
• History
• Growth
• Key Decisions
• Topology Evolution
• Future Choices
UW’s Network Problem
“Death of the net predicted; film at eleven”
• More users
• More usage
• More demanding applications
• More bad guys
• Apparent slow-downs due to net congestion
• Delays still spotty, but expected to worsen
More Demanding Applications
• Non-interactive: email
• Baseline interactive: telnet, web
• Multimedia: desktop conferencing, VOD
• High-end: Medical imaging, VR
Scaling Considerations
• Where do we feel the pressure from increasing use?
– Performance (Speed + QoS)– Address Management– End-user Support
UW Network History
• 1988: five anti-interoperable campus nets...
– 3,000 machines on a bridged Ethernet
– A large Micom terminal network
– Separate library, hospital, and administrative nets
• 1997: one campus net with...
– 12,000 PCs
– 6,000 Macs
– 4,000 Unix workstations
– 3,000 X terminals
– 1,000 hubs, routers
UW Node Growth
• By 12/94 we had 17,000 nodes and 650 modems
• By 12/95 we had 22,000 nodes and 1,300 modems
• By 12/97 we had 27,000 nodes and 1,500 modems
• Run-rate had been 3,000 nodes/year, now flat… Saturation at last??
UW Backbone Traffic
[Chart: billions of bytes carried on the backbone each November, 1990 through 1996; vertical axis 0 to 300]
UW Key Decisions
• Use Internet standards (Interoperate!)
• Route only IP (Simplify!)
• Use lots of 10BaseT Ethernet (Cheap!)
• Use multiple links (Redundancy, loadsharing)
• Use lots of subnets (Isolate Faults)
• Use lots of switches (Isolate Traffic)
• Use DHCP (Automate!)
UW Topology Evolution
• Epoch 1 (c. 1989): Dual Shared Ethernet Cables
• Epoch 2 (c. 1992 ): Dual Routers
• Epoch 3 (c. 1995): Quad Ethernet Switches
• Epoch 4 (c. 1997): Quad Fast Ethernet Switches
UW Current Backbone Topology
[Diagram: backbone switches S1 through S4 interconnecting routers R1 through R40, with links down to building subnets …]
UW Building Infrastructure
[Diagram: in-building wiring concentrating up to the Router Center]
UW Future Topology Choices
• Ring?
• Mesh?
• Continue with Hierarchy?
Should we…
• Use conventional routers?
• Use “layer 3 switches”?
• Use edge routers, ATM core?
• Use Ipsilon IP switching?
• Use 3Com VLANs & Fast IP architecture?
• Use Cisco Tag switching?
Where to put Layer 3 Functionality?
• Edges, nearest the end-systems
• In each Building Distribution Frame
• Centrally, at/near top of hierarchy
• A one-armed router between VLANs
Decision Criteria
• Interoperability
• Reliability
• Performance
• Fault Tolerance
• Simplicity/Manageability
• Cost
Conclusions?
• Simplify!
• IP Rules!
• Ethernet simpler/cheaper than ATM
• Adequate Frame-based QoS still a question
• Avoid *having* to upgrade end-systems
• Caching becoming part of the network