KU EECS 881 – High-Performance Networking – Principles
High-Performance Networking
EECS 881 – Preliminaries and Principles
James P.G. Sterbenz
Department of Electrical Engineering & Computer Science
Information Technology & Telecommunications Research Center (ITTC)
The University of Kansas
jpgs@eecs.ku.edu
http://www.ittc.ku.edu/~jpgs/courses/hsnets
rev. 10.0 – 26 August 2010
© 2004–2010 James P.G. Sterbenz
Preliminaries and Principles: Outline
PP.0 Preliminaries
PP.1 Fundamental axioms
PP.2 Drivers and constraints
PP.3 Design principles and tradeoffs
PP.4 Design techniques
Preliminaries and Principles: PP.0 Preliminaries
PP.0 Preliminaries
PP.1 Fundamental axioms
PP.2 Drivers and constraints
PP.3 Design principles and tradeoffs
PP.4 Design techniques
Network Architecture and Topology: The Network
• Collection of nodes or intermediate systems (IS)
  – switches, routers, bridges, etc.
• Interconnected by links that provide connectivity among end systems (ES), also called hosts or terminals
  – desktops, laptops, servers, telephone handsets, etc.
  – note: in some networks nodes may be both ES and IS
• To support distributed applications
  – e.g. email, Web browsing, peer-to-peer file sharing
Network Architecture and Topology: The Network
[Figure: topology of end systems and intermediate systems, showing edge or access switches, core or backbone switches, and a multihomed end system]
Network Architecture and Topology: Heterogeneous Networks
• Disparate networks are interconnected by gateways
  – translate data packet formats
  – interoperate signalling and control
[Figure: disparate networks interconnected by a gateway (GW)]
Network Architecture and Topology: Application Relationships
• Peer-to-peer
  – e.g. telepresence (video-conferencing), P2P file lookup
  – data streams with embedded synchronisation
• Client/server
  – request/response interaction
  – e.g. Web browsing, P2P file transfer
Network Control and Signalling: Communication Flow Diagrams
• Packets are parallelograms
• Messages are directed line segments
[Figure: time–distance flow diagram over nodes 1, 2, 3 with link distances x12 and x23, annotating per-packet delays tb, tp, tr, tf+tq, a cell time kc, and a signalling message SIG-MSG]
Preliminaries and Principles: PP.1 Fundamental Axioms
PP.0 Preliminaries
PP.1 Fundamental axioms
  Ø. Know the past, present, and future
  I. Application primacy
  II. High-performance paths goal
  III. Limiting constraints
  IV. Systemic optimisation principle
PP.2 Drivers and constraints
PP.3 Design principles and tradeoffs
PP.4 Design techniques
Fundamental Axioms: Know the Past
• It is essential to know the past
  – to know what is really new
  – to not waste time reinventing the past
  – to not repeat past mistakes
Know the Past Ø.1
Genuinely new ideas are rare. Almost every “new” idea has a past full of lessons that can either be learned or ignored.
Fundamental Axioms: Know the Past, Present, and Future
Know the Present Ø.2
“Old” ideas look different in the present because the context in which they have reappeared is different. Understanding the difference tells us which lessons to learn from the past, and which to ignore.
• It is essential to have a broad current understanding
  – to understand how to reapply past ideas
  – to understand how to avoid past mistakes
  – to know what new needs to be done
Fundamental Axioms: Know the Past, Present, and Future
Know the Future Ø.3
The future hasn’t happened yet, and is guaranteed to contain at least one completely unexpected discovery that changes everything.
• The future will contain surprises
• We can't predict the future, but we can prepare for it
  – understand the historical progression of technology
  – don't design systems based only on current assumptions that will break in the future
Fundamentals and Design Principles: A Brief History of Networking
2.1 A brief history of networking
  2.1.1 First generation: emergence
  2.1.2 Second generation: the Internet
  2.1.3 Third generation: convergence and the Web
  2.1.4 Fourth generation: scale, ubiquity, and mobility
2.2 Drivers and constraints
2.3 Design principles and tradeoffs
2.4 Design techniques
History: First Generation (≤1970s): Emergence
• Voice – widely deployed
  – circuit-switched PSTN over copper
  – RF two-way radios
• Entertainment – widely deployed
  – broadcast RF for radio and television
• Data – limited pervasiveness
  – serial links over copper
  – modem remote terminal access
Distinct infrastructure for all three
History: Second Generation (1980s): Internet
• Voice – widely deployed
  – digitally switched PSTN over copper
  – cellular mobile telephony (late in the decade)
• Entertainment – significant deployment
  – CATV over copper coax starts to supplement broadcast
• Data – research / corporate enterprise networks
  – packet-switched store-and-forward: Internet, X.25
  – gatewayed subnetworks: SNA, BNA, DCNA, etc.
High-speed networking emerges as a discipline
History: Second Generation (1980s): Unix and Internet
Not Invented Here Corollary [1980s version] Ø-A
Operating systems didn’t begin with Unix, and networking didn’t begin with TCP/IP.
• Dominant systems emerged in the second generation
  – operating systems: BSD and System V Unix
  – networking: TCP/IP
• Pioneering earlier work shouldn't be forgotten … or reinvented
  – e.g. OS/360, MVS, Multics, TSS/360, CP-67, MCP
  – e.g. SNA, X.25
History: Third Generation (1990s): Convergence and the Web
• Beginnings of a converged IP-based infrastructure
  – IP-based global Internet subsuming enterprise networks
  – multimedia streaming
  – voice and video-conferencing over IP
• Web
  – replaces all other information access
    • e.g. FTP, gopher, archie
• Fast packet switching over fiber
• Significant 1st-world deployment
History: Third Generation (1990s): Windows and PC
Not Invented Here Corollary [1990s version] Ø-A
Operating systems didn’t begin with Windows, and host architectures didn’t begin with the PC and x86 architecture.
• Dominant systems emerged in the third generation
  – operating systems: Windows
  – host architecture: PC based on x86
• Pioneering earlier work shouldn't be forgotten … or reinvented
  – e.g. Unix, OS/360, Multics, TSS/360, CP-67, MCP
  – e.g. mainframe and superminicomputer architectures
History: Fourth Generation (2000s+): Scale, Ubiquity, Mobility
• Global infrastructure
  – optical core networks
  – wireless access networks
  – (hopefully) significant 2nd- and 3rd-world pervasiveness
• Ubiquitous and pervasive computing
  – personal area networks
  – networked embedded devices (e.g. in vehicles)
• Tera- and peta-node networks
• Towards:
  – autonomic control and management
  – resilient, survivable, and secure networking
Preliminaries and Principles: PP.2 Drivers and Constraints
PP.0 Preliminaries
PP.1 Fundamental axioms
PP.2 Drivers and constraints
  PP.2.1 Applications
  PP.2.2 The ideal network
  PP.2.3 Limiting constraints
PP.3 Design principles and tradeoffs
PP.4 Design techniques
Fundamental Axioms: Application Primacy
• Corollaries
  1. field of dreams vs. killer app dilemma
  2. interapplication delay
  3. network bandwidth and latency
  4. networking importance in system design
Application Primacy I
The sole and entire point of building a high-performance network infrastructure is to support the distributed applications that need it.
Application Primacy: Field of Dreams vs. Killer App Dilemma
Field of Dreams vs. Killer App Dilemma I.1
The emergence of the next “killer application” is difficult without sufficient network infrastructure. The incentive to build network infrastructure is viewed as a “field of dreams” without concrete projections of user demand.
• Applications need infrastructure on which to build
  – field of dreams
• Infrastructure deployment needs to justify expense
  – killer app
• Difficult to resolve without government funding
  – e.g. ARPANET, NSFNET, BSD Unix
Application Primacy: Interapplication Delay
Interapplication Delay I.2
The performance metric of primary interest to communicating applications is the total delay in transferring data. The metric of interest to the users includes the delay through the application.
• Users and applications care about delay
  – not bandwidth (directly)!
  – users: interapplication delay
  – applications: end-to-end network and end-system delay
• If delay is zero, all data is available instantaneously
  – no difference between distributed and local
Fundamental Axioms: Application Primacy
• Interapplication delay D consists of two components
  – path latency d_path
  – transmission delay d_transmission = b [bit] / r [bit/s]
  D = d_path + d_transmission
• Low latency directly needed
• High bandwidth needed based on object size
Network Bandwidth and Latency I.3
Bandwidth and latency are the primary performance metrics important to interapplication delay.
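The delay decomposition above can be made concrete with a short sketch; the object size, path latency, and rates below are illustrative assumptions, not values from the slides.

```python
def interapp_delay(path_latency_s, object_bits, rate_bps):
    """D = d_path + d_transmission: propagation plus serialisation."""
    return path_latency_s + object_bits / rate_bps

# Hypothetical 1 MB (8e6 bit) object over a path with 0.5 ms latency:
d_slow = interapp_delay(0.0005, 8e6, 1e9)    # 1 Gb/s  -> 8.5 ms
d_fast = interapp_delay(0.0005, 8e6, 10e9)   # 10 Gb/s -> 1.3 ms
# Raising bandwidth 10x shrinks the transmission term, but the 0.5 ms
# path latency is a floor no bandwidth increase can remove.
```

For small objects the d_path term dominates, which is why the slides stress that users care about delay rather than bandwidth directly.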
Application Primacy: Importance of Networking
• Components that have a significant networking role
  – communication performance should drive design
• Obvious for network components
• Has not been driving consumer PC architecture
  – even though Web browsing is the primary application for many users
Networking Importance in System Design I.4
Communication is the defining characteristic of networked applications, and thus support for communication must be an integral part of systems supporting distributed applications.
Fundamental Axioms: High-Performance Paths Goal
• Corollaries
  1. path establishment – signalling, routing, and control
  2. path protection – resource reservation or overprovisioning
  3. store-and-forward and copy avoidance
  4. blocking avoidance
  5. contention avoidance
  6. efficient transfer of control
  7. path information assurance tradeoff (against performance)
High-Performance Paths Goal II
The network and end systems must provide a low-latency, high-bandwidth path between applications to support low interapplication delay.
Fundamental Axioms: High-Performance Paths in the Ideal Network
[Figure: two end systems (CPU, memory, application) connected by an ideal network with zero latency (D = 0) and infinite bandwidth (R = ∞)]
• Unlimited bandwidth
• Zero delay
High-Performance Paths: Path Establishment
• High-performance paths need to be established
  – signalling is needed to establish paths
  – routing algorithms are needed to determine the path route
  – forwarding mechanisms move data along the path
• Paths may need to be reconfigured
  – routing around faults, congestion, opponents
  – in response to path dynamics: mobility or multipoint
Path Establishment Corollary II.1
Signalling and routing mechanisms must exist to discover, establish, and forward data along the high performance paths.
High-Performance Paths: Path Protection
• High-performance paths need to be protected
  – by overprovisioning
  – in a resource-constrained environment, by
    • resource reservation
    • congestion avoidance and control
Path Protection Corollary II.2
In a resource constrained environment, mechanisms must exist to arbitrate and reserve the resources needed to provide the high-performance path and prevent other applications from interfering by congesting the network.
High-Performance Paths: Store-and-Forward and Copy
• Per-hop, per-byte delays are significant
  – store-and-forward
  – packet copying between memory buffers
  – per-byte touching operations
• Avoid store-and-forward and copying
  – cut-through
  – zero-copy mechanisms
Store-and-Forward Avoidance II.3
Store-and-forward and copy operations on data have such a significant latency penalty that they should be avoided whenever possible.
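The latency penalty of store-and-forward relative to cut-through can be sketched with a simple model; the packet size, header size, hop count, and rate are illustrative assumptions, and propagation and queueing delays are omitted for clarity.

```python
def store_and_forward_delay(bits, rate_bps, hops):
    # Every hop must receive the entire packet before forwarding,
    # so full serialisation is paid once per hop.
    return hops * (bits / rate_bps)

def cut_through_delay(bits, rate_bps, hops, header_bits):
    # Each switch forwards after seeing only the header, so full
    # serialisation is paid once, plus a header delay per extra hop.
    return bits / rate_bps + (hops - 1) * (header_bits / rate_bps)

# 1500-byte packet (12,000 bits) across 5 hops at 1 Gb/s:
sf = store_and_forward_delay(12000, 1e9, 5)   # 60 us
ct = cut_through_delay(12000, 1e9, 5, 160)    # ~12.6 us
```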
High-Performance Paths: Blocking
• Blocking delays packets
  – interfering paths
    • non-blocking switches
    • alternate path routing
  – queueing
    • steady-state queue length should approach zero
Blocking Avoidance II.4
Blocking along paths should be avoided, whether due to the overlap of interfering paths or due to the build-up of queues.
High-Performance Paths: Contention
• Contention delays packets
  – MAC delays for a shared medium
  – spatial reuse to reduce contention
    • parallel waveguides
    • directional antennæ
    • transmission power control
Contention Avoidance II.5
Channel contention due to a shared medium should be avoided. Spatial reuse techniques including parallel waveguides, directional antennæ, and transmission power limitations can mitigate contention.
High-Performance Paths: Transfer of Control
• Transfer-of-control operations assist data movement
  – critical-path analysis needed
    • granularity and implementation for line-rate performance
  – efficient transfer of control between protocol components
Efficient Transfer of Control II.6
Control mechanisms on which the critical path depends should be efficient. High-overhead transfer of control between protocol processing modules should be avoided.
High-Performance Paths: Information Assurance
• Information assurance requires resources
  – control processing delays: authentication and keying
  – data path delays: encryption/decryption, security headers
  – processing/memory that could be used for packet processing
• Trade IA against performance requirements
Path Information Assurance Tradeoff II.7
Paths have application-driven reliability and security requirements, which may have to be traded against performance.
Fundamental Axioms: Limiting Constraints
• Corollaries
  1. speed of light
  2. channel capacity
  3. switching speed
  4. cost and feasibility
  5. heterogeneity
  6. policy and administration
  7. backward compatibility inhibits radical change
  8. standards both facilitate and impede dilemma
  9. attenuation and transmission power
Limiting Constraints III
Real-world constraints make it difficult to provide a high-performance path to applications.
Limiting Constraints: Speed of Light
• Propagation velocity through a medium
  – ~0.66–1.0 c, i.e. ~3–5 μs/km
  – dictates a fundamental limit on latency over a distance
  – techniques can mask, but not eliminate, this latency
    • caching
    • prediction
Speed of Light III.1
The latency suffered by propagating signals due to the speed of light is a fundamental law of physics, and is not susceptible to direct optimisation.
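The ~5 μs/km fibre figure above translates into hard latency floors over distance; the 4000 km path in this sketch is an illustrative assumption.

```python
# Propagation delay: ~5 us/km in fibre (v ~ 0.66 c); ~3.3 us/km in free space.
def propagation_delay_s(distance_km, us_per_km=5.0):
    return distance_km * us_per_km * 1e-6

# A hypothetical 4000 km fibre path: ~20 ms one way, ~40 ms round trip,
# regardless of link bandwidth; caching and prediction can only mask it.
one_way = propagation_delay_s(4000)   # 0.02 s
rtt = 2 * one_way                     # 0.04 s
```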
Limiting Constraints: Channel Capacity
• Bandwidth of a medium
  – dictates a fundamental limit on data rate
  – techniques to efficiently utilise the available channel bandwidth
    • clever line coding (e.g. QAM)
    • multiplexing
    • spatial reuse
Channel Capacity III.2
The capacity of communication channels is limited by physics. Clever multiplexing and spatial reuse can reduce the impact, but not eliminate the constraint.
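The physical limit the slide refers to is the Shannon–Hartley capacity; the channel bandwidth and SNR in this sketch are illustrative assumptions.

```python
import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    # Shannon-Hartley: C = B * log2(1 + S/N), the physical upper bound
    # that line coding and multiplexing can approach but not exceed.
    return bandwidth_hz * math.log2(1 + snr_linear)

# A hypothetical 1 MHz channel at 30 dB SNR (S/N = 1000):
c = shannon_capacity_bps(1e6, 1000)   # ~9.97 Mb/s
```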
Limiting Constraints: Attenuation and Transmission Power
• Attenuation limits the range of transmission
  – guided media: logarithmic, dependent on the medium (~0.1–10 dB/km)
  – wireless: power law, 1/r² to 1/r⁴ (with multipath)
• Transmission energy is needed to compensate
  – limited by transmitter power
  – constrained by channel design parameters
    • interference with other channels
Attenuation and Power III.9
Attenuation of signals limits their propagation distance for a given transmission energy. Transmission energy is limited by the power available to the transmitter, and may be constrained by the design parameters of the transmission medium.
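For a guided medium with logarithmic (dB/km) loss, the reach follows directly from a link budget; the transmitter power, receiver sensitivity, and attenuation figures below are illustrative assumptions.

```python
def max_reach_km(tx_power_dbm, rx_sensitivity_dbm, atten_db_per_km):
    # Reach on a guided medium: the dB power budget divided by
    # the per-km attenuation of the medium.
    link_budget_db = tx_power_dbm - rx_sensitivity_dbm
    return link_budget_db / atten_db_per_km

# Hypothetical figures: 0 dBm transmitter, -28 dBm receiver sensitivity,
# 0.35 dB/km fibre -> ~80 km reach without amplification.
reach = max_reach_km(0, -28, 0.35)
```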
Limiting Constraints: Switching Speed
• Switching rate of electronic & photonic components
  – dictates a fundamental limit on data rate
  – Moore's law keeps reducing the constraint
  – impact on data rate depends on transistor complexity
    • transmission rate typically 4–10× the packet processing rate
      – e.g. OC-768 vs. OC-192
Switching Speed III.3
There are limits on switching frequency of components, constrained by process technology at a given time, and ultimately limited by physics.
Limiting Constraints: Cost and Feasibility
• Competing solutions have different costs
  – C = f + ƒ(v): fixed plus variable (scaling) cost
  – cost of deployment is a significant determinant
Cost and Feasibility III.4
The relative cost and scaling complexity of competing architectures and designs must be considered in choosing alternatives to deploy.
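The fixed-plus-variable cost model C = f + ƒ(v) can be compared across designs at a given scale; the cost figures below are purely illustrative, and the variable term is taken as linear for simplicity.

```python
def total_cost(fixed, variable_per_unit, scale):
    # C = f + f(v), here with a linear variable (scaling) term.
    return fixed + variable_per_unit * scale

# Two hypothetical designs at scale v = 1000 units:
cost_a = total_cost(10_000, 50, 1000)   # cheap to deploy, scales poorly
cost_b = total_cost(40_000, 5, 1000)    # expensive to deploy, scales well
# cost_a = 60,000 vs cost_b = 45,000: at this scale the variable
# term dominates, so the higher fixed cost pays off.
```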
Limiting Constraints: Heterogeneity
• Networks serve to interconnect heterogeneous
  – users, end systems, applications
  using heterogeneous
  – technologies, network infrastructure
• Significant overhead is needed to support heterogeneity
  – transcoding, format conversion, control interworking
Heterogeneity III.5
The network is a heterogeneous world, which contains the applications and end systems that networks tie together, and the node and link infrastructure from which networks are built.
Limiting Constraints: Policy and Administration
• Policies & administrative concerns constrain networks
  – economics and business models
  – intellectual property and legal issues
  – government regulation
  – social dynamics
Policy and Administration III.6
Policies and administrative concerns frustrate the deployment of optimal high-speed network topologies, constrain the paths through which applications can communicate, and may dictate how application functionality is distributed.
Limiting Constraints: Backward Compatibility
• Backward compatibility is very important
  – the Internet cannot simply be upgraded to a new version
    • network infrastructure
    • end systems
• Change is incremental
  – hacks extend the life of existing non-extensible protocols
Backward Compatibility Inhibits Radical Change III.7
The difficulty of completely replacing widely deployed network protocols means that improvements must be backward compatible and incremental. Hacks are used and institutionalised to extend the life of network protocols.
Limiting Constraints: Standards
• Standards are a very difficult dilemma
  – necessary for interoperation
  – timing is critical
    • too early or too specific: impedes progress
    • too late or too general: useless
• De facto standards may be better than official ones
  – monopolies threaten both (e.g. MS-Windows)
Standards Facilitate and Impede Dilemma III.8
Standards are critical to facilitate interoperability, but standards that are specified too early or are overly specific can impede progress. Standards that are specified too late or are not specific enough are useless.
Preliminaries and Principles: PP.3 Design Principles and Tradeoffs
PP.0 Preliminaries
PP.1 Fundamental axioms
PP.2 Drivers and constraints
PP.3 Design principles and tradeoffs
  PP.3.1 Critical path
  PP.3.2 Resource tradeoffs
  PP.3.3 End-to-end vs. hop-by-hop
  PP.3.4 Protocol layering
  PP.3.5 State and hierarchy
  PP.3.6 Control mechanisms
  PP.3.7 Distribution of application data
  PP.3.8 Protocol data units
PP.4 Design techniques
Fundamental Axioms: Systemic Optimisation
• Corollaries
  1. consider side effects
  2. keep it simple and open
  3. system partitioning
  4. flexibility and workaround
Systemic Optimisation Principle IV
Networks are systems of systems with complex compositions and interactions at multiple levels of hardware and software. These pieces must be analysed and optimised in concert with one another.
Systemic Optimisation: Granularity
• Many orders of magnitude time difference: >10¹⁸
  – protocol and architecture standardisation
  – protocol deployment
  – network configuration
  – network management
  – session establishment and lifetime
  – connection/flow establishment and lifetime
  – packet/frame transfer
  – cell transfer
  – byte transfer
  – bit/photon transfer
• Principles help isolate/mitigate concerns … but not completely!
Systemic Optimisation: Granularity – Unit Multipliers
SI decimal prefixes:
Y   Yotta   10^24      y   yocto   10^-24
Z   Zetta   10^21      z   zepto   10^-21
E   Exa     10^18      a   atto    10^-18
P   Peta    10^15      f   femto   10^-15
T   Tera    10^12      p   pico    10^-12
G   Giga    10^9       n   nano    10^-9
M   Mega    10^6       μ   micro   10^-6
k   kilo    10^3       m   milli   10^-3
h   hecto   10^2       c   centi   10^-2
da  deka    10^1       d   deci    10^-1

IEC binary prefixes:
Ki  kibi  2^10
Mi  mebi  2^20
Gi  gibi  2^30
Ti  tebi  2^40
Pi  pebi  2^50
Ei  exbi  2^60
Systemic Optimisation: Side Effects
• Optimisations frequently have side effects
  – may reduce the effectiveness of the optimisation
  – may reduce overall performance
• Careful systemic analysis needed
Consider Side Effects IV.1
Optimisations frequently have unintended side effects to the detriment of overall performance. It is important to consider, analyse, and understand the consequences of optimisations.
Systemic Optimisation: Simple and Open
• Difficult to understand and optimise complex systems
• Virtually impossible for closed systems
  – open source highly desirable
  – open interfaces with sufficient functionality are essential
Keep it Simple and Open IV.2
It is difficult to understand and optimise complex systems, and virtually impossible to understand closed systems, which do not have open published interfaces.
Systemic Optimisation: System Partitioning
• Essential to analyse partitioning of functionality
  – across the network
  – among components
    • switch node: central processor or line interface
    • host: CPU or network interface
• Improper partitioning
  – decreases overall performance
  – may increase overall cost
System Partitioning Corollary IV.3
Carefully determine how functionality is distributed across a network. Improper partitioning of a function can dramatically reduce overall performance, and increase overall cost.
Systemic Optimisation: Flexibility and Workaround
• Impossible to perfectly future-proof systems
• Provide mechanisms to allow future enhancements
  – protocol fields
  – control mechanisms
  – software hooks
  – modular extensibility
Flexibility and Workaround Corollary IV.4
Provide protocol fields, control mechanisms, and software hooks to allow graceful enhancements and workarounds when fundamental tradeoffs change.
Systemic Optimisation: Design Principles
• Systemic optimisation is supported by design principles
  1. Selective optimisation
  2. Resource tradeoffs
  3. End-to-end arguments
  4. Protocol layering
  5. State management
  6. Control mechanism latency
  7. Distributed data
  8. Protocol data units
Selective Optimisation
• Corollaries
  – second-order effects
  – critical path
  – functional partitioning and assignment
Selective Optimisation Principle 1
It is neither practical nor feasible to optimise everything. Spend implementation time and system cost on the most important constituents of performance.
Selective Optimisation: Second-Order Effects
• The impact of optimisations should be understood
• Optimising second-order effects is not useful
  – e.g. optimising a link that is not the bottleneck
  – e.g. optimising LAN latency for a WAN application
  – e.g. optimising operations not in the critical path
Second-Order Effect Corollary 1A
The impact of spatially local or piecewise optimisations on the overall performance must be understood; components with only a second-order effect on performance should not be the target of optimisation.
Selective Optimisation: Critical Path
• Optimise the critical path on which high speed depends
  – data transfer operations
  – transfer-of-control operations that assist in data transfer
    • occur frequently
    • cause serial delay to subsequent data transfer
Critical Path Corollary 1B
Optimise implementations for the critical path, in both control and data flow.
Selective Optimisation: Functional Partitioning and Assignment
• Essential to analyse partitioning of functionality
  – between hardware and software
    • electronic vs. photonic
    • compiled vs. hand-optimised
    • CMOS vs. GaAs
    • main memory vs. cache
    • DRAM vs. SRAM vs. CAM
• Improper partitioning
  – increases overall cost
  – may decrease overall performance
Functional Partitioning and Assignment Corollary 1C
Carefully determine what functionality is implemented in scarce or expensive technology. Improper partitioning of a function can dramatically increase overall cost and reduce performance.
Resource Tradeoffs
• The network is a collection of resources (P, M, B, E) and constraints (L)
• Balance their relative composition to optimise
  – performance and cost
• Corollaries
  – resource tradeoffs change
  – optimal resource utilisation vs. overengineering tradeoff
  – support for multicast
Resource Tradeoff Principle 2
Networks are collections of resources. The relative composition of these resources must be balanced to optimise cost and performance.
Resource Tradeoffs: Resource Tradeoffs Change
• The relative cost of resources changes over time
  – non-uniform advances in technology
  – changing application constraints
• Design and engineer for the future
  – track trends and shifts in resource tradeoffs
  – be prepared for significant and disruptive shifts
Resource Tradeoffs Change 2A
The relative cost of resources and the impact of constraints change over time, due to non-uniform advances in different aspects in technology.
Resource Tradeoffs: Optimality vs. Overengineering
• Optimal use of a resource costs other resources
  – e.g. optimal bandwidth reservation and utilisation costs
    • memory for link-state and topology databases
    • processing for admission control and resource reservation
• Optimise entire-system cost
  – overengineering one resource may reduce the cost of another
Optimal Resource Utilisation vs. Overengineering Tradeoff 2B
Balance the benefit of optimal resource utilisation against the costs of the algorithms that attempt to achieve optimal solutions.
Resource Tradeoffs: Multicast
• Multicast is needed by network control protocols
  – e.g. address resolution
• Multicast is the group communication model for apps
  – e.g. audio and video distribution
  – L3 multicast uses less bandwidth than an n² point-to-point mesh
    • requires network-layer protocol support, e.g. IGMP, ATM
    • requires native switch multicast
Support for Multicast 2C
Multicast is an important mechanism for conserving bandwidth and supporting network control protocols, and should be supported by the network.
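The bandwidth saving behind the n² mesh comparison can be quantified with a toy count of link transmissions (an idealised model of my own, not from the slides): a full unicast mesh among n group members costs n(n−1) copies, while an ideal shared multicast tree delivers one copy per tree edge.

```python
# Idealised comparison of link transmissions needed for one packet to
# reach every member of a group of size n.

def unicast_mesh_transmissions(n: int) -> int:
    """Each of n members sends a separate copy to the other n-1."""
    return n * (n - 1)

def multicast_tree_transmissions(n: int) -> int:
    """Ideal shared tree: one copy traverses each of the n-1 edges of
    a spanning tree connecting the n members."""
    return n - 1

for n in (2, 10, 100):
    print(n, unicast_mesh_transmissions(n), multicast_tree_transmissions(n))
```

The gap grows quadratically, which is why audio/video distribution to large groups benefits so strongly from native multicast support.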
End-to-End vs. Hop-by-Hop: End-to-End Argument
• End-to-End Arguments* (rephrased from [Saltzer 1984])
  1. end-to-end argument
  2. hop-by-hop performance enhancement
End-to-End Argument* 3
Functions required by communicating applications can be correctly and completely implemented only with the knowledge and help of the applications themselves. Providing these functions as features within the network itself is not possible.
End-to-End vs. Hop-by-Hop: End-to-End Argument
• Hop-by-hop functions do not compose end-to-end
  – between HBH boundaries, function f is defeated (g)
    • e.g. error control: errors may occur within switches
    • e.g. encryption: cleartext within switches may be snooped
  – these functions must be done E2E
    • doing them HBH is redundant, and may lower performance
• Corollaries
  – hop-by-hop performance enhancement
  – endpoint recursion
[figure: per-hop functions f1, f2, f3 and their inverses applied between switches; between HBH boundaries the end-to-end function f is defeated (intermediate states g′, g″, g‴)]
End-to-End vs. Hop-by-Hop: Hop-by-Hop Performance Enhancement
• E2E Argument (1st half) says what must be E2E
• HBH performance enhancement (2nd half)
  – functions should be duplicated HBH if there is an overall E2E benefit
    • e.g. wireless link error control reduces E2E control overhead
    • shorter control loop runs frequently; longer control loop infrequently
• Analysis is required to determine cost/benefit
Hop-by-Hop Performance Enhancement Corollary* 3A
It may be beneficial to duplicate an end-to-end function hop-by-hop, if doing so results in an overall (end-to-end) improvement in performance.
End-to-End vs. Hop-by-Hop: Performance Enhancement Example
• E2E vs. HBH error control for reliable communication
  – E2E argument says error control must be done E2E
    • e.g. E2E ARQ (error check code and retransmit if necessary)
  – but should HBH error control also be done?
• Effect of high loss rate on wireless link
  – ~300 ms RTT retransmission for every corrupted packet
• Error control on wireless link reduces control loop to ~1 μs
  – shorter control loop results in dramatically lower E2E delay
[figure: 100 m wireless LAN and 100 m fiber LAN at Univ. Kansas and Univ. Sydney, joined by a 15 000 km fiber WAN]
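The example can be made concrete with a rough expected-delay model (my own illustrative numbers and a simplified geometric-retry model, not the slides' exact analysis): each corrupted packet is recovered either end-to-end over a ~300 ms RTT path, or locally over a ~1 ms wireless hop.

```python
# Expected extra delay per packet under geometric retries: each attempt
# fails independently with probability p, and every failure costs one
# extra round trip, so mean extra round trips = p / (1 - p).

def expected_delay(rtt: float, p: float) -> float:
    """Expected retransmission delay (seconds) for loss probability p."""
    return rtt * p / (1 - p)

e2e = expected_delay(0.300, 0.10)   # end-to-end recovery, 300 ms RTT
hbh = expected_delay(0.001, 0.10)   # link-local recovery, ~1 ms loop
print(f"E2E: {e2e*1000:.1f} ms extra, HBH: {hbh*1000:.3f} ms extra")
```

Even at the same loss rate, the shorter control loop cuts the recovery penalty by the ratio of the loop times, which is the HBH performance enhancement the corollary describes.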
End-to-End vs. Hop-by-Hop: Endpoint Recursion
• End-to-end arguments apply
  – edge-to-edge
  – subnet-to-subnet
  – link-to-link
• Overall E2E concerns must not be forgotten
Endpoint Recursion Corollary 3B
What is hop-by-hop in one context may be end-to-end in another. The End-to-End Arguments can be applied recursively to any sequence of nodes in the network, or layers in the protocol stack.
[figure: endpoint recursion: link E2E is HBH within subnet E2E, which is in turn HBH within transport E2E between end systems]
Protocol Layering
• Layering is a useful abstraction
  – role-based:
    2. link
    3. switch
    4. end system
    5. session (control plane)
    7. application
Protocol Layering Principle 4
Layering is a useful abstraction for thinking about networking system architecture and for organising protocols based on network structure.
[figure: role-based stacks: transport/network/link in end systems, network/link in a router, link only in a repeater/bridge]
Protocol Layering
• Layering provides service abstraction
  – isolates protocols, components, technology
    • any transport layer over IP
    • IP over any link layer
    • commodity link layer chip evolution, e.g.
      – 10BASE-T → 100BASE-T → 1000BASE-X
      – 802.11b → 802.11g
[figure: layer n between abstraction boundaries: services provided to layer n+1, services used from layer n−1, peer-to-peer virtual interaction with the remote layer n]
Protocol Layering
• Corollaries
  – layering as an implementation technique performs poorly
  – redundant layer functionality
  – layer synergy
  – hourglass
  – integrated layer processing
  – balance transparency and abstraction vs. hiding
  – support a variety of interface mechanisms
  – interrupt vs. polling
  – interface scalability
Protocol Layering: Layering as Implementation Technique
• Layered model
  – specifies service abstraction
  – doesn't specify implementation
  – process-per-layer implementation terribly inefficient
Layering as an Implementation Technique Performs Poorly 4A
Layered protocol architecture should not be confused with inefficient layer implementation techniques.
[figure: per-layer encapsulation from ADU through APDU, PPDU, SPDU, TPDU, NPDU, and coded LPDU, with headers (PH, SH, TH, NH, LH) and trailers (TT, LT), shown for sending ES, intermediate IS, and receiving ES]
Protocol Layering: Hybrid Layer/Plane Cube
• Layers: L1 physical, L1.5 MAC, L2 link, L2.5 virtual link, L3 network, L4 transport, L5 session, L7 application, L8 social
• Planes: data, control, management
Protocol Layering: Redundant Functionality
• Functionality should not be included in a layer
  – that must be located at a higher layer (E2E argument)
  – unless there is an overall performance benefit (HBH corollary)
• E2E vs. A2A (application-to-application) functionality
Redundant Layer Functionality Corollary 4B
Functionality should not be included at a layer that must be duplicated at a higher layer, unless there is a performance benefit in doing so.
Protocol Layering: Layer Synergy
• Inter-layer transfers involve non-trivial overhead
  – encapsulation/decapsulation of PDUs
  – inter-layer control transfer
  – effects of overlapping intra-layer control mechanisms
• Protocol layers should be designed with this in mind
  – antithesis of layering to isolate protocols and technology
Layer Synergy Corollary 4C
When layering is used as a means of protocol division, allowing asynchronous protocol processing and independent data encapsulations, the processing and control mechanisms should not interfere with one another. Protocol data units should translate efficiently between layers.
Protocol Layering: Hourglass
• Common network layer
  – common addressing essential for seamless interworking
  – compatible routing & signalling
• Active networking
  – reduces constraint
Hourglass Corollary 4D
The network layer provides the convergence of addressing, routing, and signalling that ties the global Internet together. It is essential that addressing be common and that routing and signalling protocols be highly compatible.
[figure: hourglass: TCP, UDP, RTP, … (L4) over IP (L3) over Ethernet, SONET, 802.11, λ, … (L2)]
Protocol Layering: Integrated Layer Processing
• Process multiple layers simultaneously
  – en/decapsulations
  – per-byte operations
    • checksums
    • copies
• Software or hardware
Integrated Layer Processing (ILP) Corollary 4E
When multiple layers are generated or terminated in a single component, all encapsulations/decapsulations should be done at once, if possible.
[figure: ILP loop: per-layer functions f1–f5 combined into a single integrated processing pass instead of five separate passes]
Protocol Layering: Transparency and Hiding
• Layering abstracts interface below
  – simpler representation of complex interface
• Hiding needed properties or parameters is bad
• Translucency is better than transparency
Balance Transparency and Abstraction vs. Hiding 4F
Layering is designed around abstraction, providing a simpler representation of a complicated interface. Abstraction can hide a necessary property or parameter, which is not a desirable property of layering.
Protocol Layering: Variety of Interface Mechanisms
• Different interface mechanisms are best in different situations
  – synchronous vs. asynchronous
    • synchronised interlayer interfaces
    • asynchronous, nondeterministic interlayer interfaces
  – interrupt vs. polled
• Interlayer interfaces should provide the necessary variety
Support a Variety of Interface Mechanisms 4G
A range of interlayer interface mechanisms should be provided as appropriate for performance optimisation: synchronous and asynchronous, as well as interrupt-driven and polled.
Protocol Layering: Interrupt vs. Polled
• Polling: more efficient mechanism than interrupts
  – when event timing is known a priori
  – otherwise extra polling and spin locks are less efficient
• Interrupts: significant context switch overhead
  – more efficient overall when event timing not known a priori
• Hybrid
  – interrupt for first event
  – then poll for sequence of expected events
Interrupt vs. Polling 4H
Interrupts provide the ability to react to asynchronous events, but are expensive operations. Polling can be used when a protocol has knowledge of when information arrives.
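A minimal sketch of the hybrid strategy, loosely modelled on the idea (cf. Linux NAPI) of taking one interrupt and then polling a bounded budget of packets whose arrival is now expected; the queue and function names here are hypothetical, invented for illustration.

```python
# Illustrative sketch (not a real driver): interrupt once for the first
# packet of a burst, then poll cheaply for the rest.
from collections import deque

rx_queue = deque(["pkt1", "pkt2", "pkt3"])  # packets already arrived

def on_interrupt(budget: int = 8) -> list:
    """Interrupt handler: a real driver would disable further
    interrupts here, poll up to `budget` packets, then re-enable."""
    handled = []
    while rx_queue and len(handled) < budget:
        handled.append(rx_queue.popleft())   # cheap poll, no context switch
    return handled

print(on_interrupt())  # the whole burst is drained in one interrupt
```

The `budget` bounds how long polling can starve other work, which is the usual compromise between poll efficiency and interrupt responsiveness.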
Protocol Layering: Interface Scalability
• Interlayer interfaces relate to performance and scale
  – parameter values: max values and range
  – control fields (e.g. hierarchy and source routes)
• Interfaces need to balance
  – efficient encoding for current and near-future networks vs.
  – scalability for the future
    • base value and multiplier, e.g. TCP window scaling option
    • concatenation of fields, e.g. MPLS label stack
Interface Scalability Corollary 4I
Interlayer interfaces should support the scalability of the network and parameters transferred among the application, protocol stack, and network components.
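The "base value and multiplier" technique is exactly what the TCP window scale option (RFC 7323) does: the 16-bit window field is left-shifted by a per-connection scale factor, extending its range from 64 KiB to about 1 GiB without widening the header field.

```python
# TCP window scaling arithmetic: effective window = field << scale,
# with a 16-bit field and a scale factor limited to 14 by RFC 7323.

def effective_window(window_field: int, scale: int) -> int:
    assert 0 <= window_field < 2**16 and 0 <= scale <= 14
    return window_field << scale

print(effective_window(0xFFFF, 0))    # classic maximum: 65535 bytes
print(effective_window(0xFFFF, 14))   # scaled maximum: ~1 GiB
```

The cost is granularity: at scale 14 the window can only be adjusted in 16 KiB steps, a typical value-and-multiplier tradeoff.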
Protocol Layering: Example 2.1 ATM and the Revenge of Layering
• 5-6 layers:
  – TCP over
  – IP over
  – AAL-5 over
  – ATM over
  – SONET or OTN
• ATM supposed to be fast packet switching, but
  – IP/ATM interface completely incompatible
  – 40 B TCP ACK takes two 48 B ATM cells
[figure: TCP ADU encapsulated in IP, then AAL-5 CS with padding and trailer, segmented into ATM cells, carried in SONET framing with transport overhead (TOH)]
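The two-cell figure can be checked with AAL-5 framing arithmetic (a sketch: AAL-5 pads the payload plus its 8-byte trailer to a multiple of the 48-byte cell payload, and I assume the common 8-byte LLC/SNAP encapsulation of RFC 2684, which is what pushes a 40-byte ACK past one cell).

```python
import math

# AAL-5 cell count: payload + encapsulation + 8-byte trailer, padded
# up to a multiple of the 48-byte ATM cell payload.
def aal5_cells(sdu_bytes: int, encap_overhead: int = 0) -> int:
    return math.ceil((sdu_bytes + encap_overhead + 8) / 48)

print(aal5_cells(40))       # bare 40 B ACK: fits one cell
print(aal5_cells(40, 8))    # with 8 B LLC/SNAP encapsulation: two cells
```

So a minimum-size TCP/IP packet occupies 106 bytes on the wire (two 53-byte cells), an efficiency well under 40%: the "revenge of layering".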
State and Hierarchy: State Management
• Corollaries
  – hard state vs. soft state vs. stateless tradeoff
  – aggregation and reduction of state transfer
  – hierarchy corollary
  – scope of information tradeoff
  – assumed initial conditions
  – minimise control overhead
State Management Principle 5
The mechanisms for installation and management of state should be carefully chosen to balance fast, approximate, and coarse-grained against slow, accurate and fine-grained.
State and Hierarchy: Hard vs. Soft State
• Hard state
  – predictable & deterministic
  – latency to establish
• Stateless
  – resilient to failure
  – overhead per data unit
• Soft state
  – intermediate mechanism
  – state accumulation without latency
  – resilience to failures
Hard State vs. Soft State vs. Stateless Tradeoff 5A
Balance the tradeoff between the latency to set up hard state on a per-connection basis versus the per-data-unit overhead of making stateless decisions or of establishing and maintaining soft state.
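A minimal soft-state table sketch (illustrative only, not any specific protocol): entries expire unless refreshed, giving state accumulation without setup latency and automatic cleanup after failures, at the cost of periodic refresh traffic.

```python
# Soft state: each entry carries an expiry time; anything not refreshed
# within `lifetime` is silently forgotten -- no explicit teardown.
import time

class SoftStateTable:
    def __init__(self, lifetime: float):
        self.lifetime = lifetime
        self.entries = {}                 # key -> expiry timestamp

    def refresh(self, key, now=None):
        now = time.monotonic() if now is None else now
        self.entries[key] = now + self.lifetime

    def active(self, now=None):
        now = time.monotonic() if now is None else now
        # purge timed-out entries: this is the 'soft' part
        self.entries = {k: t for k, t in self.entries.items() if t > now}
        return set(self.entries)

t = SoftStateTable(lifetime=30.0)
t.refresh("flow-A", now=0.0)
t.refresh("flow-B", now=0.0)
t.refresh("flow-A", now=25.0)   # only flow-A keeps being refreshed
print(t.active(now=40.0))       # flow-B has expired by now
```

Choosing `lifetime` is itself the tradeoff: short lifetimes react quickly to failure but cost more refresh overhead.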
State and Hierarchy: Aggregation of State
• State aggregation benefits
  – reduces amount of information stored
  – reduces bandwidth used for state transfer
• State aggregation cost
  – loss of precision and fine-grained control
Aggregation and Reduction of State Transfer 5B
Aggregation of state reduces the amount of information stored. Reducing the rate at which state information is propagated through the network reduces bandwidth and processing at network nodes, which comes at the expense of finer-grained control with more precise information.
State and Hierarchy: Hierarchy
• Hierarchy aggregates & abstracts
  – full info intracluster
  – abstracted below
• Scalability
Hierarchy Corollary 5C
Use hierarchy and clustering to manage complexity by abstracting and aggregating information to higher levels and to isolate the effects of changes within clusters.
[figure: clustering hierarchy: numbered nodes grouped into clusters A, B, C, aggregated into higher-level clusters α, β, and a top-level ℵ]
State and Hierarchy: Scope of Information
• Scope and accuracy of information tradeoff
  – local information: less accurate
    • quickly accessible
  – global scope: more accurate
    • more overhead
      – delay to access, or
      – overhead in keeping globally synchronised
Scope of Information Tradeoff 5D
Make quick decisions based on local information when possible. Even if you try to make a better decision with more detailed global state, by the time information is collected and filtered, the state of the network may have changed.
State and Hierarchy: Scope of Information
• Scope of information tradeoff
  – quick decisions based on local state vs.
  – accurate decisions based on more detailed and global state
    • by the time this is gathered, it may be irrelevant
Scope of Information Tradeoff 5E
Make quick decisions based on local information when possible. Even if you try to make a better decision with more detailed global state, by the time information is collected and filtered, the state of the network may have changed.
State and Hierarchy: Control Overhead
• Signalling and control overhead is necessary
• Keep it low to maximise ability to transport data
Minimise Control Overhead 5F
The purpose of a network is to carry application data. The processing and transmission overhead introduced by control mechanisms should be kept as low as possible to maximise the fraction of network resources available for carrying application data.
Control Mechanisms: Latency
• Low-latency control mechanisms
• Corollaries
  – minimise round trips
  – exploit local knowledge
  – anticipate future state
  – open- vs. closed-loop control
  – separate control mechanisms
Control Mechanism Latency Principle 6
Effective network control depends on the availability of accurate and current information. Control mechanisms must operate within convergence bounds that are matched to the rate of change in the network, as well as latency bounds to provide low interapplication delay.
Control Mechanisms: Round Trips
• Techniques to minimise control round trips
  – hop-by-hop acknowledgements
  – parameter ranges
    • desired, minimum acceptable
    • desired, alternate
  – overlap of control and data
Minimise Round Trips 6A
Structure control messages and the information they convey to minimise the number of round trips required to accomplish data transfer.
Control Mechanisms: Exploit Local Knowledge
• Exert control from the entity that has knowledge
  – reduces number of E2E control message signalling latencies
  – delegate control to the entity with knowledge
    • e.g. fast resource reservation by transaction server
  – move knowledge to the control entity
    • e.g. server push
Exploit Local Knowledge 6B
Control should be exerted by the entity that has the knowledge needed to do so directly and efficiently.
Control Mechanisms: Anticipate Future State
• Anticipate future state
  – proactive control before needed
  – quick reactive control without E2E control signalling
  – e.g. predictive algorithm with periodic E2E convergence
Anticipate Future State 6C
Anticipate future state so that actions can be taken proactively, before repair needs to be performed on the network in a way that affects application performance.
Control Mechanisms: Open vs. Closed Loop Control
Open- vs. Closed-Loop Control 6D
Use open-loop control based on knowledge of the network path to reduce the delay in closed-loop convergence. Use closed-loop control to react to dynamic network and application behaviour.
[figure: open loop: parameters then data; closed loop: initial parameters and data adjusted by E2E feedback]
Control Mechanisms: Separate Control Mechanisms
• Combined control mechanisms are frequently suboptimal
  – e.g. combined TCP error/flow/congestion control
    • bad for wireless links
• Control mechanisms should be
  – distinct
  – if combined, distinct information explicitly conveyed
    • e.g. explicit congestion and error notification
Separate Control Mechanisms 6E
Control mechanisms should be distinct, or the information conveyed explicit enough, so that the proper action is taken.
Distribution of Application Data: Distributed Data
• Distribute data among applications to
  – minimise latency and amount of data exchanged
  – allow incremental processing of data as it arrives
• Corollaries
  – partitioning and structuring of data
  – location of data
Distributed Data Principle 7
Distributed applications should select and organise the data they exchange to minimise the amount of data transferred and the latency of transfer, and to allow incremental processing of data.
Distribution of Application Data: Partitioning and Structuring
• Partition & structure distributed application data
  – to minimise amount of data transferred
  – to minimise latency of transfer
Partitioning and Structuring of Data 7A
Whenever possible, data should be partitioned and structured to minimise the amount of data transferred and the latency in communication.
Distribution of Application Data: Location
• Locate data near the application that uses it
  – replicate data when possible (trade M vs. L)
    • e.g. mirroring and caching
  – anticipate transfer request in advance (reduce L and peak B)
    • e.g. prefetching
Location of Data 7B
Data should be positioned close to the application that needs it to minimise the latency of access. Replication of data helps to accomplish this.
Protocol Data Units
• PDU size and structure critical to performance
• Corollaries
  – PDU size and granularity
  – PDU control field structure
  – control field scalability
Protocol Data Unit Principle 8
The size and structure of PDUs are critical to high-bandwidth, low-latency communication.
Protocol Data Units: Size and Granularity
• Small vs. large PDUs
  – statistical multiplexing efficiency vs. more time to process
• Fixed vs. variable size
  – easier to process vs. more flexible
  – granularity: byte, word, power-of-2, end system buffer
PDU Size and Granularity 8A
The size of PDUs is a balance of a number of parameters that affect performance. Trade the statistical multiplexing benefits of small packets against the efficiency of large packets.
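The tradeoff can be illustrated numerically (toy numbers of my own: a fixed 40-byte header on a 1 Gb/s link): small PDUs waste a large fraction of the link on headers, while large PDUs are efficient but each one occupies the link, and any shared buffer, for longer.

```python
# Header efficiency and per-PDU serialisation delay vs. payload size.
HEADER = 40          # bytes of header per PDU (illustrative)
RATE = 1e9           # link rate, bits/s

def efficiency(payload: int) -> float:
    """Fraction of transmitted bits that are payload."""
    return payload / (payload + HEADER)

def serialisation_delay_us(payload: int) -> float:
    """Time to clock one PDU onto the link, in microseconds."""
    return (payload + HEADER) * 8 / RATE * 1e6

for p in (64, 512, 1460, 8960):
    print(f"{p:5d} B  eff={efficiency(p):.2%}  delay={serialisation_delay_us(p):.2f} µs")
```

The serialisation delay is also the granularity at which flows can be statistically multiplexed, which is why small fixed cells were attractive for delay-sensitive traffic.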
Protocol Data Units: Control Field Structure
• Header/trailer structure has performance impact
  – simple encoding (bit vector vs. code points)
  – byte/octet granularity and alignment
  – fixed-length fields when possible
    • offset value when variable length necessary
PDU Control Field Structure 8B
Optimise PDU header and trailer fields for efficient processing. Fields should be simply encoded, byte aligned, and fixed length when possible. Variable-length fields should be prepended with their length.
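A hedged sketch of such a header using Python's struct module; the field layout here is hypothetical, invented purely to illustrate the corollary: fixed-length, byte-aligned fields in network byte order parse in one unpack, and the variable-length option is prefixed with its length.

```python
# Hypothetical header: version (1 B), flags (1 B), length (2 B),
# sequence number (4 B), then a length-prepended variable option.
import struct

HDR = struct.Struct("!BBHI")        # network byte order, 8 bytes total

def parse(buf: bytes):
    version, flags, length, seq = HDR.unpack_from(buf)
    opt_len = buf[HDR.size]                      # length byte first
    options = buf[HDR.size + 1 : HDR.size + 1 + opt_len]
    return version, flags, length, seq, options

pkt = HDR.pack(1, 0x02, 13, 42) + bytes([3]) + b"abc"
print(parse(pkt))   # (1, 2, 13, 42, b'abc')
```

Because every fixed field sits at a known byte offset, the same layout translates directly to a hardware parser or a single C struct overlay.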
Protocol Data Units: Control Field Scalability
• Scalability of control fields
  – large linear range requires many bits
  – value and multiplier allows scaling over orders of magnitude
Scalability of Control Fields 8C
PDU header and trailer fields and structure should be scalable with data rate and bandwidth × delay product.
Preliminaries and Principles: PP.4 Design Techniques
PP.0 Preliminaries
PP.1 Fundamental axioms
PP.2 Drivers and constraints
PP.3 Design principles and tradeoffs
PP.4 Design techniques
2.4.1 Scaling time and space
2.4.2 Cheating and masking the speed of light
2.4.3 Specialised hardware implementation
2.4.4 Parallelism and pipelining
2.4.5 Data structure optimisation
2.4.6 Latency reduction
Design Techniques: Scaling in Time and Space
• Goal: reduce interapplication delay (I.1)
• Speed up the clock
  – Moore's law is our friend
  – but non-uniform advances in technology
    • e.g. memory access time not keeping up with CPU clock rate
• Move communicating apps and data closer together
  – constrained by physical location of users and resources
  – some optimisations are possible
Design Techniques: Cheating and Masking the Speed of Light
• Goal: reduce interapplication delay effects of c
• Techniques
  – prediction
  – caching and mirroring
  – prefetching and presending
  – autonomy
  – code mobility
Design Techniques: Specialised Hardware Implementation
• Goal: reduce delays of individual components
• Techniques
  – implement components in hardware rather than software
  – implement modules in specific fast hardware technologies
    • e.g. SRAM, TCAM
Design Techniques: Parallelism and Pipelining (1)
• Goal: reduce delays of subsystems and components
• Parallelism: implement multiple parts in parallel
  – need to understand dependencies
• Examples
  – packet-level parallelism (may need to control skew)
  – functional parallelism
[figure: functional parallelism: error detection in parallel with decryption and decompression]
Design Techniques: Parallelism and Pipelining (2)
• Goal: reduce delays of subsystems and components
• Pipelining: divide processing into stages
• Examples
  – functional pipelines
  – packet pipelines
• Possible to do both
  – parallel pipelines
[figure: packets 1–5 advancing one stage per time step through pipeline stages f1, f2a, f2b, f3, f4]
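The pipelining gain follows from simple timing arithmetic (an idealised model assuming uniform stage delays): n packets through a k-stage pipeline finish in k + n − 1 steps, versus k·n if each packet is processed to completion before the next enters.

```python
# Idealised pipeline timing with uniform stage delay.

def sequential_steps(n: int, k: int) -> int:
    """Each packet traverses all k stages before the next starts."""
    return k * n

def pipelined_steps(n: int, k: int) -> int:
    """One packet enters per step; the last drains after k more."""
    return k + n - 1

n, k = 5, 5      # five packets, five stages as in the slide's figure
print(sequential_steps(n, k), pipelined_steps(n, k))
```

As n grows the pipeline approaches a throughput of one packet per stage time, regardless of the pipeline depth k; depth only adds latency.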
Design Techniques: Data Structure Optimisation
• Goal: reduce processing overhead
• Techniques
  – optimise format for fast processing
    • fixed size, byte aligned, bit vectors, etc.
  – optimise location of structures
    • header vs. trailer
  – optimise PDU size
    • efficiency and overhead
    • match unit of processing in components
      – e.g. word, register, cache line, switch buffer
Design Techniques: Latency Reduction: Cut-Through
• Goal: reduce store-and-forward and copy overhead
• Cut-through paths
  – provide bypass path to cut-through buffers when possible
[figure: cut-through path bypassing the buffered path]
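The latency saving can be estimated with a simple per-hop model (illustrative numbers of my own; propagation delay ignored): store-and-forward pays the full packet serialisation time at every hop, while cut-through forwarding begins once the header has arrived.

```python
# Delivery latency across `hops` links at a fixed link rate.
RATE = 1e9                     # bits/s

def store_and_forward_us(pkt_bytes: int, hops: int) -> float:
    """Each hop waits for the whole packet before forwarding."""
    return hops * pkt_bytes * 8 / RATE * 1e6

def cut_through_us(pkt_bytes: int, header_bytes: int, hops: int) -> float:
    """Intermediate hops add only a header time; the packet is still
    serialised in full once onto the final link."""
    return ((hops - 1) * header_bytes + pkt_bytes) * 8 / RATE * 1e6

print(f"store-and-forward: {store_and_forward_us(1500, 5):.2f} µs")
print(f"cut-through:       {cut_through_us(1500, 40, 5):.2f} µs")
```

The saving grows with both hop count and packet size, which is why cut-through matters most in deep multi-stage switch fabrics.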
Design Techniques: Latency Reduction: Remapping
• Goal: reduce store-and-forward and copy overhead
• Remapping
  – per-byte data copy adds significant delay
  – virtual copy of data by pointer manipulation
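A Python-level analogue of virtual copy by pointer manipulation, using memoryview (this only illustrates the idea; real systems remap pages or hand buffer descriptors between layers rather than Python objects):

```python
# A memoryview slice shares the underlying buffer: "copying" the
# payload to another layer moves a pointer, not the bytes themselves.
buf = bytearray(b"HDRpayloadpayload")

header  = memoryview(buf)[:3]       # no bytes copied, just new references
payload = memoryview(buf)[3:]

payload[:7] = b"PAYLOAD"            # writes through to the shared buffer
print(bytes(buf))                   # b'HDRPAYLOADpayload'
```

Because no per-byte copy occurs, the cost is constant regardless of payload size, which is exactly the saving remapping buys in a protocol stack.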
Preliminaries and Principles: Acknowledgements
Some material in these foils comes from the textbook supplementary materials:
• Sterbenz & Touch, High-Speed Networking: A Systematic Approach to High-Bandwidth Low-Latency Communication, http://hsn-book.sterbenz.org