Novel Function Placement of Congestion Control Building Blocks in the Internet
1
Novel Function Placement of Congestion Control Building Blocks in the
Internet
Kartikeya Chandrayana
2
Outline
• Review
• Randomized TCP
• Uncooperative Congestion Control
• virtual AQM
• Conclusions
3
Congestion Control
• Internet Meltdown – need for congestion control
• Congestion Avoidance and Control
– End-system based techniques
• TCP
– Network-based solutions
• Active Queue Management (AQM), e.g. RED
4
TCP
• Transmission Control Protocol
– Protocol used to transport data
– Source: send a packet; Receiver: acknowledge the packet
• Almost all applications (~90%) use TCP
• What rate to send?
– No way of knowing the available bandwidth
• Probe for bandwidth
– In some time “T” send w packets
– If Acks for all w packets are received
• Send w+1 packets next time
– Else
• Send w/2 packets
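The probing rule above is additive-increase/multiplicative-decrease (AIMD); a minimal sketch of one probing round, with a hypothetical helper name:

```python
def update_window(w, all_acks_received):
    """One round of TCP's bandwidth probing: send w packets in time T;
    grow the window by one on success, halve it on loss."""
    if all_acks_received:
        return w + 1          # additive increase
    return max(1, w // 2)     # multiplicative decrease (never below 1)

# A few probing rounds:
w = 10
w = update_window(w, True)    # 11
w = update_window(w, False)   # 5
```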
5
End-System Based Solution
• TCP + Drop-Tail Queuing
• TCP’s performance suffers on Drop-Tail queues:
– Synchronization
• Congestion windows of different flows increase and decrease simultaneously
– Burst losses
– Bias against flows with large RTT
– Full queues
– Phase effects
• Only a section of flows gets dropped all the time
– Lockout effect
• A few flows monopolize the buffer space
6
Active Queue Management
• Proactively manage queues
– Drop packets before the queue overflows
– Small queues
• Probabilistic dropping
– Introduces randomization in the network
• Early congestion indication
• Protect TCP flows
– From CBR flows, selfish flows
• e.g. RED (and variants), REM, AVQ, CHOKe
7
Random Early Drop (RED)
avg: average queue length (EWMA)
– if avg < Min_th: queue the packet
– if avg > Max_th: drop the packet
– else: probabilistically drop/mark the packet
[Queue diagram: packets are accepted while avg is below Min_th, dropped/marked probabilistically between Min_th and Max_th, and dropped above Max_th]
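The RED rule above can be sketched in Python; `max_p` and the EWMA weight `w` are assumed parameters not given on the slide:

```python
import random

def update_avg(avg, q_len, w=0.002):
    """EWMA of the instantaneous queue length (RED's `avg`)."""
    return (1 - w) * avg + w * q_len

def red_decision(avg, min_th, max_th, max_p=0.1):
    """RED: accept below Min_th, drop above Max_th, and in between
    drop/mark with probability rising linearly from 0 to max_p."""
    if avg < min_th:
        return "accept"
    if avg > max_th:
        return "drop"
    p = max_p * (avg - min_th) / (max_th - min_th)
    return "drop" if random.random() < p else "accept"
```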
8
AQM Continued
• AQMs have parameters which require configuration
– e.g. the thresholds for probabilistically dropping packets
• Configuration parameters are generally a function of link capacity, number of flows, etc.
– Small operating region
– RED can perform worse than Drop-Tail queues
• AQMs are not deployed on the Internet

The Internet works with Drop-Tail queues, so the problems with Drop-Tail queues persist.
9
Review: Possible Solutions
Some Buffer Mgmt. Scheme
Users
End System Based Solution: Use same congestion control
algorithm
Network
Routers
Network Based Solution: Use AQM/Scheduler in the network
Limitation
How do we verify the trust ?
Constrains the choice of congestion control algorithms
AQM Placement Required at every router.
Limitations
May require exchange of control information between all AQMs/Schedulers in the network.
Generally only provides Max-Min Fairness.
Most Solutions do not work with a Drop Tail queue Network
What are the alternate architectural responses ?
10
Proposed Solution
Users
Network
Core Routers
Uncooperative Congestion Control
Virtual AQM
Edge Routers Any queue mgmt algorithm
Drop Tail/RED etc.
Minimal Changes/upgrades in the
network
Big, Fast Routers, Millions of Flows, Giga Bytes of Data
First place where network can verify trust
Medium Sized Routers, Manageable number of
flow/data
Randomized TCP
Emulate Many Beneficial Properties of RED
Protect TCP Flows, Manage Queues
De-couple congestion control tasks from their placement
11
Outline
• Review
• Randomized TCP
• Uncooperative Congestion Control
• virtual AQM
• Conclusions
12
Randomized TCP
• Randomize the packet sending times: inter-packet interval = (1 + x)·RTT/W
– x : Uniform(−1, 1)
• Always observe packet conservation
[Figure: packet spacing under TCP vs. Randomized TCP]
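A sketch of the randomized pacing rule; over many packets the mean spacing stays RTT/W, which is how packet conservation is preserved on average:

```python
import random

def send_interval(rtt, w):
    """Randomized TCP spaces packets (1 + x) * RTT / W apart,
    with x drawn uniformly from (-1, 1); plain TCP would use RTT / W."""
    x = random.uniform(-1.0, 1.0)
    return (1.0 + x) * rtt / w

# Every interval lies in (0, 2 * RTT / W), centered on RTT / W:
intervals = [send_interval(0.1, 10) for _ in range(1000)]
```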
13
Benefits of Randomized TCP
• End-System solution for introducing randomization in the network
• Emulates many beneficial properties of RED
– Breaks synchronization
– Spreads losses over time
• Independent losses
– Removes phase effects
– Removes bias against large-RTT flows
– Reduces burst losses
• Competes fairly with TCP Reno
14
Randomized TCP: Bias against large RTT flows
Type | Throughput (pkts/sec) | % Share of bottleneck | Loss (%)
Long Reno | 132.05 | 28 | 1.2
Short Reno | 333.58 | 72 | 0.3
Long Reno | 215.20 | 44 | 0.3
Short Random | 277.86 | 56 | 0.3
Long Random | 214.89 | 47 | 0.5
Short Random | 242.03 | 53 | 0.5
Long Reno-RED | 216.05 | 46 | 0.3
Short Reno-RED | 256.80 | 54 | 0.3

Single bottleneck; ideal share: Long 43%, Short 57%
[Topology figure: 8 Mbps access links feeding a 2 Mbps bottleneck; 60 ms and 80 ms path delays]
15
Randomized TCP: Phase Effects
[Topology figure: two 8 Mbps, 5 ms links feeding a 0.8 Mbps, 100 ms bottleneck]
Randomized TCP removes phase effects
16
Randomized TCP: More Results
• Randomized TCP competes fairly with TCP Reno
• Removes phase effects, bias against large-RTT flows, and synchronization
– Other single-bottleneck setups
– Multi-bottleneck setups
• Even one Randomized TCP flow improves performance
• Randomized TCP reduces burst losses
• Randomized TCP improves performance of other window-based rate control schemes
– Binomial congestion control

Randomized TCP can emulate many beneficial properties of RED.
We can decouple management of synchronization, phase effects, bias against large-RTT flows, and burst losses from AQM design.
17
Outline
• Review
• Randomized TCP
• Uncooperative Congestion Control
• virtual AQM
• Conclusions
18
New Congestion Control Schemes
• Application needs have changed
– TCP not suitable
– Different congestion control protocols
• Real-Player, Windows Media, Quake, Half-Life etc.
• Linux, FreeBSD boxes came along
– Make your own TCP
– If w acks are received, put w+5 packets in the next RTT
• TCP sends w+1 packets in the next RTT
– If congestion, put 3w/4 packets in the next RTT
• TCP sends w/2 packets in the next RTT
19
Classification
• Responsive
– React to congestion indication by cutting down their rate
– e.g. TCP (and its variants)
– Selfish/misbehaving? Maybe
• Unresponsive
– Do not react to congestion indications
– e.g. UDP, CBR
– Selfish/misbehaving? Always
20
Responsive vs Un-responsive
[Figure: on a 1 Mbps link, a UDP source sending at 600 Kbps leaves a fixed share of bandwidth for TCP; a responsive selfish source, by contrast, consistently looks at increasing its share]
21
Selfish Responsive Flows: Impact
[Topology: Drop-Tail queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links]
TCP flows are shut out: a traffic-volume based denial-of-service attack.
22
Possible Solutions
• Everyone uses TCP
• TCP-friendliness
– Any rate control scheme gets the same throughput as TCP under the same operating conditions
– x ∝ 1/√p (x: rate, p: packet loss probability)
• Network-based solutions
– Use Active Queue Management (AQM)
• e.g. Random Early Drop (RED): Min_th, Max_th, p, Q_avg
• FRED, CHOKe etc.
– Require deployment at ALL routers
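The x ∝ 1/√p relation is often instantiated with the simplified TCP throughput model x = (S/RTT)·√(3/(2p)); the constant here is an assumption, since the slide gives only the proportionality:

```python
import math

def tcp_friendly_rate(pkt_bytes, rtt, p):
    """Simplified TCP-friendly rate (bytes/sec): proportional to 1/sqrt(p)."""
    return (pkt_bytes / rtt) * math.sqrt(3.0 / (2.0 * p))

# Quadrupling the loss rate halves the TCP-friendly rate:
r1 = tcp_friendly_rate(1500, 0.1, 0.01)
r2 = tcp_friendly_rate(1500, 0.1, 0.04)
```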
23
AQM: Effect of Misbehavior
[Topology: RED queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links]
RED helps, though unfair sharing persists.
24
Other TCP-Like Schemes
• TCP – every RTT:
– W(t+1) = W(t) + α (α = 1) if no loss
– W(t+1) = (1 − β)·W(t) (β = 0.5) otherwise
• Time-invariant schemes
– Control parameters do not change with time
• Utility function does not change with time
– Increase: α/f(W), f(W) > 0
– Decrease: β·g(W), 0 < β < 1
– TCP-friendly schemes:
• f(W)·g(W) = W
– Binomial congestion control schemes:
• Increase: α/W^k(t), Decrease: β·W^l(t)
• TCP-friendly schemes given by k + l = 1
25
Other TCP-Like Schemes
• Time-invariant schemes
– Aggressive selfish schemes: α > 1, β > 0.5
• f(W)·g(W) < W
• e.g. Increase: α, Decrease: β·W^0.5(t)
• Time-variant schemes
– Control parameters change with time: α(t) > 0, β(t) > 0
– Increase: α(t)/W(t)^k(t), Decrease: β(t)·W(t)^l(t)
• k(t) + l(t) = 0
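The binomial family above can be sketched as a single update rule; TCP is the special case k = 0, l = 1 with α = 1, β = 0.5, and k + l < 1 gives an aggressive (uncooperative) scheme:

```python
def binomial_update(w, alpha, beta, k, l, loss):
    """Binomial congestion control: per RTT, increase by alpha / W^k;
    on loss, decrease by beta * W^l (TCP-friendly iff k + l = 1)."""
    if loss:
        return max(1.0, w - beta * w ** l)
    return w + alpha / w ** k

# TCP as a special case (k = 0, l = 1):
w = binomial_update(10.0, 1.0, 0.5, 0, 1, loss=False)   # 11.0
w = binomial_update(10.0, 1.0, 0.5, 0, 1, loss=True)    # 5.0
```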
26
Consequences..
Users can choose their rate control scheme → the rate control scheme determines the rate allocation. Aggressive rate control → more rate → an incentive for users to misbehave. But the majority of users are responsible. The risk: traffic-volume based denial-of-service attacks.

Assume (for now) that the network's standard congestion control scheme is TCP.
Any scheme which gets more rate than TCP is uncooperative.
27
Detour: Congestion Control-Optimization Frameworks
• Utility functions
– From economics
– One function can capture a group of rate control schemes
– TCP-friendly schemes imply U(x) ∝ −1/x
[Figure: U(x) vs. x (rate), concave increasing; going from rate 10 to 20 raises utility far more than going from 1M to 1M + 10]
28
Detour: Congestion Control-Optimization Frameworks
• Users choose a congestion control algorithm → choose a utility function
– TCP: U(x) ∝ −1/x
– CC scheme ↔ utility function
• Every user maximizes his own utility function
– Distributed optimization
• Network imposes capacity constraints
– Total input rate cannot exceed capacity
– Network communicates to users the price of using a link
– Price: loss rate, mark (ECN), delay
• Users use this price to update their rate
29
Optimization Framework: TCP
• TCP tries to minimize delay
• Equilibrium allocation (fairness)
– Minimum potential delay fairness
• Max-min fairness
– U(x) = −1/x^N (N → ∞)
• Proportional fairness (TCP Vegas)
– U(x) = log(x)

max Σ_s −1/x_s
s.t. Σ_s x_s − C_l ≤ 0, for all links l
30
Work in the Utility Function Space
Key design objectives:
• Deployment ease
• Retain existing link price update rules → no changes in the core
• Retain existing users' rate update rules → allows users to choose their rate control protocol
• Should work with either drop- or marking-based networks
• Should work on a network of Drop-Tail queues

U1 and U2 define the conformance space; a selfish user's utility function Us lies in the non-conformant region. The idea is to map the user's utility function into the conformant space. For TCP-friendliness, U1 = U2 = −1/x.
[Figure: U vs. x (rate), showing the conformant region bounded by U1 and U2 and a non-conformant Us being mapped into it]
31
• User s is described by:
– x_s: rate, U_s: utility function, q: end-to-end price
– x_s = U_s′⁻¹(q)
– If the source were using U_obj, its rate would be x_s = U_obj′⁻¹(q)
• Communicate to the user the price q_new = U_s′(U_obj′⁻¹(q))
• Now the user's update algorithm looks like:
x_s = U_s′⁻¹(q_new) = U_obj′⁻¹(q)
It appears as if the user is maximizing U_obj!

Map the user's utility function to some (or a range of) objective utility function:
U_s → U_obj, U_obj ∈ [U_1, U_2]
How? By a penalty function transformation.
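The re-mapping step can be checked numerically, assuming (for illustration only) utilities of the form U(x) = −K/x^n, so that U′(x) = nK/x^(n+1) has a closed-form inverse; the helper names are hypothetical:

```python
def u_prime(x, n, K=1.0):
    """U'(x) for U(x) = -K / x**n, i.e. U'(x) = n*K / x**(n+1)."""
    return n * K / x ** (n + 1)

def u_prime_inv(q, n, K=1.0):
    """Inverse of U': the rate a U-maximizing user picks at price q."""
    return (n * K / q) ** (1.0 / (n + 1))

def remap_price(q, n_user, n_obj):
    """Edge re-marking: report q_new = Us'(Uobj'^-1(q)), so that a user
    maximizing Us settles at the rate a Uobj-user would choose."""
    x_obj = u_prime_inv(q, n_obj)   # rate the conformant user would pick
    return u_prime(x_obj, n_user)   # price that drives the Us-user there

# A selfish user (n = 0.5) fed the remapped price chooses the same
# rate as a TCP-like user (n = 1) facing the true price:
q = 0.01
x_tcp = u_prime_inv(q, 1.0)
x_selfish = u_prime_inv(remap_price(q, 0.5, 1.0), 0.5)
```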
32
Core network (no changes): core routers run any queue management algorithm (Drop-Tail/RED etc.), with either marking or dropping.
Edge routers: an edge-based re-marking agent
– Maps utility functions
– Manages selfish flows (decoupled from AQM design)
– Provides service differentiation (maps users to different utility functions)
Users: free to choose their congestion control algorithm

Idea: remap at the edge, not in every router.
Decouple management of selfish flows from AQM design.
33
What do we need to make it work ?
• Estimate the utility function
– Currently using least squares and recursive LS
– Needs only estimates of sending and loss rates
• Estimate the loss/mark rate
– Currently using the EWMA and WALI methods of TFRC
• Identify misbehaving flows
– Smart Sampling in NetFlow, Sample & Hold, etc.
34
Utility Function Estimation
• Increase: α/x^k(t), Decrease: β·x^l(t)
• Utility function (n = k + l):
– U(x) = −K/x^n, i.e. U ∝ −1/x^n (K a constant depending on α, β, and the RTT R)
– Setting U′(x) = p gives log(p) = log(nK) − (n+1)·log(x)
• Use linear least squares to estimate n
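The log-linear fit can be sketched with ordinary least squares; the synthetic samples below assume the p = nK/x^(n+1) model from the slide:

```python
import math

def estimate_exponent(rates, losses):
    """Fit log(p) = log(nK) - (n+1)*log(x) by least squares;
    the slope is -(n+1), so n = -slope - 1."""
    xs = [math.log(x) for x in rates]
    ys = [math.log(p) for p in losses]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    slope = (sum((a - mx) * (b - my) for a, b in zip(xs, ys))
             / sum((a - mx) ** 2 for a in xs))
    return -slope - 1.0

# Samples generated from the model with n = 1 (TCP-like):
rates = [5.0, 10.0, 20.0, 40.0]
losses = [1.0 / x ** 2 for x in rates]   # p = nK / x^(n+1) with n = K = 1
```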
35
Results: Single Bottleneck
[Topology: a Drop-Tail or RED/ECN-enabled bottleneck of x Mbps; 4x Mbps access links; 20 ms and 5 ms delays]
TCP Reno flows (U = −1/x) compete with a misbehaving flow (U = −1/x^0.5).
36
Results: Multi-Bottleneck (Drop Tail)
Framework prevents volume based denial of service attack.
Without re-mapping, TCP flows are shut out; with re-mapping, the attack is prevented.
[Topology: Drop-Tail queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links; TCP Reno flows (U = −1/x) and a selfish flow (U = −1/x^0.5) at each bottleneck]
37
Results: Multi-Bottleneck (RED)
Framework improves fair sharing of network
[Results panels: without re-mapping vs. with re-mapping]
[Topology: RED queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links; TCP Reno flows (U = −1/x) and a selfish flow (U = −1/x^0.5) at each bottleneck]
38
Results: Multi-Bottleneck in an ECN Enabled Network
[Results panels: ideal case, no re-mapping, with re-mapping]
[Topology: RED queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links; TCP Reno flows (U = −1/x) and a selfish flow (U = −1/x^0.5) at each bottleneck]
Congestion response conformance.
39
Utility Function Estimation Results
[Topology: an x Mbps bottleneck with 4x Mbps access links; 20 ms and 5 ms delays; TCP Reno (U = −1/x) and a misbehaving flow (U = −1/x^0.5)]
Estimated exponents: n = 0.6 (ideal: n = 0.5) and n = 0.8 (ideal: n = 1.0)
Can estimate the exponent with a very small sample set
40
More Results
• Background traffic
– Web (HTTP) traffic
– Single/multi-bottleneck scenarios
• Cross traffic
– Reverse-path congestion
– Especially important with RED
– Multi-bottleneck scenarios
• Comparison with other AQM schemes
• Differentiated services
41
Outline
• Review
• Randomized TCP
• Uncooperative Congestion Control
• virtual AQM
• Conclusions
42
virtual AQM: Definitions
[Figure: routers R1–R4 in the core; streams F, G, and H; ingress I1 and egress E1]
I1 – R1 – R2 – R3 – E1 : a Path
43
virtual AQM: Definitions
• Path capacity: the minimum link capacity on a path
– Estimated by sending a pair of back-to-back packets (S bytes each) through priority queues
– C = 8·S/Δ_a, where Δ_a is the dispersion of the packet pair
• Path demand: the demand on a path
– Estimated by sending a packet train through the data queue
– C_eff = 8·S/Δ_c, where Δ_c is the train dispersion widened by cross-traffic (Δ_c ≥ Δ_a)
– D = C − C_eff
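The probe arithmetic can be sketched as follows; the dispersion-based formulas are an assumed reading of the slide's C = 8·S/Δ expressions:

```python
def path_capacity(pkt_bytes, t1, t2):
    """Packet-pair estimate: two back-to-back probes of S bytes through
    priority queues arrive t2 - t1 apart, so C = 8*S / dispersion (bps)."""
    return 8.0 * pkt_bytes / (t2 - t1)

def path_demand(capacity_bps, eff_capacity_bps):
    """Demand D = C - Ceff, with Ceff measured the same way from a
    packet train sent through the data queue (spacing widened by
    cross-traffic)."""
    return capacity_bps - eff_capacity_bps

# A 1500-byte pair arriving 12 ms apart gives C = 1 Mbps:
C = path_capacity(1500, 0.000, 0.012)
D = path_demand(C, 0.6e6)
```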
S Bytes
44
virtual AQM: Algorithm
• γ: target network utilization (γ < 1)
• Calculate the virtual path capacity as C_v = γ · path capacity
• Idea: match demand to the virtual path capacity at the network edge
• For every path
– For every packet
• Drain the virtual buffer by (t_n − t_{n−1}) · C_v
• Increase the count of the virtual buffer
• If the virtual buffer overflows, drop (mark) packets

γ < 1 ⇒ at steady state the total input rate is less than the network capacity ⇒ a smaller steady-state queue.
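A minimal sketch of the per-path virtual buffer; the bit-counting units and the `size_bits` parameter are assumptions:

```python
class VirtualBuffer:
    """Per-path virtual queue for virtual AQM: drained at the virtual
    capacity Cv = gamma * path_capacity; overflow means mark/drop."""

    def __init__(self, gamma, path_capacity_bps, size_bits):
        self.cv = gamma * path_capacity_bps   # virtual path capacity
        self.size = size_bits                 # virtual buffer length
        self.count = 0.0                      # bits currently "queued"
        self.last_t = 0.0

    def on_packet(self, t_now, pkt_bits):
        # Drain (t_n - t_{n-1}) * Cv bits since the previous packet
        self.count = max(0.0, self.count - (t_now - self.last_t) * self.cv)
        self.last_t = t_now
        self.count += pkt_bits
        if self.count > self.size:
            self.count = self.size
            return "mark"        # or drop, in a non-ECN network
        return "accept"
```

A burst arriving faster than C_v overflows the virtual buffer and triggers marking even while the real queue still has room, which is what keeps the steady-state queue small.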
45
virtual AQM: Results
[Setup: demand estimation feeding vAVQ at the edge]

| | Drop Tail | AVQ | vAVQ |
| Average queue size | 18.69 | 13.62 | 12.22 |
| Throughput (Mbps) | 1.6 | 1.5 | 1.6 |
| Fairness | 0.067 | 0.03 | 0.06 |

We can decouple management of the bottleneck queue from AQM design.
46
virtual AQM: Results
[Topology: Drop-Tail queues; two 0.8 Mbps, 20 ms bottlenecks; 8 Mbps, 5 ms access links; demand estimation feeding vAVQ at each edge]

| | Drop Tail | AVQ | vAVQ | vAVQ* |
| Average queue size (first bottleneck) | 18.00 | 11.46 | 15.48 | 14.02 |
| Average queue size (second bottleneck) | 17.97 | 10.86 | 15.72 | 14.66 |
47
Conclusions
• Network-based congestion avoidance and control solutions are not deployed
• De-couple congestion control tasks from their placement
– Deployable architectures
– Can get many beneficial properties of network-based solutions
• Randomized TCP
– End-system based solution
– Can reduce synchronization, phase effects, bias against large-RTT flows, and burst losses
– Emulates many beneficial properties of RED (AQM)
48
Conclusions
• Uncooperative congestion control
– Edge-based solution
– De-couples management of selfish flows from AQM design
– Edge-based transformation of the price can handle misbehaving flows
– No changes in the core
– Works with packet drops or packet marking (ECN)
– Independent of the buffer management algorithm
• virtual AQM
– Edge-based proposal for managing bottleneck queues
– For any path, uses packet probes to find capacity and demand
– Marks (drops) packets to match demand to path capacity
– Results depend on estimation and the length of the virtual buffer
– An initial conceptual prototype was presented
49
References
• Kartikeya Chandrayana, Sthanunathan Ramakrishnan, Biplab Sikdar and Shivkumar Kalyanaraman, “On Randomizing the Sending Times in TCP and other Window Based Algorithms”, conditionally accepted to the Journal of Computer Networks.
• Kartikeya Chandrayana and Shivkumar Kalyanaraman, “Uncooperative Congestion Control”, ACM SIGMETRICS 2004; also under submission to IEEE Transactions on Networking.
• Kartikeya Chandrayana and Shivkumar Kalyanaraman, “On Impact of Non-Conformant Flows on a Network of DropTail Gateways”, IEEE GLOBECOM 2003.
• K. Chandrayana, Y. Xia, B. Sikdar and S. Kalyanaraman, “A Unified Approach to Network Design and Control with Non-Cooperative Users”, RPI Networks Lab Tech Report ECSE-NET-2002-1, March 2002.
50
Randomized TCP: Synchronization
| Bandwidth | TCP Reno | Randomized TCP |
| 3 Mbps | 0.4254 | 0.1721 |
| 4 Mbps | 0.3152 | 0.1604 |
| 5 Mbps | 0.6700 | 0.0799 |

[Topology: an x Mbps bottleneck with 4x Mbps access links; 20 ms and 5 ms delays]
Randomized TCP reduces/removes synchronization.
51
virtual AQM: Improvements
| | Drop Tail | vAVQ | vAVQ* |
| Average queue size | 18.69 | 12.22 | 10.94 |
| Throughput (Mbps) | 1.6 | 1.6 | 1.5 |
| Fairness | 0.067 | 0.06 | 0.05 |
52
Simple Differentiated Services
Multi-bottleneck setup: all flows are TCP flows.
Objective: increase the share of the long flow by 10%.
Differentiated services: map users to different utility functions, at the edge.
53
[Figure: placement options for the re-marker between users, the network, and the destination — routers in the network can mark or drop packets, and acks can be marked on the reverse path]
54
Re-Marker Design
• Implemented in the Network Simulator
• Estimate the loss rate
• Estimate the throughput
• Get the utility function estimate
• Compute the re-marking function
• Appropriately mark/drop packets
– Can also mark acks
• A different algorithm for CBR flows