Predictable Performance Optimization for Wireless Networks
Lili Qiu, University of Texas at Austin
lili@cs.utexas.edu
Joint work with Yi Li, Yin Zhang, Ratul Mahajan, and Eric Rozner
ACM SIGCOMM 2008, August 21, 2008
Motivation
• Wireless networks are becoming ubiquitous
• Managing wireless networks is hard
• Our goal: develop systematic techniques to optimize wireless performance
  – Predict whether given sending rates are achievable
  – Perform what-if analysis
  – Optimize sending rates for different objectives
[Figure: wireline vs. wireless network diagram]
Unpredictability of wireless networks
[Figure: throughput (Kbps) vs. sending rate (Kbps) for two 2-hop topologies S → R → D: "bad-good" (50% delivery on the first link, 100% on the second) and "good-bad" (100% on the first link, 50% on the second).]
Need predictable wireless performance optimization.
Model-driven optimization framework
[Diagram: network measurement feeds a network model; the model, traffic demands, routing, and performance objectives (e.g., maximize fairness or total throughput) feed an optimization step that outputs prescribed flow rates.]
Existing models are insufficient
• Asymptotic performance bounds [GP00, LB+01, GT01, GV02]
  – Cannot model any specific network
• Conflict-graph-based model [JPPQ03]
  – Assumes perfect scheduling and overestimates 802.11 performance
  – Requires an exponential number of constraints
• 802.11 DCF models [Bianchi00, KA+05, GLC06, GSK05, QZWH+07, KDG07]
  – Not general: restricted topologies or traffic demands
  – Cannot be easily incorporated into an optimization procedure
Need a better 802.11 network model for optimization.
Our network model
• Provides a compact characterization of the feasible solution space to facilitate optimization
• Simple: O(N) constraints for N links
• Flexible and accurate
  – Handles asymmetric link loss rates
  – Handles asymmetric interference
  – Handles hidden terminals
  – Handles heterogeneous, multihop traffic demands
[Diagram: network measurement feeds the network model, which produces throughput constraints, loss rate constraints, and sending rate constraints.]
Throughput constraints
• Divide time into variable-length slots (VLS)
  – 3 types of slots: idle slot, transmission slot, deferral slot
• Throughput of sender i (equation reconstructed from the slide's annotations):

  g_i = τ_i · (1 − p_i) · E[P_i] / E[T_slot,i]

  – τ_i: probability of starting a transmission in a slot
  – (1 − p_i): success probability
  – E[P_i]: expected payload transmission time
  – E[T_slot,i]: expected duration of a variable-length slot (idle slot durations plus transmission durations T_j and deferral durations D_ij, weighted by the τ_j)
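As a concrete illustration, the throughput expression above can be sketched in code. The names (tau, E_P, T, D, T_idle) and the exact form of the slot-duration denominator are assumptions inferred from the slide's annotations, not the paper's exact equation:

```python
# Sketch of the per-link throughput constraint (illustrative names).
# tau[i]:  probability that sender i starts transmitting in a slot
# p[i]:    packet loss rate seen by sender i
# E_P[i]:  expected payload transmission time
# T[j]:    transmission duration of sender j
# D[i][j]: deferral time i incurs while j transmits
# T_idle:  duration of an idle slot

def link_throughput(i, tau, p, E_P, T, D, T_idle):
    """Expected goodput of sender i under the variable-length-slot model."""
    n = len(tau)
    # Expected duration of a variable-length slot as seen by sender i:
    # idle slots plus transmission/deferral slots of every sender j.
    busy = sum(tau[j] * (T[j] + D[i][j]) for j in range(n))
    idle = (1.0 - sum(tau)) * T_idle
    E_slot = idle + busy
    # Successfully delivered payload per slot, divided by slot duration.
    return tau[i] * (1.0 - p[i]) * E_P[i] / E_slot
```

With a single lossless sender (tau = 0.5, unit durations), half the time is spent transmitting, so the goodput is 0.5.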
Loss rate constraints
• Inherent loss and collision loss are independent
• Inherent loss
  – Based on one-sender broadcast measurements
• Collision loss
  – Synchronous loss
    • Two senders can carrier-sense each other
    • Occurs when two transmissions start at the same time
  – Asynchronous loss
    • At least one sender cannot carrier-sense the other
    • Occurs when two transmissions overlap
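The independence assumption implies the two loss causes compose multiplicatively; a minimal sketch:

```python
def combined_loss(p_inherent, p_collision):
    """Overall loss rate when inherent and collision losses are
    independent: a packet is delivered only if it survives both
    loss causes."""
    return 1.0 - (1.0 - p_inherent) * (1.0 - p_collision)
```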
Sending rate feasibility constraints
• 802.11 unicast
  – Random backoff interval uniformly chosen from [0, CW]
  – CW doubles after a failed transmission up to CWmax, and resets to CWmin after a successful transmission or when the max retry count is reached
  – CW(p_i): the expected contention window size under packet loss rate p_i [Bianchi00]
• Sending rate feasibility constraint (reconstructed):

  0 ≤ τ_i ≤ 1 / (1 + CW(p_i)/2)

[Timing diagram: DIFS → random backoff → data transmission → SIFS → ACK transmission]
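A sketch of the feasibility bound, assuming a Bianchi-style expected contention window. The cw_min/cw_max/max_retries parameters and the averaging over retry stages are illustrative, not the paper's exact derivation:

```python
def expected_cw(p, cw_min=32, cw_max=1024, max_retries=7):
    """Expected contention window under loss rate p, assuming the
    window doubles after each failed transmission (capped at cw_max)
    and the k-th retry stage is reached with probability p**k
    (Bianchi-style sketch with illustrative parameters)."""
    total, prob_sum = 0.0, 0.0
    cw = cw_min
    for k in range(max_retries + 1):
        prob = (p ** k) * (1 - p) if k < max_retries else p ** max_retries
        total += prob * cw
        prob_sum += prob
        cw = min(2 * cw, cw_max)
    return total / prob_sum

def tau_upper_bound(p):
    """Feasibility bound on the transmission-start probability:
    0 <= tau <= 1 / (1 + CW(p)/2)."""
    return 1.0 / (1.0 + expected_cw(p) / 2.0)
```

Higher loss rates inflate the expected window and so shrink the feasible τ, matching the intuition that lossy senders back off more.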
Extensions to the basic model
• RTS/CTS
  – Add RTS and CTS delays to the VLS duration
  – Add RTS- and CTS-related losses to the loss rate constraints
• Multihop traffic demands
  – Link load = routing matrix × e2e demand
  – The routing matrix gives the fraction of each e2e demand that traverses each link
• TCP traffic
  – Update the routing matrix: R_TCP = R_data + β · R_ack, where the weight β reflects the size and frequency of TCP ACKs
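A minimal sketch of the routing-matrix update for TCP, assuming matrices stored as links × demands lists and a scalar weight beta (names are illustrative):

```python
def tcp_routing_matrix(R_data, R_ack, beta):
    """R_TCP = R_data + beta * R_ack. R_data routes the forward data
    path of each demand, R_ack the reverse ACK path, and beta weighs
    the relative size and frequency of TCP ACKs."""
    return [[d + beta * a for d, a in zip(drow, arow)]
            for drow, arow in zip(R_data, R_ack)]
```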
Model-driven optimization framework
[Diagram, repeated: network measurement feeds a network model; the model, traffic demands, routing, and performance objectives feed an optimization step that outputs prescribed flow rates.]
Flow throughput feasibility testing
• Test whether given flow throughputs are achievable
• Challenge: strong interdependency (τ and p depend on each other)
• Our approach: an iterative procedure
  – Input: flow throughputs
  – Initialize τ = 0 and p = p_inherent
  – Repeat until convergence: estimate τ from the throughputs and p, then estimate p from the throughputs and τ
  – Check the feasibility constraints
  – Output: feasible / infeasible
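The iteration can be sketched as a fixed-point procedure. Here estimate_tau, estimate_p, and check_constraints stand in for the model equations and are supplied by the caller; they are placeholders, not the paper's formulas:

```python
def feasibility_test(throughput, p_inherent, estimate_tau, estimate_p,
                     check_constraints, max_iters=100, eps=1e-6):
    """Alternate between estimating tau from (throughput, p) and p
    from (throughput, tau) until the loss estimates converge, then
    check the model's feasibility constraints."""
    p = list(p_inherent)
    tau = [0.0] * len(p)
    for _ in range(max_iters):
        new_tau = estimate_tau(throughput, p)
        new_p = estimate_p(throughput, new_tau)
        converged = max(abs(a - b) for a, b in zip(new_p, p)) < eps
        tau, p = new_tau, new_p
        if converged:
            break
    return check_constraints(throughput, tau, p)
```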
Fair rate allocation (flowchart, flattened)
• Initialization: add all demands to unsatSet
• Loop:
  – Scale up all demands in unsatSet until some demand is saturated or the scale reaches 1
  – If the scale reached 1, output X and stop
  – Move the saturated demands from unsatSet to X
  – If unsatSet is empty, output X and stop; otherwise repeat
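The flowchart can be sketched as follows, assuming a feasibility oracle (is_feasible, a placeholder for the model's feasibility test) and using a binary search on the common scaling factor as a stand-in for "scale up until some demand is saturated":

```python
def fair_allocation(demands, is_feasible, eps=1e-3):
    """Max-min-style allocation: repeatedly scale all unsatisfied
    demands by the largest common feasible factor, freeze demands
    that can no longer grow, and continue with the rest."""
    alloc = {d: 0.0 for d in demands}
    unsat = set(demands)
    while unsat:
        # Binary-search the largest common scale for unsatisfied demands.
        lo, hi = 0.0, 1.0
        while hi - lo > eps:
            mid = (lo + hi) / 2.0
            trial = {d: (mid * demands[d] if d in unsat else alloc[d])
                     for d in demands}
            if is_feasible(trial):
                lo = mid
            else:
                hi = mid
        for d in unsat:
            alloc[d] = lo * demands[d]
        if lo >= 1.0 - eps:
            break  # every remaining demand is fully satisfied
        # A demand is saturated if it cannot grow even slightly on its own.
        saturated = {d for d in unsat
                     if not is_feasible({**alloc, d: alloc[d] + eps * demands[d]})}
        unsat -= saturated if saturated else set(unsat)
    return alloc
```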
Total throughput maximization
• Formulate a non-linear optimization problem (NLP)
• Solve the NLP using iterative linear programming

Reconstructed formulation (annotations from the slide):

  max  Σ_d x_d                                              ← maximize total throughput
  s.t. Σ_d R_id · x_d ≤ τ_i (1 − p_i) E[P_i] / E[T_slot,i]  ← link load is bounded by the throughput constraints
       0 ≤ τ_i ≤ 1 / (1 + CW(p_i)/2)                        ← sending rate is feasible
       0 ≤ x_d ≤ x_d*                                       ← e2e throughput is bounded by demand
Evaluation methodology
• Model validation
  – How to quantify over-prediction error? Verify whether prescribed rates are achievable
  – How to quantify under-prediction error? Scale up all prescribed rates by a common factor
• Performance optimization
  – Fairness maximization: Jain's fairness index
  – Total throughput maximization
• This talk: testbed results only
  – 19 mesh nodes in the UTCS building; up to 7 hops
  – Extensive simulation results are in the paper
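Jain's fairness index, used in the fairness evaluation, is a one-liner:

```python
def jain_index(rates):
    """Jain's fairness index: (sum x)^2 / (n * sum x^2).
    Equals 1 when all rates are equal and approaches 1/n when a
    single flow takes everything."""
    n = len(rates)
    s = sum(rates)
    return s * s / (n * sum(x * x for x in rates))
```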
Optimization schemes
• Our rate optimization
• No rate optimization (current practice)
• Conflict-graph-based optimization
  – Plug the conflict graph model into our framework
  – The conflict graph assumes perfect scheduling [JPPQ03]
    • Represent each wireless link with a vertex
    • Draw an edge between two vertices if the corresponding links interfere
    • Derive clique constraints: all links in a clique in the CG cannot be active together
Baseline: conflict graph model
[Scatter plots: actual vs. estimated throughput (Mbps) for UDP (left) and TCP (right), with y = x and y = 0.8x reference lines.]
The CG model significantly over-estimates sending rates.
Model validation: UDP traffic
[Left: actual vs. estimated throughput (Mbps), with y = x and y = 0.8x reference lines. Right: fraction of runs vs. ratio between actual and estimated throughput, for scale = 1.0, 1.1, 1.2, 1.5.]
1) Most estimated rates are achievable within 20%.
2) Rates scaled up by just 10% become unachievable.
Model validation: TCP traffic
[Left: actual vs. estimated throughput (Mbps), with y = x and y = 0.8x reference lines. Right: fraction of runs vs. ratio between actual and estimated throughput, for scale = 1.0, 1.1, 1.2, 1.5.]
Our model is accurate for TCP traffic.
20
0
0.2
0.4
0.6
0.8
1
0 2 4 6 8 10 12 14 16Fa
irness
inde
x
Num of Flows
wo/ opt.CG opt.Our opt.
0
0.2
0.4
0.6
0.8
1
0 2 4 6 8 10 12 14 16
Fairn
ess in
dex
Num of Flows
wo/ opt.CG opt.Our opt.
Maximizing fairnessMaximizing fairnessUDP TCP
Fairness index is close to 1 under our scheme, while it degrades quickly in other schemes.
Maximizing total throughput
[Plots: total throughput vs. number of flows for UDP (left) and TCP (right), comparing no optimization, CG-based optimization, and our optimization.]
Our scheme significantly increases total throughput.
Conclusions
• Main contributions
  – Predictable wireless performance optimization
    • A simple yet accurate wireless network model
    • Effective model-driven optimization algorithms
  – Demonstrated their effectiveness through testbed experiments and simulation
• Future work
  – Handle dynamic traffic and topologies
  – Use passive measurement to seed our model
Thank you!
Impact on different routing schemes
[Plots: throughput (Mbps) vs. number of flows for UDP (left) and TCP (right), with and without our optimization.]
Our scheme helps all routing schemes considered.
TCP pathologies under no rate control
[Topology: two senders S1 and S2 transmit through relay R to destinations D1 and D2.]

                    Flow 1, Flow 2 throughput (Mbps)
  No rate limit:    0.805, 0.740
  With rate limit:  1.066, 1.064

TCP cannot set the rates that maximize throughput.