802.11 fairness
ii-2
What is Multi-Rate?
• Ability of a wireless card to automatically operate at several different bit-rates (e.g. 1, 2, 5.5, and 11 Mbps for 802.11b)
• Part of many existing wireless standards (802.11b, 802.11a, 802.11g, WiBro…)
• Virtually every wireless LAN card in use today employs multi-rate
Due to the large SNR variation in radio propagation, many wireless communication technologies support multiple bit rates. For example, if the SNR is good, we can choose a modulation scheme with many symbols, such as 64-QAM. If the SNR is bad, the receiver may not be able to differentiate many kinds of symbols, so a low bit rate modulation scheme like BPSK is a better choice. In addition to the modulation technique, we can also adjust the coding rate of the FEC.
Digital Modulation
• Phase Shift Keying (PSK):
– Pros:
• Less susceptible to noise
• Bandwidth efficient
– Cons:
• Requires synchronization in frequency and phase, which complicates receivers and transmitters
There are three kinds of digital modulation: ASK, FSK, and PSK. Shift keying means switching; e.g. ASK uses two amplitudes but is vulnerable to noise. FSK requires almost twice the bandwidth, which leaves us with PSK.
• BPSK (Binary Phase Shift Keying):
– bit value 0: sine wave
– bit value 1: inverted sine wave
– very simple PSK
– low spectral efficiency
– robust, used in satellite systems
Phase Shift Keying
[Constellation diagrams: BPSK with two phase states on the I axis; QPSK with symbols 11, 01, 10, 00 mapped to the four quadrants of the I/Q plane; modulated waveform over time]
• QPSK (Quadrature Phase Shift Keying):
– 2 bits coded as one symbol
– needs less bandwidth compared to BPSK
– symbol determines shift of sine wave
– often also transmission of relative, not absolute, phase shift: DQPSK – Differential QPSK
Phase-state diagram, constellation diagram
• Quadrature Amplitude Modulation (QAM): combines amplitude and phase modulation
• It is possible to code n bits using one symbol: 2^n discrete levels
• Bit error rate increases with n
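The relation between constellation size and bits per symbol can be sketched quickly (a minimal illustration of the n = log2(M) relation, not code from the slides):

```python
import math

# Bits per symbol n = log2(M) for an M-point constellation;
# larger M packs more bits but shrinks the distance between symbols,
# which is why the bit error rate increases with n.
for name, m in [("BPSK", 2), ("QPSK", 4), ("16-QAM", 16), ("64-QAM", 64)]:
    n = int(math.log2(m))
    print(f"{name}: {m} symbols -> {n} bits/symbol")
```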
[16-QAM constellation diagram: symbols 0000, 0001, 0011, 1000, 0010 plotted on the I/Q plane, with phase φ and amplitude a marked]
Quadrature Amplitude Modulation
• Example: 16-QAM (4 bits = 1 symbol)
• Symbols 0011 and 0001 have the same phase φ but different amplitude a; 0000 and 1000 have the same amplitude but different phases
• Used in modems
64-QAM
• 64-Quadrature Amplitude Modulation
– 6 bits per symbol
– Also uses quadrature carriers
– Each carrier is multiplied by +7, +5, +3, +1, −1, −3, −5, or −7 (amplitude modulation)
– 64 possible combinations of the two multiplied carriers
802.11a Rates: Modulation and Coding
To sum up, modulation determines the number of bits per symbol, and the coding rate tunes the ratio of real data bits to redundant bits. 802.11a adopts OFDM with 48 subcarriers, each at a 250K symbol rate. The rightmost column aggregates the data bits of the OFDM symbol made up of 48 subcarriers.
In the case of the last row, 216 × 250K gives 54 Mbps.
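The arithmetic above can be checked with a small sketch (the constants are those stated on the slide; the function name is my own):

```python
# 802.11a OFDM: 48 data subcarriers, each carrying 250K symbols/s.
SUBCARRIERS = 48
SYMBOL_RATE = 250_000  # symbols per second per subcarrier

def phy_rate_bps(bits_per_subcarrier_symbol, coding_rate):
    """Data rate = subcarriers x symbol rate x modulation bits x coding rate."""
    return SUBCARRIERS * SYMBOL_RATE * bits_per_subcarrier_symbol * coding_rate

# 64-QAM (6 bits/symbol) at coding rate 3/4: 48 * 6 * 3/4 = 216 data
# bits per OFDM symbol, and 216 * 250K = 54 Mbps.
print(phy_rate_bps(6, 3/4) / 1e6)  # 54.0
# BPSK (1 bit/symbol) at coding rate 1/2 gives the lowest rate, 6 Mbps.
print(phy_rate_bps(1, 1/2) / 1e6)  # 6.0
```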
Throughput vs. Distance for 802.11a
This plots the measured 802.11a throughput for each bit rate as the distance increases. The highest bit rate (54 Mbps) is possible only within a few tens of meters,
whereas the lowest bit rate (6 Mbps) maintains its throughput up to almost 200 m.
Each throughput collapse in this plot signifies that TX errors occur substantially beyond some distance.
802.11b Frame Exchange Duration
[Bar chart: medium time (in milliseconds) consumed to transmit a 1500-byte packet at 1.0, 2.0, 5.5, and 11.0 Mbps, split into MAC overhead and data time; the resulting effective throughputs are 0.85, 1.54, 3.17, and 4.55 Mbps]
This slide shows how much TX time is taken by each bit rate of 802.11b
Obviously, we should try high bit rate TXs if possible. However, if the SNR is not good enough, the receiver fails to receive the data correctly. How can we find the best bit rate (or modulation scheme) for the current link condition?
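The slide's bars can be approximated with a toy model. The fixed overhead value below is an assumed round number standing in for preamble, headers, ACK, and inter-frame spaces, so the outputs only roughly track the slide's 0.85–4.55 Mbps figures:

```python
# Toy model: medium time = fixed MAC overhead + payload time at the PHY rate.
PAYLOAD_BITS = 1500 * 8       # 1500-byte packet
OVERHEAD_S = 1.5e-3           # assumed per-packet overhead in seconds (illustrative)

def medium_time_s(rate_bps):
    return OVERHEAD_S + PAYLOAD_BITS / rate_bps

def effective_throughput_bps(rate_bps):
    return PAYLOAD_BITS / medium_time_s(rate_bps)

for rate in (1e6, 2e6, 5.5e6, 11e6):
    print(f"{rate / 1e6:>4.1f} Mbps: {medium_time_s(rate) * 1e3:5.2f} ms on air, "
          f"{effective_throughput_bps(rate) / 1e6:4.2f} Mbps effective")
```

Note how the fixed overhead caps the gain from faster PHY rates: 11 Mbps yields well under half of 11 Mbps effective.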
Rate adaptation (ARF)
• Selects the rate to use for a packet: ARF and RBAR
• Auto Rate Fallback (ARF)
– Adaptive based on success/failure of previous packets
• senders attempt a higher transmission rate after consecutive successes
• revert to a lower rate after failures
– Simple to implement
– Doesn't require the use of RTS/CTS or changes to the 802.11 spec
Let me first introduce two rate adaptation schemes.
ARF is the simplest one; it adapts its rate depending on channel quality, and it is probably the most widely implemented in real products.
The transmitter keeps counting the number of successful TXs.
If that number reaches a certain threshold, it raises the link rate.
If a TX error happens, it falls back to a lower rate.
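The ARF loop just described can be sketched as follows (the threshold of 10 is illustrative; real drivers use their own values):

```python
RATES_MBPS = [1.0, 2.0, 5.5, 11.0]   # 802.11b rate ladder
SUCCESS_THRESHOLD = 10               # consecutive successes before stepping up

class Arf:
    """Counts consecutive successful TXs; steps the rate up at the
    threshold and falls back one step on any TX error."""

    def __init__(self):
        self.idx = 0          # start at the lowest rate
        self.successes = 0

    @property
    def rate(self):
        return RATES_MBPS[self.idx]

    def on_tx_result(self, success):
        if success:
            self.successes += 1
            if self.successes >= SUCCESS_THRESHOLD and self.idx < len(RATES_MBPS) - 1:
                self.idx += 1        # raise the link rate
                self.successes = 0
        else:
            self.successes = 0
            if self.idx > 0:
                self.idx -= 1        # fall back to a lower rate

arf = Arf()
for _ in range(10):
    arf.on_tx_result(True)
print(arf.rate)     # 2.0 after ten consecutive successes
arf.on_tx_result(False)
print(arf.rate)     # 1.0 after a failure
```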
Rate adaptation (RBAR)
• Receiver Based Auto Rate (RBAR)
– receiver measures channel quality
• SNR measurement of the RTS
– piggybacked in the CTS
– sender decides the transmission rate according to this information
– faster and more accurate in a changing channel
– requires some tweaks to the header fields
RBAR tries to measure the channel quality more instantaneously.
The RX measures the SNR of the RTS packet and reports the result to the TX in the CTS frame.
The TX then decides the appropriate link rate from this feedback.
The demerit of this scheme is that it requires changing the CTS packet format, not to mention mandating RTS/CTS.
MAC Layer Fairness Models
• Per Packet Fairness: throughput fairness
• Temporal Fairness: If two adjacent senders are continuously attempting to send packets, they should be able to send for the same amount of medium time.
• In single rate networks these are the SAME!
We saw this slide already. Note that per-packet fairness is also called throughput fairness, assuming the same payload size per packet.
Temporal vs. Throughput Fairness
• Equivalent in single-rate networks
• Throughput fairness results in significant inefficiency in multi-rate networks
[Example topology: users 1, 2, and 3 associated with one access point]
Suppose there are three stations and users 2 and 3 are in good link conditions while user 1 suffers from poor radio link
Temporal vs. Throughput Fairness — Throughput Fair
[Timeline: users 1–3 each transmit one DATA frame; user 1's frame occupies most of the airtime]
Even one user with a low transmission rate results in a very low network throughput. In this timeline, the x axis is time, not the byte length of the MPDU; the packet sizes are equal but the TX times differ.
Temporal vs. Throughput Fairness — Temporal Fair
[Timeline: user 1 transmits one DATA frame while users 2 and 3 each transmit several DATA frames in the same interval]
Same time-shares of the channel for different flows, also higher throughput
For temporal fairness, each node should occupy the same amount of airtime as the other nodes. Temporal fairness not only rectifies the unsuitable throughput fairness but also increases the overall throughput.
Temporal Fairness Example
                   802.11 (Packet Fairness)   OAR (Temporal Fairness)
11 Mbps Link              0.896                      3.533
1 Mbps Link               0.713                      0.450
Total Throughput          1.609                      3.983

[Timelines: per-packet fairness alternates one frame each between the 1 Mbps and 11 Mbps links; temporal fairness gives each link equal airtime, so the 11 Mbps link sends more frames]
Let me illustrate per-packet fairness and temporal fairness. There are two nodes: one with an 11 Mbps link and the other with a 1 Mbps link. When per-packet fairness is ensured, the two flows TX data frames alternately to make the number of TXs equal. For temporal fairness, however, the time each flow uses the medium should be the same, so the 11 Mbps flow sends more frames in the same duration.
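A back-of-the-envelope version of this example (ignoring all MAC overhead, so the numbers differ from the measured table above):

```python
# Two saturated senders: an 11 Mbps link and a 1 Mbps link,
# each sending 1500-byte (12,000-bit) packets.
L = 12_000
r_fast, r_slow = 11e6, 1e6

# Per-packet fairness: the flows alternate one packet each, so one
# "cycle" carries 2L bits in (L/r_fast + L/r_slow) seconds.
packet_fair = 2 * L / (L / r_fast + L / r_slow)

# Temporal fairness: each flow owns half of the airtime.
temporal_fair = 0.5 * r_fast + 0.5 * r_slow

print(packet_fair / 1e6)    # ~1.83 Mbps total: the slow link dominates
print(temporal_fair / 1e6)  # 6.0 Mbps total
```

The slow sender's long frame dominates every alternation cycle, which is exactly the inefficiency the slide's measurements show.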
Opportunistic Auto Rate (OAR)
• Observation:
– Coherence time (the duration for which a host's channel quality persists) is at least several packet times
• Idea:
– If the channel is of high quality, the user can transmit multiple packets
– Temporal fairness vs. throughput fairness
Now we will talk about OAR, which performs rate adaptation with the objective of temporal fairness. The key rationale behind OAR is the coherence time, during which the channel quality is stable for at least several TX times. If the channel quality is good, the station can TX multiple frames in a row.
In this way, a station with high link quality can occupy the channel for the same time as a station with low quality.
OAR - Implementation Issues
• How to estimate channel condition
– Use ARF, RBAR
• How to transmit several packets
– Utilize 802.11 fragmentation
– set the More Fragments bit
– clear the fragment number subfield
OAR is agnostic to the link estimation scheme.
It then uses the fragmentation mechanism in the current 802.11 standard.
The sequence control field in the 802.11 MAC header consists of two parts: the sequence number and the fragment number.
OAR - Benefits
• Channel is better utilized, hence better throughput
• No RTS/CTS for subsequent packets
• Reduced contention time per packet
• Time fairness
So OAR can ensure temporal fairness.
Compared to RBAR, it requires only one RTS/CTS exchange for a whole series of fragments.
[Plot: measured SNR over time — SNR fluctuates between roughly 0 and 40 across a trace of nearly 500 seconds]
• Received signal: superposition of different reflections, with different delays and attenuations
Motivation
• Wireless channel is variable
• Coherence time
[Sketch: channel gain vs. time]
The channel condition can fluctuate considerably over time, partly because the received signal is the sum of different EM waves.
However, when we look at the channel condition on the order of TX times, we observe a relatively stable link.
Why is it named opportunistic?
• Maintain temporal shares of different flows
• Exploit the variations inherent in wireless channel to increase throughput
[Sketch: channel gain vs. time for users 1 and 2; each user is scheduled during its own channel peaks]
If the channel condition of each user is dynamically changing, we can opportunistically exploit this variation
Opportunistic Auto Rate (OAR)
• Main observation: coherence time is on the order of multiple packet transmission times
– If a node accesses the channel and has a good channel, let it keep it longer
• Given a node with channel access, determine the number of packets to transmit as a function of channel quality
• OAR: high throughput, while maintaining the temporal fairness properties of single-rate IEEE 802.11
The key observation underlying OAR is that the coherence time is around the time for multiple TXs. If a node's radio link is in good condition, let it transmit for a duration proportional to its link quality.
OAR Protocol
• Rates in IEEE 802.11b: 2, 5.5, and 11 Mbps

Channel Condition:      BAD           MEDIUM         GOOD
Protocol             Pkts  Rate    Pkts  Rate    Pkts  Rate
802.11                1     2       1     2       1     2
802.11b               1     2       1     5.5     1     11
OAR                   1     2       3     5.5     5     11

• Number of packets transmitted by OAR ≈ Rate_Tx / Rate_Base
This slide illustrates how OAR works.
Suppose the base rate (minimum rate) is 2 Mbps
The number of packets consecutively transmitted by a node is proportional to the ratio of its current TX rate to the base rate
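As a sketch (the rounding to a whole number of packets is my guess; the table above shows 3 packets at 5.5 Mbps and 5 at 11 Mbps):

```python
BASE_RATE_MBPS = 2.0   # minimum 802.11b rate in this example

def oar_airtime_ratio(tx_rate_mbps, base_rate_mbps=BASE_RATE_MBPS):
    """Back-to-back packet budget ~ current rate / base rate, so the
    burst occupies about the same airtime as one base-rate packet.
    How the ratio is quantized to an integer is implementation-specific."""
    return tx_rate_mbps / base_rate_mbps

print(oar_airtime_ratio(2.0))   # 1.0  -> 1 packet
print(oar_airtime_ratio(5.5))   # 2.75 -> ~3 packets on the slide
print(oar_airtime_ratio(11.0))  # 5.5  -> ~5 packets on the slide
```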
RBAR Protocol
[Timeline: the source sends an RTS, the destination replies with a CTS, the source sends DATA, and the destination replies with an ACK; the same protocol/rate table as above applies]
• Receiver controls the sender's transmission rate
• Control messages are sent at the Base Rate
• Reservation Sub-Header
The reservation subheader is essentially the NAV.
OAR Protocol
[Timeline: the source sends an RTS, the destination replies with a CTS, then multiple DATA/ACK exchanges follow back to back; the same protocol/rate table as above applies]
• Once access is granted, it is possible to send multiple packets if the channel is good
• Reservation Sub-Header
Performance Comparison
Observation I: Contention time per packet is the same for RBAR and single-rate IEEE 802.11.
[Timelines: in IEEE 802.11, each data frame costs one RTS/CTS/DATA/ACK exchange; in RBAR, the same per-packet exchange occurs, only at adapted rates]
In the RBAR timeline, the frames in the yellow box belong to the flow with high link quality while the frames in the sky-blue box belong to the low-rate flow. Even though RBAR lets each node select the appropriate bit rate, only one packet is TXed per channel access.
Observation II: OAR contends for the same total time as single-rate IEEE 802.11 but transmits more data.
[Timeline: OAR performs one RTS/CTS exchange followed by D1/ACK, D2/ACK, D3/ACK back to back]
Observation III: OAR holds high-quality channels for multiple transmissions.
Fairness 101
• A network is logically just a bundle of resources
– the resource is often bit rate or BW
• A number of stations contend for the resource
– What can we do about it?
– FIFO
– Fairness
– Maximization
"Fairness 101" means this is introductory material about fairness.
A network is often abstracted as a service, and providing a service requires a resource; in networks, the resource is typically the bit rate (or BW) of a link.
If multiple nodes contend for the same resource, we should do something about it, meaning we have to schedule how these users share the resource. FIFO is often implemented in real network systems for simplicity.
However, as service requirements become diverse and complicated, how to distribute or schedule the link bit rate is a vital issue. The popular objectives of packet scheduling are to ensure fairness, to maximize throughput, and so on.
The problems of FIFO queues
1. In order to maximize its chances of success, a source has an incentive to maximize the rate at which it transmits.
2. (Related to #1) When many flows pass through it, a FIFO queue is "unfair" – it favors the most greedy flow.
3. It is hard to control the delay of packets through a network of FIFO queues.
The FIFO queue has many drawbacks.
It implicitly incites a contender to send more traffic to get a larger share of the resource, because a FIFO gives more service to a flow that offers more traffic than the other flows.
Also, it is hard to guarantee delay or throughput for each flow as the number of other flows or their traffic increases.
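A toy simulation (my own illustration, not from the slides) of why FIFO rewards the greedy flow:

```python
from collections import Counter, deque

# Two flows share one FIFO queue; the greedy flow enqueues three
# packets for every one packet of the polite flow.
fifo = deque()
for _ in range(100):
    fifo.extend(["greedy"] * 3)
    fifo.append("polite")

# The link serves strictly in arrival order.
served = Counter()
for _ in range(200):
    served[fifo.popleft()] += 1

print(served)  # the greedy flow captures 3/4 of the link's service
```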
Fairness
[Topology: flow A offers 10 Mb/s and flow B offers 100 Mb/s into router R1, whose output link toward C is 1.1 Mb/s; the figure shows each flow receiving 0.55 Mb/s]
What is the "fair" allocation: (0.55 Mb/s, 0.55 Mb/s) or (0.1 Mb/s, 1 Mb/s)?
(A flow here is e.g. an HTTP flow with a given (IP SA, IP DA, TCP SP, TCP DP).)
The definition of fairness is often somewhat confusing.
In this scenario, user A keeps sending traffic at 10 Mbps and user B pumps 100 Mbps into the same router R1, whose outgoing link rate is 1.1 Mbps.
Fairness
[Topology: flows A (10 Mb/s), B (100 Mb/s), and C (0.2 Mb/s) share router R1's 1.1 Mb/s output link toward D]
What is the "fair" allocation?
Now we add one more user, C, who requests a data rate of 0.2 Mbps.
What is the fair allocation of the link rate to these three users?
(When we talked about 802.11 fairness, we assumed every user had saturated traffic, i.e., an infinite traffic demand.)
Max-Min Fairness: a common way to allocate BW to flows

N flows share a link of rate C. Flow f wishes to send at rate W(f) and is allocated rate R(f).
1. Pick the flow, f, with the smallest requested rate.
2. If W(f) ≤ C/N, then set R(f) = W(f).
3. If W(f) > C/N, then set R(f) = C/N.
4. Set N = N − 1, C = C − R(f).
5. If N > 0, go to 1.
The most widely accepted standard is max-min fairness
If the requested rate is smaller than the equal share of the link, C/N, the flow can be assigned what it requested
Otherwise, it will be assigned its fair share, which is “what is left of C” over the number of flows
A user with small demand will get his/her share while a user with large demand will split the rest evenly
[Four flows into router R1 with a link of capacity C = 1: W(f1) = 0.1, W(f2) = 0.5, W(f3) = 10, W(f4) = 5]
Max-Min Fairness: An example
Round 1: Set R(f1) = 0.1
Round 2: Set R(f2) = 0.9/3 = 0.3
Round 3: Set R(f4) = 0.6/2 = 0.3
Round 4: Set R(f3) = 0.3/1 = 0.3
First of all, we line up the flows in increasing order of W(f).
As f1 requests a small rate, it gets what it wants; then C becomes 0.9 and N becomes 3.
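The rounds above can be reproduced with a short sketch of the algorithm (the function name is my own):

```python
def max_min_allocation(demands, capacity):
    """Max-min fair shares: visit flows in increasing order of demand,
    giving each min(its demand, fair share of what is left)."""
    alloc = {}
    c, n = capacity, len(demands)
    for f in sorted(demands, key=demands.get):
        alloc[f] = min(demands[f], c / n)   # W(f) if small, else C/N
        c -= alloc[f]
        n -= 1
    return alloc

demands = {"f1": 0.1, "f2": 0.5, "f3": 10, "f4": 5}
alloc = max_min_allocation(demands, capacity=1.0)
print(alloc)  # f1 gets 0.1; f2, f3, f4 get 0.3 each (up to float noise)
```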
Max-Min Fairness
• How can an Internet router "allocate" different rates (BW) to different flows?
• First, let's see how a router can allocate the "same" rate to different flows…
Suppose a router should allocate the rates (shares) to all flows according to the max-min fairness just described. How can we design a router to embody this functionality?
For simplicity, let's start with the equal-rate allocation case.
Fair Queueing
1. Packets belonging to a flow are placed in a FIFO. This is called "per-flow queueing".
2. FIFOs are scheduled one bit at a time, in a round-robin fashion.
3. This is called Bit-by-Bit Fair Queueing.
[Diagram: flows 1…N classified into per-flow queues, then scheduled by bit-by-bit round robin]
In reality, the router services each flow packet by packet, not bit by bit, but we will start with bit-by-bit service. The reason for bit-level rounds is to minimize the deviation (uneven service) among the flows' shares.
Weighted Bit-by-Bit Fair Queueing
• Likewise, flows can be allocated different rates by servicing a different number of bits for each flow during each round.
[Example: router R1 with allocations R(f1) = 0.1, R(f2) = 0.3, R(f3) = 0.3, R(f4) = 0.3 on a link of capacity C]
Order of service for the four queues: … f1, f2, f2, f2, f3, f3, f3, f4, f4, f4, f1, …
Also called "Generalized Processor Sharing (GPS)"
If different rates are allocated to the flows, this is called weighted fair queueing.
In this example, for every one bit of service f1 receives, each of the other flows receives three bits.
Packetized Weighted Fair Queueing (WFQ)
Problem: we need to serve a whole packet at a time.
Solution:
• Determine what time a packet, p, would complete if we served flows bit-by-bit. Call this the packet's finishing time, Fp.
• Serve packets in order of increasing finishing time.
Theorem: packet p will depart before Fp + TX(Pmax).
Also called "Packetized Generalized Processor Sharing (PGPS)"
That is, if all the bits of a packet p would be serviced by time Fp under bit-by-bit WFQ, then with packetized WFQ the packet's service finishes no later than Fp + TX(Pmax).
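For the static part of the upcoming example (ignoring the later arrivals A2 and C3 for simplicity), the bit-by-bit rounds and the resulting packetized order can be simulated directly. This is my own sketch, not code from the slides:

```python
from collections import deque

def finish_rounds(queues):
    """Serve one bit per backlogged flow per round; return the round
    in which each packet's last bit is served."""
    state = {f: deque(pkts) for f, pkts in queues.items()}
    finish, r = {}, 0
    while any(state.values()):
        r += 1
        for f in state:
            if state[f]:
                name, bits = state[f][0]
                if bits == 1:
                    finish[name] = r          # last bit served this round
                    state[f].popleft()
                else:
                    state[f][0] = (name, bits - 1)
    return finish

# Backlog at time 0 from the example: A1=4, B1=3, C1=1, C2=1, D1=1, D2=2 bits.
queues = {
    "A": [("A1", 4)],
    "B": [("B1", 3)],
    "C": [("C1", 1), ("C2", 1)],
    "D": [("D1", 1), ("D2", 2)],
}
finish = finish_rounds(queues)
order = sorted(finish, key=finish.get)   # packetized WFQ departure order
print(finish)  # {'C1': 1, 'D1': 1, 'C2': 2, 'B1': 3, 'D2': 3, 'A1': 4}
print(order)   # ['C1', 'D1', 'C2', 'B1', 'D2', 'A1']
```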
Understanding bit-by-bit WFQ
4 queues sharing 4 bits/sec of bandwidth; equal weights (1:1:1:1)
Backlog at time 0: A1 = 4 bits, B1 = 3, C1 = 1, C2 = 1, D1 = 1, D2 = 2

• Round 1: serve one bit each of A1, B1, C1, D1. D1 and C1 depart at R = 1; A2 = 2 and C3 = 2 arrive.
• Round 2: serve A1, B1, C2, D2. C2 departs at R = 2.
• Round 3: serve A1, B1, C3, D2. D2 and B1 depart at R = 3.
• Round 4: A1 departs at R = 4.
• C3 and A2 depart at R = 6.

Departure order for packet-by-packet WFQ: sort by finish round of the packets. Sorting gives C1, D1, C2, B1, D2, A1, C3, A2, so the wire carries C1 D1 C2 B1 B1 B1 D2 D2 A1 A1 A1 A1 C3 C3 A2 A2.
The use of WFQ for (weighted) fairness
• WFQ can be used to provide different rates to different flows.
• Most routers today implement WFQ and can be used to give different rates to different flows (not used much yet).
• Different definitions of a flow are possible: application flow, all packets to a destination, all packets from a source, all HTTP packets, the CEO's traffic, etc.
Normally, a flow is defined as a stream of packets between two application endpoints.
Fairness Comparison
Source of this slide and the following: Andrzej Duda
I: # of links; n_i: # of flows over link i; n_0: # of flows over all the links