A Scheduling Technique for Bandwidth Allocation in Wireless Personal Communication Networks
Nikos Passas , Nikos Loukas , and Lazaros Merakos
TECHNICAL UNIVERSITY OF CRETE
Future generation wireless personal communication networks (PCN) are expected to provide multimedia capable wireless extensions of fixed ATM/B-ISDN.
The paper presents:
· A method for transmission scheduling in PCN, similar to the "virtual leaky bucket" technique developed for fixed ATM networks
· Two alternative priority mechanisms for sharing the available bandwidth
Goals
• Fair and efficient treatment of various types of traffic on the air interface
• Supporting two kinds of sources with different traffic characteristics and service requirements:
Constant-bit-rate (CBR) voice, and
Variable-bit-rate (VBR) video
[ensuring that bandwidth allocation is consistent with their declarations at connection setup]
The PCN terminal of the future will be able to integrate voice, video and data services, and will have to coexist with fixed ATM/B-ISDN.
Design objectives
flexible multiservice capability (voice, data and multimedia)
good QoS for many service types
compatibility with future ATM/B-ISDN networks
low terminal cost/complexity/power consumption and
efficient, scalable and moderate cost network architecture
An important system design issue in PCN is the selection of a suitable bandwidth sharing technique
Advanced techniques must be employed, including:
Call Admission Control (CAC)
(PCN user requirements and available resources are of the same nature as in fixed ATM)
Bandwidth enforcement and sharing mechanisms
(must be incorporated in the medium access control (MAC) protocol of the wireless environment)
Limitations of the radio medium make the efficiency and fairness of such techniques more critical than in fixed ATM
To avoid inconsistencies and provide a common platform to users, regardless of their connection point, it is essential to consider compatibility with future fixed ATM networks
Cellular environment :
each cell consists of one BS and a number of MSs
the number of MSs in cell changes dynamically as they move from one cell to another
Sources :
voice sources transmitting at constant rate when they are active, and
video sources transmitting at variable rate
• MSs can be thought of as advanced mobile telephones (e.g., videophones), equipped with micro-cameras and mini displays, capable of voice-video applications
MSs and BSs operate on different bands:
Uplink channel: from the MSs to the BS of their cell
Downlink channel: from the BS to the MSs
• On the uplink channel, a multiple access control protocol is used in conjunction with the scheduling technique
• Downlink channel is not a multiple access channel
• The access control protocol used for controlling transmissions on the uplink channel not only has to enable MSs to share it efficiently with high statistical multiplexing gain, but also to provide MSs with QoS guarantees similar to those in fixed ATM networks.
QoS guarantees are accomplished through the combined use of:
• an appropriate connection admission control (CAC) scheme, which ensures that no new MS connections are admitted if doing so would prevent guaranteeing the QoS of already existing connections, and
• a "scheduler", located at the BS, which is responsible for allocating the uplink channel to the MSs in accordance with the QoS agreed upon at admission.
Framework of the uplink access control protocol within which the scheduler will operate
• Uplink channel is organized as a TDMA-based system
• Each cell is a “hub-based” system since all communications between MSs are done through their BS (the hub)
• Channel time is subdivided into fixed-length TDMA frames of slots.
• Slots in each frame are dynamically allocated by the BS to the MSs on the basis of transmission requests received from active MSs during the previous frame, and the QoS agreed upon at connection setup.
Each TDMA frame is subdivided into Nr request slots and Nd data slots (Nr, Nd assumed constant - in general they may vary depending on the number and the kind of active sources)
The length of the data slots is selected to be equal to an ATM “cell” (48 bytes data, 5 bytes header), plus an additional radio - specific header, which depends on the specific physical and MAC layer protocols used on the radio interface
Request slots in one uplink frame
are used by the MS sources to inform the BS about the data slots they need in the next uplink frame
are expected to be short, compared to data slots, since the only information they must carry is the source's ID and the number of requested data slots
Nr request slots per frame are shared by the active MSs, in accordance with a random access protocol (e.g., the slotted ALOHA protocol , or the stack protocol)
Nr can be chosen large enough so that the probability of an allocation request being transmitted successfully on its first attempt is close to unity, without substantial overhead
Requests and allocation of data slots :
A source transmits its allocation request for the next frame to the BS in a request slot, and waits for an acknowledgement on the downlink before the beginning of the next frame.
If a collision occurs, the source does not receive the acknowledgement and, provided its request does not correspond to a packet that has already expired, it retransmits the request in the next frame.
After receiving all the request slots of a frame, the BS must decide on how to allocate the Nd data slots of the next uplink frame
Before the beginning of the next frame, the BS sends an allocation acknowledgement to all sources, notifying them of the slots they have been assigned
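The per-frame cycle at the BS can be sketched as follows. This is an illustrative, simplified reading of the slides, not the paper's code: voice (CBR) requests are served first, and a pluggable priority mechanism shares the remaining data slots among the video sources. All names are my own.

```python
# Hypothetical sketch of the per-frame decision at the BS.
def schedule_frame(voice_requests, video_requests, n_data_slots, video_mechanism):
    """Return a dict source_id -> granted slots for the next uplink frame.

    voice_requests / video_requests map source IDs to requested slot counts;
    video_mechanism(requests, free_slots) implements Mechanism A or B.
    """
    grants = {}
    free = n_data_slots
    for src, req in voice_requests.items():   # CBR voice has strict priority
        g = min(req, free)
        grants[src] = g
        free -= g
    # Remaining slots are shared among video sources by the priority mechanism.
    grants.update(video_mechanism(video_requests, free))
    return grants
```

Any allocation rule with this signature can be plugged in as `video_mechanism`, so the two priority mechanisms described later can be compared under the same frame loop.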
Voice data traffic is considered CBR and is given priority over VBR video traffic
Requests from voice sources are satisfied first, without competition from video traffic requests.
For requests that come from video sources, a mechanism similar to the leaky bucket and the virtual leaky bucket is used.
In order to enter a fixed ATM network, an ATM cell must first obtain a token from a token pool.
A token pool for each video source is located at the BS.
If there are no available tokens in the leaky bucket, the cell must wait until a new token is generated.
Tokens are generated at a fixed rate equal to the mean cell rate of the source
The size of the token pool depends on the burstiness of the source
The state of each pool indicates how much of its declared bandwidth the corresponding source has consumed at any instant of time.
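The per-source token pool kept at the BS can be sketched as below. This is an illustrative reading: tokens are generated once per frame at the source's declared mean rate and capped at the pool size; the class and parameter names, and the per-frame granularity, are assumptions, not the paper's specification. The balance is allowed to go negative, in the virtual-leaky-bucket spirit used later in the slides.

```python
class TokenPool:
    """Hypothetical per-source token pool located at the BS."""

    def __init__(self, mean_rate_tokens_per_frame, pool_size):
        self.rate = mean_rate_tokens_per_frame  # fixed generation rate (mean cell rate)
        self.size = pool_size                   # depends on the source's burstiness
        self.tokens = pool_size                 # an idle source starts with a full pool

    def generate(self):
        """Called once per frame: add tokens at the mean rate, capped at pool size."""
        self.tokens = min(self.size, self.tokens + self.rate)

    def consume(self, n):
        """Remove n tokens for n granted slots; may go negative (virtual leaky bucket)."""
        self.tokens -= n

pool = TokenPool(mean_rate_tokens_per_frame=4, pool_size=16)
pool.consume(6)   # 6 slots granted this frame: 16 - 6 = 10 tokens left
pool.generate()   # next frame: 10 + 4 = 14 tokens
```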
Difference between the leaky bucket and the virtual leaky bucket
In the virtual leaky bucket, when the pool is empty, an arriving cell, rather than waiting (as in the leaky bucket), is permitted to enter the network with a violation tag in its header.
Violating cells are the first to be discarded if they later arrive at a congested network node.
A major difference from the leaky bucket, where each source has its own connection line to the network, is that in the TDMA system the traffic from all sources is multiplexed onto a common radio channel, so not all requesting packets can enter the network, at least not immediately.
The acceptance rate of video sources in the channel is limited to the number of slots per frame minus those slots dedicated to the voice sources.
A priority mechanism must be introduced, to decide how the available channel capacity will be allocated to the competing requests from different video sources.
The unaccepted requests will have to wait until their priority becomes higher or until they expire.
The paper introduces two mechanisms, which are based on the state of the token pools and the current requests from all sources.
The main objectives of these mechanisms are: to guarantee fair treatment of all sources under heavy traffic conditions, based on the declarations made at connection setup, and to permit sources to transmit above their negotiated throughput when capacity is available.
Priority Mechanism A is based on the philosophy that a source with more tokens relative to its requests has higher priority, since it is below its declarations, and therefore the system should try to satisfy its requests as soon as possible.
Si : source i
Ti : the number of tokens in the pool of Si at the time a request slot from Si arrives
Ri : the number of requests declared in that slot
Pi : the state that the token pool of Si will be in if all of its requests are satisfied:
Pi = Ti - Ri
Let us assume that M sources have requested slots for the next frame, with priorities P1, P2, …, PM, and let P1 ≥ P2 ≥ … ≥ PM.
The mechanism will first try to allocate slots in the next frame for all requests of source S1, since it has the highest priority. When all requests of source S1 are satisfied and if there are still available
slots in the next frame, source S2 will be selected , then S3 and so on, until the requests of all sources are satisfied, or until all the available slots of the next frame have been allocated.
In case the priorities of some sources are equal , the source with the most requests is serviced first.
Example
If for source Sk and Sl,
Pk = Pl and Rk>Rl
the mechanism will first allocate slots for all requests of source Sk and then for all requests of source Sl.
In the special case where Pk = Pl and Rk = Rl (leading to Tk=Tl),
the mechanism randomly chooses one source to service first.
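Priority Mechanism A, including both tie-breaking rules, can be sketched in a few lines. This is a minimal illustration under the rules stated above; the function and variable names are my own, not from the paper.

```python
import random

def mechanism_a(requests, tokens, available_slots):
    """Allocate data slots to video sources per Priority Mechanism A.

    requests / tokens: dicts source_id -> Ri / Ti.
    Returns a dict source_id -> granted slots.
    """
    order = sorted(
        requests,
        key=lambda s: (tokens[s] - requests[s],  # priority Pi = Ti - Ri, highest first
                       requests[s],              # equal Pi: more requests first
                       random.random()),         # equal Pi and Ri: random choice
        reverse=True,
    )
    grants = {}
    for s in order:
        if available_slots == 0:
            break
        grants[s] = min(requests[s], available_slots)
        available_slots -= grants[s]
    return grants
```

For example, with Ta = 10, Ra = 2 (Pa = 8) and Tb = 3, Rb = 2 (Pb = 1) and 3 free slots, source a is fully served first and b receives the single remaining slot.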
Priority Mechanism A seems reasonable, since it is based both on the negotiations made at connection setup, expressed by the token pools, and on current needs, expressed by the request slots of each source. A possible weakness is that, when a source becomes active after a long idle period, it will probably take all the slots it requests, since its token pool is almost full, resulting in many temporary denials for other sources.
Priority Mechanism B
Tries to solve the above problem of Priority Mechanism A by gradually allocating slots, based on the state of the token pool of each source.
The available slots of the frame are spread over more sources, avoiding abrupt denials, which can affect the QoS offered to the end user.
Let S1, S2, …, SM be the sources requesting slots in one frame, and T1 ≥ T2 ≥ … ≥ TM the corresponding tokens. The mechanism starts by allocating T1 - T2 slots to source S1 (assuming there are that many requests and available slots). If T1 = T2, no slots are allocated at this stage. It then allocates T2 - T3 slots to each of sources S1 and S2 (one slot at a time, in round-robin fashion), then T3 - T4 slots to each of S1, S2 and S3, and so on, until all requests are satisfied, or until all the available slots of the next frame have been allocated.
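One way to realize this gradual, round-robin spreading is to hand out one slot at a time to the pending source holding the most tokens, decrementing its token variable after each grant; the sequence of grants this produces matches the T1 - T2, T2 - T3, … pattern described above. A hedged sketch, with names of my own choosing:

```python
def mechanism_b(requests, tokens, available_slots):
    """Allocate data slots to video sources per Priority Mechanism B.

    Each slot goes to the pending source with the most remaining tokens;
    its token variable is then decremented, so a source that wins many
    slots falls back in priority within the same frame.
    """
    pending = dict(requests)          # outstanding requests per source
    toks = dict(tokens)               # working copy of token variables
    grants = {s: 0 for s in requests}
    for _ in range(available_slots):
        live = [s for s in pending if pending[s] > 0]
        if not live:                  # all requests satisfied
            break
        s = max(live, key=lambda x: toks[x])
        grants[s] += 1
        pending[s] -= 1
        toks[s] -= 1                  # decrement per allocated slot
    return grants
```

For example, with Ta = 5, Tb = 3 and both sources requesting 4 slots, 4 free slots are split 3/1: source a takes its 2-token lead (T1 - T2 = 2 slots), after which the two sources alternate.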
For every slot allocated to a source, the corresponding token variable is decremented by one. This ensures fair treatment of all sources since, even if a source is assigned many slots in one frame, it will have lower priority in the following ones.
Results
In both mechanisms, no request is blocked if slots are available. Even if a source's priority (Mechanism A) or token variable (Mechanism B) is negative, available slots are still allocated to the source, according to Mechanisms A and B.
The proposed technique is therefore more similar to the virtual leaky bucket method than to the leaky bucket.
Simulation Model
Channel speed C = 1.92 Mb/sec
Frame length L = 12 msec
Data slot size = 53 bytes (48 bytes payload) to fit an ATM cell
A frame can contain
Nd = (L × C) / (data slot size) = (12 msec × 1.92 Mb/sec) / (8 × 53 bits) = 23040 / 424 ≈ 54 data slots
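The frame-capacity arithmetic above can be checked directly (variable names are mine; the figure ignores any radio-specific per-slot overhead mentioned earlier in the slides):

```python
# Frame capacity check, using the slide's parameters.
C = 1.92e6           # channel speed, bits/sec
L = 12e-3            # frame length, sec
slot_bits = 53 * 8   # ATM cell: 48 bytes payload + 5 bytes header = 424 bits

Nd = int(C * L // slot_bits)  # whole data slots per frame
print(Nd)  # 54
```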
The length of request slots was set to 12 bits: 6 bits for the source's ID, and 6 bits for the number of requests.
There are two kinds of sources:
CBR voice sources, producing 32 Kb/sec (1 slot/frame) in the active state
VBR video sources, with mean rate μ = 128 Kb/sec, peak rate = 512 Kb/sec, standard deviation σ = 64 Kb/sec, and autocovariance C(τ) = σ²e^(-aτ) (a = 3.9 sec⁻¹)
To model the traffic from the video sources, independent discrete-time batch Markovian arrival processes (D-BMAPs) were used.
Time-of-expiry
for both voice and video packets was chosen to be between 2 and 3 frames
In all examples, the number of voice sources was equal to the number of video sources, since MSs are modeled as videophones, each having one voice and one video source.
Ploss is the long-term average fraction of packets lost (due to time-of-expiry violation) from all sources combined.
The two mechanisms induce the same Ploss, since the total number of slots allocated per frame is the same in both, and packets are lost if the corresponding requests are not granted in the next frame.
Equivalent bandwidth is a unified metric representing the effective bandwidth of a connection, based on its parameters declared at connection setup.
For example:
With Ploss = 10^-3 and the previously mentioned parameters:
Equivalent bandwidth = 349.44 Kb/sec, or 10.92 slots/frame
For 5 active voice (CBR) sources: 53 - 5 = 48 slots remain
48 / 10.92 = 4.39 video (VBR) sources
The utilization of the available bandwidth was found to be identical for both mechanisms, because no request blocking is performed when slots are available.
How are lost packets spread in time?
The variance of denials of source Si is considered as the variance in time of
the number of requests that are denied.
Di,k : the number of slots requested by Si to be allocated in frame k, but denied by the scheduler due to slot unavailability
D̄i(n) : the sample mean of Di,k over n frames
V_Di(n) : the sample variance of Di,k over n frames
D̄i(n) = (1/n) Σ(k=1..n) Di,k

V_Di(n) = (1/n) Σ(k=1..n) (Di,k - D̄i(n))²
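The two statistics above can be computed directly from a per-frame record of denials; a minimal sketch (the function name is illustrative):

```python
def denial_stats(denials):
    """Sample mean and variance of the per-frame denied requests D_{i,k}.

    denials: list where element k is D_{i,k}, the slots requested by
    source Si for frame k but denied due to slot unavailability.
    """
    n = len(denials)
    mean = sum(denials) / n                          # D̄i(n)
    var = sum((d - mean) ** 2 for d in denials) / n  # V_Di(n)
    return mean, var

m, v = denial_stats([0, 4, 0, 4])  # bursty denials: mean 2.0, variance 4.0
```

Two schedulers with the same total denials can differ sharply in this variance, which is exactly the comparison made between Mechanisms A and B below.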
Priority Mechanism B results in milder variations of denials compared to those of Priority Mechanism A.
This is because Mechanism B tries to spread the slots of a frame to more sources than Mechanism A .
Smaller denials can be more easily absorbed by the end user.
Large denials of Mechanism A can result in temporary degradation in quality, which can be rather annoying to the end user
A promising idea for combining the two mechanisms is a method that gradually allocates slots to each source, as in Mechanism B, while using the priorities of Mechanism A.