
CHAPTER 4

ACTIVE QUEUE MANAGEMENT

4.1 INTRODUCTION

Providing QoS for existing and emerging Internet applications has been a major challenge. Real-time applications such as audio/video conversations and on-demand movies are interactive, have delay constraints, or require high throughput. Many mechanisms have been proposed to enable the Internet to support applications with varying service requirements, including admission control, queue management algorithms and scheduling. Admission control is meant to prevent network overload so as to ensure service for applications already admitted. It is deployed at the network edges, where a new flow is admitted when network resources are available and rejected otherwise. Active queue management has been proposed to support end-to-end congestion control in the Internet; the aim is to anticipate and signal congestion before it actually occurs. Scheduling determines the order in which the packets from different flows are served.

A network with quality of service has the ability to deliver data

traffic with a minimum amount of delay in an environment in which many

users share the same network. Low-priority traffic is "drop eligible," while

high-priority traffic gets the best service. However, if the network does not

have enough bandwidth, even high-priority traffic may not get through.


Traffic engineering, which enables QoS, is about making sure that the

network can deliver the expected traffic loads.

Multimedia applications require better quality of service, which can be provided by properly classifying and scheduling packets, shaping the traffic, managing the bandwidth, and reserving resources. All of these techniques depend on the fundamental task of packet classification at the router, which categorizes packets into flows. End-to-end congestion control in the Internet requires some form of feedback from the congested link to the traffic sources so that they can adjust their sending rates. Explicit feedback mechanisms, such as Active Queue Management algorithms, can therefore be used in the network.

A QoS-enabled network is composed of various functions for providing different types of service to different packets, such as a rate controller, classifier, scheduler, and admission control. Of these, the scheduler determines the order in which packets are processed at a node and/or transmitted over a link. Which packets remain to be processed is governed by the congestion avoidance and packet drop policy (also called Active Queue Management) at the node. Here, D-QoS can be applied regardless of the existing network settings.

As the volume of traffic and the number of simultaneously active flows on Internet backbones increase, the problem of recognizing and addressing congestion within the network becomes increasingly important. There are two

major approaches to managing congestion. One is to manage bandwidth

through explicit resource reservation and allocation mechanisms. The other

approach is based on management of the queue of outbound packets for a

particular link. This latter approach has recently become the subject of much

interest within the Internet research community. For example, there is an

increasing focus on the problem of recognizing and accommodating “well-


behaved” flows that respond to congestion by reducing the load they place on

the network.

We are investigating a different approach: the development of active queue management policies, using a network processor, that attempt to balance the performance requirements of continuous media applications.

Specifically, we are experimenting with extensions to the RED queue

management scheme for providing better performance for UDP flows without

sacrificing performance for TCP flows. Thus, independent of the level of TCP

traffic, a configurable number of tagged UDP connections are guaranteed to

be able to transmit packets at a minimum rate.

The goals of our approach, termed Class-Based Thresholds (CBT),

are similar to other schemes for realizing “better-than-best-effort” service

within IP, including packet scheduling and prioritization schemes such as

Class-Based Queuing (CBQ). While we recognize packet scheduling

techniques such as CBQ as the standard by which to measure resource

allocation approaches, we are interested in determining how close we can

come to the performance of these approaches using thresholds on a FIFO

queue rather than scheduling. Moreover, we believe our design, which uses an active queue management approach, to be simpler and more efficient than other schemes.

4.2 QUEUE MANAGEMENT TECHNIQUES

Queuing Mechanism

To implement QoS in network nodes, the major function that must be addressed is the queuing mechanism. Queuing is a process of storing

packets for subsequent processing [Ferguson and Huston, 1998]. Typically,

queuing occurs when packets are received by a network node and also occurs

prior to forwarding the packets to the next node. Thus, the network is


considered to operate in a store-and-forward manner. Queuing is one of the key components for achieving QoS. Even though queuing may happen twice on a network node, the queue for output traffic is normally the focus of attention since it plays a crucial role in traffic management and QoS-related issues. A queuing mechanism can be used to isolate traffic, allowing a level of control over individual traffic classes, and is therefore well suited to traffic management.

Apart from the queuing mechanism used, the queue length, which is a configurable characteristic of a queue, also plays an important role in providing QoS. Correctly configuring the queue length is difficult due to the randomness of network traffic. A queue that is too deep results in long, and possibly unacceptable, delay and jitter, which may consequently lead to failure of the data transmission. On the other hand, if a queue is too shallow, the network may exhibit packet drops that, of course, affect the performance of the applications. This section presents a number of queuing mechanisms that are used for QoS implementation in this study.

First-In, First-Out (FIFO)

FIFO is also known as First-Come, First-Served (FCFS). It is

considered a standard method for store-and-forward mechanism. It provides

no concept of priority or classes of traffic. It operates on a single queue where

the transmission of packets departing the queue is in the same order as their

arrival order regardless of bandwidth consumption or the associated delays.

FIFO is said to have a tail drop characteristic as an arriving packet is dropped

when the queue is full. Since all traffic is treated equally, non-responsive

traffic can fill a big part of the queue and consume a large portion of the

available buffer, and also bandwidth. Non-responsive traffic refers to the traffic of applications that have no data-rate adaptation scheme, usually those running over UDP. As more bandwidth is consumed by these applications, other traffic may be dropped because the queue fills up.


FIFO can be used effectively on a queue with high transmission capacity, resulting in small, acceptable delay and minimal congestion. However, it

provides no guarantee on traffic differentiation and, therefore, cannot be used

for QoS implementation.
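
The tail-drop behaviour described above can be summarised in a few lines of C. The sketch below is purely illustrative; the fixed queue depth, the packet_t type and the function names are assumptions made for this example, not part of any particular router implementation.

#include <stdbool.h>
#include <stddef.h>

#define MAX_QUEUE_LEN 128                 /* configurable queue depth */

typedef struct packet packet_t;           /* opaque packet descriptor */

typedef struct {
    packet_t *slots[MAX_QUEUE_LEN];
    size_t    head, tail, count;
} fifo_queue_t;

/* FIFO with tail drop: an arriving packet is discarded when the queue is
   already full, regardless of which flow it belongs to. */
static bool fifo_enqueue(fifo_queue_t *q, packet_t *p)
{
    if (q->count == MAX_QUEUE_LEN)
        return false;                     /* tail drop */
    q->slots[q->tail] = p;
    q->tail = (q->tail + 1) % MAX_QUEUE_LEN;
    q->count++;
    return true;
}

/* Packets leave in exactly the order they arrived (first come, first served). */
static packet_t *fifo_dequeue(fifo_queue_t *q)
{
    if (q->count == 0)
        return NULL;
    packet_t *p = q->slots[q->head];
    q->head = (q->head + 1) % MAX_QUEUE_LEN;
    q->count--;
    return p;
}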

Class-Based Queuing (CBQ)

CBQ [Floyd and Jacobson 1995] is a traffic management algorithm

developed by the Network Research Group at Lawrence Berkeley National

Laboratory, as a variant of priority queuing. This mechanism divides traffic

into a hierarchy of classes and specifies how they share bandwidth in a

flexible manner. Each class has its own queue and can represent an individual

flow or an aggregation of flows. This algorithm intends to prevent complete

resource denial to any particular class of service.

CBQ has two key attributes associated with each class: a priority level and a borrowing property. Priority levels allow higher-priority queues to be serviced first, within the limits of their bandwidth allocation, so that the delay for time-sensitive traffic classes is reduced. The borrowing property of a class is the mechanism used to distribute excess bandwidth: a class with the borrowing property may borrow bandwidth, according to the hierarchy, when it needs more. Available bandwidth is granted to higher-priority classes before lower-priority ones. The flexibility of CBQ, however, introduces additional overhead that may affect queuing performance.
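
As a rough illustration of these two attributes, the fragment below sketches a CBQ-style class record carrying a priority level and a borrowing flag, together with a simplified selection rule. The names and the rule are assumptions for illustration only and do not reproduce the actual link-sharing scheduler of Floyd and Jacobson.

#include <stddef.h>

typedef struct cbq_class {
    int    priority;            /* 0 = highest priority level               */
    int    can_borrow;          /* borrowing property of the class          */
    double alloc_bps;           /* configured share of the link bandwidth   */
    double used_bps;            /* measured recent throughput               */
    struct cbq_class *parent;   /* class hierarchy consulted when borrowing */
} cbq_class_t;

/* Pick the next class to service: prefer the highest-priority class that is
   within its allocation; if every class is over its limit, let a class with
   the borrowing property use the excess bandwidth, again by priority. */
static cbq_class_t *cbq_select(cbq_class_t *cls[], int n)
{
    cbq_class_t *best = NULL, *borrower = NULL;

    for (int i = 0; i < n; i++) {
        cbq_class_t *c = cls[i];
        if (c->used_bps <= c->alloc_bps) {
            if (best == NULL || c->priority < best->priority)
                best = c;                   /* under-limit class */
        } else if (c->can_borrow) {
            if (borrower == NULL || c->priority < borrower->priority)
                borrower = c;               /* candidate for excess bandwidth */
        }
    }
    return best != NULL ? best : borrower;
}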

Absolute Priority Queuing

In absolute priority queuing (Figure 4.1), the scheduler prefers queues with higher priority; packets from lower-priority queues are forwarded only when the higher-priority queues are empty. Therefore, the packets placed in the queue with the highest priority receive the best service, while packets sent to lower-priority queues have to make do with the remaining resources. The basic working mechanism is as follows: the scheduler always scans the queues from highest to lowest priority and transmits packets if any are available. Once a packet has been sent, the scheduler starts again at the queue with the highest priority. This queuing mechanism is useful for giving a specific flow or service class absolute priority over other traffic, which is important for bandwidth- and delay-critical data.

Figure 4.1 Absolute priority queuing

As long as the high-priority packets do not exceed the outgoing bandwidth, they are forwarded with minimal delay. When setting up Differentiated Services networks, this algorithm might be used to provide an Expedited Forwarding (EF) service. Of course, the EF traffic has to be policed, as misbehaving EF senders would otherwise be able to completely starve traffic classes with lower priority of bandwidth.
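
A minimal sketch of this scanning rule in C follows; the number of priority levels and the queue_dequeue() helper (for example, the FIFO dequeue from the sketch earlier in this section) are placeholders rather than an actual scheduler interface.

#include <stddef.h>

typedef struct packet packet_t;          /* opaque packet descriptor          */

#define NUM_PRIO_QUEUES 4                /* queue 0 has the highest priority  */

/* Per-queue dequeue provided elsewhere; returns NULL when the queue is empty. */
extern packet_t *queue_dequeue(int prio);

/* Absolute priority scheduling: scan from the highest to the lowest priority
   and transmit the first packet found.  After every transmission the scan
   restarts at the highest-priority queue, so lower priorities are served only
   while all higher-priority queues are empty. */
static packet_t *priority_select(void)
{
    for (int prio = 0; prio < NUM_PRIO_QUEUES; prio++) {
        packet_t *p = queue_dequeue(prio);
        if (p != NULL)
            return p;
    }
    return NULL;                         /* all queues empty */
}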

4.3 ACTIVE QUEUE MANAGEMENT (AQM)

An implementation of the proposed active queue management system for enforcing QoS guarantees can make use of the following NP hardware support. Hardware-supported flow control is invoked when a packet is enqueued at ingress and egress. The transmit probabilities used for flow control are taken from a table stored in fast-access memory, i.e., the Transmit Probability table. The key used to index into the transmit probability table is composed of packet header bits (e.g., the IETF DiffServ code point); access to these header bits is supported by a header-parser hardware assist. Furthermore, detailed flow accounting information about individual offered loads is provided via hardware counters. Such counters, as well as queue level indicators and queue depletion indicators at various locations in an NP (e.g., the ingress general queue, ingress target NP queue, egress flow queue and egress port queue), can be used to update the transmit probability memory. Traditional active queue management systems use general queue levels to determine transmit probabilities. However, such an approach is not feasible for high-speed routers with significantly varying offered loads.
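
To make the table-driven flow control concrete, the sketch below indexes a transmit-probability table with the DiffServ code point extracted by the header parser and makes a probabilistic transmit/discard decision. The table size, the use of rand() and the function names are assumptions for illustration, not the NP's actual API.

#include <stdbool.h>
#include <stdlib.h>

#define DSCP_ENTRIES 64                      /* 6-bit DiffServ code point */

/* Transmit Probability table kept in fast-access memory; the control plane
   rewrites these entries periodically (see Section 4.3.6). */
static double transmit_prob[DSCP_ENTRIES];

/* Flow-control decision taken when a packet is enqueued: transmit the packet
   with the probability associated with its code point. */
static bool flow_control_admit(unsigned int dscp)
{
    double p = transmit_prob[dscp & 0x3F];
    double r = (double)rand() / (double)RAND_MAX;   /* uniform in [0,1]            */
    return r < p;                                   /* true = enqueue, false = drop */
}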

4.3.1 Application of AQM to Multimedia Applications

Our AQM provides preferential treatment to multimedia applications by providing lower packet loss, using strategically placed packet marking and active queue management functions. Packet marking and AQM are recommended at the particular places where congestion can occur, most notably at bottleneck links between Internet Service Providers (ISPs), in regional networks (e.g., lower-tier ISPs, corporate networks, or non-profit regional educational networks), or in some types of access networks (e.g., DSL, cable modem or wireless).

The implementation of AQM for high-priority flows seeks to drop packets of low-priority flows as much as needed to allow as much high-priority traffic as possible to proceed through the router. The goal is to provide lower packet loss for high-priority traffic so that multimedia communications can proceed more dependably. It should be noted, however, that multimedia traffic in general does not need better delay or jitter performance than normal traffic.


Figure 4.2 Dropping probability for different traffic flows

The use of AQM, therefore, is meant to achieve lower packet loss for high-priority traffic by acting as a filtering mechanism that performs prioritized packet discarding (Figure 4.2). The filtering also seeks, however, to provide the best possible service to low-priority traffic by not unnecessarily favoring emergency or high-priority traffic.

4.3.2 Differentiated Services Traffic Conditioning Using Network

Processor

QoS guarantees for diversified user traffic can be achieved in two

ways: (1) using overlay networks with bandwidth over provisioning, or (2)

using intelligent traffic management functions. Most service providers build

their networks today using the first approach, in which they assign different

types of traffic to different networks (e.g., voice and data) and provide

sufficient additional bandwidth in the network to avoid the need for QoS

mechanisms. This is, however, an expensive solution. The second approach

involves using intelligent traffic management functions such as classification,

metering, marking, policing, scheduling, and shaping.


For the latter approach, an NP may include specialized units to support some of these functions in hardware (Figure 4.3).

Figure 4.3 Network processor support for traffic management

A packet classifier classifies packets based on the content of some portion of the packet header, according to specified rules, and identifies packets that belong to the same traffic flow. A predefined traffic profile then specifies the properties (such as rate and burst size) of the traffic flow selected by the classifier. The traffic profile has preconfigured rules for determining whether a packet conforms to the profile.

A traffic meter measures the rate of traffic flow, selected by the

classifier, against the traffic profile to determine whether a particular packet is

conforming to the profile.

A marker marks a packet based on the metering result. The marker

has rules for marking packets based on whether they exceed the contracted

traffic profile. There are several standards and proprietary schemes for packet

marking. For example, Diffserv defines a single rate three color marker

(srTCM) which marks IP packets either green, yellow or red according to

three traffic parameters: Committed Information Rate (CIR), Committed

Burst Size (CBS), and Excess Burst Size (EBS). The srTCM-marking


scheme marks an IP packet green if it does not exceed CBS, yellow if it

exceeds CBS but not EBS, and red otherwise. These markings are then used

to implement congestion control and queuing policies as established by the

network administrator.
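
A simplified, color-blind srTCM sketch in C is given below, using two token buckets of depth CBS and EBS refilled at the CIR, in the spirit of RFC 2697; the structure, the field names and the use of floating-point time are illustrative assumptions.

typedef enum { GREEN, YELLOW, RED } color_t;

typedef struct {
    double cir;        /* Committed Information Rate (bytes per second) */
    double cbs, ebs;   /* Committed and Excess Burst Sizes (bytes)      */
    double tc, te;     /* current token counts of the two buckets       */
    double last_time;  /* arrival time of the previous packet (seconds) */
} srtcm_t;

/* Color-blind single rate three color marker: green if the packet fits in the
   committed bucket, yellow if it only fits in the excess bucket, red otherwise.
   Tokens accumulate at CIR, first into tc, with overflow spilling into te. */
static color_t srtcm_mark(srtcm_t *m, double now, double pkt_bytes)
{
    m->tc += (now - m->last_time) * m->cir;
    m->last_time = now;

    if (m->tc > m->cbs) {                 /* committed bucket full: overflow */
        m->te += m->tc - m->cbs;
        m->tc  = m->cbs;
        if (m->te > m->ebs)
            m->te = m->ebs;
    }

    if (pkt_bytes <= m->tc) { m->tc -= pkt_bytes; return GREEN;  }
    if (pkt_bytes <= m->te) { m->te -= pkt_bytes; return YELLOW; }
    return RED;
}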

During periods of congestion, the policer discards packets entering

the NP input queues. Packets are discarded in a fair fashion based on their

marking (e.g., red packets first, yellow packets next and green packets last)

when the input queues exceed some threshold values. As a result, fairness of

resource usage is achieved since conforming-traffic is forwarded and non-

conforming traffic is discarded. A congestion avoidance algorithm can be

used for IP flows to control the average size of input queues. Such schemes

start discarding packets from selected traffic flows when the input queues

begin to exceed some threshold values.
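
The color-based discard policy can be pictured as a per-color threshold on the input-queue fill level, so that red packets are dropped first, yellow next and green last as the queue fills. The threshold values below are arbitrary examples, and the color_t enumeration is the one from the marker sketch above.

#include <stdbool.h>
#include <stddef.h>

/* Example discard thresholds as a fraction of the input-queue capacity,
   indexed by color (GREEN, YELLOW, RED). */
static const double drop_threshold[3] = {
    0.90,   /* GREEN  : discarded only when the queue is nearly full */
    0.70,   /* YELLOW : discarded earlier                            */
    0.50,   /* RED    : discarded first                              */
};

/* Returns true when the policer should discard the packet. */
static bool policer_discard(color_t color, size_t qlen, size_t qcap)
{
    double fill = (double)qlen / (double)qcap;
    return fill > drop_threshold[color];
}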

Packet scheduling schemes are used to intelligently control how

packets get serviced once they arrive at the NP output ports. To meet the

various QoS requirements of a multi-service platform, a scheduler may

support a mix of flexible queuing disciplines.

The shaper smoothes the traffic flow exiting the system in order to

bring the traffic flow into compliance with the contracted traffic profile. This

also allows the network administrator to control the traffic rate of an outbound

interface.

4.3.3 Class-Based Thresholds

In this Class-Based Thresholds (CBT) algorithm, we define a set of

traffic classes: responsive (i.e., TCP), multimedia, and other. For each class a

threshold is specified that limits the average queue occupancy by that class. In


CBT, when a packet arrives at the router it is classified into one of these

classes. Next, the packet is enqueued or discarded depending on that class's

recent average queue occupancy relative to the class's threshold value. Our

approach is to isolate high priority flows from the effects of all other flows by

constraining the average number of low priority packets that may reside

simultaneously in the queue. We also wanted to isolate classes of low priority

traffic from one another, specifically isolating continuous media traffic from

all other traffic. To do this we tag continuous media streams before they reach

the router so that they can be classified appropriately. These flows are either

self-identified at the end-system or identified by network administrators.

4.3.4 Packet Classification

DiffServ redefines and restructures the Type of Service (TOS) octet of the IPv4 packet header, or the Traffic Class octet of the IPv6 header. The field is called the Differentiated Services field, or DS field. The DS field contains a 6-bit Differentiated Services Code Point (DSCP) and a currently unused 2-bit field. The DSCP is mainly used for packet processing in the microengine (Figure 4.4). Details of the layout of these fields, shown in Figure 4.5, are standardized.

Figure 4.4 Packet processing in NP

Packet classification is performed on each packet using either the value of the code point alone or a combination of the code point with one or more header fields, namely the source address, destination address, protocol ID, source port number and destination port number, and other information such as the incoming interface.

Figure 4.5 Packet classification based on TOS value

Packets from various flows are classified into a set of predefined

classes according to the value of their code points. The semantic of each code

point can be taken from a set of mandatory values or it can be based on local

meaning specified by network administrators.
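
A short sketch of this step is shown below: the 6-bit code point is taken from the upper bits of the IPv4 TOS octet (or the IPv6 Traffic Class octet) and may be combined with header fields into a flow key. The key layout is an assumption made for illustration.

#include <stdint.h>

/* The DSCP occupies the upper six bits of the redefined TOS / Traffic Class
   octet; the remaining two bits are currently unused. */
static inline uint8_t dscp_from_tos(uint8_t tos_octet)
{
    return (uint8_t)(tos_octet >> 2);
}

/* Illustrative multi-field classification key: the code point plus the usual
   fields taken from the IP and transport headers. */
typedef struct {
    uint8_t  dscp;
    uint32_t src_addr, dst_addr;
    uint8_t  protocol;
    uint16_t src_port, dst_port;
} flow_key_t;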

4.3.5 Traffic Class Identification

Statistics are maintained for these classes of traffic, and their throughput is constrained during times of congestion by limiting the average number of packets they can have enqueued. Untagged packets are likewise constrained by a (different) threshold on the average number of untagged packets enqueued. These thresholds only determine the ratios between the classes when all classes are operating at capacity (and maintaining a full queue). When one class is operating below capacity, other classes can borrow that class's unused bandwidth. Whenever a packet arrives, it is classified as either tagged (continuous media) or untagged (all others). For the non-priority classes, the average number of packets enqueued for that class is updated and compared against the class threshold. If the average exceeds the threshold, the incoming packet is dropped. If the average does not exceed the threshold, the packet is subjected to the RED discard algorithm. Thus a low-priority packet can be dropped either because there are too many packets from its class already in the queue or because the RED algorithm indicates that this packet should be dropped. Priority packets are dropped only as a result of the RED discard test. In all cases, although packets are classified, there is still only one queue per outbound link in the router and all packets are enqueued and dequeued in a FIFO manner.
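
A compact sketch of this per-packet decision follows; the class enumeration, the per-class averages, the thresholds and the red_discard_test() helper are placeholders standing in for the actual CBT state, and the sketch corresponds to the "CBT with RED for all flows" variant discussed below.

#include <stdbool.h>

typedef struct packet packet_t;                       /* opaque packet descriptor */
typedef enum { CLASS_PRIORITY, CLASS_TAGGED, CLASS_UNTAGGED, NUM_CLASSES } cbt_class_t;

static double class_avg[NUM_CLASSES];                 /* average packets enqueued per class */
static double class_threshold[NUM_CLASSES];           /* per-class occupancy limits         */

extern bool red_discard_test(const packet_t *p);      /* standard RED test          */
extern void update_class_average(cbt_class_t c);      /* EWMA of class occupancy    */

/* CBT enqueue decision: tagged and untagged packets are first policed against
   their class occupancy threshold and then subjected to the RED test; priority
   (TCP) packets only face the RED test.  All admitted packets share one FIFO. */
static bool cbt_admit(cbt_class_t c, const packet_t *p)
{
    if (c != CLASS_PRIORITY) {
        update_class_average(c);
        if (class_avg[c] > class_threshold[c])
            return false;                             /* too many packets of this class */
    }
    return !red_discard_test(p);
}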

RED discard test: In addition, the weighted average queue length used in the RED algorithm is computed by counting only the number of high-priority packets currently in the queue. This prevents an increase in multimedia traffic from increasing the drop rate for a conformant tagged flow, and it prevents the presence of tagged and untagged packets from driving up the RED average, which would result in additional drops of priority packets. For example, with thresholds of 10 packets for tagged and 2 packets for untagged traffic, these packets alone can maintain an average queue size of 12. Since these flows do not respond to drops, the resulting probabilistic drops only affect priority traffic, forcing the priority flows to constantly respond to congestion. Removing these packets from the average reduces their impact on the high-priority traffic. Thus, when high-bandwidth low-priority flows are active, our first variant reduces to reserving a minimum number of queue slots for high-priority flows and performing the RED algorithm only on them. In this way, we increase the isolation of the classes.
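
In other words, the usual RED exponentially weighted moving average, avg = (1 - w_q) * avg + w_q * q, is fed only with the number of priority packets currently queued. A sketch with assumed example constants follows; the weight and thresholds are illustrative, not the values used in the experiments.

#include <stdbool.h>
#include <stdlib.h>

static double red_avg;                   /* weighted average queue length   */
static const double w_q    = 0.002;      /* RED averaging weight (example)  */
static const double min_th = 5.0;        /* RED minimum threshold, packets  */
static const double max_th = 15.0;       /* RED maximum threshold, packets  */
static const double max_p  = 0.10;       /* maximum drop probability        */

/* Update the RED average using only the priority packets in the queue, so that
   tagged/untagged occupancy cannot drive up the average and cause additional
   drops of priority traffic. */
static void red_update(unsigned int priority_pkts_in_queue)
{
    red_avg = (1.0 - w_q) * red_avg + w_q * (double)priority_pkts_in_queue;
}

/* Simplified RED drop decision (without the count-based spreading of drops
   used by the full algorithm). */
static bool red_discard(void)
{
    if (red_avg < min_th)  return false;
    if (red_avg >= max_th) return true;
    double p = max_p * (red_avg - min_th) / (max_th - min_th);
    return ((double)rand() / (double)RAND_MAX) < p;
}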

Table 4.1 CBT with RED for all flows

    Queue size                        100     200     300     500
    Average queue length               25      46      69     108
    Percentage of dropped packets    5.5%    4.6%    2.9%    1.3%

We refer to the main CBT algorithm described above (see Table 4.1) as "CBT with RED for all flows." We also consider the effect of not subjecting the low-priority flows to a RED discard test. The motivation here comes from the observation that, since the low-priority flows are assumed (in the worst case) to be unresponsive, performing a RED discard test after policing the flow to occupy no more than its maximum allocation of queue slots provides no benefit to the flow and can penalize a flow even when its class is conformant. While the additional RED test may benefit high-priority flows, they are already protected since the low-priority flows are policed.

4.3.6 QoS Provisioning

The simplest way to provide useful QoS is for each queue to have an aggregate minimum bandwidth guarantee, min, and an aggregate bandwidth maximum, max, with 0 ≤ min ≤ max. If multiple minimum bandwidth guarantees are represented by constituent flows in a queue, then these must be summed to obtain the queue minimum. As network end-to-end paths with guarantees are added or deleted, ensuring the guarantees of the minimums at each NP in the network is the complex yet hidden work of network control.

The implemented active queue management system uses a modest amount of control theory to adapt transmit probabilities to the current offered loads. A signal is defined that indicates the existence of excess bandwidth. This signal can use flow measurements, queue occupancy, the rate of change of queue occupancy, or other factors. If there is excess bandwidth, then flows that are not already at their maximums are allowed to increase linearly. Otherwise, the rates allocated to flows not already below their minimums must decrease exponentially.
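
A sketch of this control law is shown below, with assumed constants for the linear-increase step and the exponential-decrease factor; excess_bandwidth_signal() stands in for whatever combination of queue occupancy and rate measurements the implementation actually uses.

#include <stdbool.h>

extern bool excess_bandwidth_signal(void);    /* derived from queue/flow state */

/* Per-flow transmit-probability adaptation: while excess bandwidth exists,
   flows below their maximum increase linearly; otherwise flows above their
   minimum decrease exponentially.  The constants are example values. */
static void adapt_transmit_prob(double prob[], const double offered[],
                                const double min_bw[], const double max_bw[],
                                int nflows)
{
    const double inc_step = 0.02;     /* linear increase per update interval */
    const double dec_mult = 0.95;     /* exponential decrease factor         */
    bool excess = excess_bandwidth_signal();

    for (int i = 0; i < nflows; i++) {
        double rate = prob[i] * offered[i];      /* currently admitted rate */

        if (excess) {
            if (rate < max_bw[i])
                prob[i] += inc_step;             /* linear increase      */
        } else if (rate > min_bw[i]) {
            prob[i] *= dec_mult;                 /* exponential decrease */
        }

        if (prob[i] > 1.0) prob[i] = 1.0;        /* keep probability valid */
        if (prob[i] < 0.0) prob[i] = 0.0;
    }
}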

For example, let us suppose four strict-priority flows in egress are

all destined to the same 100 Mbps target port as shown in Table 4.2. All

bandwidth units are in Mbps. If the four flows offer rates of 15, 15, 50, and

100 (so sum = 180), then the correct allocation should be: all Flow0, Flow1,

and Flow2 traffic should be transmitted, and of Flow3 one fifth of the packets

should be transmitted (packets selected randomly). Furthermore, the above

allocation should be reached quickly and with low queue occupancy.

Table 4.2 Example of flows for bandwidth allocation

    Flow label    Type             Minimum    Maximum    Priority
    Flow0         Real-time             10         30    0 (High)
    Flow1         Non-real-time         20         40    1
    Flow2         Non-real-time          0        100    2
    Flow3         Non-real-time          0        100    3 (Low)
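
The allocation in this example can be checked with a few lines of arithmetic: the 100 Mbps port first absorbs Flow0 (15), Flow1 (15) and Flow2 (50), i.e. 80 Mbps, leaving 20 Mbps for Flow3, whose offered 100 Mbps therefore receives a transmit probability of 20/100 = 0.2, one packet in five. The small strict-priority computation below reproduces this; it is a check of the example, not the NP's allocation code.

#include <stdio.h>

/* Strict-priority allocation over a fixed-capacity port: higher-priority flows
   (lower index) are served first; whatever capacity remains determines the
   transmit probability of the next flow. */
int main(void)
{
    const double capacity   = 100.0;                         /* Mbps, target port   */
    const double offered[4] = { 15.0, 15.0, 50.0, 100.0 };   /* Flow0..Flow3 (Mbps) */
    double remaining = capacity;

    for (int i = 0; i < 4; i++) {
        double admitted = offered[i] < remaining ? offered[i] : remaining;
        printf("Flow%d: transmit probability %.2f\n",
               i, offered[i] > 0.0 ? admitted / offered[i] : 1.0);
        remaining -= admitted;
    }
    return 0;   /* prints 1.00, 1.00, 1.00, 0.20 */
}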


There are virtually no costs for implementing the active queue

management system in the data plane of the NP because the header parser

hardware-assist and hardware flow control support can be configured

according to the new scheme. The implementation costs in the control plane

are about 300 cycles to update a single transmit probability value. This update is triggered at fixed time intervals for each queue at each output port. These costs cover line-speed forwarding and normal control operation. It was possible to make this advanced networking function swiftly operational as a standard feature on the NP, which demonstrates the flexibility of NPs.

4.4 SIMULATION STUDY

The Workbench of the IXP2400 provides packet simulation of media bus devices as well as simulation of network traffic. A feature of the packet simulation called Packet Simulation Status, shown in Figure 4.6, enables observation of the number of packets received on the media interface and the number of packets transmitted from the switch fabric interface.

Figure 4.6 Packet Simulation status

Multimedia streams are simulated artificially with UDP flows, since voice and video applications use UDP rather than TCP. The traffic generated in this work consists of RTP/UDP packets, UDP packets (with both large and small TTL values) and TCP packets. All these packets are uniformly distributed in the traffic. Traffic is generated at a constant rate of 1000 Mb/s and the inter-packet gap is set to 96 ns.

To make sure that the various traffic streams (Figure 4.7) are not synchronized, each flow begins its transmission at a random time within the first 40 msec. The QoS setting in D-QoS is automatically adjusted to offer the interruption according to the buffer capacity of the flows.

Figure 4.7 Creating various traffic streams

When voice and video packets pass through the router (IXP 2400), they are treated and processed at the highest priority, as they are real-time traffic. The first priority is given to Expedited Forwarding (EF), which handles real-time application packets. This minimizes the packet delay for the real-time data packets in the IPv6 DiffServ network and maximizes the throughput of data such as voice and video, for which delay is not tolerable. When there is a break in the EF traffic, the second-priority Assured Forwarding (AF) traffic is forwarded.

In this process both the incoming traffic and any packets stored in memory are moved. The third-priority packets are buffered accordingly; only when there are no packets or traffic for EF or AF are the third-priority (best effort) packets forwarded.


The simulator is configured to generate and inject packet streams into ports 0 and 1 of device 0. The Packet Receive microengine is interfaced with the Media Switch Fabric Interface (MSF). Packets are injected through the MSF, and the Receive Buffer (RBUF) reassembles the incoming mpackets (64-byte segments of the packets). For each packet, the packet data is written to DRAM and the packet meta-data is written to SRAM. The Receive process is executed by microengine ME0 using all eight threads available in that microengine. Packet sequencing is maintained by executing the threads in strict order.

The packet buffer information is written to a scratch ring (Figure 4.8) for use by the packet processing stage. Communication between pipeline stages is implemented through controlled access to shared ring buffers, as shown in the code given below.

Figure 4.8 Packet information in Scratch ring

Code for scratchpad put buffer

void scratch_ring_put_buffer(unsigned int ring_number, ring_data_t data)
{
    /* The scratch ring address is derived from the ring number
       (ring_number << 2). */
    __declspec(scratch) void *ring_addr = (void *)(ring_number << 2);
    SIGNAL ring_signal;
    __declspec(sram_write_reg) ring_data_t packet;

    packet = data;

    /* Put the packet descriptor onto the scratch ring; ctx_swap yields the
       microengine context until the hardware signals completion. */
    scratch_put_ring(&packet, ring_addr,
                     sizeof(packet) / sizeof(unsigned int),
                     ctx_swap, &ring_signal);
}

The forwarding functionality can be implemented in two ways: through a fast path and through a slow path. In the fast path, a hashing table is maintained for known data streams to enable fast packet processing. The hashing table is updated at a fixed interval of 50 ms so that the dynamic performance of the algorithm is maintained. In the slow path, calculations are performed on a per-packet basis to find the best possible next hop at that instant.
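
The fast path can be pictured as a small hash table keyed on the flow identifier and refreshed by the slow path every 50 ms. The sketch below uses open addressing and placeholder types; it is an illustration of the idea, not the code running on the microengines.

#include <stdint.h>
#include <string.h>

#define FWD_TABLE_SIZE 1024                       /* power of two, for masking */

typedef struct {
    uint32_t flow_hash;                           /* hash of the flow 5-tuple  */
    uint32_t next_hop;                            /* pre-computed next hop     */
    int      valid;
} fwd_entry_t;

static fwd_entry_t fwd_table[FWD_TABLE_SIZE];

extern uint32_t slow_path_lookup(uint32_t flow_hash);   /* per-packet computation */

/* Fast-path lookup with linear probing: a hit avoids the per-packet route
   computation; a miss falls back to the slow path and caches the result. */
static uint32_t fast_path_next_hop(uint32_t flow_hash)
{
    for (unsigned int probe = 0; probe < FWD_TABLE_SIZE; probe++) {
        uint32_t idx = (flow_hash + probe) & (FWD_TABLE_SIZE - 1);
        fwd_entry_t *e = &fwd_table[idx];

        if (e->valid && e->flow_hash == flow_hash)
            return e->next_hop;                   /* fast-path hit            */
        if (!e->valid) {                          /* miss: consult slow path  */
            e->flow_hash = flow_hash;
            e->next_hop  = slow_path_lookup(flow_hash);
            e->valid     = 1;
            return e->next_hop;
        }
    }
    return slow_path_lookup(flow_hash);           /* table full: slow path    */
}

/* Invoked at the fixed 50 ms interval so stale entries are rebuilt. */
static void fwd_table_refresh(void)
{
    memset(fwd_table, 0, sizeof(fwd_table));
}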

The classifier microblock is executed by microengine ME1, which removes the layer-3 header from the packet by updating the offset and size fields in the packet meta descriptor. The packets are classified into different traffic flows based on the IP header and are enqueued in the respective queues. The Packet Transmit microblock then segments a packet into mpackets and moves them into transmit buffers (TBUFs) for transmission over the media interface through the different ports. The egress packet streams are forwarded to output ports 0 to 4 of the device.

In a network element (NE), upon receipt of a Protocol Data Unit (PDU), the following actions occur. The header of the PDU is parsed (HdrP) and the type of the PDU is identified (PktID) to decide which forwarding table to search. The PDU is classified with a class label (CL) to establish the flow and class of the PDU and possibly its priority. The Packet Transmit microblock monitors the MSF to see whether the TBUF threshold for specific ports has been exceeded, as shown in Figure 4.9.


Figure 4.9 Transmitting Buffer Values

If the TBUF threshold for output port 0 (real-time packets) is exceeded, the packets are still allowed to be transmitted until the MAX TBUF value is reached, instead of being dropped at the tail end (Figure 4.10). If the maximum threshold (85% of the queue length) is still reached, the forwarding ME starts forwarding the packets to the other queues, which are dedicated to low-priority flows.

Figure 4.10 Drop probability of various flows (dashed line: high priority, dotted line: low priority)


If this happens for the other, non-priority flows, the ME threads stop transmitting on that port and any requests to transmit packets on that port are queued up in local memory. The Packet Transmit microblock periodically updates the classifier with information about how many packets have been transmitted.

If no further treatment is required, the AQM function decides whether to forward or discard the PDU according to the output queues' fill levels and the PDU's class and/or priority. The scheduler (SCH) ensures that the PDU's transmission is prioritized according to its relative class/priority. The Traffic Shaper (TS) ensures that the PDU is transmitted according to its QoS profile and/or agreements.

The queuing mechanism used within this interruption mode aims to give higher priority to the privileged flow, while other flows, previously classified into DiffServ's EF and AF classes, are transmitted with lower priority. To provide different services for time-sensitive (EF) flows and non-time-sensitive (AF) flows, the two lowest priority levels are reserved for these two flow types. The time-sensitive flows previously classified into DiffServ's EF class are placed in the second-lowest priority level, while flows previously belonging to DiffServ's AF classes are given the lowest priority level. Thus, EF traffic receives lower loss, lower delay and lower jitter compared to AF traffic.

4.5 RESULT AND DISCUSSION

In these simulation experiments, the throughput and queue loss ratio are measured for three kinds of flows, i.e., the Assured Forwarding (AF) flow, the Best Effort (BE) flow and the Active flow. Here, voice packets are treated as the Active flow, image packets as the Assured Forwarding flow, and text packets as Best Effort flows.

Figure 4.11 Throughput for three kinds of flows

Simulation experiments are conducted by varying the transmit rate from 0.5 Mbps to 10 Mbps, and the throughput is measured for the three flows mentioned above. This performance is depicted in Figure 4.11. For all three flows, the throughput increases as the transmit rate increases. The throughput of the Assured Forwarding flow is higher than that of the Best Effort flow and lower than that of the Active flow; that is, the Active flow achieves the highest throughput of the three.

Simulation experiments are also conducted by varying the transmit rate from 0.5 Mbps to 40 Mbps, and the queue loss ratio is measured for the same three flows. This performance is depicted in Figure 4.12. For all three flows, the queue loss ratio increases as the transmit rate increases. The queue loss ratio of the Assured Forwarding flow is higher than that of the Active flow and lower than that of the Best Effort flow; the Active flow has the lowest queue loss ratio of the three.

Figure 4.12 Queue Loss Ratio for three kinds of flows

Simulation experiments are conducted with up to 10 active flows and a packet size of 800 bytes. The packet loss ratio is measured for transmission rates of 10 Mbps, 20 Mbps, 30 Mbps and 40 Mbps. The performance is depicted in Figure 4.13, which shows that the packet loss ratio increases with the number of active flows at all four transmission rates.


Figure 4.13 Packet loss ratio in active flow for different transmission

rates with packet size 800 bytes

Simulation experiments are also conducted with up to 10 active flows at a transfer rate of 30 Mbps. The packet loss ratio is measured for packet sizes of 50 bytes, 200 bytes, 800 bytes and 1200 bytes. The performance is depicted in Figure 4.14, which shows that the packet loss ratio increases with the number of active flows for all of these packet sizes.


Figure 4.14 Packet loss ratio in active flow for different packet sizes with transfer rate 30 Mbps

Simulation experiments on the active flow are also conducted both with and without AQM. Here, the packet length is varied from 100 bytes to 1600 bytes, and the throughput, delay and packet loss are measured in both cases.

The throughput performance is depicted in Figure 4.15. The figure shows that the throughput varies with the packet length both with and without AQM, but the throughput with AQM is higher than the throughput without AQM.


Figure 4.15 Throughput for both with and without AQM

The delay performance is depicted in Figure 4.16. The figure shows that the delay varies with the packet length both with and without AQM, but the delay with AQM is lower than the delay without AQM and with RED [27].

Figure 4.16 Delay for both with and without AQM


Figure 4.17 Packet loss (%) for both with and without AQM

The packet loss (%) performance is depicted in Figure 4.17. The figure shows that the packet loss (%) varies with the packet length both with and without AQM, but the packet loss (%) with AQM is lower than the packet loss (%) without AQM.

4.6 RELATED CONTRIBUTIONS

In this chapter, we investigated the problem of AQM for multimedia streams on an NP-based active router. Some of the open issues in the NP space include software portability, which NPs share with many other specialized, embedded systems; it is currently nearly impossible to write code that would easily port to a different processor family. For code running in the data plane, the use of a smart optimizing compiler permits writing relatively architecture-independent code. With appropriate, evolving libraries, key sub-functions can be transferred progressively and seamlessly from software implementation to hardware, maintaining backwards compatibility. In the


control plane, standard interfaces are being developed and joined with

capabilities for dynamic NP feature discovery to allow NP-independent code.

Stepping back to see the big picture, we can conclude that versatility is one of the NP's strongest features. With comparatively little effort, it is possible to implement new features that dramatically increase the value offered by a network device, and to offer new and powerful functions, often combined from a wide variety of existing building blocks. We believe that in the networking world this makes NPs a strong competitor to ASICs for all but the highest-performance network devices, and we thus expect their use to grow dramatically in the near future. The simulation results show that the Active Network technique indeed improves the overall utilization of network resources and decreases the packet loss rate and queuing delay. Additionally, the evaluation results demonstrate that the Active Network mechanism is most effective when the high-priority queues suffer serious congestion while resources are still available in the low-priority queues.