Implementing QoS in IP Networks - Nikolaos Tossiou


MSC DATA COMMUNICATION SYSTEMS

DEPARTMENT OF ELECTRONIC & COMPUTER ENGINEERING

BRUNEL UNIVERSITY

INTRACOM S.A.

IMPLEMENTING QUALITY OF SERVICE IN IP NETWORKS

by

Nikolaos Tossiou

supervised by

Mr. Nasheter Bhatti

September 2002

A dissertation submitted in partial fulfillment of the requirements for the degree of Master of Science


Copyright © 2002 by

Nikolaos Tossiou & Intracom S.A.


To my parents, Olga and Vassilios Tossiou


TABLE OF CONTENTS

ABSTRACT

ACKNOWLEDGEMENTS

CHAPTER 1: IP QUALITY OF SERVICE: AN OVERVIEW
1.1. INTRODUCTION
1.2. THE INTERNET: A BIG PICTURE
1.3. DEFINITION OF QOS
1.4. APPLICATIONS REQUIRING QOS
1.5. QOS MECHANISMS
1.5.1. CLASSIFICATION/MARKING
1.5.2. POLICING/SHAPING
1.5.3. SIGNALING
1.5.4. QUEUE SCHEDULING/CONGESTION MANAGEMENT
1.5.5. QUEUE MANAGEMENT/CONGESTION AVOIDANCE
1.5.6. TRAFFIC ENGINEERING
1.6. QOS ARCHITECTURES
1.6.1. INTEGRATED SERVICES
1.6.2. DIFFERENTIATED SERVICES
1.7. MULTI-PROTOCOL LABEL SWITCHING
1.8. IP QOS: A CRITICAL EVALUATION

CHAPTER 2: CASE STUDY: DESIGN AND IMPLEMENTATION OF A QOS-ENABLED NETWORK
2.1. IDENTIFICATION OF APPLICATIONS
2.2. DEFINITION OF QOS CLASSES
2.3. ROLES OF INTERFACES
2.4. CASE STUDY NETWORK TOPOLOGY
2.5. LABORATORY NETWORK TOPOLOGY
2.6. NETWORK DEVICES
2.7. PROTOCOLS AND QOS MECHANISMS
2.7.1. QOS MECHANISMS FOR EDGE INTERFACES
2.7.2. QOS MECHANISMS FOR CORE INTERFACES
2.8. CONFIGURATION OF NETWORK DEVICES

CHAPTER 3: MEASUREMENT INFRASTRUCTURE FOR VALIDATING QOS OF A NETWORK
3.1. TRAFFIC GENERATION AND MEASUREMENT TOOLS
3.2. DESIGN OF VALIDATION TESTS
3.3. COLLECTION AND ANALYSIS OF RESULTS

CHAPTER 4: CONCLUSIONS
4.1. SUMMARY
4.2. CONCLUSIONS
4.3. RECOMMENDATIONS FOR FURTHER STUDY

BIBLIOGRAPHY

REFERENCES

NOTATION

APPENDIX
PART I – INTERIM REPORT
PART II – INTERIM REPORT GANTT CHART
PART III – FINAL PROJECT TIME PLAN
PART IV – FINAL PROJECT GANTT CHART
PART V – IETF RFC SPECIFICATIONS


LIST OF FIGURES

FIGURE 1-1: THE PROCESS OF QOS
FIGURE 1-2: IP PRECEDENCE AND DSCP IN THE TOS BYTE
FIGURE 1-3: EXAMPLE OF TRAFFIC RATE
FIGURE 1-4: POLICING
FIGURE 1-5: SHAPING
FIGURE 1-6: RSVP PROCESS
FIGURE 1-7: GLOBAL SYNCHRONIZATION
FIGURE 1-8: WRED ALGORITHM
FIGURE 1-9: TOS BYTE
FIGURE 1-10: DS BYTE
FIGURE 2-1: EDGE AND CORE INTERFACES
FIGURE 2-2: SIMPLIFIED ROUTER SCHEMATIC
FIGURE 2-3: INTERFACE INGRESS SCHEMATIC
FIGURE 2-4: INTERFACE EGRESS SCHEMATIC
FIGURE 2-5: CASE STUDY NETWORK TOPOLOGY
FIGURE 2-6: LABORATORY NETWORK TOPOLOGY
FIGURE 2-7: QPM MAIN SCREEN
FIGURE 2-8: DEVICE PROPERTIES PAGE
FIGURE 2-9: DEVICE CONFIGURATION LISTING
FIGURE 2-10: INTERFACE PROPERTIES PAGE
FIGURE 2-11: POLICY EDITOR
FIGURE 2-12: CREATED POLICIES
FIGURE 2-13: POLICY COMMANDS
FIGURE 2-14: QDM MAIN SCREEN
FIGURE 3-1: NETMEETING SCREENSHOT
FIGURE 3-2: IP TRAFFIC SCREENSHOT
FIGURE 3-3: SNIFFER PRO SCREENSHOT


LIST OF TABLES

TABLE 1-1: APPLICATIONS REQUIRING QOS
TABLE 1-2: IP PRECEDENCE VALUES, BITS AND NAMES
TABLE 1-3: DSCP CLASS SELECTORS
TABLE 1-4: AF PHB
TABLE 1-5: DSCP TO IP PRECEDENCE MAPPINGS
TABLE 2-1: CASE STUDY CLASSES OF SERVICE
TABLE 2-2: NETWORK DEVICES, DESCRIPTIONS AND IOS VERSIONS
TABLE 2-3: INTERFACES, IP ADDRESSES AND INTERFACE LINK SPEEDS
TABLE 2-4: ROUTER LOOPBACK INTERFACE IP ADDRESSES
TABLE 2-5: RECOMMENDED CAR NB AND EB SETTINGS
TABLE 3-1: FIRST TEST: TCP BEST-EFFORT
TABLE 3-2: FIRST TEST: TCP BEST-EFFORT & TCP ASSURED
TABLE 3-3: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE
TABLE 3-4: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE
TABLE 3-5: FIRST TEST: ALL TRAFFIC FLOWS WITHOUT VIDEOCONFERENCE AFTER FIRST POLICY CHANGE
TABLE 3-6: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER FIRST POLICY CHANGE
TABLE 3-7: FIRST TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE AFTER SECOND POLICY CHANGE
TABLE 3-8: SECOND TEST: ALL TRAFFIC FLOWS WITH VIDEOCONFERENCE


ABSTRACT

Traditionally the Internet has provided users with a “best-effort” service

without guarantees as to whether or when data is delivered. Data is processed as

quickly as possible regardless of its nature. With the advent of new technologies

and developments, user demands as well as network and Internet applications

have become more complex, thus requiring better services than “best-effort”.

Quality of Service (QoS) is the specification of a level of service, which is

required from the network. It is defined by specifying certain service parameters,

such as delay, loss, throughput and jitter. QoS is implemented by a set of

architectures that tackle this problem by providing the means necessary to reserve

resources, differentiate between different traffic types that require different

servicing properties and manage the flow of data from source to destination based

on specific characteristics, thus ensuring actual as well as timely delivery,

according to requirements. These architectures achieve this by utilizing various

mechanisms for traffic classification, policing, signaling, queue scheduling and

queue management. All these mechanisms will be presented and discussed in

detail.

The two Internet QoS architectures that will be examined are Integrated

Services (IntServ) and Differentiated Services (DiffServ).

The main function of IntServ is to reserve resources (such as bandwidth or

maximum delay) along the path of a flow. A flow can be defined as a stream of

data from a source to a destination with specific characteristics such as source and

destination address, source and destination port numbers and protocol identifier.

The main purpose of DiffServ is to divide flows into different classes and

treat them according to their pre-specified requirements as well as enforce specific

policies.


MPLS (Multi-Protocol Label Switching) is a transport technology for IP

networks that possesses certain features, which may assist the provision of QoS.

MPLS labels packets in a way that makes the analysis of the IP header at

intermediate points between source and destination unnecessary, thus reducing

processing overhead greatly and significantly speeding up traffic flow. Also,

MPLS may be used for implementing explicit routing of certain traffic flows

through an IP network, thus realizing traffic engineering, which is a mechanism

for improving service quality.

Implementation, analysis, experiments and derived results will be based

mainly on the DiffServ architecture. The relevant mechanisms will be

implemented exclusively on Cisco® Systems network equipment and all results

will be relevant to Cisco® configurations and QoS implementations.

This dissertation aims to investigate the mechanisms, algorithms, protocols and technologies that have been defined for implementing QoS in IP networks; to identify the current state-of-the-art availability and operation of QoS features and capabilities in commercial network devices; and finally to experiment with such capabilities in a networking laboratory environment by designing, implementing and validating a network provided with QoS mechanisms, in order to derive conclusions as to the usefulness and feasibility of QoS in modern IP networks.


ACKNOWLEDGEMENTS

This dissertation has been completed with the help of a lot of people.

Without their input, comments and guidance it would have been impossible.

First and foremost I would like to thank my academic supervisor, Mr.

Nasheter Bhatti, who has provided me with useful comments and guided me to the

right direction throughout the duration of this dissertation.

Secondly, I would like to thank Mr. Evangelos Vayias, my industrial

supervisor and mentor at Intracom S.A., who has helped me with lots of technical

details and provided me with a variety of technical documents. He was also eager

to comment on the dissertation and guide me to the right direction in the practical

part of the dissertation. This dissertation would have been impossible without his

support.

I would also like to thank all the people at Intracom S.A., especially the

Cisco® Professional Services team, who have all been supportive throughout my

stay at the company and helped me out whenever I had a problem or needed

guidance and assistance.

Also, I would like to thank the Chachalos family, who has generously

provided me with a home to stay in Athens. I owe them a lot.

Last but not least I would like to wholeheartedly thank my parents.

Without them neither my studies nor this dissertation would have been possible.

They were always by my side, providing me with any kind of support I required.

May they always be healthy.


CHAPTER 1: IP QUALITY OF SERVICE: AN OVERVIEW

1.1. INTRODUCTION

The exponential growth of the Internet, networks in general and network

users around the globe, has led to the emergence of a significant number of new

user requirements and demands. New applications have emerged to meet the

users’ needs and provide them with new services. Such applications include e-

commerce, Voice-over-IP (VoIP), streaming media (audio and video),

teleconferencing and other bandwidth-intensive applications. This situation has

led to high demand for a network architecture that will provide appropriate

facilities and mechanisms to ensure proper operation of these new applications

[1].

Traditionally, the Internet does not have the ability to distinguish between

different service classes, therefore being unable to cope with different

requirements for different applications. All traffic flows are treated as “best-

effort”, which means that there is no mechanism to ensure actual delivery, nor is it

possible to ensure timely delivery of specific traffic flows that would require it.

There is no doubt that the dominant protocol used in networks nowadays – and

not only in the Internet, of course – is the Internet Protocol (IP). Fundamentally, it is a

best-effort protocol; that is why it cannot guarantee the timely or lossless delivery

of data packets. Reliable delivery of data packets at the destination is the

responsibility of the Transmission Control Protocol (TCP), which is located just

above the IP in the well-known seven-layer Open Systems Interconnection (OSI)

reference model. In other words, traffic – no matter what its type and requirements

– is processed as quickly as possible and there is no guarantee as to timely or

actual delivery [2].


This lack of guarantees together with the new user demands and

requirements has led to the emergence of Quality of Service (QoS) network

architectures and mechanisms in today’s IP networks.

The objective of this dissertation is threefold: to investigate these architectures and mechanisms by analyzing the way they operate and distinguishing where each is usable under different conditions and scenarios; to identify the current availability and operation of QoS architectures and mechanisms in commercial network devices, such as routers and switches; and to experiment with the QoS capabilities and features of such devices in a networking laboratory environment by designing, implementing and validating a network that utilizes QoS mechanisms. This will lead to a conclusion as to which QoS mechanisms are better suited to different kinds of traffic flows and at which points they should be applied. Finally, it will lead to a conclusion as to which QoS architecture is more feasible, viable and useful for today's IP networks and why.

The investigation of QoS architectures and mechanisms is a vast subject,

which is of very high interest. QoS in the Internet is a hotly debated issue, because

several frameworks have been proposed to provide different service classes and

different levels of guarantees to different users, according to their needs. In other

words, QoS is most certainly going to become one of the basic and most widely

used sets of mechanisms and architectures for any network infrastructure because

the Internet is establishing itself as a global multiservice infrastructure [3].

This dissertation will investigate the main QoS mechanisms such as queue

management, queue scheduling, traffic policing/shaping and signaling and the

QoS architectures that utilize these mechanisms (DiffServ and IntServ). These

will be applied and implemented on current state-of-the-art network equipment in

order to investigate their operation, feasibility and usefulness.


The dissertation will not deal with specialized mechanisms that are

available only for specific traffic flows and on specific network equipment.

Furthermore, it will not investigate economic aspects of QoS that arise, i.e.

different pricing policies from Internet Service Providers (ISPs) for different

service offerings. These will be mentioned, but not analyzed in detail. Such issues

are beyond the scope of the dissertation.

At the end of this dissertation the reader should be able to distinguish

between the different QoS architectures and which mechanism should be used for

different traffic flows under what circumstances, i.e. which mechanism is more

appropriate for what kind of traffic flow and what type of requirements.

Furthermore, by comparing the two different QoS architectures, the reader should

be able to tell which one is more scalable or appropriate for today’s modern IP

networks, although – as we will see later on – both architectures have their

purpose and field of usage. Nevertheless, the tests that will be carried out will lead

to the conclusion that one of the two architectures is preferable for any kind of

network and will discuss the reasons.

The starting point will be to present the current state of the Internet to

justify the existence and need for QoS by identifying the various ways the Internet

is used today. We will then proceed by defining the term “Quality of Service”,

identifying applications that require QoS and discussing the various QoS

mechanisms and architectures available. Chapter 1 will be concluded with the

author’s critical evaluation of the two QoS architectures. In Chapter 2 we will

present and discuss a case study of a network topology on which the tests will be

carried out and results will be collected. Finally, in Chapter 3 we will present the

experiments and tests that have been carried out at the laboratory as well as

analyze all collected results. Finally, we will conclude in the last chapter by


summarizing the results and linking them with the theoretical part as well as by

giving direction for further works on this topic.

The whole project has been carried out at Intracom S.A. in Peania, Greece,

at the Department of Information Systems, Cisco® Professional Services team.

Intracom is the largest company in the Telecommunications and Information

Technology industry in Greece with a significant international presence. It has

over 4000 employees and is a certified Cisco® Golden Partner and Professional

Services partner. The Cisco® Professional Services team consists of

Cisco-certified network engineers (CCNP level and above). The networking

laboratory is fully equipped with the latest state-of-the-art networking technology

and is available for the conduct of experiments and simulations, both in the

scope of commercial projects, as well as for research and development purposes.

The project was carried out in the context of a study on the subject

according to the company’s requirements. The practical part – topology design

etc. – was based on a larger project carried out by the company for a customer. As a

result, network equipment in the laboratory was not always available for testing

and this was the main limitation of the dissertation.

One further limitation was the subject itself. Because it is so vast, it was

impossible to cover all its aspects in such short time, especially in the practical

part. However, the author tried to give an overview and a critical evaluation of all

relevant material.

All tests and results were based on a case study which was implemented in

the networking laboratory. It was carried out solely by the writer with technical

help and guidance by the industrial mentor, Mr. Evangelos Vayias, who helped

mostly with the practical part of this dissertation, i.e. router configurations,

topology design, test execution and collection of results.


1.2. THE INTERNET: A BIG PICTURE

The Internet consists of many interconnected networks of different types,

shapes and sizes, which use different technologies. Local Area Networks (LANs),

Metropolitan Area Networks (MANs) and Wide Area Networks (WANs) are

interconnected by a backbone. LANs and MANs are usually corporate, campus or

citywide networks; WANs are usually ISP or large-scale corporate networks [4].

As stated above, all these networks are interconnected using different protocols,

mechanisms and link layer technologies. However, when referring to the Internet,

the common denominator of these networks is the Internet Protocol (IP) suite that

is uniform for all these network types. Today, IP forms the foundation for a

universal and global network. However, this raises several issues for ISPs as well

as enterprise and corporate networks. IP’s problem is that it is a connectionless

technology and therefore cannot differentiate between different traffic types or

make certain guarantees for different types of applications [5]. QoS architectures

attempt to resolve this issue. This dissertation is based on the IP protocol suite and

investigates QoS mechanisms based upon IP, i.e. it will deal only with QoS

provision at Layers 2 and 3 and will not go beyond these layers.

With the growth of the Internet new applications emerged that required

specific service quality and together with additional user demands it became

apparent that best-effort service is not enough and that certain mechanisms were

needed to distinguish between different traffic classes and provide QoS according

to user requirements. However, one opinion claims that new

technologies, such as optical fiber and Dense Wavelength Division Multiplexing

(DWDM) will make bandwidth so abundant and cheap, that QoS will become

unnecessary, i.e. there will be no need for mechanisms that provide QoS because

bandwidth will be almost “infinite” [6]. The reply to this opinion is that no matter


how much bandwidth will be made available, new applications will be developed

to utilize it and once again make the implementation of QoS mechanisms and

architectures necessary. Also, high bandwidth availability cannot be ensured in all

areas throughout a network. Bottleneck points will always exist, where QoS

mechanisms will have to be applied.

The main driving forces of QoS are basically different user demands for

different traffic classes and the ability of ISPs to offer different service levels –

and, as a result, different pricing policies – to their clients according to

agreements. These agreements are called Service Level Agreements (SLAs),

which specify what type of service will be provided to the client (i.e. premium,

assured or best-effort) and how the client will have access to the bandwidth he/she

needs. In other words, it specifies the QoS level the client is to receive and the ISP

is to offer.

1.3. DEFINITION OF QOS

Firstly, there is the need to define the term “Quality of Service” and what

is meant by it. The new generation of applications, such as real-time voice and

video, has specific bandwidth and timing requirements, such as delay tolerance

and higher end-to-end delivery guarantees than regular best-effort data [7]. These

new types of data streams cannot operate properly with the traditional best-effort

service. QoS mechanisms attempt to resolve this issue in various ways that will be

discussed later on.

There are no agreed quantifiable measures that unambiguously define QoS

as perceived by a user. Terms such as “better”, “worse”, “high”, “medium”, “low”,

“good”, “fair” and “poor” are typically used, but these are subjective and therefore

cannot always be translated precisely into network-level parameters that can


subsequently be used for network design. Furthermore, QoS is also heavily

dependent upon factors such as compression algorithms, coding schemes, the

presence of higher layer protocols for security, data recovery, retransmission and

the ability of applications to adapt to network congestion or their requirements for

synchronization [8].

However, network providers need performance metrics that they can agree

upon with their peers (when exchanging traffic), and with service providers

buying resources from them with certain performance guarantees. The following

five network performance metrics are considered the most important in defining

the QoS required from the network [9]:

Availability: Ideally, a network should be available 100% of the time, i.e.

it should be up and running without any failures. Even a high-sounding

figure such as 99.8% translates into about an hour and a half of downtime per

month, which may be unacceptable for a large enterprise. Serious carriers

strive for 99.9999% availability, which they refer to as “six nines” and

which translates into a downtime of 2.6 seconds per month.

Throughput: This is the effective data transfer rate measured in bits per

second. It is not the same as the maximum capacity (maximum

bandwidth), or wire speed, of the network. Sharing a network lowers the

throughput that can be achieved by a user, as does the overhead imposed

by the extra bits included in every packet for identification and other

purposes. A minimum rate of throughput is usually guaranteed by a

service provider (who needs to have a similar guarantee from the network

provider). A traffic flow, such as Voice-over-IP (VoIP), requires a

minimum throughput capacity in order to operate correctly and efficiently,

otherwise it becomes virtually unusable.


Packet loss ratio: Packet loss specifies the percentage of packets lost

during transport [10]. Network devices, such as switches and routers,

sometimes have to hold data packets in buffered queues when a link gets

congested. If the link remains congested for too long, the buffered queues

will overflow and data will be lost, since the devices will have to drop

packets at the end of the queue, also known as tail drop. The lost packets

must be retransmitted, adding to the total transmission time. In a well-

managed network, packet loss will typically be less than 1% averaged over

a month. It is obvious that real-time applications may be very sensitive to

packet loss, since any drop would require retransmission of the lost

packets and would interfere with the real-time transmission. Such

applications demand packet loss guarantees from the network.

Delay (latency): The elapsed time for a packet to travel from the sender,

through the network, to the receiver is known as delay. The end-to-end

delay consists of serialization delay, propagation delay and switching

delay. Serialization delay (also called transmission delay) is the time it

takes for a network device to transmit a packet at the given output rate and

depends on link bandwidth as well as packet size. Propagation delay is the

time it takes for a transmitted bit to reach its destination and it is limited

by the speed of light. Finally, switching delay refers to the time it takes for

a network device to start transmitting a packet after its reception [11].

Unless satellites are involved, the end-to-end delay of a 5000km voice call

carried by a circuit-switched telephone network is about 25ms. For the

Internet, a voice call may easily exceed 150ms of delay because of signal

processing (digitizing and compressing the analog voice input) and

congestion (queuing). For a real-time application, such as voice


communication, packets have to be delivered within a specific delay

bound, otherwise they become useless for the receiver.

Jitter (delay variation): Jitter refers to the variation of the delay on

individual packets of the same flow, i.e. one packet may be delayed more

or less than another. In other words, it is the variation in end-to-end transit

delay. This has many causes, including variations in queue length,

variations in the processing time needed to reorder packets that arrived out

of order because they traveled over different paths and variations in the

processing time needed to reassemble packets that were segmented by the

source before being transmitted. Jitter may affect applications that need a

constant flow of data from transmitter to receiver in order to maintain

smooth operation. Any variations in this flow may cause the application

performance to degrade up to a point that it becomes unusable.
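
To make these figures concrete, the following short Python sketch reproduces the availability and delay arithmetic used above. It is illustrative only: the packet size, link rate and signal speed are assumed values, not measurements from the case study.

    # Illustrative arithmetic for the availability and delay figures above.
    SECONDS_PER_MONTH = 30 * 24 * 3600              # approximately one month

    def downtime_per_month(availability_percent):
        """Seconds of downtime per month for a given availability figure."""
        return SECONDS_PER_MONTH * (1 - availability_percent / 100.0)

    def serialization_delay(packet_bytes, link_bps):
        """Time to clock a packet onto the link: packet size / link rate."""
        return packet_bytes * 8 / link_bps

    def propagation_delay(distance_km, signal_speed_km_s=200_000):
        """Distance divided by signal speed (roughly two thirds of c in fibre)."""
        return distance_km / signal_speed_km_s

    print(downtime_per_month(99.8) / 3600)          # ~1.4 hours ("an hour and a half")
    print(downtime_per_month(99.9999))              # ~2.6 seconds ("six nines")
    print(serialization_delay(1500, 2_048_000))     # assumed 1500-byte packet on an E1: ~5.9 ms
    print(propagation_delay(5000) * 1000)           # 5000 km: ~25 ms one way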

It is apparent that newly emerged applications require specific guarantees

regarding timeliness and actual delivery. QoS is realized and implemented by a

set of technologies, architectures, mechanisms and methods that attempt to

provide the resources the applications require without compromising average

throughput and network performance in general. The main purpose of these

mechanisms is to deliver guaranteed and differentiated services to any IP-based

network. Since the Internet is made up of many different link technologies and

physical media, end-to-end QoS in IP-based networks is mainly attained at Layer

3, the network layer.

A general overview of QoS is illustrated below in Figure 1-1. Applications

requiring QoS (mission-critical data, audio/video teleconferencing etc.), which are

time- and delay-sensitive, real-time, jitter- and loss-intolerant, will utilize one of

the two specific QoS architectural models, IntServ or DiffServ. These


architectures in turn apply specific mechanisms to provide QoS. These

mechanisms deal with traffic classification, traffic shaping and policing, resource

allocation (signaling), queue scheduling/congestion management and queue

management/congestion avoidance. It is possible, although not necessary, to apply

more than one mechanism or a specific combination thereof to provide better

QoS. This depends mostly on the type of traffic that will pass through the network

as well as network equipment and topology. Ultimately, it is the network

manager’s decision to define which mechanisms should be applied at which

network locations to provide efficient QoS.

For realizing end-to-end QoS, such mechanisms are implemented at the

Network Layer. However, they may be supported by relevant functions at lower

layers. Furthermore, MPLS is employed at this stage (if chosen). This is where

possible packet header switching takes place. All the above mechanisms and

architectures will be discussed in detail later on.

Finally, the traffic is transmitted by using any type of Link Layer

technology, such as Asynchronous Transfer Mode (ATM), Digital Subscriber

Line (DSL) or Ethernet.

In order to provide end-to-end QoS, all network devices along the route

must support QoS functions, otherwise incoming traffic will be serviced using

traditional best-effort service until the next hop along the route that supports QoS.

Furthermore, the choice of mechanisms to be employed in a

QoS-enabled network depends on the router’s location, i.e. whether it is an edge or

a core router and whether the interface is an edge or a core interface. These details will be discussed

in chapter 2.

Figure 1-1 below illustrates the QoS process:


[Figure 1-1 shows a layered schematic: applications (Voice-over-IP, videoconference, mission-critical data, e-mail, FTP, VPN) that are time-sensitive, real-time and delay-, loss- and jitter-intolerant; the QoS architectures (IntServ, DiffServ); the QoS mechanisms (classification/marking with IP Precedence or DSCP, policing/shaping, signaling with RSVP, queue scheduling/congestion management with WFQ, CBWFQ or MDRR, queue management/congestion avoidance with RED or WRED, and traffic engineering); MPLS; the Internet Protocol (IP); and the transmission mechanisms at the physical layer (ATM, DSL, SONET, Ethernet, modem and others).]

Fig. 1-1: The Process of QoS

The above schematic gives a basic idea of how QoS is employed in a

network, from application level to the physical layer. The first step is to identify

the applications that require QoS, which will be discussed below.


1.4. APPLICATIONS REQUIRING QOS

A vast number of network applications exist nowadays and are used on a

daily basis by home or business users around the world. While most applications

work more or less satisfactorily with the traditional best-effort service, the new

generation of applications requires QoS in order to operate correctly and

efficiently.

In general, such applications are real-time audio/video applications (e.g.

teleconferencing, streaming audio/video), e-commerce, transaction and mission-

critical applications, such as database and client/server applications. All these

applications have in common the fact that they are sensitive to a smaller or greater

extent to the four QoS metrics mentioned above (throughput, loss, delay and

jitter). Table 1-1 illustrates different kinds of applications and their sensitivities

regarding QoS [12].

Traffic Type                   Throughput    Loss    Delay    Jitter
E-mail (MIME)                  Low to High   -       Low      Low
Real-time Voice (e.g. VoIP)    Low           High    High     High
Real-time Video                High          High    High     High
Telnet                         Low           -       Medium   Low
File transfer (FTP)            High          -       Low      Low
E-commerce, transactions       Low           -       High     Medium
Casual browsing                Low           -       Medium   Low
Serious browsing               Medium        -       High     Low

Table 1-1: Applications Requiring QoS

The transport protocol used by all applications, except real-time voice and video, is

TCP. TCP handles losses with retransmissions, which in turn increase delay.


Thus, what is perceived by applications when packets are lost is an increase in

end-to-end delay. Moreover, the throughput sensitivity of e-mail ranges from low

to high, because it depends on what kind of e-mail is being sent. If it is a simple

text message, then throughput sensitivity is low, but if a large attachment (e.g.

5MB) is to be sent, then throughput sensitivity is high.

Note that network availability is a prerequisite for all of the above factors

regardless of the application, therefore it is not presented in the table, as it is equal

for every application.

After observing the above table, it becomes apparent that some

applications are more sensitive than others and therefore require increased QoS

guarantees so as to perform satisfactorily. For example, e-mail is sensitive only to

packet loss, which is, moreover, handled by the reliable transfer offered by TCP,

whereas real-time video is very sensitive to throughput, delay and jitter and

sensitive to packet loss. This shows that every application has different QoS

requirements and must be treated differently if it is to operate within its acceptable

limits. The QoS mechanisms that match each application’s requirements are

discussed below.

1.5. QOS MECHANISMS

As mentioned above, there are many different QoS mechanisms that

perform different tasks under different circumstances. These mechanisms can be

implemented either separately or in combination. It always depends on the

objective and on what purpose they should serve as well as what kind of

applications will be used. Furthermore, it should be noted that no single transport

or network-layer mechanism will provide QoS for all flow types and that a QoS-


enabled network has to deploy a number of mechanisms in order to meet the

broad range of requirements [13].

The main QoS mechanisms are:

Packet classification/marking: Multi-Field (MF) or Behavior-Aggregate (BA)

classification (IP precedence or DiffServ Code Point (DSCP) marking).

Traffic conditioning: Policing, shaping.

Resource Reservation/Signaling: Resource Reservation Protocol (RSVP).

Queue Scheduling/Congestion Management: Weighted Fair Queuing (WFQ),

Class-Based WFQ (CBWFQ), Modified Deficit Round Robin (MDRR).

Queue Management/Congestion Avoidance: Random Early Detection (RED),

Weighted RED (WRED).

Each one of these mechanisms serves a specific purpose as will be made

clear later on and should be employed under specific circumstances. In other

words, a mechanism would provide better QoS than another for a specific type of

application. The mechanisms deal with the four mentioned QoS metrics and each

one “specializes” in a specific area and performs different functions. The

mechanisms have to be set up in every network device (router or switch) along the

traffic route and either at the incoming and/or at the outgoing interface so as to

provide end-to-end QoS. The QoS architectures that will be discussed later refer

to the QoS of a network as a whole. A discussion and explanation of the QoS

mechanisms follows.

1.5.1. CLASSIFICATION/MARKING

Packet classification is a mechanism that assigns packets to a certain pre-

specified class based on one or more fields in that packet. This procedure is also

called packet marking or coloring, where certain fields in the packet are marked,


specifying a class for that packet [14]. There are two different types of

classification for IP QoS, which include the following:

IP flow identification based on source IP address, destination IP address,

IP protocol field, source port number and destination port number.

Classification based on IP precedence bits or DSCP field.

Packets can be marked to indicate their traffic class. The first method is a

general and common approach to identifying traffic flows so that they can be marked and

assigned to a specific traffic class. The IP precedence and/or DSCP fields are more

effective methods for identifying, classifying and marking traffic flows and enable

a much wider range of QoS functions.
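
As an illustration of the first method, the following Python sketch matches a packet’s five-tuple against a small set of rules to assign it to a traffic class. It is purely illustrative: the rule set, class names and port numbers are assumptions, not the classification used later in the case study.

    # Minimal Multi-Field (MF) classifier matching on the IP five-tuple.
    from dataclasses import dataclass

    @dataclass
    class Packet:
        src_ip: str
        dst_ip: str
        protocol: str          # e.g. "tcp" or "udp"
        src_port: int
        dst_port: int

    # Hypothetical rules: (required field values, traffic class).
    RULES = [
        ({"protocol": "udp", "dst_port": 5004}, "real-time video"),   # assumed RTP port
        ({"protocol": "tcp", "dst_port": 80}, "browsing"),
    ]

    def classify(pkt):
        for conditions, traffic_class in RULES:
            if all(getattr(pkt, field) == value for field, value in conditions.items()):
                return traffic_class
        return "best-effort"                                          # default class

    print(classify(Packet("10.0.0.1", "10.0.0.2", "udp", 6000, 5004)))  # real-time video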

The IP precedence field is located in the packet’s IP header and indicates

the relative priority with which the packet should be handled, ranging from

“routine” (best-effort) to “critical” (highest priority). Furthermore, there are

another two classes, which are used for network and internetwork control

messages. The IP precedence field is made up of three bits in the IP header’s Type

of Service (ToS) byte, while DSCP uses six bits. Figure 1-2 shows the ToS byte

structure and illustrates IP precedence and DSCP locations within the ToS byte:

[Figure 1-2 shows the eight bits of the ToS byte, numbered 0 to 7: the IP precedence field occupies the three most significant bits and the DSCP occupies the six most significant bits.]

Fig. 1-2: IP Precedence and DSCP in the ToS Byte


Table 1-2 shows the different IP precedence values, bits and names [15]:

IP Precedence Value    IP Precedence Bits    IP Precedence Name
0                      000                   Routine
1                      001                   Priority
2                      010                   Immediate
3                      011                   Flash
4                      100                   Flash Override
5                      101                   Critical
6                      110                   Internetwork Control
7                      111                   Network Control

Table 1-2: IP Precedence Values, Bits and Names

Packet marking can be performed either by the application that sends the

traffic or by a node in the network. The ToS byte has been specified by the IETF

in [16].
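
Since IP precedence occupies the three most significant bits of the ToS byte and the DSCP the six most significant bits, both can be read or written with simple bit shifts. The following Python sketch illustrates this; the chosen value is only an example.

    # IP precedence = top 3 bits of the ToS byte; DSCP = top 6 bits.
    def precedence_from_tos(tos):
        return (tos >> 5) & 0x07

    def dscp_from_tos(tos):
        return (tos >> 2) & 0x3F

    def tos_from_precedence(precedence):
        return (precedence & 0x07) << 5

    tos = tos_from_precedence(5)                     # 5 = "Critical" in Table 1-2
    print(bin(tos), precedence_from_tos(tos), dscp_from_tos(tos))
    # -> 0b10100000, precedence 5, DSCP 40 (class selector 5)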

DSCP is discussed in detail in section 1.6.2 because it is the basis of

DiffServ.

1.5.2. POLICING/SHAPING

Service providers and network users agree on a profile for the traffic that

is to receive certain QoS. This traffic profile usually specifies a maximum rate for

the traffic which is to be admitted into the network at a certain QoS level, as well

as some degree of the traffic’s burstiness.

The user’s traffic transmitted to the network has to conform to this profile

in order to receive the agreed QoS. Non-conforming traffic is not guaranteed to

receive this QoS and may be dropped.


Traffic conditioning is a set of mechanisms that permit, on the one hand, the

service provider to monitor and police conformance to the traffic profile and, on the

other hand, the user to shape transmitted traffic to this profile.

Traffic conditioning is usually employed at network boundaries, i.e. where

traffic first enters or exits the network. Traffic conditioning is implemented in

network access devices, i.e. routers and switches that support QoS mechanisms

[17].

As the traffic enters the network, it has a rate that fluctuates over time.

Figure 1-3 illustrates this:

[Figure 1-3 plots traffic rate against time: the rate fluctuates above and below the profile limit.]

Fig. 1-3: Example of Traffic Rate

In order to provide QoS, traffic has to conform to the pre-specified and

pre-agreed (between customer and ISP) traffic profile.

Policing is the mechanism that allows the network to monitor traffic

behavior (e.g. burstiness) and manage throughput by dropping out-of-profile

packets. In essence, policing discards misbehaving traffic in order to avoid

network resources being exhausted or other QoS classes being starved. This will


ensure that network resources, as allocated, will provide the agreed QoS to the

agreed traffic. The policing function in a router is often called a dropper, because

it essentially drops out-of-profile packets, i.e. packets that exceed the agreed

policy throughput. Figure 1-4 illustrates the policing mechanism:

[Figure 1-4 plots traffic rate against time: the portion of the traffic that exceeds the profile limit is dropped by the policer.]

Fig. 1-4: Policing

Policing is used not only to drop out-of-profile packets, but also to re-mark

them and indicate to dropping mechanisms downstream that they should be

dropped ahead of the in-profile packets.
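
A policer of this kind is commonly implemented with a token bucket, which captures both the agreed rate and the permitted burstiness. The Python sketch below is a minimal single-rate policer under that assumption; it is illustrative only and is not the CAR configuration used later in the case study (Chapter 2).

    # Minimal single-rate token-bucket policer: conforming packets pass,
    # out-of-profile packets are dropped (they could equally be re-marked).
    class Policer:
        def __init__(self, rate_bps, burst_bytes):
            self.rate = rate_bps / 8.0               # token refill rate in bytes per second
            self.burst = burst_bytes                 # bucket depth = permitted burst
            self.tokens = float(burst_bytes)
            self.last = 0.0

        def conforms(self, packet_bytes, now):
            # Refill tokens for the time elapsed since the previous packet.
            self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
            self.last = now
            if packet_bytes <= self.tokens:
                self.tokens -= packet_bytes          # in profile: consume tokens
                return True
            return False                             # out of profile: drop or re-mark

    policer = Policer(rate_bps=128_000, burst_bytes=8_000)
    print([policer.conforms(1_500, t * 0.01) for t in range(8)])
    # a burst of 1500-byte packets: the first few conform, then drops begin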

Shaping, on the other hand, allows the network user to enforce bandwidth

policies by buffering excess traffic rate and smoothing bursts. This increases link

utilization efficiency, as packets are not dropped, but in a way “spread” across the

available bandwidth. Shaping, as the word itself implies, shapes traffic in order to

meet a pre-defined profile, i.e. to be in-profile. Figure 1-5 shows how shaping

works:


[Figure 1-5 plots traffic rate against time: traffic exceeding the profile limit is buffered and released later, so the transmitted rate stays within the profile.]

Fig. 1-5: Shaping

Shaping is commonly used where speed mismatches exist (e.g. going from

an HQ site with a T1/E1 connection to a Frame Relay network, down to a remote

site with a 128Kbps connection).
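
In contrast to the policer sketched above, a shaper queues out-of-profile packets and releases them at the profile rate. The following Python sketch, assuming a simple FIFO shaping buffer of unlimited depth, illustrates how a burst is spread out in time; it is an illustration, not the shaping used in the case study.

    # Minimal traffic shaper: out-of-profile packets are delayed, not dropped.
    from collections import deque

    def shape(arrivals, rate_bps):
        """arrivals: list of (arrival_time_s, packet_bytes); returns transmit times."""
        queue, send_times = deque(arrivals), []
        next_free = 0.0                              # time at which the shaped link is free
        while queue:
            arrival, size = queue.popleft()
            start = max(arrival, next_free)          # wait for earlier packets to drain
            next_free = start + size * 8 / rate_bps
            send_times.append(round(start, 4))
        return send_times

    # Three back-to-back 1500-byte packets arriving at t=0, shaped to 128 kbps
    print(shape([(0.0, 1500), (0.0, 1500), (0.0, 1500)], 128_000))
    # -> [0.0, 0.0938, 0.1875]: the burst is spread across the profile rate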

1.5.3. SIGNALING

The signaling mechanism in an IP context essentially refers to the

Resource Reservation Protocol (RSVP), which is an integral part of the IntServ

architecture, discussed in section 1.6.1. RSVP has been initially proposed in [18]

and defined in [19].

RSVP was specified as a signaling protocol for applications to reserve

resources. The signaling process is illustrated in Figure 1-6. The sender sends a

“PATH” message to the receiver (1) specifying the characteristics of the traffic.

Every intermediate router along the path forwards the “PATH” message to the

next hop (2) determined by the routing protocol. Upon receiving a “PATH”

message (3), the receiver responds with a “RESV” message (4) to request


resources for the flow. Every intermediate router along the path can reject or

accept the request of the “RESV”, depending on resource availability (5). If the

request is rejected, the router will send an error message to the receiver, and the

signaling process will terminate. If the request is accepted, link bandwidth and

buffer space are allocated for the flow (6) and the related flow state information

will be installed in the router [20].

[Figure 1-6 shows a sender and a receiver connected across an RSVP cloud: PATH messages (1)-(3) travel hop by hop from sender to receiver and RESV messages (4)-(6) travel back from receiver to sender.]

Fig. 1-6: RSVP Process
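
The admission decision taken at step (5) can be pictured as each router on the path checking the requested bandwidth against its spare capacity. The Python sketch below models this hop-by-hop check in a highly simplified form; no real RSVP message formats or objects are used, and the capacity figures are assumptions.

    # Highly simplified RSVP-style admission control along a path of routers.
    def reserve(spare_bw_kbps, requested_kbps):
        """spare_bw_kbps: spare capacity per hop; returns True if every hop accepts."""
        for hop, free in enumerate(spare_bw_kbps):
            if requested_kbps > free:
                print(f"RESV rejected at hop {hop}: only {free} kbps left")   # error sent back
                return False
        # Every hop accepted: bandwidth would now be allocated and per-flow
        # state installed in each router along the path.
        for hop in range(len(spare_bw_kbps)):
            spare_bw_kbps[hop] -= requested_kbps
        return True

    links = [2048, 1024, 512]                        # assumed spare capacity on three hops (kbps)
    print(reserve(links, 384))                       # first flow is accepted
    print(reserve(links, 384))                       # second flow fails at the 512 kbps hop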

There are two problems with RSVP. Firstly, the amount of state information increases proportionally with the number of flows, which places a huge storage and processing overhead on the routers, so RSVP does not scale well in the Internet core. Secondly, the requirements on routers are high, because they must all implement and support RSVP, admission control, MF classification and packet scheduling. These problems are further discussed in section 1.6.1.

1.5.4. QUEUE SCHEDULING/CONGESTION MANAGEMENT

Packet delay control is an important purpose of Internet QoS. Packet delay

consists of three parts: propagation, transmission and queuing delay. Propagation

delay is given by the distance, the medium and the speed of light. The per-hop


transmission delay is given by the packet size divided by the link bandwidth. The

queuing delay is the waiting time a packet spends in a queue before it is

transmitted. This delay is determined mainly by the scheduling policy.

Besides delay control, link sharing is another important purpose of queue

scheduling. The aggregate bandwidth of a link can be shared among multiple

entities, such as different organizations, multiple protocols (TCP, UDP (User

Datagram Protocol)) or multiple services (FTP, telnet, real-time streams). An

overloaded link should be shared in a controlled way, while an idle link can be

used in any proportion.

Although delay and rate guarantee provisioning is crucial for queue

scheduling, it still needs to be kept simple because it must be performed at packet

arrival rates. Scheduling can be performed on a per-flow or a per-traffic-class

basis. The result of a combination of these two is a hierarchical scheduling [21].

The most important scheduling mechanisms, besides traditional first-come first-

served scheduling, are the following:

Weighted Fair Queuing (WFQ): WFQ is a sophisticated queuing

process that requires little configuration, because it dynamically

detects traffic flows between applications and automatically manages

separate queues for those flows. WFQ is described in [22] and [23]. In

WFQ terms flows are called conversations. A conversation could be a

Telnet session, an FTP transfer, a video stream or some other TCP or

UDP flow between two hosts. WFQ considers packets part of the same

conversation if they contain the same source and destination IP address

and port numbers and IP protocol identifier. When WFQ detects a

conversation and determines that packets belonging to that

conversation need to be queued, it automatically creates a queue for


that conversation. If there are many conversations passing through the

interface, WFQ manages multiple conversation queues, one for each

unique conversation and sorts packets into the appropriate queues

based on the conversation identification information. Because WFQ

creates queues and sorts packets into the queues automatically, there is

no need for manually configured classification lists [24].

Class-Based Weighted Fair Queuing (CBWFQ): CBWFQ is a

variation of WFQ. It supports the concept of user-defined traffic

classes. Instead of queuing on a per-conversation basis, CBWFQ uses

predefined classes of traffic and then divides the link’s bandwidth

among these classes in order to control their QoS. CBWFQ is not

automatic like WFQ, but it provides more scalable traffic queuing and

bandwidth allocation [25].

Modified Deficit Round Robin (MDRR): MDRR is a variant of the

well-known Round-Robin scheduling scheme adapted to the

requirements of scheduling variable-length units, such as IP packets.

MDRR is an advanced scheduling algorithm. Within a MDRR

scheduler, each queue has an associated quantum value, which is the

average number of bytes served in each round, and a deficit counter

initialized to the quantum value. Each non-empty flow queue is

serviced in a round-robin fashion by transmitting at most quantum

bytes in each round. Packets in a queue are serviced as long as the

deficit counter is greater than zero. Each serviced packet decreases the

deficit counter by a value equal to its length in bytes. A queue can no

longer be served after the deficit counter becomes zero or negative. In


each new round, each nonempty queue’s deficit counter is incremented

by its quantum value [26].
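
The deficit-counter logic described above can be captured in a few lines of code. The following Python sketch implements the deficit round-robin core of the scheduler for two illustrative queues; the quantum values and packet sizes are assumptions, not settings from the case study.

    # Deficit round robin over per-class packet queues.
    from collections import deque

    def drr(queues, quanta, rounds):
        """queues: deques of packet sizes in bytes; quanta: bytes added per round."""
        deficits = [0] * len(queues)
        sent = []
        for _ in range(rounds):
            for i, q in enumerate(queues):
                if not q:
                    continue                         # empty queues are skipped
                deficits[i] += quanta[i]             # add the quantum in each round
                while q and deficits[i] > 0:         # serve while the counter is positive
                    packet = q.popleft()
                    deficits[i] -= packet            # decrease by the packet length
                    sent.append((i, packet))
        return sent

    voice = deque([200] * 6)                         # assumed small voice packets
    data = deque([1500] * 6)                         # assumed large data packets
    print(drr([voice, data], quanta=[600, 1500], rounds=3))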

1.5.5. QUEUE MANAGEMENT/CONGESTION AVOIDANCE

Another important aspect of QoS is to control packet loss. This is achieved

mainly through queue management. Packets get lost for two reasons: either they

become corrupted in transit or are dropped at network nodes – such as routers –

due to network congestion, i.e. because of overflowing buffers. Loss due to

damage is rare (<<0.1%), therefore packet loss is often a sign of network

congestion. To control and avoid network congestion, certain mechanisms must

be employed both at network end-points and at intermediate routers. At network

endpoints, the TCP protocol, which uses adaptive algorithms such as slow start,

additive increase and multiplicative decrease, performs this task well. Inside

routers, queue management must be used so as to ensure that high priority packets

are not dropped and arrive at their destination properly.

Traditionally, packets are dropped only when the queue is full. Either

arriving packets are dropped (tail drop), the packets that have been in the queue

for the longest time period are dropped (drop front) or a randomly chosen packet

is discarded from the queue. TCP stacks react to multiple packet drops by severely

reducing their window sizes, thus causing severe interruption to higher layer

applications. In addition, multiple packet drops can lead to a phenomenon called

“global synchronization” whereby TCP stacks become globally synchronized

resulting in a wave-like traffic flow. Figure 1-7 shows the effect of global

synchronization:


[Figure 1-7 plots the throughput of three TCP flows against time: each time queue utilization reaches 100% a tail drop occurs and all flows cut their throughput simultaneously, producing a wave-like pattern.]

Fig. 1-7: Global Synchronization

Three TCP traffic flows start at different times at low data rates. As time

passes, TCP increases the data rate. When the queue buffer fills up, several tail

drops occur and all three traffic flows drop their throughput abruptly. This is

highly inefficient, because the more TCP flows there are, the sooner the buffers

will fill up and thus the sooner a tail drop will occur, disrupting all TCP flows.

Another problem is that after the first tail drop it is very likely that all three TCP

flows will start almost simultaneously, which will cause the buffers to fill up

quickly and lead to another tail drop.

Furthermore, there are two drawbacks with drop-on-full, namely lock-out

and full queues. Lock-out describes the problem that a single connection or a few

flows monopolize queue space, preventing other connections from getting room in

the queue. The “full queue” problem refers to the tendency of drop-on-full

policies to keep queues at or near maximum occupancy for long periods. Lock-out

causes unfairness of resource usage while steady-state large queues result in

longer delay times.


To avoid these problems, active queue management is required, which

drops packets before a queue becomes full. It allows routers to control when,

which and how many packets to drop [27]. The two most important queue

management mechanisms are the following:

Random Early Detection (RED): RED randomly drops packets queued on

an interface. As a queue reaches its maximum capacity, RED drops

packets more aggressively to avoid a tail drop. It throttles back flows and

takes advantage of TCP slow start. Rather than tail-dropping all packets

when the queue is full, RED manages queue depth by randomly dropping

some packets as the queue fills past a certain threshold. As packets drop,

the applications associated with the dropped packets slow down and go

through TCP slow start, reducing the traffic destined for the link and

providing relief to the queue system. If the queue continues to fill, RED

drops more packets to slow down additional applications in order to avoid

a tail drop. The result is that RED prevents global synchronization and

increases overall utilization of the line, eliminating the sawtooth utilization

pattern [28]. RED is described in [29].

Weighted Random Early Detection (WRED): Weighted RED (WRED) is

a variant of RED that attempts to influence the selection of packets to be

discarded. There are many other variants of RED, mechanisms that try to

prevent congestion before it occurs by dropping packets according to

specific criteria (traffic conforms/does not conform), thus ensuring that

mission-critical data is not dropped and gets priority. WRED is a

congestion avoidance and control mechanism whereby packets will be

randomly dropped when the average class queue depth reaches a certain

minimum threshold. As congestion increases, packets will be randomly


dropped (with a rising drop probability) until a second threshold (the max-threshold) is reached, at which point packets are dropped with the maximum drop probability, i.e. the inverse of the mark probability denominator. Above max-threshold, packets are tail-dropped.

Figure 1-8 below depicts the WRED algorithm:

Fig. 1-8: WRED Algorithm (packet drop probability plotted against average queue size for two example profiles: min-threshold 20, max-threshold 50, mark probability denominator 1; and min-threshold 55, max-threshold 70, mark probability denominator 10)

WRED will selectively instruct TCP stacks to back-off by dropping

packets. Obviously, WRED has no influence on the sending rate of UDP-based applications (besides the fact that their packets will be dropped as well).

The average queue depth is calculated using the following formula:

new_average = (old_average * (1 - 2^(-e))) + (current_queue_depth * 2^(-e))

The “e” is the “exponential weighting constant”. The larger this constant,

the slower the WRED algorithm will react. The smaller this constant, the faster

the WRED algorithm will react, i.e. the faster it will begin dropping packets after

the specified threshold has been reached.
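To put the formula into perspective (an illustrative calculation only): with e = 1 the weight 2^(-e) equals 0.5, so each new sample moves the average halfway towards the current queue depth, whereas with e = 9 the weight is only 1/512, so the average follows the instantaneous depth far more slowly and short bursts barely affect it.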


The exponential weighting constant can be set on a per-class basis. The

min-threshold, max-threshold and mark probability denominator can be set on a

per precedence or per DSCP basis.

The mark probability denominator should always be set to 1 (100 % drop

probability at max-threshold).
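As an illustration only, the following minimal sketch shows how such per-DSCP WRED parameters might be configured through the Cisco Modular QoS CLI used later in this dissertation; the policy and class names as well as the threshold values are hypothetical placeholders:

policy-map wred-example
 class data-class
  bandwidth percent 40
  random-detect dscp-based
  random-detect exponential-weighting-constant 1
  random-detect dscp 10 20 50 1
! DSCP 10: min-threshold 20, max-threshold 50, mark probability denominator 1

With these (arbitrary) values, packets marked with DSCP 10 start being randomly dropped once the average queue depth of the class exceeds 20 packets and are dropped with certainty beyond 50 packets.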

It should be stressed that tuning QoS parameters is never a straightforward

process and the results depend on a large number of factors, including the

offered traffic load and profile, the ratio of load to available capacity, the behavior

of end-system TCP stacks in the event of packet drops etc. Therefore, it is

strongly recommended to test these settings in a testbed environment using

expected customer traffic profiles and to tune them, if required.

1.5.6. TRAFFIC ENGINEERING

Traffic engineering is not an actual mechanism, but can be considered as

the process of designing and setting up a network’s control mechanisms (e.g.

routing) in such a way that all network resources are used effectively, i.e. the

effective design and engineering of the network paths, traffic load balancing,

creating redundancy etc. In other words it refers to the logical configuration of the

network. The task of traffic engineering is to relieve congestion and to prevent

concentration of high priority traffic. It essentially provides good QoS even

without the implementation of another mechanism.

Effective traffic engineering provides QoS by recognizing congested or

highly loaded network paths and redirecting traffic through idle network paths,

thus avoiding congestion, optimizing network load balancing and effectively

utilizing the available network resources.


1.6. QOS ARCHITECTURES

QoS architectures are architectural models proposed and established by the

Internet Engineering Task Force (IETF) that utilize the above mentioned

mechanisms (or combinations thereof) to provide the required services.

1.6.1. INTEGRATED SERVICES

The IntServ architecture was an effort by the IETF to expand the Internet’s

service model in order to meet the requirements of emerging applications, such as

voice and video. It is the earliest of the procedures defined by the IETF for

supporting QoS.

Its main purpose is to define a new enhanced Internet service model as

well as provide the means for applications to express end-to-end resource

requirements with support mechanisms in network devices [30].

IntServ assigns a specific flow of data to a so-called “traffic class”, which

defines a certain level of service. It may, for example, require only “best-effort”

delivery, or else it might impose some limits on the delay. Two services are

defined for this purpose: guaranteed and controlled load. Guaranteed service,

defined in [31], provides deterministic delay guarantees, whereas controlled load

service, defined in [32], provides a network service similar to that provided by a

regular best-effort network under lightly loaded conditions [33].

An integral part of IntServ is RSVP. Once a class has been assigned to the

data flow, a “PATH” message is forwarded to the destination to determine

whether the network has the available resources (transmission capacity, buffer

space etc.) needed to support that specific Class of Service (CoS). If all devices

along the path are found capable of providing the required resources, the receiver

generates a “RESV” message and returns it to the source indicating that the latter


may start transmitting its data. The procedure is repeated continually to verify that

the necessary resources remain available. If the required resources are not

available, the receiver sends an RSVP error message to the sender. The IETF

specified RSVP as the signaling protocol for the IntServ architecture. RSVP

enables applications to signal per-flow QoS requirements to the network [34].

Theoretically, this continuous checking of available resources means that

the network resources are used very efficiently. When the resource availability

reaches a minimum threshold, services with strict QoS requirements will not

receive a “RESV” message and will know that the QoS is not guaranteed.

However, although IntServ has some attractive aspects, it does have its

problems. It has no means of ensuring that the necessary resources will be

available when required. Another problem is that it reserves network resources on

a per-flow basis. If multiple flows from an aggregation point all require the same

resources, the flows will nevertheless all be treated individually. The “RESV”

message must be sent separately for each flow. This means that as the number of

flows increases, so does the RSVP overhead, and thus the service level degrades. In other words, IntServ does not scale well, and so wastes network

resources [35].

1.6.2. DIFFERENTIATED SERVICES

DiffServ is an architecture that addresses QoS requirements in a

connectionless environment. DiffServ’s specification can be found in [36]. Its

main purpose is to standardize a set of QoS building blocks with which providers

can implement QoS enhanced IP services. DiffServ QoS is meant to be

implemented at the network edge by access devices and then supported across the

backbone by DiffServ-capable routers. Since it operates purely at Layer 3,


DiffServ can be deployed on any Layer 2 infrastructure. DiffServ and non-

DiffServ routers and services can be mixed in the same environment.

DiffServ eliminates much of the complexity that developed from IntServ’s

use of the Resource Reservation Protocol (RSVP) to set up and tear down QoS

reservations. DiffServ eliminates the need for RSVP and instead makes a fresh

start with the existing Type of Service (ToS) field found in the header of IPv4 and

IPv6 packets, as illustrated below. IP precedence resides in bits P2-P0, ToS is bits

T3-T0 and the last bit is reserved for future use (currently unused).

P2 P1 P0 T3 T2 T1 T0 CU

Fig. 1-9: ToS Byte

The ToS field was originally intended by IP’s designers to support

capabilities much like QoS – allowing applications to specify high or low delay,

reliability, and throughput requirements – but was never used in a well-defined,

global manner. But since the field is a standard element of IP headers, DiffServ is

reasonably compatible with existing IP equipment and can be implemented to a

large extent by software/firmware upgrades.

DiffServ takes the ToS field (eight bits), renames it as the DS (DiffServ) byte, specified in [37], and restructures it to define IP QoS parameters.

Figure 1-10 illustrates the DS byte, where the DSCP is made up of bits DS5-DS0

and the last two bits are currently unused:

DS5 DS4 DS3 DS2 DS1 DS0 CU CU

Fig. 1-10: DS Byte


The DSCPs defined thus far by the IETF are the following (please note

that class selector DSCPs are defined to be backward compatible with IP

precedence) [38]:

Class Selector    DSCP Value
Default           000000
Precedence 1      001000
Precedence 2      010000
Precedence 3      011000
Precedence 4      100000
Precedence 5      101000
Precedence 6      110000
Precedence 7      111000

Table 1-3: DSCP Class Selectors
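Expressed in decimal, each class selector DSCP is simply eight times the corresponding IP precedence value; precedence 5, for example, maps to 101000, i.e. DSCP 40.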

The DSCP will determine how routers handle a packet. Instead of trying

to identify and manage IP traffic flows, as RSVP must do, DiffServ brings QoS

consideration to the normal IP practice of packet-by-packet, hop-by-hop routing.

Service decisions are made according to parameters defined by the DS byte, and

are manifested in Per-Hop Behavior (PHB). For example, the default PHB in a

DiffServ network is traditional best-effort service, using first-in first-out (FIFO)

queuing. Other PHBs are defined for higher CoSs. One example is expedited

forwarding (EF). The EF PHB has been defined in [39]. EF defines premium

service and the recommended DSCP value is 101110. When EF packets enter a

DiffServ router, they are meant to be handled in short queues and quickly serviced

with top priority to maintain low latency, packet loss, and jitter. Another PHB,

one that allows variable priority but still ensures that packets arrive in the proper

order, is assured forwarding (AF). The specification of AF can be found in [40].


AF defines four service levels, with each service level having three drop

precedence levels. As a result, AF PHB recommends 12 code points, as shown in

Table 1-4 [41]:

Drop Precedence   Class 1   Class 2   Class 3   Class 4
Low               001010    010010    011010    100010
Medium            001100    010100    011100    100100
High              001110    010110    011110    100110

Table 1-4: AF PHB
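These code points are commonly written as AFxy, where x denotes the class (1-4) and y the drop precedence (1-3); AF11, for example, is the code point 001010, i.e. DSCP 10 in decimal, the value used for the Assured class in the case study later in this dissertation.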

DiffServ standardizes PHB classification, marking and queuing

procedures, but it leaves equipment vendors and service providers free to develop

PHB flavors of their own. Router vendors will define parameters and capabilities

they think will furnish effective QoS controls and service providers can devise

combinations of PHB to differentiate their qualities of service [42].

As mentioned above, DSCP is backward-compatible with IP precedence.

Table 1-5 illustrates DSCP to IP precedence mappings:

DSCP     IP Precedence
0-7      0
8-15     1
16-23    2
24-31    3
32-39    4
40-47    5
48-55    6
56-63    7

Table 1-5: DSCP to IP Precedence Mappings
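The mapping follows directly from the bit layout: the IP precedence value is simply the three most significant bits of the DSCP, so the EF code point 101110 (DSCP 46), for example, maps to IP precedence 5 (101).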


1.7. MULTI-PROTOCOL LABEL SWITCHING

MPLS is a strategy for streamlining the backbone transport of IP packets

across a Layer 3/Layer 2 network. Although it does involve QoS issues, that is not

its main purpose. MPLS is focused mainly on improving Internet scalability

through better Traffic Engineering. MPLS will help to build backbone networks

that better support QoS traffic.

MPLS is essentially a hybrid of the network (Layer 3) and transport (Layer

2) structure, and may represent an entirely new way of building IP backbone

networks. MPLS is rooted in the IP switching and tag switching technologies that

were developed to bring circuit switching concepts to IP’s connectionless routing

environment.

In the current draft specification [43], MPLS edge devices add a 4-byte

(32-bit) label to the header of IP packets. Essentially an IP encapsulation scheme, the label provides routing information that allows packets to be steered over the

backbone using pre-established paths. These paths function at Layer 3 or can even

be mapped directly to Layer 2 transport such as ATM or Frame Relay. MPLS

improves Internet scalability by eliminating the need for each router and switch in

a packet’s path to perform conventional address lookups and route calculation.

MPLS also permits explicit backbone routing, which specifies in advance

the hops that a packet will take across the network. Explicit routing will give IP

traffic a semblance of end-to-end connections over the backbone. This should

allow more deterministic, or predictable, performance that can be used to

guarantee QoS.

The MPLS definition of IP QoS parameters is limited. Out of 32 bits total,

an MPLS label reserves just three bits for specifying QoS. Label-switching routers

will examine these bits and forward packets over paths that provide the


appropriate QoS levels. But the exact values and functions of these so called

“experimental bits” remain yet to be defined.
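For reference (a detail not given in the text above, but following the commonly documented MPLS shim header layout), the 32-bit label entry consists of a 20-bit label value, the three experimental (EXP) bits referred to here, a one-bit bottom-of-stack flag and an eight-bit time-to-live field.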

MPLS is focused more on backbone network architecture and traffic

engineering than on QoS definition. In an ATM network using cell-based MPLS,

MPLS tags can be mapped directly to ATM virtual circuits (VCs) that provide

ATM’s strong QoS and class of service capabilities. For example, the MPLS label

could specify whether traffic requires constant bit rate (CBR) or variable bit rate

(VBR) service, and the ATM network will ensure that guarantees are met [44].

1.8. IP QOS: A CRITICAL EVALUATION

Up to now we have defined the term QoS, we have analyzed and discussed

mechanisms that provide QoS and we have given an overview of the two QoS

frameworks. The questions that arise at this point are the following: Is QoS really

needed in today’s networks? Does the overprovisioning of bandwidth not solve

the problem? Which framework is more viable in today’s IP networks – mainly

the Internet – and why? This comparison and critical view of the two frameworks

is based on the literature survey, which means that the conclusion is based on

theory. A critical comparison of the two frameworks will display the author’s

view on this issue.

One might think that any issues, such as network congestion, can be

overcome by simply upgrading the existing lines to provide more bandwidth.

While this is essentially the first step to provide QoS, it is the author’s view that

bandwidth overprovisioning might not always be possible, firstly because the

customer might not want to pay more and secondly because there still might be

applications that would use the whole bandwidth, such as peer-to-peer file sharing

applications, high-definition streaming video and so on. In essence, no matter how


much bandwidth is available, there will always be applications that will make use

of this bandwidth and therefore there will always be the need for mechanisms to

provide QoS for the various types of applications.

Of course, just upgrading bandwidth is the simplest way to provide some

sort of QoS. By implementing schemes and mechanisms, the complexity of the

network infrastructure will increase. Implementing mechanisms and frameworks

that provide QoS in a network means adding complexity to the network, making it

more difficult and challenging to design, configure, operate and monitor. So, is it

worth it to add complexity to the network? Wouldn’t it be more efficient to

increase the bandwidth every time it reaches the network’s limit? In the author’s

opinion, the answer is no, because – as mentioned above – the customer may not

be always willing to pay, even if bandwidth becomes abundant and cheap.

Furthermore, any network is bound to grow in size, which means more users will

access it and as a result bandwidth will be eaten up more quickly. It is obvious

that at some point, even if all available mechanisms and frameworks will be

employed in the most efficient way (which obviously is very difficult) to provide

QoS, the bandwidth will have to be increased eventually, not drastically, but

gradually. One can say that the frameworks that provide QoS simply enhance and

prolong the lifespan of an existing network, without the need for upgrading

existing lines in very short time periods. Therefore, these frameworks and

mechanisms are essential for designing and deploying a modern multi-service

network, even if it increases its complexity. However, the most fundamental

problem of any complex network is that it is very difficult for network managers

to specify the parameters needed in order to provide QoS and that these

parameters must be monitored constantly and adjusted to keep up with the traffic

shifts. Nevertheless, the architectures and mechanisms that provide QoS are


beneficial for both customers and service providers, therefore complexity is not a

strong enough reason not to implement such architectures. Besides, a combination

of slight capacity overprovisioning and a “light” implementation of QoS

mechanisms keeps a balance between network complexity and effectiveness and

is the ideal solution.

Another issue is of course that by implementing QoS mechanisms and

frameworks, ISPs are able to offer different service levels to their customers,

ranging from best-effort to premium. This opens a whole new world of

opportunity for both ISPs and customers, because – to put it simply – the

customer is able to get what he/she wants or needs and the ISP is able to offer the

customer the desired service level. Without these frameworks, this would not be

possible, even by just increasing bandwidth, because – as mentioned earlier – IP

does not have the ability to distinguish between different classes of service and

therefore ISPs would not be able to provide assurance to their customers as to the

offered service quality. The economic aspect of QoS should not be underestimated

in the modern world, as customers' requirements in today's Internet are very

diverse. This, in the author’s opinion, is another reason why QoS is essential for

today’s IP networks.

The next question is obviously which framework is more viable for

today’s IP networks. In order to be able to answer this, one must first consider

each framework’s advantages and disadvantages. In the author’s opinion,

DiffServ is the architecture for the future, but why?

When IntServ was first introduced, it was initially considered to be the

framework that would provide efficient QoS in the Internet. However, several

characteristics of this framework made this assumption difficult and IntServ soon

became somewhat “obsolete” for large-scale IP networks.


Firstly, IntServ treats each traffic flow individually, which means that all

routers along a path must keep current flow-state information for each individual

flow in their memory. This approach does not scale well in the Internet core,

because there are thousands, even millions of individual flows. As a result,

processing overhead and flow-state information increases proportionally to the

number of flows. In the Internet, such behavior is unacceptable and highly

inefficient because of the very large number of flows. This treatment would

essentially render any effort to provide QoS “non-existent”, because the resources needed for reservations would not be enough for every flow and, as a result, QoS could not be provided for all of them.

IntServ is not able to cope with a large number of flows at high speeds.

Secondly, resource reservation in the Internet requires the cooperation of

multiple service providers, i.e. the service providers must make agreements for

accounting and settlement information. Such infrastructures do not exist in

today’s Internet and it is highly unlikely that they will be implemented in the

future. We have to keep in mind that there are simply too many ISPs for such a

solution to be viable, because when multiple ISPs are involved in a reservation,

they have to agree on the charges for carrying traffic from other ISPs’ customers

and settle these charges between them. It is true that a number of ISPs is bound by

bilateral agreements, but to extend these agreements to an Internet-wide policy

would be impossible because of the sheer number of ISPs.

Moreover, IntServ’s usefulness was further compromised by the fact that

RSVP was for a very long period in experimental stage. As a result, its

implementation in end systems was fairly limited and even today there are very

few applications that support it. What is the best technology worth (at least on


paper), if there are no applications that support it? The answer is obvious and this

is also a severe disadvantage of the IntServ architecture.

Finally, IntServ focused mainly on multimedia applications, leaving

mission-critical ones out. The truth is that even though multimedia applications

(videoconferencing or VoIP) are important for today’s corporate networks,

mission-critical applications and their efficient operation are still the most

important ones. After all, that is what their name implies: “mission-critical”,

critical for the company’s growth, expansion and survival. IntServ did not offer

the versatility to effectively support both types of application on a large-scale

level.

At this point one might ask where and how the IntServ framework should

be used and implemented. The answer is that IntServ might be viable in small-

scale corporate networks because of their limited size and – as a result – limited

number of traffic flows. Furthermore, the above mentioned settlement issues

disappear. However, the author’s opinion on this matter is that IntServ might be a

good solution for such types of networks, but what if these networks extend

beyond a single site and require WAN access, which is the case in most corporate

networks nowadays? Then we have to face the fact that IntServ does not scale

well and a WAN means an increased number of users and traffic flows. The

IntServ architecture simply was not designed for today’s networks and therefore is

not a viable solution. IntServ looks good on paper, i.e. in its specification, but in

the real world its implementation range is very limited.

Since the author considers IntServ a viable solution for today's Internet only in a few specific cases, the answer to the question “which architecture is more

viable and effective” is obvious: DiffServ is the architecture that will be able to


provide effective and sufficient QoS in the future. But what are DiffServ’s strong

points and advantages over IntServ?

DiffServ relies on provisioning to provide resource assurance. Users’

traffic is divided into pre-defined traffic classes. These classes are specified in the

SLAs between customer and ISP. The ISP is therefore able to adjust the level of

resource provisioning and control the degree of resource assurance to the users.

What this means in essence is that each traffic flow is not treated separately, like

in IntServ, but as an aggregate. This of course makes the DiffServ architecture

extremely scalable, especially in a network where there are several thousands of

traffic flows, because DiffServ classifies each traffic flow into one of (usually)

three traffic classes and treats them according to pre-specified parameters. This

gives ISPs greater control of traffic handling and enables them to protect the

network from misbehaving sources. The downside is that because of the dynamic

nature of traffic flows, exact provisioning is extremely difficult, both for the

customer and the ISP. In order to be able to offer reliable services and

deterministic guarantees, the ISP must assign proper classification characteristics

for each traffic class. This, on the other hand, requires that the customer knows

exactly what applications will be used and what kind of traffic will enter the

network. As a result, network design and implementation is generally more

difficult, but in the author’s opinion it is worth the effort because once resources

have been organized efficiently, ISPs are able to deliver their commitments

towards the customers and minimize the cost. The customer can rest assured that

he/she will get what he/she wants for the agreed price. This means that cost-

effectiveness of services is increased and both ISPs and customers can be satisfied

with the result. The DiffServ model therefore is a more viable solution for today’s

IP networks, even small-scale ones.


The author’s view is that DiffServ’s treatment of flows as aggregates

rather than as individual flows fully justifies its superiority over IntServ, at least for the

large majority of today’s IP networks. IntServ’s capabilities, compared to the ones

of DiffServ, are fairly limited and can be employed only for specific cases.

DiffServ is extremely flexible and scalable. It is true that DiffServ is more

difficult to implement than IntServ because of its nature, but once it has been

setup properly, it is beneficial for all the parties involved. Additionally, it enables

ISPs to develop new services over the Internet and attract more customers to

increase their return. Thus, DiffServ’s economical model is more viable,

profitable and flexible than IntServ’s and this is – in the author’s opinion – a very

important advantage.

The conclusion to all this is that good network design, simplicity and

protection are the keys for providing QoS in a large-scale IP network. Good

network design plus a certain degree of overprovisioning network capacity not

only makes a network more robust against failure, but also eliminates the need for

overly complex mechanisms designed to solve these problems. Implementing the

DiffServ architecture keeps the network simple because in most cases three traffic

classes (premium, assured, best-effort) are sufficient to meet almost every

customer’s requirements under various network conditions. As a result, the

network is utilized more efficiently and the balance between complexity and

providing QoS is kept at its optimum level.

We will proceed by presenting a case study, on which the practical part of

the dissertation will be based. We will discuss the problems and issues faced

during the design and implementation stage and how they were overcome.


CHAPTER 2: CASE STUDY: DESIGN AND IMPLEMENTATION

OF A QOS-ENABLED NETWORK

In order to test and observe QoS mechanisms, a network that supports QoS

functions must be designed and set up. For this purpose, a case study will be used

that involves specific applications that require QoS as well as network devices

(routers) that support QoS functions and protocols.

In our case study we will assume that a customer wants to build a network

that connects four different LANs located at different sites. The budget is limited

and as a result most connections will have limited bandwidth. The customer wants

to use VoIP, videoconferencing, mission-critical, Web and E-mail applications.

Therefore, it is imperative to implement a framework that provides QoS for these

applications. The reason for this is that if all traffic were treated as best-effort,

real-time as well as mission-critical applications would suffer greatly and even

become unusable because of network congestion at various nodes in the network.

Another reason is that the available bandwidth is fairly limited and must therefore

be provisioned as efficiently as possible.

Since we already know the customer’s requirements, i.e. the applications

that will be used as well as the network topology design (see section 2.4), the first

issue regarding network design and QoS implementation is to decide which

framework to use in order to provide QoS and why.

The QoS framework to be used will be the DiffServ framework. DiffServ,

as mentioned above, is a standardized, scalable (treating aggregate rather than

individual flows) solution for providing QoS in a large-scale IP network like the

case study network topology. DiffServ is based on the following mechanisms:


Classification: each packet is classified into one of several pre-defined

CoSs, either according to the values of the IP/TCP/UDP header or

according to its already marked DSCP.

Marking: after classification, each packet is marked with the DSCP value

corresponding to its CoS (if not already marked).

Policing: control of the amount of traffic offered at each CoS.

Scheduling and Queue Management: each CoS is assigned to a queue of

a router’s egress interface. Prioritization is applied among these queues, so

that each CoS gets the desired performance level. To improve throughput,

in case queues are filling up, queue management mechanisms, like RED,

can be applied to these queues.

But why DiffServ? Since the network is a large-scale corporate network,

we have to assume that there will be a large number of traffic flows, especially

during peak times. As discussed in section 1.8, IntServ does not scale well and

would not be an efficient solution for the customer’s needs. DiffServ’s scalability

also suggests that it is “future-proof”, i.e. it can be more easily upgraded,

expanded and adapted to the customer’s needs.

Another reason is that the applications used do not support RSVP and

hence implementation of the IntServ architecture would not be the best solution.

In the author’s opinion, DiffServ’s scalability and flexibility is the main reason

why it is more appropriate in the context of the case study. A complete

justification for why the DiffServ architecture is preferable has been given above

in section 1.8. We will assume the same for our case study.


2.1. IDENTIFICATION OF APPLICATIONS

After deciding on the architecture, the next step is to identify the

applications the customer will use in the network. This is necessary in order to be

able to appropriately classify the traffic into CoSs.

The applications that will be used in the case study will be the following:

VoIP

Videoconference

Streaming video

Mission-critical data (e.g. database, client/server applications)

Internet (e-mail, web browsing etc.)

VoIP and videoconference are both real-time streaming applications that

use UDP and therefore require high levels of QoS (premium services). For

mission-critical traffic, which uses TCP, proper and timely delivery must be

ensured, thus this type of traffic will require assured services. Finally, the rest of

the traffic (such as web browsing or e-mail) will be treated as best-effort and

allocated the remaining bandwidth.

Policing has to be applied so that the first two CoSs (premium and

assured) do not claim all the bandwidth for themselves, thus leaving none for the

best-effort traffic.

Here we see the difficulty of properly designing and implementing CoSs.

Every aspect of the network traffic needs to be thoroughly considered so as to

ensure proper operation of all CoSs. In the author’s opinion, this poses the

greatest challenge in QoS implementation. It is also essential that the

customer knows exactly his/her traffic requirements, so that the service provider is

able to design precise CoSs according to the customer’s needs.


2.2. DEFINITION OF QOS CLASSES

The foundation of the DiffServ architecture is the definition of the Classes

of Service (CoSs) to be supported by the network and the mapping of each CoS to

a DSCP value. This part of QoS implementation is probably the most important

one, because if CoSs are not chosen and designed properly and precisely, QoS

provision will be inefficient and will not provide the expected service level.

Therefore, it is essential to carefully plan and design the CoSs that will be used

and assign traffic appropriately.

We have chosen to classify traffic with DSCP rather than with IP

precedence because DSCP offers more flexibility (i.e. more granular treatment of

traffic) than IP precedence and is therefore more appropriate for the customer’s

requirements in our case study. It is also the author’s opinion that DSCP is far

more effective than IP precedence in a DiffServ-capable network like the one that

will be used in the case study.

Knowing that real-time traffic has very specific requirements and is very

sensitive to the four network performance parameters discussed in section 1.3, we

can classify such traffic with the highest DSCP, which is EF (46). Furthermore,

real-time traffic must get strict priority over all other traffic. This will ensure

proper operation of the applications. Thus, all real-time traffic will be classified as

premium.

Mission-critical traffic will receive assured services (AF), because it is not

as delay- and jitter-sensitive as real-time traffic. Loss rate for this kind of traffic

must still be kept as low as possible.

Finally, the rest of the traffic (Web, e-mail) will be treated as best-effort,

but we still must ensure that some bandwidth remains for this type of traffic, as

mentioned in the previous section.


After the identification of applications, careful consideration of the above

factors and with regard to the general application requirements outlined in Table

1-1, the CoSs that have been defined for our case study are presented below:

Class of Service   Applications                               Performance Level                                     DSCP Value
Premium            VoIP, videoconference, video streaming     Low delay, low jitter, low loss rate                  101110 (46)
Assured            Mission-critical data                      Controlled delay, controlled jitter, low loss rate    001010 (10)
Best-effort        Rest of traffic                            Uncontrolled (best-effort)                            000000 (0)

Table 2-1: Case Study Classes of Service

At implementation stage it is very important to identify what protocol each

application uses. Real-time applications are UDP-based whereas mission-critical

applications are TCP-based. This fact will have an impact on how the router will

handle the traffic at interface egress and will be discussed later on. Moreover,

“Controlled delay” implies delay similar to that of an uncongested network.

In the network testbed, all DiffServ mechanisms will be implemented in

the network’s routers. No mechanism is to be implemented in the end systems, i.e.

no host-based marking will be implemented. It is obvious that proper

configuration of network devices is also very important and will be discussed later

on.


2.3. ROLES OF INTERFACES

It is very important to understand that a router’s interface in a network can

have one of two possible roles. It can either be an edge or a core interface.

Depending on the role of a router interface, different mechanisms have to be

applied at that interface, because traffic has different characteristics when entering

an edge interface than when entering a core interface. These differences are

discussed below.

This difference also plays an important role at configuration stage. Which

mechanisms need to be applied at what point and why? These are all issues that

we have to face when implementing and configuring the network.

Edge Interface:

These are interfaces receiving/sending traffic from/to end-systems, i.e.

there is no other layer 3 device, such as a router, between the interface and the

end-systems. End-systems will send unmarked traffic through these interfaces,

thus ingress traffic at these interfaces will be classified, marked with a DSCP

value and policed.

Policing on ingress is necessary for the Premium traffic. This traffic will

have absolute priority in the scheduling schemes applied in the network, therefore

it is necessary to control the amount of Premium traffic offered to the network in

order to avoid starvation of the other CoSs. Also, although not necessary, Assured

traffic will be policed as well in order to guarantee a minimum amount of

bandwidth for the Best-effort traffic.

At egress, these interfaces will classify packets according to their DSCP

values and will apply scheduling and queue management mechanisms. In

particular, Premium traffic, which is UDP-based, will receive strict queue priority


over all other traffic; Assured traffic will be treated with WRED, so as to

maximize link utilization and avoid TCP global synchronization, since Assured

traffic is TCP-based. The rest of the traffic will be treated as best effort.

At this point we cannot exactly specify bandwidth requirements for all

three CoSs, because we must first test their behavior. An initial estimation is 20%

for Premium traffic, 35% for Assured traffic and 20% for the rest of the traffic.

The remaining 25% is left as headroom for protocol overhead and other unpredictable traffic that may occur. This is to minimize the possibility of link saturation and network congestion. If all 100% of the link bandwidth were allocated, any traffic unaccounted for would push the offered load beyond the link capacity and congestion would occur. This 25% can therefore be considered a “safety net” for the link.
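As a rough worked example using the percentages above (which remain to be validated during testing), on a 128Kbps serial link they translate to approximately 26Kbps for Premium, 45Kbps for Assured and 26Kbps for Best-effort traffic, with roughly 32Kbps of headroom; for comparison, a single uncompressed G.711 voice call alone requires about 64Kbps before IP/UDP/RTP overhead, which already suggests that the Premium share may have to be raised on the slowest links.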

These numbers are initial estimations and cannot be predicted beforehand

because of the heterogeneous nature of the network, i.e. the variety of link speeds.

For example, at a 128Kbps link 20% of the bandwidth for Premium traffic might

not be enough and may have to be increased. In that case, bandwidth for Assured

traffic might also have to be increased, but the remaining bandwidth may not be

enough for the best-effort traffic, i.e. the link will be too saturated for the best-

effort traffic to come through. These are design issues that are difficult to

overcome, therefore we must design the mechanisms as efficiently as possible.

One solution would be to upgrade the link to a higher speed, but we are assuming

that the customer does not want to spend money on such an upgrade.

Core interface:

These are interfaces receiving/sending traffic from/to other routers. Ingress

traffic at these interfaces will already be marked with a DSCP. Therefore, for

these interfaces there is just a need to classify traffic at egress according to DSCP


values and apply scheduling and queue management mechanisms. Premium traffic

will again get strict queue priority; Assured traffic will be serviced with WRED

with an exponential weighting constant of 1 so as to avoid TCP global

synchronization and maximize link utilization. Best-effort will be serviced with a

regular tail-drop mechanism.

Once again we cannot precisely specify bandwidth percentages for the

three CoSs, therefore we will estimate the same numbers as for the edge

interfaces. The precise numbers will have to be worked out during the testing

phase.

Figure 2-1 illustrates an example topology with core and edge interfaces

and which mechanisms will be applied at each:

Fig. 2-1: Edge and Core Interfaces (the edge interface performs classification, marking, policing and queuing; the core interfaces perform queuing only)

The figures that follow point out the differences between interface ingress

and interface egress of a router. Figure 2-2 is a highly simplified design of a

router’s operation. It should be noted that an interface’s ingress in one direction


acts simultaneously as the interface’s egress in the other direction and vice versa.

For reasons of simplicity only one traffic direction is shown:

Fig. 2-2: Simplified Router Schematic (data enters at the interface ingress, passes through the routing and forwarding functions and leaves via the interface egress)

A router consists therefore of three logical blocks. Figure 2-3 illustrates

the QoS actions that may be taken at the router’s interface ingress:

Fig. 2-3: Interface Ingress Schematic (incoming traffic is classified, marked and policed before leaving the ingress, with the metering function applied throughout)

As the traffic enters the interface, it is examined and classified, while being measured by the router's metering function. Then, the packets are marked according to the

pre-defined policies (in our case by their DSCP value) and then policed before

they exit the ingress. Throughout all these stages the traffic is “measured”, i.e.

identified according to the above mentioned criteria (source/destination IP and

port number, protocol, DSCP value). The traffic then is forwarded to the routing

and forwarding logical block of the router.


Figure 2-4 illustrates the QoS functions that may take place at interface

egress. As traffic enters, it can be re-classified according to its destination. It then

may be remarked and either policed or shaped. The metering function remains

active until the packet reaches the queue, where it is scheduled and finally exits

the interface and is sent on its way.

It should be noted that usually only queue management and scheduling

mechanisms are employed at interface egress, since the other functions have been

performed at interface ingress. Nevertheless, the figure shows that there is a

possibility for classification, marking and policing at egress in case it is needed.

Finally, as stated above, the location of the interface, i.e. whether it is an edge or a core interface, is also crucial for the decision of which QoS mechanisms to employ.

Fig. 2-4: Interface Egress Schematic (incoming traffic may be classified, marked and policed or shaped, then scheduled and queued before leaving the interface, with the metering function applied throughout)

2.4. CASE STUDY NETWORK TOPOLOGY

The customer’s network consists of several different routers linked

together at different link speeds and types. The network interconnects four

separate LANs at different locations. The applications that will be used at each LAN are shown in Figure 2-5 as appropriate icons, along with the customer's specifications

for the link speeds and types.


The first observation is that there are several 128Kbps links, a fact that

will complicate efficient design for QoS provisioning because of the very limited

bandwidth.

Figure 2-5 below shows the topology proposed by the customer:

Fig. 2-5: Case Study Network Topology (four LANs, each with workstations, video and VoIP telephones, and a database server at one of the sites, interconnected through a core network; LAN attachments are 100Mbit Fast Ethernet, while the serial links run at 1984Kbit, 1544Kbit, 256Kbit and 128Kbit)


2.5. LABORATORY NETWORK TOPOLOGY

The actual laboratory testbed differs from the case study topology in that it uses individual workstations instead of LANs, and only two of them rather than four.

Figure 2-6 shows the actual topology that will be used to simulate the case

study topology and execute all tests:

Fig. 2-6: Laboratory Network Topology (routers R2612, R2620, R2621, R3640 and R7206 interconnected over 100Mbit, 1984Kbit, 256Kbit and 128Kbit links, with two workstations, 163.105.1.200 and 163.100.140.169, running NetMeeting, Sniffer, IP Traffic and QPM; the interface designations and IP addresses are listed in Table 2-3)


The figure also shows router and interface designations, workstation IP

addresses as well as applications that will be used at each workstation. These

applications will be presented in chapter 3, except for QPM, which will be

presented and discussed in section 2.8. Router configurations will also be

discussed in section 2.8.

2.6. NETWORK DEVICES

As mentioned previously, all network devices are Cisco® Systems routers.

Several routers from different product families will be used for the laboratory

topology testbed, ranging from large-scale enterprise routers (Cisco 7206) to

medium-scale branch office routers (Cisco 2612). In order to gain a better

understanding of the capabilities of the different routers, a short description of

each product family follows:

2600 Series:

The Cisco 2600 series is a family of modular multiservice access routers,

providing LAN and WAN configurations, multiple security options and a range of

high performance processors.

Cisco 2600 modular multiservice routers offer versatility, integration, and

power. With over 70 network modules and interfaces, the modular architecture of

the Cisco 2600 series easily allows interfaces to be upgraded to accommodate

network expansion.

Cisco 2600 series routers are also capable of delivering a broad range of

end-to-end IP and Frame Relay-based packet telephony solutions.


3600 Series:

The Cisco 3600 Series is a family of modular, multiservice access

platforms for medium and large-sized offices and smaller Internet Service

Providers. With over 70 modular interface options, the Cisco 3600 family

provides solutions for data, voice, video, hybrid dial access, VPNs and multi-

protocol data routing. VoIP is fully supported.

7200 Series:

Cisco 7200 routers offer flexible connectivity options and a broad range of

feature support. The 7200 is a fast single-processor router and provides extensive

serviceability and manageability features. Some of the key features include:

Broad range of IP service support (QoS, broadband aggregation, security,

multiservice and MPLS).

Broad range of flexible, modular interfaces (from DS0 to OC12).

Support for Fast Ethernet, Gigabit Ethernet and Packet Over SONET

(POS).

Multi-protocol support

Scalability and redeployability

Table 2-2 below illustrates the routers that will be used, a short description

of each router, the corresponding host names that will be used for the simulations

and finally the Cisco Internetworking Operating System (IOS) version each

device has installed. IOS is the operating system that every Cisco router uses.

Depending on the IOS version, different features are supported and therefore can

be used and enabled, i.e. earlier IOS versions do not support QoS features.


Model              Description                                                                Host Name   IOS Version
Cisco Router 2612  10/100 Base-T & Token Ring router                                          R2612       12.2(10a)
Cisco Router 2620  10/100 Base-T router with 2 WIC slots & 1 network module slot              R2620       12.2(10a)
Cisco Router 2621  Dual 10/100 Base-T router with 2 WIC slots & 1 Network Module (NM) slot    R2621       12.2(10a)
Cisco Router 3640  4-slot modular router with IP software                                     R3640       12.2(7c)
Cisco Router 7206  7206VXR with Network Processing Engine NPE-400 and I/O controller
                   with 2 Fast Ethernet/Ethernet ports                                        R7206       12.1(5a)E

Table 2-2: Network Devices, Descriptions and IOS Versions

Table 2-3 shows the interfaces that each router will use, their

corresponding IP addresses and the interface link speeds. The interface expression

format is as follows: [Interface Type] Slot Number/Port Number(:channel group),

e.g. S1/0:0 means Serial Interface on Slot 1, Port 0, channel group 0 (because the

line is a channelized E1 line). The channel group is configured in the router and

may contain any number of channels from a channelized E1/T1 line from 0 to 31.

For example, a configuration could be that channel group 0 uses channels 1 to 4

etc.


Router   Interfaces                  Interface IP Addresses   Link Speed
R2612    Serial 0/0 (S0/0)           132.1.40.2               256Kbit
         Serial 0/2:0 (S0/2:0)       132.1.10.2               1984Kbit
         Serial 0/3:1 (S0/3:1)       163.100.30.2             128Kbit
R2620    Serial 0/0 (S0/0)           132.1.40.1               256Kbit
         Serial 0/1 (S0/1)           163.100.130.1            128Kbit
         Serial 0/2:0 (S0/2:0)       163.100.30.1             128Kbit
         Serial 0/3:1 (S0/3:1)       132.1.20.1               1984Kbit
R2621    Serial 0/2:0 (S0/2:0)       132.1.20.2               1984Kbit
         Serial 0/3:1 (S0/3:1)       132.1.30.1               1984Kbit
R3640    Ethernet 0/0 (E0/0)         163.100.140.1            10Mbit
         Serial 2/0 (S2/0)           163.100.130.2            128Kbit
R7206    Fast Ethernet 0/0 (FE0/0)   163.105.11.2             100Mbit
         Serial 1/0:0 (S1/0:0)       132.1.10.1               1984Kbit
         Serial 1/1:1 (S1/1:1)       132.1.30.2               1984Kbit

Table 2-3: Interfaces, IP Addresses and Interface Link Speeds

Table 2-4 illustrates each router’s loopback interface IP address that will

be used. This is helpful so as to distinguish the router itself from the interface IP

addresses:

Router   Loopback Interface
R2612    163.100.2.2
R2620    163.100.128.1
R2621    163.200.2.1
R3640    163.100.128.2
R7206    163.105.2.1

Table 2-4: Router Loopback Interface IP Addresses


An issue that we had to face during implementation stage was that the

routers we used had different IOS versions and some of them did not fully support

the DiffServ architecture, i.e. specific mechanisms. This poses a problem for

every network designer. In order to overcome this issue, we had to upgrade the

IOS versions of the routers in question by uploading the IOS files to the routers'

memory, thus enabling full support of the DiffServ architecture and all relevant

mechanisms.

2.7. PROTOCOLS AND QOS MECHANISMS

The QoS mechanisms that will be used, as outlined in section 2.1, are the

following:

1. Classification based on DSCP.

2. Marking with DSCP.

3. Policing of Premium traffic using Cisco’s token bucket scheme,

Committed Access Rate (CAR).

4. Class-Based Queuing (CBQ): each CoS will be assigned a queue at each

egress interface of a router; each CoS’s traffic will be serviced through the

corresponding queue. These queues will share the bandwidth of the

interface. Scheduling (bandwidth allocation) among the queues will be

done using a Class-Based Queuing scheme implemented by Cisco, Low-

Latency Queuing (LLQ). LLQ is the combination of Class-Based WFQ

and Priority Queuing, i.e. WFQ is used as the scheduling algorithm among

the CoS queues, but there is also a special queue (low-latency queue) that

has absolute priority over all other queues. This priority will be reflected

in the appropriate policies.
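To make the LLQ scheme more concrete, the following minimal sketch (expressed in the Modular QoS CLI commands discussed in the remainder of this section) shows what such a policy might look like; the class names correspond to the class-maps defined in section 2.7.1, the percentages follow the initial estimates of section 2.3, the interface is chosen arbitrarily, and all values remain to be refined during testing:

policy-map egress-llq
 class premium
  priority percent 20
 class assured
  bandwidth percent 35
  random-detect dscp-based
 class class-default
  fair-queue
!
interface Serial0/1
 service-policy output egress-llq

Here the Premium class is serviced from the strict-priority (low-latency) queue, the Assured class receives a bandwidth guarantee with WRED applied to it, and all remaining traffic falls into the default class.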


The QoS mechanisms will be implemented using commands of the Cisco

Modular QoS Command-Line Interface (MQCLI). A brief overview of the

mechanisms that will be used at core and edge interfaces as well as ingress or

egress follows:

1. Edge interfaces:

a) Ingress:

Classification

Marking

Policing

b) Egress:

Classification

Queue scheduling

Shaping

2. Core interfaces (only egress is needed since incoming packets will already

be marked with DSCP):

a) Egress:

Queuing

Scheduling

2.7.1. QOS MECHANISMS FOR EDGE INTERFACES

Classification

End systems will send unmarked traffic to edge router interfaces. There,

packets have to be classified according to various criteria (MF classification) to

the appropriate CoS, so as to be marked with the corresponding DSCP value.

DSCP marking will enable the correct treatment of these packets at subsequent

network nodes, so that the desired performance level is achieved.


Packets will be classified through the use of extended access lists (ACLs).

These ACLs can match packets on IP source/destination address, protocol type,

and UDP/TCP port numbers.

One ACL will be defined for each CoS. The criteria for the packets to be

matched by each ACL have to be agreed with the customer. For our case study we

will use arbitrary values and criteria for the ACLs.

The classified traffic types will subsequently be mapped in their respective

classes. For instance, considering that extended access list 105 is used for

Premium traffic, 101 for Assured traffic and 100 for Best effort traffic, the router

configuration for the edge interface would be the following:

class-map match-all premium

match access-group 105

class-map match-all assured

match access-group 101

class-map match-all best-effort

match access-group 100
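Purely for illustration, the three ACLs referenced above might be defined as follows; the protocol and port criteria are arbitrary placeholders rather than the customer's actual traffic profile:

! Premium: UDP traffic in a hypothetical real-time (RTP) port range
access-list 105 permit udp any any range 16384 32767
! Assured: TCP traffic to a hypothetical mission-critical database port
access-list 101 permit tcp any any eq 1521
! Best-effort: all remaining IP traffic
access-list 100 permit ip any any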

Marking

After being classified to the appropriate CoS, packets will be marked with

the corresponding DSCP value. Two distinct mechanisms can be used for

marking.

Class Based Marking (MQCLI): This is the recommended method of marking traffic that is not policed (e.g. best-effort). Class Based

Marking is part of the MQCLI method of configuring QoS parameters.

Committed Access Rate (CAR): This is the method for simultaneously

marking and policing traffic. In the policing section, it is recommended to

introduce a mechanism of in-profile and out-of-profile traffic for the

Assured traffic, i.e. conforming and non-conforming traffic. The reasons


behind this are explained in the policing section. CAR will be used as the

method to mark the Premium and Assured in-/out-of profile traffic types.

The following is the required example configuration for Class Based

Marking of the Best-effort traffic. Best-effort traffic will be marked with a DSCP

value of 0.

policy-map ingress-profile

class best-effort

set ip dscp 0

interface Ethernet 0/0

service-policy input ingress-profile

The following is the required example configuration for CAR marking of

the Assured traffic. The in-profile Assured traffic is marked as AF11 (DSCP 10),

while the out-of-profile Assured traffic is marked as Best-Effort (DSCP 0).

interface Ethernet0/0

rate-limit input access-group 101 256000 8000 16000

conform-action set-dscp-transmit 10 exceed-action

set-dscp-transmit 0

It must be noted that for Premium and Assured CoS, marking and policing

have to be treated together, i.e. they are implemented by the same set of

commands. For the complete example of marking and policing for Premium and

Assured CoS, see the examples at the end of the policing section (one using CAR

and one using MQCLI).

Policing

It is recommended to introduce a mechanism for policing the Premium

traffic. The main reasons behind this recommendation are twofold:


Well-behaving sources (where a source can be a whole site, a user or a specific application) should not be penalized by ill-behaving ones. A well-behaving source sends traffic into the network below the Committed Rate (CR) for each traffic class; an ill-behaving source sends traffic above the CR for a particular traffic class. The problem is that, if a well-behaving source and an ill-behaving source both send traffic to a receiver, congestion might occur on an egress interface towards that receiver. If there is no way of differentiating between the well-behaving and the ill-behaving traffic, traffic from the well-behaving site might be dropped instead of traffic from the ill-behaving site. The introduction of a policing mechanism at the ingress edge router interface will prevent this.

The introduction of in-/out-of profile traffic policies will facilitate the

capacity planning of the backbone network, which is shared among the

different sources. Indeed, the shared backbone network needs to be

engineered and capacity-planned only for the in-profile part of the traffic.

When QoS mechanisms are deployed in the core backbone network due to

possible backbone congestion, it will be possible to differentiate the out-

of-profile traffic from the in-profile traffic and as a result, discard the out-

of-profile traffic earlier and avoid congestion.

For the Assured CoS, in-profile traffic will be transmitted marked with the

correct DSCP (10), while out-of-profile traffic will be transmitted with a lower

priority DSCP, such as DSCP 0 (best-effort). Thus, out-of-profile Assured traffic

is not immediately dropped, but its service level is degraded.

For the Premium CoS, the in-profile traffic will be transmitted marked

with the correct DSCP (46). Due to the nature of the applications that require


Premium CoS (VoIP/videoconferencing/real-time), there is no point in

transmitting out-of-profile Premium traffic with degraded service level. Such

traffic can be dropped. Once again, the amount of traffic permitted for each traffic

class at each ingress interface has to be agreed with the customer.

The policing function will be implemented through CAR. Cisco’s CAR

can be used to rate-limit traffic based on certain matching criteria, such as

incoming interface, DSCP, QoS group or IP access list criteria. CAR provides

configurable actions, such as transmit, drop, set precedence or set QoS group,

when traffic conforms to or exceeds the rate limit.

CAR performs two QoS functions:

Bandwidth management through rate limiting. This function controls the

maximum rate for traffic transmitted or received on an interface. CAR is

often configured on interfaces at the edge of a network to limit traffic into

or out of the network. Traffic that falls within the rate parameters is

transmitted, while packets that exceed the acceptable amount of traffic are

dropped or transmitted with a different priority.

Packet classification through DSCP and QoS group setting. This function

partitions the network into multiple CoSs.

Furthermore, a few important aspects surrounding the CAR

implementation should be understood:

CAR propagates bursts to a certain extent. It does not shape the traffic

flow and as such does not cause any packet delay.

CAR bandwidths need to be configured in 8Kbps multiples.

CAR bandwidths are pure Layer 3 IP. No Layer 2 overhead is included.

The CAR rate-limit configuration requires the setting of the “normal-

burst” (NB) and “excess-burst” (EB) parameters. These are parameters used in


CAR’s token bucket algorithm. A token bucket itself has no discard or priority

policy. The concept of a token bucket works as follows:

Tokens are put into the bucket at a certain rate.

Each token is permission for the source to send a certain number of bits.

To send a packet, the traffic regulator must be able to remove from the

bucket a number of tokens corresponding to the packet size in bytes.

If not enough tokens are in the bucket to send a packet, the packet either

waits until the bucket has enough tokens (in the case of shaping) or the

packet is discarded or marked down (in the case of policing).

The bucket itself has a specified capacity. If the bucket fills to capacity,

newly arriving tokens are discarded and are not available to future packets.

Thus, at any time, the largest burst a source can send into the network is

roughly proportional to the size of the bucket. A token bucket permits

burstiness, but bounds it.

NB is the number of bytes allowed in a burst before some packets will

exceed the rate limit. Larger bursts are more likely to exceed the rate limit. EB is

the number of bytes allowed in a burst before all packets will exceed the rate

limit.
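
As a simple illustration of these parameters, consider the 256Kbps Assured rate from the CAR marking example above, with NB 8000 bytes and EB 16000 bytes:

256,000 bit/s = 32,000 bytes/s
time to refill NB from empty = 8,000 bytes / 32,000 bytes/s = 0.25 s

In other words, after a quarter of a second of silence the bucket is full again, so a back-to-back burst of roughly 8,000 bytes can still be sent as conforming traffic; bursts beyond that, up to the 16,000-byte EB, are increasingly likely to be treated as exceeding the rate limit.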

The following table identifies the recommended CAR NB and EB values

as a function of the access link speed:


Link Speed (Kbps) NB (bytes) EB (bytes)

64 8000 16000

128 8000 16000

256 8000 16000

512 8000 16000

1024 8000 16000

2048 12800 25600

Table 2-5: Recommended CAR NB and EB Settings

The following is an example configuration for CAR policing. In this

particular example, the Premium in-profile traffic is limited to 32Kbps. The

Assured in-profile traffic is limited to 24Kbps:

interface Ethernet 0/0

rate-limit input access-group 105 32000 8000 16000

conform-action set-dscp-transmit 46 exceed-action drop

rate-limit input access-group 101 24000 8000 16000

conform-action set-dscp-transmit 10 exceed-action

set-dscp-transmit 0

The following is the corresponding example configuration if the MQCLI (“police”

command) is used instead of CAR “rate-limit”. Note that this variation excludes

the possibility of having a second rate threshold above which the traffic will be

dropped.

policy-map ingress-profile

class premium

police 32000 8000 conform-action set-dscp-transmit 46

exceed-action drop

class assured

police 24000 8000 conform-action set-dscp-transmit 10

exceed-action set-dscp-transmit 0

interface Ethernet 0/0

service-policy input ingress-profile


Queue Scheduling

The traffic from each CoS will be serviced by a separate queue. These

queues will share the interface’s bandwidth according to a scheduling discipline

(Weighted RR or CBWFQ).

Queuing within the classes will be implemented through LLQ. LLQ is in

fact the combination of CBWFQ and strict Priority Queuing (PQ). PQ is used for

delay sensitive traffic such as VoIP. LLQ will be configured through the MQCLI.

Different traffic classes – a maximum of 64 traffic classes can be defined

on a single router – can be combined in a service policy. Each of the classes in the

service policy will be assigned a minimum bandwidth according to user

requirements, i.e. the service contract that has been agreed with the customer. The

minimum bandwidth that can be configured is 8Kbps. Under congestion, each of

the traffic classes will have this minimum bandwidth available. Also, other

parameters like congestion avoidance and control parameters can be configured

on a per-class basis. The class serviced by the Priority Queue (Premium) will be

assigned not a minimum, but a maximum bandwidth to prevent other queues from

starvation.

The sum of the bandwidths reserved for the customer traffic classes

(Premium, Assured and Best-effort) needs to be lower than the total link

bandwidth. Indeed, some bandwidth needs to be reserved for management traffic

and routing traffic. It is recommended that for class-default a minimum bandwidth

of 8Kbps (or 1%, whichever is larger) is configured. Obviously, the sum of all the

minimum reserved bandwidths cannot be larger than the total link bandwidth.

It should also be understood that the actual minimum bandwidths

configured through MQCLI include Layer 2 encapsulation overhead, in contrast


with CAR, which only includes pure Layer 3 IP bandwidth. Overhead added by

the hardware (CRC, flags) is not included in the MQCLI bandwidths.

By default, on the non-distributed router platforms, the sum of the

minimum bandwidths needs to be lower than 75% of the configured access

bandwidth. Since the actual configured sum of minimum bandwidths will

probably be larger (Low-Loss + Low-Delay + Best-effort + 8Kbps management +

8Kbps class-default), this default parameter setting will be changed (maximum-

reserved-bandwidth) to 100%. However, it is also a very good design practice not

to push the design boundaries to the edge without allowing for any margin of error

or unexpected traffic patterns. Therefore, it is still recommended to keep the sum

of all minimum bandwidths below 100%. Keeping the sum of all minimum

bandwidths around 90% will allow for unaccounted traffic such as Layer 2

overhead, Layer 2 keepalives etc.

The following is an example configuration for LLQ class

queuing. The order in which the classes are configured under the policy-map is

important. The bandwidths assigned to each class have again to be agreed with the

customer.

policy-map egress-profile

class premium

priority <X-kbps>

class assured

bandwidth <Y-kbps>

class best-effort

bandwidth 8

class class-default

bandwidth 8

interface Ethernet0/0

max-reserved-bandwidth 100

service-policy output egress-profile
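
Once such a service policy has been attached, its operation can be verified directly on the router; for instance, the show policy-map interface command displays the configured classes and per-class traffic counters (the exact output depends on the IOS version):

show policy-map interface Ethernet 0/0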


2.7.2. QOS MECHANISMS FOR CORE INTERFACES

Core interfaces will receive and transmit IP packets already marked with a

DSCP value. Also, the traffic sent to a core interface is already policed at an edge

interface. Therefore, there is no need for marking and policing at core interfaces.

Core interfaces just need to classify traffic that is going to be transmitted out of

the interface, according to the DSCP, assign the packets to the appropriate queue

that serves the corresponding service class and apply the queue scheduling

mechanism.

These interfaces will be configured using MQCLI for a LLQ mechanism,

as described in the relevant section “Queue Scheduling” for edge interfaces.
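
As an illustration only, a core egress configuration along these lines could look as follows. The class and policy names are hypothetical, the serial interface is just an example, and the bandwidth values (X, Y) would again have to be agreed with the customer:

class-map match-all premium-core
match ip dscp 46
class-map match-all assured-core
match ip dscp 10
policy-map core-egress-profile
class premium-core
priority <X-kbps>
class assured-core
bandwidth <Y-kbps>
class class-default
bandwidth 8
interface Serial2/0
max-reserved-bandwidth 100
service-policy output core-egress-profile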

2.8. CONFIGURATION OF NETWORK DEVICES

After discussing the mechanisms that will be implemented to simulate the

customer’s network, the next crucial step in the implementation stage is the

configuration of the routers. Every router must be configured in order to work

properly, not just in terms of QoS, but also in terms of routing, interface discovery

etc. This was another issue we faced during implementation, because all routers had to be configured according to the customer’s specification before the QoS mechanisms could be implemented. Proper and correct configuration is essential for the

efficient operation of a network and even the slightest error (e.g. a wrong

command) can cause problems. Therefore we had to carefully plan and implement

the router configurations in order to create the desired topology.

The next step during the configuration stage was to implement the

mechanisms that would provide QoS so as to begin the testing phase. For this

purpose we used the Cisco Quality of Service Policy Manager (QPM) suite, which consists of the Policy Manager itself and the Quality of Service Distribution Manager (QDM).


With QPM, policies can be created for each router interface under a graphical

environment according to the pre-specified requirements outlined in section 2.1.

In each policy a number of mechanisms can be specified that will provide QoS.

An interface can have as many policies as required.

After the creation of a policy, QPM translates the selected settings into

router commands, i.e. MQCLI commands, which then can be manually edited and

uploaded to each router using QDM. As a result, QPM made our work

significantly easier, because we did not have to create configuration listings by

hand. With just a few mouse clicks the policies were ready and the routers

configured. The difficult part was to decide on the different parameters for each

policy in order to make the network work properly for the applications that were

to be tested. How will we identify unmarked traffic originating from the hosts, i.e.

what ports do the applications use? How much bandwidth should we reserve for

each CoS? At what rate should we police the CoSs? How will we handle

bottleneck sections in the network, i.e. 128Kbps links? All these were issues that

had to be considered during the creation of the policies in order to ensure proper

operation of the applications.

The QPM version that we used was version 2.1.2, which supports all

current QoS architectures and mechanisms. Policies can be easily created and

reviewed before uploading to the router.

After loading, the main QPM screen appears and it looks like this:


Fig. 2-7: QPM Main Screen

Note that all routers and their relevant interfaces have been installed and

are ready to be set up with the appropriate policies. All devices are presented with

their loopback interface addresses.

The device properties page looks like this:

Fig. 2-8: Device Properties Page


From this page the current router configuration can be viewed (“View

Configuration...”). QPM communicates with the selected device and queries it for

the current configuration. After a few seconds it appears on the screen:

Fig. 2-9: Device Configuration Listing

This is helpful for troubleshooting purposes as well as ensuring that any

QoS configuration uploaded by QPM to the device has actually been executed and

is active.

From the main screen the individual interfaces can be configured. In order

to support the QoS mechanisms discussed in section 2.7, we have to set all

available interfaces to “Class-Based QoS” as follows:

Fig. 2-10: Interface Properties Page


After configuring all interfaces properly, policies must be created for each

interface according to the specifications laid out in sections 2.1 and 2.7. Here all

required mechanisms can be set up and implemented. The policy creation and

editing screen looks as follows:

Fig. 2-11: Policy Editor

As mentioned earlier, this step is crucial, because the policies are the essence of providing QoS: they enable and activate the relevant mechanisms. It is very difficult to say beforehand, without testing, which parameters will work best for the applications to be tested. Nevertheless, after an initial assessment of the situation and careful consideration of the customer’s requirements, available network equipment and link speeds, we decided to create the policies that follow. It is important to stress that we had to use a packet analyzer (Sniffer Pro, see Chapter 3) to find out the UDP ports that our application uses, which is essential for identifying unmarked traffic.

The following policies are based on the author’s initial evaluation of the

customer’s requirements in combination with the applications that would be used

to test the network. As we will see later on, the policies had to be refined several


times at different interfaces in the network so as to make the applications work

properly.

Edge interfaces:

1) Ingress-Premium-Edge:

Direction: In

Filter: Protocol UDP, port range 49000-65000

Limiting: 128Kbps, DSCP, NB 8, EB 16, DSCP 46, none

This policy defines the actions to be taken at the interface when Premium

traffic enters the network from an edge interface. Direction is set to “In” since it is

at interface ingress. Traffic entering the network at edge interfaces is unmarked,

therefore it will be identified by protocol and port number (which can also be a

port range). Premium traffic uses UDP and the port range that we will use is

49000-65000. “Limiting” refers to policing. Here, the settings mean that Premium

traffic will be limited to 128Kbps, which should be enough for both the voice and

video streams. The marking mechanism will be DSCP, i.e. all unmarked packets

falling into this category will be marked with a DSCP value. NB will be 8Kbytes

and EB 16Kbytes; both settings are recommended by Cisco, therefore we will

adhere to them. In-profile traffic in this CoS will be marked with DSCP 46 (EF),

while out-of-profile traffic will be dropped because, as pointed out earlier, there is no point in transmitting Premium traffic with a lower DSCP; dropping is therefore the best option in the author’s opinion.
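
In MQCLI terms this policy amounts to something like the following sketch, where the class and policy names are placeholders and access list 105 is assumed to match the UDP port range 49000-65000; the commands actually generated by QPM for each device are reproduced at the end of this section:

class-map match-all premium
match access-group 105
policy-map qpm-ingress
class premium
police 128000 8000 16000 conform-action set-dscp-transmit 46 exceed-action drop
interface Ethernet 0/0
service-policy input qpm-ingress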

2) Ingress-Assured-Edge:

Direction: In

Filter: Protocol TCP, ports 1434, 1433, 3299


Limiting: 256Kbps, DSCP, NB 8, EB 16, DSCP 10, DSCP 0

This policy defines the actions to be taken at the interface when Assured

traffic enters the network from an edge interface. Direction is set to “In” since it is

at interface ingress. Assured traffic uses TCP and the ports we will use are 1434,

1433 and 3299. These ports have been chosen randomly. Traffic in this CoS will

be generated, i.e. we will not use a specific application. Assured traffic will be

limited to 256Kbps, which should be enough; the marking mechanism will be

DSCP. NB will be 8Kbytes and EB 16Kbytes. In-profile traffic in this CoS will be

marked with DSCP 10 (AF), while out-of-profile traffic will be remarked with a

DSCP value of 0 (best-effort). This means that in-profile traffic will receive

assured services, as specifies in the DiffServ architecture, while out-of-profile

traffic will not be dropped, but transmitted with a lower DSCP. Thus, out-of-

profile packets will still have a chance to reach their destination.

3) Egress-Premium-Edge:

Direction: Out

Filter: Protocol UDP, port range 49000-65000

CBWFQ: BW 20%, priority

This policy defines the actions to be taken at the interface when Premium

traffic enters the core network from an edge device. Direction is set to “Out” since

it is at interface egress. Filtering will be employed with the same parameters as at

interface ingress (UDP, port range 49000-65000). The queue scheduling

mechanism will be CBWFQ and this CoS will receive 20% of the link bandwidth

as well as strict priority over any other traffic flow. We will use CBWFQ because

in the author’s opinion it is the most effective queue scheduling mechanism for

highest priority traffic. The percentage of the bandwidth is an initial assessment


and most probably will have to be adjusted during the testing phase for the

different link speeds, i.e. we might have to increase it at all 128Kbps links,

because it simply will not be enough for both voice and video. Strict queue priority will ensure that Premium packets are serviced before packets of any other class, minimizing the risk that they are delayed or dropped.

4) Egress-Assured-Edge:

Direction: Out

Filter: Protocol TCP, ports 1434, 1433, 3299

CBWFQ: WRED, weight 1, BW 35%

This policy defines the actions to be taken at the interface when Assured

traffic enters the core network from an edge device. Direction is set to “Out” since

it is at interface egress. Filtering will be employed with the same parameters as at

interface ingress (TCP, ports 1434, 1433, 3299). CBWFQ will be employed and

Assured traffic will receive 35% of the link bandwidth. Again, note that this value

might have to be adjusted at low-speed interfaces, i.e. 128Kbps links. The queue

management mechanism will be WRED with an exponential weighting constant

of 1, which means that WRED will react quickly should congestion occur, i.e. it

will begin dropping packets sooner. In the author’s opinion, WRED is a very

effective queue management mechanism that prevents congestion.
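
In MQCLI terms, these CBWFQ and WRED settings roughly correspond to the following fragment of an egress policy (class and policy names are placeholders; the commands actually generated by QPM appear at the end of this section):

policy-map qpm-egress
class assured
bandwidth percent 35
random-detect
random-detect exponential-weighting-constant 1
interface Ethernet 0/0
service-policy output qpm-egress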

5) Best-effort:

Direction: Out

CBWFQ: Tail, BW 20%

The rest of the traffic will be treated as best-effort, therefore no filtering is

needed in order to identify the flow. That is the reason why there is no need to


define an interface ingress policy for the best-effort traffic class and only egress is

required, i.e. the “Direction” setting is set to “Out”. Here the queue management

mechanism will be traditional FIFO, i.e. the dropping mechanism will be tail drop.

No other mechanism is necessary for this CoS. Bandwidth will initially be set to

20%, a value which might have to be adjusted at slower interfaces.

Core interfaces:

At the core interfaces only interface egress policies need to be defined,

because packets will already be marked with a DSCP value and will be identified

by it. There is also no need to create a separate policy for best-effort traffic, since it will be handled by the default class. The

policies that will be created initially are the following:

1) Egress-Premium-Core:

Direction: Out

Filter: Protocol IP, DSCP 46

CBWFQ: BW 20%, Priority

This policy defines the actions to be taken at the interface when Premium

traffic exits a core interface. Direction is set to “Out” since it is at interface egress.

Packets will be identified by IP protocol and their DSCP value, which is 46 for the

Premium traffic (EF). CBWFQ settings are the same as for the edge interfaces

(20% link bandwidth, strict priority). The bandwidth will most likely have to be

increased at slower interfaces.

2) Egress-Assured-Core:

Direction: Out

Filter: Protocol IP, DSCP 10

CBWFQ: WRED, weight 1, BW 35%


This policy defines the actions to be taken at the interface when Assured

traffic exits a core interface. Direction is set to “Out” since it is at interface egress.

Packets will be identified by IP protocol and their DSCP value, which is 10 for the

Assured traffic (AF). CBWFQ settings are the same as for the edge interfaces

(35% link bandwidth, WRED with exponential weighting constant 1). During

testing, this value might have to be adjusted as well.

After all policies have been created at the appropriate interfaces, the main

QPM screen looks like this:

Fig. 2-12: Created Policies

At this point the policies that have been defined are the same for all

interfaces (according to their position, i.e. edge or core) without any specific

adjustments for the slower links. These adjustments will be worked out after a few tests have been carried out to observe traffic behavior and application performance.

As mentioned above, QPM is a tool that helps create QoS policies

easily. It essentially translates the created policies and their settings into router


commands. In the author’s opinion, QPM is a very powerful tool for administrators who want to manage, implement and deploy QoS policies quickly, easily and efficiently across a network, compared to working through each router’s command line interface (CLI). Whenever we wanted to change a parameter in a policy or modify a configuration, QPM’s ease of use proved invaluable; doing the same via the CLI would have taken much longer and been considerably more difficult.

The commands that are to be uploaded to the routers can be reviewed for

each device from the “Device Properties” page (Figure 2-8) by clicking on the

“View Commands” button. A listing with all commands for the selected device

appears:

Fig. 2-13: Policy Commands

The above listing is part of the commands for all the policies in all

interfaces of device R3640. Note that the commands follow the order in which the

policies are sorted in QPM. As a result, policy order in QPM is very important.

Ingress policies should come first, then egress and finally best-effort (class-

default). Of course, policies for Premium traffic should come before Assured. The


complete configuration listing for R3640 is the following (note that every line represents a single command; indentation reflects configuration sub-modes):

class-map match-all QPM_Ethernet0/0
 match access-group 100
 exit
class-map match-all QPM_Ethernet0/0_1
 match access-group 101
 exit
class-map match-all QPM_Ethernet0/0_2
 match access-group 100
 exit
class-map match-all QPM_Ethernet0/0_3
 match access-group 101
 exit
class-map match-all QPM_Serial2/0
 match ip dscp 46
 exit
class-map match-all QPM_Serial2/0_1
 match ip dscp 10
 exit
policy-map QPM_Ethernet0/0
 class QPM_Ethernet0/0
  police 64000 8000 16000 conform-action set-dscp-transmit 46 exceed-action drop
  exit
 class QPM_Ethernet0/0_1
  police 256000 8000 16000 conform-action set-dscp-transmit 10 exceed-action set-dscp-transmit 0
  exit
 exit
 exit
policy-map QPM_Ethernet0/0_1
 class QPM_Ethernet0/0_2
  priority 1000
  exit
 class QPM_Ethernet0/0_3
  bandwidth percent 40
  random-detect
  random-detect exponential-weighting-constant 1
  exit
 class class-default
  bandwidth percent 25
  exit
 exit
 exit
policy-map QPM_Serial2/0
 class QPM_Serial2/0
  priority 26
  exit
 class QPM_Serial2/0_1
  bandwidth percent 40
  random-detect
  random-detect exponential-weighting-constant 1
  exit
 class class-default
  bandwidth percent 15
  exit
 exit
 exit


access-list 100 permit udp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 range 49000 65000
access-list 101 permit tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 1434
access-list 101 permit tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 1433
access-list 101 permit tcp 0.0.0.0 255.255.255.255 0.0.0.0 255.255.255.255 eq 3299
interface Ethernet0/0
 service-policy input QPM_Ethernet0/0
 exit
interface Ethernet0/0
 service-policy output QPM_Ethernet0/0_1
 exit
interface Serial2/0
 service-policy output QPM_Serial2/0
 exit
write memory

The command listings for the other devices look similar.

The next step during the configuration stage is to upload the commands to all devices. For this purpose we will use QDM, which in essence accesses QPM’s database, i.e. the set of devices and relevant policies that have been created, and

by clicking the “Apply All” button, all commands are uploaded and executed on

the routers. QDM can be invoked from within QPM.

QDM’s main screen looks like this:

Fig. 2-14: QDM Main Screen


Any command errors are logged and reported to the user, for example when a QoS function is not supported on a specific device or interface, as well as errors of another nature (e.g. command-line or communication errors). If all goes

well, QDM reports that all devices have been successfully configured.

As mentioned earlier, we had to upgrade the IOS versions of some routers in order to enable DiffServ support, but after doing so, everything went well and the routers were configured successfully. After QDM completes its operation, all

policies have been uploaded to the routers and as a result the network is fully

QoS-enabled and ready to be tested.

This completes the implementation and configuration stage. The next step

is to proceed to the testing phase. We will install various types of applications in

order to simulate traffic and observe QoS mechanisms in action. These will be

presented and discussed in the following chapter.


CHAPTER 3: MEASUREMENT INFRASTRUCTURE FOR

VALIDATING QOS OF A NETWORK

After setting up the network and successfully configuring all network

devices according to the customer’s requirements in our case study, it is time to

proceed to the testing stage in order to observe traffic behavior, fine-tune the

mechanisms and make necessary changes in the configurations to ensure proper

operation of the applications. After collection and analysis of results, we will be

able to make several statements and draw our conclusions regarding the DiffServ

architecture and QoS in IP networks in general.

3.1. TRAFFIC GENERATION AND MEASUREMENT TOOLS

In order to simulate the applications that the customer wishes to use in his/her network in our case study, and to collect and observe results, we will use three

different applications: Microsoft’s NetMeeting, ZTI Telecom’s IP Traffic and

Network Associates’ Sniffer Pro. A brief overview of each application follows.

Microsoft NetMeeting: NetMeeting is a teleconferencing application that

supports video and voice transmission and reception, remote application

sharing, whiteboard and also has a chat function. We will use it to simulate

VoIP and videoconferencing applications and we will observe video and

voice quality in terms of the four QoS parameters mentioned earlier.

NetMeeting will of course be used in conjunction with the appropriate

equipment, i.e. microphones, speakers and cameras for video

transmissions. The following is a sample screenshot of NetMeeting from a

session between PC 1 and PC 2 (see laboratory network topology in

section 2.5):


Fig. 3-1: NetMeeting Screenshot

NetMeeting uses UDP as its transport protocol, since video and voice

streams are real-time. We had to use Sniffer Pro in order to find out the UDP ports

that NetMeeting uses for video and voice transmissions. The port range is 49000

to 65000, which is reflected in the policies we created. The measurements will be

subjective, because we had no way to quantify video or voice quality, i.e. to give a

specific measurement. Instead, we will refer to voice and video quality as “good”

or “bad” by considering parameters such as delay, scrambled signals, distortions

etc.

ZTI IP Traffic: IP Traffic is a very flexible and useful application.

Besides being a traffic generator, it also displays traffic statistics such as

average throughput and round trip times (RTT). In order to set up an IP

Traffic connection, the program must be installed on two PCs, one acting

as the traffic generator (transmitter) and the other as the traffic answerer

(receiver). Multiple connections (i.e. traffic flows) with different

characteristics (i.e. protocol, destination IP, port number) can be set up

according to requirements in order to test and collect results. Furthermore,


we can specify whether the generated packets will be of fixed or variable size (specified in bytes) and the inter-packet time in milliseconds. We will use IP Traffic to simulate the customer’s Assured traffic (client/server applications), which uses TCP and specific port numbers. Furthermore, we will set up a Premium traffic flow in order to measure RTT and average throughput and so have at least some Premium CoS performance metrics. IP Traffic will be essential for the collection of results, since all throughput and RTT measurements will be taken with it.

Fig. 3-2: IP Traffic Screenshot

The receiving station can act for each connection either as packet absorber

or as packet echoer. The first means that any received packets for that connection

will be absorbed, whereas the second option sends the received packets back to

the transmitting station. This is useful for measuring the RTT of packets. Finally,

the user can start and stop connections individually, which is useful for observing

the behavior of traffic as new traffic flows enter the network. For example, we


could initiate a best-effort traffic flow, leave it alone for a few minutes and then

start an Assured traffic flow to see if policies have been implemented properly.

The expected result would of course be the throughput decrease of the best-effort

traffic and the throughput stabilization of Assured traffic. Then another best-effort

flow could be set up, which would not at all affect the throughput of the Assured

traffic.

Sniffer Pro: Sniffer Pro is a packet capture and network analyzer program: it captures packets, displays their contents and presents various statistics regarding traffic. We used it mainly for troubleshooting purposes, as well as to find out port numbers and other packet characteristics. The following is a screenshot of Sniffer’s main screen:

Fig. 3-3: Sniffer Pro Screenshot


3.2. DESIGN OF VALIDATION TESTS

After setting up the network properly and installing the applications on the

two PCs, we now need to design and execute tests. As mentioned earlier, we will

test video and voice performance with NetMeeting and IP Traffic will generate

Assured traffic.

Another issue we encountered was appropriate bandwidth allocation. As

stated in section 2.8, the initial bandwidth percentage values at each interface

were rough estimations. On several occasions we had to adjust the bandwidth for

each CoS at the low-speed interfaces (128Kbps), mainly because of poor video

performance, i.e. because video inherently requires more bandwidth. Voice

quality was still acceptable even with the initial values.

For reasons of simplicity, we will from now on always assume that the

transmitter is PC 1 and the receiver PC 2. No differences in transmission quality

or speed were encountered the other way around, i.e. it does not matter which PC

acts as transmitter and which as receiver.

In our first test we will test voice and video with no additional traffic on

the network just to make sure that the videoconference actually works. As

expected, both voice and video performed well above acceptable levels. Neither exhibited perceivable delay at reception and quality was excellent.

The second test will involve traffic generation with IP Traffic without any

Premium traffic. We will set up four connections within IP traffic with following

characteristics:

1. TCP, port 2009 (best-effort), fixed packet size 1460 bytes, fixed inter-

packet time 1ms, receiver in absorber mode.

2. TCP, port 1434 (Assured), fixed packet size 1460 bytes, fixed inter-packet

time 1ms, receiver in absorber mode.


3. UDP, port 10000 (best-effort), fixed packet size 500 bytes, fixed inter-

packet time 10ms, receiver in echoer mode.

4. UDP, port 50000 (Premium), fixed packet size 1000 bytes, fixed inter-

packet time 5ms, receiver in echoer mode.

The best-effort flows will start earlier. As soon as they have stabilized, we

will make a note of their throughput values and initiate the Assured and Premium

traffic flows. We will measure the average throughput of the Assured flow and the average throughput and RTT of the Premium flow. The throughput of the best-effort flows is expected to drop abruptly as soon as Assured and Premium start transmitting. This will help us identify

bottleneck spots in the network and adjust the policies accordingly, especially at

the low-speed links. Furthermore, it will give us a first observation on how the

implemented mechanisms actually work.

After all flows have been set up, we will initiate a videoconference to

ensure that voice and video quality are acceptable under light network load

conditions. Voice and video are expected to perform satisfactorily under these circumstances, although the bandwidth allocations will have to be adjusted to provide enough bandwidth for the video.

In the next test we will simulate a heavy load condition on our network.

The connections that will be set up are the following:

1. TCP, port 1434 (Assured), fixed packet size 1460 bytes, fixed inter-packet

time 1ms, receiver in absorber mode.

2. TCP, port 2009 (best-effort), fixed packet size 1460 bytes, fixed inter-

packet time 1ms, receiver in absorber mode.

3. UDP, port 49999 (Premium), fixed packet size 10 bytes, fixed inter-packet

time 100ms, receiver in echoer mode.


4. UDP, port 10000 (best-effort), fixed packet size 10 bytes, fixed inter-

packet time 100ms, receiver in echoer mode.

5. UDP, port 20000 (best-effort), fixed packet size 10 bytes, fixed inter-

packet time 100ms, DSCP 46 (EF), receiver in echoer mode.

6. UDP, port 30000 (best-effort), fixed packet size 10 bytes, fixed inter-

packet time 100ms, DSCP 10 (AF), receiver in echoer mode.

7. UDP, port 60000 (Premium), fixed packet size 100 bytes, fixed inter-

packet time 20ms, receiver in echoer mode.

8. Videoconference (Premium).

This simulation will indeed create a heavy load situation, because we have

multiple Premium connections as well as one Assured and several best-effort ones. One problem that might occur is that, because of the low-speed links, best-effort traffic might experience bandwidth starvation and its connections might eventually be

dropped, a fact that will again lead to adjustment of policies at the interfaces in

question. Here we see again the need for constant monitoring and adjustment of

network parameters, which is essential in any real-world scenario.

The measurements in this scenario will be average throughput and RTT for all connections. We must verify that Premium RTT < Assured RTT < Best-effort RTT and Assured throughput > Best-effort throughput, because only then will we be able to say with certainty that the QoS implementation actually works on our

network.

All connections will start simultaneously, as this will help us observe

traffic behavior right from the start. Bandwidth allocation adjustments might be

necessary again to avoid starvation of any CoS. It is obvious that finding the

optimal balance when allocating bandwidth to each CoS is a very difficult task

and poses a significant challenge for network administrators that wish to enable


mechanisms for providing QoS in their networks. This issue can be overcome

only by constantly monitoring traffic on the network and adjusting the policies

accordingly.

These tests should be sufficient to determine whether the implemented mechanisms provide sufficient QoS for the customer’s requirements or whether further changes in the network are necessary (or recommended) to improve application performance and find the optimal balance.

The collected results will be presented and analyzed in the next section.

Also, all necessary policy adjustments will be mentioned and justified.

3.3. COLLECTION AND ANALYSIS OF RESULTS

The first test was carried out with the initial bandwidth allocations

specified in the policies. This test helped us determine the optimum bandwidth for

all low-speed links. Note that the numbers in each traffic flow column refer to the

corresponding connection setup as mentioned in section 3.2.

We initiated the first flow, a best-effort TCP flow. The average throughput

was 90KB/s with an RTT of 2s.

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 90 2000

Table 3-1: First Test: TCP Best-effort

Then we activated the second flow, an Assured TCP flow. We observed

that almost instantly the average throughput of the first flow dropped to 40KB/s

and its RTT increased by 1s whereas the Assured flow had an average throughput


of 90KB/s and RTT of 2s. These statistics remained stable during the test, an

indication that the mechanisms were working well for Assured traffic.

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 40 3000

2. TCP, Assured 90 2000

Table 3-2: First Test: TCP Best-effort & TCP Assured

As can be seen from the above table, Assured throughput > Best-effort

throughput and Assured RTT < Best-effort RTT, indicating that the mechanisms

work properly. At this stage we were not examining efficiency, but merely proper

operation.

We then initiated the remaining two traffic flows, one UDP best-effort and

one UDP Premium. Almost immediately the statistics for all traffic flows changed

to the following:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 20 7000

2. TCP, Assured 80 3000

3. UDP, best-effort 300 4000

4. UDP, Premium 1000 290

Table 3-3: First Test: All Traffic Flows without Videoconference

Both TCP best-effort and TCP Assured throughput decreased and their

RTTs increased, indicating the strict priority that Premium traffic receives. The

RTT of UDP Premium indicated that delay was well within tolerance levels, i.e.

the real-time application would work fine. To our surprise both UDP flows


claimed the largest part of the bandwidth and as a result had very high average

throughput values compared to the TCP flows. This is a result of UDP’s connectionless nature: it employs no acknowledgements (ACKs) or congestion control, so it does not back off under load the way TCP does.

Up to now bandwidth allocation in policies did not seem to be an issue,

although we realized that some minor adjustments would be necessary to prevent UDP best-effort traffic from taking up such a large amount of bandwidth (the

large average throughput indicated this).

However, when we started NetMeeting and initiated a videoconference

call, the values changed instantly to the following:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 15 10000

2. TCP, Assured 60 2000

3. UDP, best-effort 20 3000

4. UDP, Premium 40 655

Table 3-4: First Test: All Traffic Flows with Videoconference

All throughput values dropped, especially Premium traffic throughput, and

RTTs increased almost instantly. A 655ms RTT implies a correspondingly high one-way delay (roughly half that, over 300ms) for Premium traffic, which is almost intolerable for this type of application. Regarding video and voice quality, both signals were scrambled and arrived after very long

delays. In our opinion the application was almost unusable and after a while the

call was even dropped, i.e. the connection could not be maintained.

This confirmed our suspicions that the initial bandwidth allocations for the

128Kbps links would not be sufficient for Premium traffic. We immediately

changed the bandwidth at all appropriate interfaces to the following: 40% for

Premium traffic, 25% for Assured traffic and 20% for best-effort traffic.


We then ran the test again to observe the impact these changes would have

on each traffic flow.

First we initiated all flows without setting up a videoconference. The

results were the following:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 18 10000

2. TCP, Assured 60 3000

3. UDP, best-effort 200 6000

4. UDP, Premium 1500 100

Table 3-5: First Test: All Traffic Flows without Videoconference after First Policy Change

By changing the policies we managed to contain UDP best-effort traffic

and improve UDP Premium RTT. We then initiated a videoconference call and

observed the changes in the traffic statistics:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 15 10000

2. TCP, Assured 40 3000

3. UDP, best-effort 100 6000

4. UDP, Premium 1000 223

Table 3-6: First Test: All Traffic Flows with Videoconference after First Policy Change

The values were somewhat better than the first time, especially UDP

Premium RTT, which was well within tolerance limits. Voice quality was very

good, the signal was clear, unscrambled and without any noticeable delay. Video

reception was indeed better, however, we experienced some delay and at some


point the signal became somewhat distorted. This meant that we once again had to

adjust the bandwidths at all low-speed links.

This time we allocated 55% for Premium traffic, 20% for Assured and

10% for Best-effort, thus exceeding the recommended limit by 10%, but we had

no option because of the bandwidth limitation at all 128Kbps links. Here we

realized that sometimes the only solution in order to provide sufficient QoS is to

upgrade slow links to higher bandwidths, because no matter how efficiently the mechanisms are implemented and set up, bandwidth will always be limited on a slow link.

With the above change, we expected that Best-effort traffic might

experience extremely high RTTs and low throughput, even starvation and dropped

connections. However, this was not the case.

We initiated all four connections as well as a videoconference call. The

results we obtained are the following:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, best-effort 10 11000

2. TCP, Assured 35 4000

3. UDP, best-effort 98 9000

4. UDP, Premium 700 157

Table 3-7: First Test: All Traffic Flows with Videoconference after Second Policy Change

The above results show us that the changes in bandwidth allocations we

made fulfilled their purpose and were very effective and balanced. Of course,

when looking at these results, one has to keep in mind the bandwidth limitations,

i.e. the number of 128Kbps links in the network.


UDP Premium traffic had a very satisfactory RTT and average throughput.

In general, all connection statistics complied with the expected relationships (Premium throughput > Assured throughput > Best-effort throughput and Premium RTT <

Assured RTT < Best-effort RTT), thus indicating satisfactory QoS provisioning.

Furthermore, video and voice quality were both excellent. Both signals

were clear, undistorted and without delay, almost perfect. All these results showed

us that we had found the optimal balance for each CoS for the given network

topology and with the known limitations. We therefore decided that no further

changes in the policies were needed.

We proceeded with the second and final test, which essentially simulated a

heavy load network condition. Two Premium, one Assured and multiple best-effort traffic flows were set up, in addition to the videoconference call via NetMeeting. We collected the following results:

Traffic flow Throughput avg. (KB/s) RTT (ms)

1. TCP, Assured 60 4000

2. TCP, best-effort 30 6000

3. UDP, Premium 1 246

4. UDP, best-effort 0.9 992

5. UDP, best-effort, DSCP 46 0.88 950

6. UDP, best-effort, DSCP 10 0.9 972

7. UDP, Premium 7.5 200

Table 3-8: Second Test: All Traffic Flows with Videoconference

The first observation was that flows 5 and 6, marked with a DSCP value

(which in essence was an IP precedence value mapped to the appropriate DSCP

value) by the traffic generator (i.e. the host), were not serviced accordingly.

Obviously, this happened because the filtering policies we specified at the edge


interfaces involved just protocol type and port range. The routers ignored the IP

precedence value and examined only the specified parameters (protocol type and

port range). As a result, they could not recognize these packets as Premium or

Assured and treated them as best-effort. This is reflected in the relevant statistics.

We observed that UDP Premium traffic had somewhat higher – although

insignificant – RTTs, but this was due to the total number of flows. Throughput

was constant for all Premium flows. Note that it was low because of the

parameters specified in the traffic generator, i.e. small packet size and large inter-

packet delay.

TCP Assured traffic also had satisfactory statistics. RTT was within

acceptable limits and throughput was constant. TCP best-effort had a fluctuating

throughput, but this was due to the already known issue of low-speed links.

Video and voice performance was also more than acceptable, although not

as good as in the previous test. Nevertheless, the signals were clear and neither

video nor voice experienced noticeable delay.

This concluded the testing phase. After carefully observing and analyzing the above results, the only remaining change that would further improve application performance would be the upgrade of all 128Kbps links to at least 512Kbps. The network we tested had all the capabilities to provide sufficient QoS for the customer’s

requirements, although we had to fine-tune some mechanisms to achieve optimal

results.

All in all, the testing phase provided us with invaluable insight into QoS

provision in a complex and large-scale network and showed us that there is indeed

the need to implement mechanisms in order to provide QoS for specific

applications. Moreover, it justified the existence of such mechanisms and made it

clear that without them, many of today’s services in large-scale IP networks, such


as e-commerce, VoIP, videoconferencing and real-time streaming, would be

almost impossible to implement.

The testing phase also confirmed the author’s opinion about the DiffServ

architecture. Indeed, DiffServ demonstrated its flexibility and scalability, exactly

where IntServ would have failed. How would IntServ have handled the 128Kbps

bottlenecks, especially with the increasing number of traffic flows? DiffServ

managed to provide sufficient QoS for both Premium and Assured traffic. Even

best-effort traffic was serviced to a certain extent.

The dissertation will be concluded in the next chapter.


CHAPTER 4: CONCLUSIONS

4.1. SUMMARY

The dissertation began with a general overview of QoS in IP networks. We

discussed the reasons for needing technologies that provide QoS in IP networks.

The rapid development of the Internet to a global network, the emergence of new

applications and the advancements of technology led to the need for service

differentiation. The IP protocol suite was not able to fulfill these needs because it

treated all packets the same, also known as best-effort service. This led to the

invention and implementation of mechanisms and architectures that would

provide QoS for the applications that had such high requirements.

We defined the term QoS as a set of architectures and mechanisms that

attempt to provide the resources that the applications require in order to operate

efficiently. This set of mechanisms and architectures essentially attempts to

control the four important network performance parameters (delay, jitter, packet

loss ratio and throughput) in order to provide needed resources and an acceptable

service level.

The applications that require increased QoS were then identified. Mainly

real-time and mission-critical applications require this type of service, because

they are more or less sensitive to the four QoS performance parameters. They

need performance guarantees in order to operate smoothly and efficiently and that

is where the QoS architectures and mechanisms come in.

We introduced the two QoS architectures, IntServ and DiffServ and briefly

mentioned MPLS as a packet forwarding strategy. DiffServ’s scalability and flexibility make it preferable to IntServ. This was concluded after

a critical evaluation of the two architectures, where we essentially compared them


and discussed their strengths and weaknesses. DiffServ offers more versatility for

any type and size of network, thus being the better solution for modern IP

networks.

We then proceeded to investigate a case study of a customer who proposed a network topology in which, because of the nature of the applications to be used, a QoS architecture had to be implemented to provide the required

service quality. After careful consideration of the customer’s needs, we decided to

implement the DiffServ architecture and all relevant mechanisms. The decision

was based on the conclusions of the previous chapter, the comparison between IntServ and DiffServ and the identification of the applications that were

to be used in the network.

The next step was to work out the CoSs for the customer’s network. We

stressed that this first step represents a major design challenge and needs to be

carefully and thoroughly considered in order to achieve the desired results. We

discussed the mechanisms that were to be employed and pointed out the

difference between core and edge interfaces.

The customer’s topology was simulated in the laboratory, where a similar

topology to the proposed one was designed and implemented. We stressed the

importance of choosing network equipment that supports the chosen architecture

so as to avoid configuration problems and ensure compatibility between devices.

We then analyzed in detail the mechanisms that were employed at each interface,

gave configuration listing examples and made an initial assessment of the policies

that were to be activated.

The next step was to configure the devices. We used Cisco’s QPM, which

made this part extremely easy. The alternative would have been to configure each device independently through the CLI, a time-consuming and difficult task.


After configuring all network devices according to the customer’s needs,

we proceeded to install the applications with which we would simulate the customer’s traffic. We mainly used NetMeeting for voice and video and IP Traffic, a traffic generation tool, for the rest of the traffic.

During the testing phase we ran several tests to ensure proper operation of

mechanisms, observe traffic behavior for each CoS and make appropriate

adjustments to the policies in order to deal with network bottleneck spots, i.e. low-

speed links. We collected results and proceeded to analyze them.

The analysis of the results strengthened our opinion about the DiffServ architecture. The network performed as expected even under extreme load conditions. Nevertheless, we also concluded that if a link is too slow for some type of traffic, even the most balanced and efficient usage of network resources will not prevent an application from performing poorly if its traffic passes through that link. In that case only upgrading the link to a higher bandwidth would be a viable

solution.

We will now conclude this dissertation by presenting our conclusions regarding IP QoS and the relevant architectures, as well as by recommending topics for further research.

4.2. CONCLUSIONS

The main conclusion of this dissertation is that QoS provisioning is indeed essential for today’s IP networks, especially the Internet, which has become a global multiservice infrastructure. By providing

QoS, new services and applications can be enabled and implemented. Without

QoS, e-commerce, real-time video and audio and other similar applications and


services would be impossible to realize. QoS enables them to work properly and

efficiently, thus opening new possibilities.

Moreover, QoS enables network administrators to make network

utilization as efficient as possible, without the need to upgrade the existing

infrastructure. However, we also encountered the case where the only solution is to upgrade the link’s bandwidth, a situation that is sometimes inevitable and unwelcome because of the cost involved. Also, the devices on a network have to support the relevant mechanisms in order to provide QoS. As a result, outdated network equipment might have to be upgraded as well, either in software (e.g. a router operating system upgrade) or in hardware (replacement of equipment). Nevertheless, the benefits of implementing such architectures and mechanisms usually outweigh the disadvantages of such upgrades.

Implementation of QoS architectures and mechanisms adds complexity to

the existing infrastructure. As a result, network design is more difficult and

requires more effort and precision. Additionally, constant monitoring of traffic is

required, so that network administrators can adjust the operations and functions of

the implemented mechanisms to serve new traffic profiles better.

Complexity also means finding a perfect balance between application

requirements and available network resources in order to optimize QoS

provisioning. This task represents the most difficult and challenging part of any QoS implementation. It needs to be thoroughly designed and thought through before actual implementation, a situation we encountered during the design stage of our case study.

We must stress, however, that this additional complexity is worth the effort, because the value of the benefits far outweighs any difficulties it introduces.


Finally, the DiffServ architecture is indeed preferable to IntServ. Both the theoretical and the practical analysis showed that DiffServ is preferable in the majority of cases. Not only are the DiffServ architecture and its mechanisms more readily supported on current state-of-the-art network equipment, but DiffServ’s backward compatibility (the DSCP field is backward-compatible with IP precedence) also allows it to work on equipment that is not DiffServ-enabled, so a customer will not necessarily have to upgrade his or her equipment, in contrast to IntServ. Moreover, we identified and confirmed DiffServ’s scalability and flexibility and observed how it handled multiple traffic flows during the testing stage. DiffServ’s aggregate treatment of flows performs very well under heavy load, while still providing the applications with the resources they need.
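As a purely illustrative sketch of this backward compatibility (the service class names below are standard DiffServ PHB names used only as examples, not the classes of the case study), the following Python fragment shows how a 6-bit DSCP occupies the former ToS byte and how a legacy, precedence-only router would interpret its top three bits:

def dscp_to_tos_byte(dscp):
    """Place a 6-bit DSCP into the 8-bit DS field (the former ToS byte)."""
    assert 0 <= dscp <= 63
    return dscp << 2                     # the remaining two bits are not used here

def dscp_to_precedence(dscp):
    """A legacy router that only understands IP precedence reads the top 3 bits."""
    return dscp >> 3

for name, dscp in [("EF", 46), ("AF31", 26), ("AF11", 10), ("Best effort", 0)]:
    print(name, "DSCP", dscp, format(dscp, "06b"),
          "ToS byte", hex(dscp_to_tos_byte(dscp)),
          "-> IP precedence", dscp_to_precedence(dscp))

For example, EF (DSCP 46, binary 101110) appears to a precedence-only device as precedence 5, which is why DiffServ markings remain meaningful on older equipment.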

This aggregate treatment also enables ISPs to offer advanced service models to their customers through SLAs, thus adding flexibility to their services. Customers have a broad range of choices and can even request service models that are not readily available but can be tailored to their needs. The economic aspect of DiffServ was not within the scope of this dissertation, but it should not be underestimated and is certainly an advantage over IntServ.

4.3. RECOMMENDATIONS FOR FURTHER STUDY

A very interesting topic would be to investigate how DiffServ, IntServ and MPLS interwork when combined in a mixed network. This is because many large-scale networks are not uniform, but consist of several technologies. How would such a network perform under heavy load conditions? What would the traffic behavior be? How well would the employed mechanisms perform their functions? Would there be any


incompatibilities? All these issues could not be examined because of time

constraints, as such a study would represent a dissertation topic by itself.

Another interesting topic would be to investigate QoS provisioning at the application layer, i.e. application-layer-based classification methods, tools and mechanisms, and how well this would perform compared to IP-based classification and QoS provisioning in general.


BIBLIOGRAPHY

Xipeng Xiao, “Providing Quality of Service in the Internet”, Ph.D.

dissertation, 2000.

Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3.

Xipeng Xiao and Lionel M. Ni, “Internet QoS: A Big Picture”, paper,

2000.

Evangelos Vayias, “Traffic Control for Quantitative Quality of Service

Guarantees in Packet-Switched Networks”, Ph.D. thesis, National

Technical University of Athens, 2000.

Arindam Paul, “QoS in Data Networks: Protocols and Standards”, paper,

Ohio State University, June 2000.

Wei Sun, “QoS/Policy/Constraint Based Routing”, survey paper, Ohio

State University, July 2000.

Gautam Ray, “Quality of Service in Data Networks: Products”, paper,

Ohio State University, July 2000.

Xipeng Xiao, Thomas Telkamp & Lionel M. Ni, “A Practical Approach

for Providing QoS in the Internet Backbone”, paper, 2001.

Martin van der Zee and Rachid Ait Yaiz, “Quality of Service over Specific

Link Layers”, state of the art report, Ericsson, July 1999.

Weibin Zhao, David Olshefski & Henning Schulzrinne, “Internet Quality

of Service: An Overview”, paper, Columbia University, 2000.

Markus Peuhkuri, “IP Quality of Service”, paper, Helsinki University of

Technology, 1999.


Saravanan Radhakrishnan, “IP Quality of Service: An Overview”, paper,

Kansas University, June 2000.

M. Potts, “NGN-I Briefing Paper on QoS for IP Networks”, paper,

February 2002.

Shigang Chen, “Routing Support for Providing Guaranteed End-To-End

Quality of Service”, Ph.D. dissertation, 1999.

Fayaz A. Shaikh, Stan McClellan, Manpreet Singh and Sannedhi K.

Chakravarthy, “End-to-End Testing of IP QoS Mechanisms”, research

paper, IEEE, 2002.

Intracom S.A., Dissertation Guidelines and Objectives, March 2002.

Alcatel, “IP QoS Support in the Internet Backbone”, white paper, October

2000, p. 1.

Tektronix, “QoS in the IP Network: Definitions, Processes and

Initiatives”, white paper, 2001.

Ashley Stevenson, “QoS: The IP Solution: Delivering End-to-End Quality

of Service for the Future of IP”, white paper, Lucent Technologies,

December 1999.

Richard Forberg and Tim Hale, “Internet Protocol Quality of Service:

Using DiffServ for Application-Specific Service Level Agreements”,

white paper, Quarry Technologies, 2001.

Intel Corporation, “Differentiated Services”, white paper, December 2000.

Nortel/Bay Networks, “IP QoS: A Bold New Network”, white paper,

September 1998.

Luminous Networks, “Quality of Service”, white paper, 2000.

Paul Ferguson and Geoff Huston, “The Evolution of Quality of Service:

Where Are We Headed?”, article, January 1998.


Zheng Wang, “Internet QoS: Architectures and Mechanisms for Quality of

Service”, summary of book, 2001.

Anita Karve, “IP Quality of Service”, article, Network Magazine, June

1998.

Naveen Joy, “RSVP Provides Quality of Service”, article, Network World,

June 17th, 2002.

Chris Griffin and Greg Goddard, “QoS Terms Defined”, article, Network

World, June 3rd, 2002.

Cisco Systems Technical Documentation Database at www.cisco.com,

articles related to Quality of Service: “Implementing Quality of Service”,

“Quality of Service Networking”.

Cisco Systems, “Signaled QoS (using RSVP)”, white paper, 2002.

Cisco Systems, “DiffServ: The Scalable End-to-End QoS Model”, white

paper, 2001.

Cisco Systems Presentation, “Introduction to Quality of Service”, Session

IPS-130, February 2001.

Cisco Systems Presentation, “Deploying Quality of Service

Technologies”, Session IPS-230, May 2001.

Cisco Systems Presentation, “Deploying Quality of Service in Service

Provider Networks”, Session IPS-231, May 2001.

Cisco Systems Presentation, “Troubleshooting QoS”, Session IPS-330,

May 2001.

Cisco Systems Presentation, “Advanced QoS Concepts and

Developments”, Session IPS-430, May 2001.

Cisco IOS QoS: overview, technical documents, presentations, articles and

press releases at http://www.cisco.com/warp/public/732/Tech/qos/.


K. Nichols, S. Blake, F. Baker and D. Black, “Definition of the

Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”,

RFC 2474, December 1998.

B. Braden et al., “Recommendation on Queue Management and

Congestion Avoidance in the Internet”, RFC 2309, April 1998.

Cisco Systems, “Cisco Content Networking: Delivering Intelligent

Network Services”, Cisco Systems, 2000.


REFERENCES

[1] Xipeng Xiao, “Providing Quality of Service in the Internet”, Ph.D.

dissertation, 2000, p. 1.

[2] Xipeng Xiao, “Providing Quality of Service in the Internet”, Ph.D.

dissertation, 2000, pp. 5-6.

[3] Alcatel, “IP QoS Support in the Internet Backbone”, white paper, October

2000, p. 1.

[4] Xipeng Xiao, “Providing Quality of Service in the Internet”, Ph.D.

dissertation, 2000, pp. 1-2.

[5] Anita Karve, “IP Quality of Service”, article, Network Magazine, June 1998,

p. 1.

[6] Xipeng Xiao and Lionel M. Ni, “Internet QoS: A Big Picture”, paper, p. 2.

[7] Fayaz A. Shaikh, Stan McClellan, Manpreet Singh and Sannedhi K.

Chakravarthy, “End-to-End Testing of IP QoS Mechanisms”, research

paper, IEEE, 2002, p. 1.

[8] M. Potts, “NGN-I Briefing Paper on QoS for IP Networks”, February 2002,

pp. 2-3.

[9] M. Potts, “NGN-I Briefing Paper on QoS for IP Networks”, February 2002,

pp. 3-4.

[10] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 11.

[11] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 10.

[12] M. Potts, “NGN-I Briefing Paper on QoS for IP Networks”, February 2002,

p. 8.


[13] Paul Ferguson and Geoff Huston, “The Evolution of Quality of Service:

Where Are We Headed?”, article, January 1998, p. 7.

[14] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 34.

[15] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 35.

[16] P. Almquist, “Type of Service in the Internet Protocol Suite”, RFC 1349,

July 1992.

[17] Ashley Stevenson, “QoS: The IP Solution: Delivering End-to-End Quality

of Service for the Future of IP”, white paper, Lucent Technologies,

December 1999, p. 6.

[18] L. Zhang, S. Deering, D. Estrin, S. Shenker and D. Zappala, “RSVP: a New

Resource Reservation Protocol”, IEEE Network, September 1993.

[19] R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin, “Resource

Reservation Protocol (RSVP) – Version 1 Functional Specification”, RFC

2205, September 1997.

[20] Xipeng Xiao and Lionel M. Ni, “Internet QoS: A Big Picture”, paper, p. 4.

[21] Weibin Zhao, David Olshefski & Henning Schulzrinne, “Internet Quality of

Service: An Overview”, paper, Columbia University, 2000, p. 5.

[22] A. Demers, S. Keshav and S. Shenker, “A Classical Self-Clocked WFQ

Algorithm”, SIGCOMM ’89, Austin, TX, September 1989.

[23] D. Clark, S. Shenker and L. Zhang, “Supporting Real-Time Applications in

an Integrated Services Packet Network: Architecture and Mechanism”,

Proceedings SIGCOMM ’92, pp. 14-26, August 1992.


[24] Cisco Systems, “Cisco Content Networking: Delivering Intelligent Network

Services”, Cisco Systems, 2000, p. 123.

[25] Cisco Systems, “Cisco Content Networking: Delivering Intelligent Network

Services”, Cisco Systems, 2000, p. 171.

[26] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 114.

[27] Weibin Zhao, David Olshefski & Henning Schulzrinne, “Internet Quality of

Service: An Overview”, paper, Columbia University, 2000, pp. 5-6.

[28] Cisco Systems, “Cisco Content Networking: Delivering Intelligent Network

Services”, Cisco Systems, 2000, pp. 159-160.

[29] S. Floyd and V. Jacobson, “Random Early Detection Gateways for

Congestion Avoidance”, IEEE/ACM Transactions on Networking, August

1993.

[30] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 21.

[31] S. Shenker, C. Partridge and R. Guerin, “Specification of Guaranteed

Quality of Service”, RFC 2212, September 1997.

[32] J. Wroclawski, “Specification of the Controlled-Load Network Element

Service”, RFC 2211, September 1997.

[33] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 21.

[34] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 148.

[35] M. Potts, “NGN-I Briefing Paper on QoS for IP Networks”, February 2002,

p. 5.


[36] S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, “An

Architecture for Differentiated Services”, RFC 2475, December 1998.

[37] K. Nichols, S. Blake, F. Baker and D. Black, “Definition of the

Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”,

RFC 2474, December 1998.

[38] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 25.

[39] V. Jacobson, K. Nichols and K. Poduri, “An Expedited Forwarding PHB”,

RFC 2598, June 1999.

[40] J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, “Assured Forwarding

PHB Group”, RFC 2597, June 1999.

[41] Srinivas Vegesna, “IP Quality of Service”, Cisco Press, December 2001,

ISBN 1-57870-116-3, p. 25.

[42] Ashley Stevenson, “QoS: The IP Solution: Delivering End-to-End Quality

of Service for the Future of IP”, white paper, Lucent Technologies,

December 1999, pp. 3-4.

[43] E. Rosen, A. Viswanathan and R. Callon, “Multiprotocol Label Switching

Architecture”, Internet draft, August 1999.

[44] Ashley Stevenson, “QoS: The IP Solution: Delivering End-to-End Quality

of Service for the Future of IP”, white paper, Lucent Technologies,

December 1999, p. 5.


NOTATION

ACK Acknowledgement

ACL Access Control List

AF Assured Forwarding

ATM Asynchronous Transfer Mode

BA Behavior-Aggregate

CAR Committed Access Rate

CBR Constant Bit Rate

CBWFQ Class-Based Weighted Fair Queuing

CLI Command Line Interface

CR Committed Rate

CRC Cyclic Redundancy Check

DiffServ Differentiated Services

DS Differentiated Services

DSCP DiffServ Code Point

DSL Digital Subscriber Line

DWDM Dense Wavelength Division Multiplexing

EB Excess Burst

EF Expedited Forwarding

FIFO First-In First-Out

IETF Internet Engineering Task Force

IntServ Integrated Services

IOS Internetwork Operating System

IP Internet Protocol

ISP Internet Service Provider

LAN Local Area Network

LLQ Low-Latency Queuing

MAN Metropolitan Area Network


MDRR Modified Deficit Round Robin

MF Multi-Field

MPLS Multi-Protocol Label Switching

MQCLI Modular QoS Command Line Interface

NB Normal Burst

OSI Open Systems Interconnection

PHB Per-Hop Behavior

POS Packet-Over-SONET

PQ Priority Queuing

QDM QoS Distribution Manager

QoS Quality of Service

QPM QoS Policy Manager

RED Random Early Detection

RFC Request For Comments

RR Round-Robin

RSVP Resource ReSerVation Protocol

RTT Round Trip Time

SLA Service Level Agreement

TCP Transmission Control Protocol

ToS Type of Service

UDP User Datagram Protocol

VBR Variable Bit Rate

VC Virtual Circuit

VoIP Voice-over-IP

VPN Virtual Private Network

WAN Wide Area Network

WFQ Weighted Fair Queuing

WIC WAN Interface Card

WRED Weighted Random Early Detection


APPENDIX

PART I – INTERIM REPORT 1. INTRODUCTION

With the exponential growth of the Internet and of network users around the globe, a variety of applications has emerged to meet the users’ needs. Such

applications include e-commerce, Voice-over-IP (VoIP), streaming media (audio

and video), teleconferencing and other bandwidth-intensive applications. This

situation has led to high demand for a network architecture that will provide

appropriate facilities.

Traditionally, the Internet does not have the ability to distinguish between different service classes and is therefore unable to cope with the differing requirements of different applications. The Internet Protocol (IP) is fundamentally a best-effort protocol in that it does not even guarantee the delivery of data packets. Confirmation of the arrival of data packets at the destination is the responsibility of a higher-layer protocol (the Transmission Control Protocol, TCP), which sits just above IP in the well-known seven-layer Open Systems Interconnection (OSI) reference model. In other words, traffic, no matter what its requirements, is processed as quickly as possible and there is no guarantee of timely or even actual delivery.

Quality of Service (QoS) tackles this problem, which is inherent to the Internet and to IP networks in general, by providing mechanisms that ensure timely as well as reliable delivery of data to its destination. The two main driving forces for QoS are companies that use the Web as their main trading point and require better delivery of their content and services, and Internet Service Providers (ISPs) that want to provide better services to their customers by increasing the


effectiveness of their available bandwidth. Ultimately, both groups seek increased

revenue, which can be achieved by providing better and more effective services.

QoS is the mechanism that will make this possible.

The increasing importance of the Internet suggests that some form of

coordination and regulation will be needed to enhance its development and usage.

One of these elements will be the need to address quality of service issues. If

nothing else, quality of service will help differentiate between levels of service

and access provision. At the other end, quality of service may be necessary for

tariff purposes. Pricing and billing strategies for Internet access may have to

partly depend on service quality levels that are requested or provided.


2. BACKGROUND

There are no agreed quantifiable measures that unambiguously define QoS as perceived by a user. Terms such as “better”, “worse”, “high”, “medium”, “low”, “good”, “fair” and “poor” are typically used, but these are subjective and therefore cannot be translated precisely into network-level parameters that network planners can subsequently design for. The end effect at the terminal is also heavily dependent on issues such as compression algorithms, coding schemes, the presence of higher-layer protocols for security, data recovery, retransmission etc., and on the ability of applications to adapt to network congestion or their requirement for synchronisation.

However, network providers need performance metrics that they can agree on with their peers (when exchanging traffic) and with service providers buying resources from them with certain performance guarantees. The following five system performance metrics are considered the most important in terms of their impact on end-to-end QoS as perceived by a user:

Availability: Ideally, a network should be available 100% of the time. Even a figure as high-sounding as 99.8% translates into about an hour and a half of downtime per month, which may be unacceptable for a large enterprise. Serious carriers strive for 99.9999% availability, which they refer to as "six nines" and which translates into a downtime of about 2.6 seconds per month (these figures are worked through in the sketch after this list).

Throughput: This is the effective data transfer rate measured in bits per

second. It is not the same as the maximum capacity, or wire speed, of the

network, often erroneously called the network's bandwidth. Sharing a

network lowers the throughput that can be realized by any one user, as

does the overhead imposed by the extra bits included in every packet for


identification and other purposes. A minimum rate of throughput is usually

guaranteed by a service provider (who needs to have a similar guarantee

from the network provider).

Packet loss: Network devices, such as switches and routers, sometimes

have to hold data packets in buffered queues when a link gets congested. If

the link remains congested for too long, the buffered queues will overflow

and data will be lost. The lost packets must be retransmitted, adding to the

total transmission time. In a well-managed network, packet loss will

typically be less than 1% averaged over a month.

Delay: The time taken by data to travel from source to destination is

known as delay. Unless satellites are involved, the latency of a 5000km

voice call carried by a circuit-switched telephone network is about 25ms.

For the Internet, a voice call may easily exceed 150ms of delay because of

signal processing (digitizing and compressing the analog voice input) and

congestion (queuing).

Jitter (delay variation): This has many causes, including variations in

queue length, variations in the processing time needed to reorder packets

that arrived out of order because they travelled over different paths and

variations in the processing time needed to reassemble packets that were

segmented by the source before being transmitted.
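The downtime and delay figures quoted above can be checked with a few lines of arithmetic. The short Python sketch below is added purely for illustration and assumes a 30-day month and a signal propagation speed of roughly 200,000 km/s (about two thirds of the speed of light):

SECONDS_PER_MONTH = 30 * 24 * 3600            # assuming a 30-day month

def downtime_seconds(availability_percent):
    """Seconds of downtime per month for a given availability percentage."""
    return SECONDS_PER_MONTH * (1 - availability_percent / 100.0)

print(downtime_seconds(99.8) / 60)            # about 86 minutes, i.e. roughly 1.5 hours
print(downtime_seconds(99.9999))              # about 2.6 seconds ("six nines")

# One-way propagation delay over 5000 km at about 200,000 km/s:
print(5000 / 200000 * 1000, "ms")             # 25 ms, as quoted for a long-distance call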

Network planning is the tool for ensuring that availability, throughput,

packet loss, delay and jitter values are within the limits needed to provide

acceptable end-to-end QoS. This calculation has to take into account the nature of

the underlying network (physical capacity and the protocols used at layers 1, 2

and 3) and the knowledge of whether (and if so, how and where) network

resources are shared. The reason for implementing QoS in IP environments is


to avoid (and as far as possible prevent) all of the above problems.

There are three main techniques for providing QoS on the Internet: Integrated Services (IntServ), Differentiated Services (DiffServ) and MultiProtocol Label Switching (MPLS).

IntServ reserves resources (link bandwidth and buffer space) for each individual stream of data so that the service quality can be guaranteed if needed. It assigns a specific stream of data to a so-called “traffic class”, which defines a certain level of service. Once a class has been assigned to the data stream, a message is forwarded towards the destination to determine whether the network has the resources (transmission capacity, buffer space etc.) needed to support that specific class of service. If all devices along the path are found capable of providing the required resources, the receiver returns an acknowledgement to the source, indicating that the latter may start transmitting its data. This procedure, carried out by the Resource ReSerVation Protocol (RSVP), is repeated continually to verify that the necessary resources remain available. If the required resources are not available, the receiver sends an RSVP error message to the sender.

Theoretically, this continuous checking that resources are available means

that the network resources are used very efficiently. When the resource

availability reaches a minimum threshold, services with very strict QoS

requirements will not receive an acknowledgement and will know that the QoS is

not guaranteed.
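This hop-by-hop admission decision can be pictured with a small toy model; the sketch below (hop names and capacities are invented) only illustrates the idea and is not the actual RSVP message exchange:

class Hop:
    """A network element with a fixed capacity and a running total of reservations."""
    def __init__(self, name, capacity_kbps):
        self.name = name
        self.capacity_kbps = capacity_kbps
        self.reserved_kbps = 0

    def can_admit(self, kbps):
        return self.reserved_kbps + kbps <= self.capacity_kbps

def request_reservation(path, kbps):
    """Admit a flow only if every hop on the path still has enough spare capacity."""
    if all(hop.can_admit(kbps) for hop in path):
        for hop in path:
            hop.reserved_kbps += kbps
        return True                      # an "acknowledgement" reaches the sender
    return False                         # an "error message": QoS cannot be guaranteed

path = [Hop("edge-A", 2048), Hop("core-1", 10000), Hop("edge-B", 512)]
print(request_reservation(path, 384))    # True: the flow fits on every hop
print(request_reservation(path, 256))    # False: the 512 kbps edge link is now too full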

However, IntServ has its disadvantages as well. It is not capable of

ensuring that the required resources will be available when needed. Another disadvantage is that it reserves network resources for individual data streams. If multiple streams


from one aggregation point require the same resources, the streams will

nevertheless all be treated individually. This means that acknowledgements must be sent for each individual data stream, thus wasting valuable network resources.

DiffServ divides traffic into different classes and treats them differently,

according to requirements. A short tag is appended to each packet depending on

its service class. Data streams having the same resource requirements may then be

aggregated based on their tags when they arrive at the edge routers. The routers at

the core then forward the data flows toward their destinations based on their tags

without examining the individual packet headers in detail. Since most of the

decision-making is in this way transferred from the core routers to the edge

routers, the core network runs much faster. This mechanism also does not require

a signalling procedure prior to sending the data.
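This division of labour can be pictured with a purely illustrative sketch (the classification rules, class names and port numbers below are invented, and no DSCP encoding is shown): the edge inspects and tags each packet once, while the core acts on the tag alone:

EDGE_POLICY = {("udp", 5060): "voice", ("tcp", 443): "business"}   # invented rules
CORE_PRIORITY = {"voice": 0, "business": 1, "best-effort": 2}      # 0 = highest priority

def edge_mark(packet):
    """Edge router: examine the packet once and attach a class tag."""
    packet["class"] = EDGE_POLICY.get((packet["proto"], packet["dport"]), "best-effort")
    return packet

def core_forward(queue):
    """Core router: order packets by their tag only, without re-reading headers."""
    return sorted(queue, key=lambda p: CORE_PRIORITY[p["class"]])

arrivals = [{"proto": "tcp", "dport": 80},     # web browsing  -> best-effort
            {"proto": "udp", "dport": 5060},   # voice         -> voice class
            {"proto": "tcp", "dport": 443}]    # e-commerce    -> business class
for p in core_forward([edge_mark(p) for p in arrivals]):
    print(p["class"], p["proto"], p["dport"])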

MPLS labels the packets in such a way as to make it unnecessary for each

IP packet header to be analyzed at intermediate points between the source and

destination. MPLS does this by appropriately labelling IP packets at the input of

edge routers located at the entry points of an MPLS-enabled network. It is called

multiprotocol because it works with IP, Asynchronous Transfer Mode (ATM) and

frame relay network protocols. In terms of the OSI reference model, MPLS forwards packets at the second layer (switching) rather than the third layer (routing), thus simplifying traffic management, especially when there is a large variety of traffic types.
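The label-based forwarding just described can be sketched as a simple table lookup; the fragment below is a toy illustration with invented labels and interface names (a real label switch router also handles label stacks and TTL):

# (incoming label) -> (outgoing label, outgoing interface)
LABEL_TABLE = {17: (42, "if-east"),
               42: (99, "if-north"),
               99: (None, "if-customer")}     # None: pop the label, deliver as plain IP

def switch(label, payload):
    """Forward a labelled packet hop by hop using only the label table."""
    while label is not None:
        out_label, out_if = LABEL_TABLE[label]
        print("label", label, "->", out_label, "via", out_if)
        label = out_label
    print("label popped, delivering", payload)

switch(17, b"ip packet")                      # the IP header is never examined en route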

In the past, QoS planners supported both IntServ and DiffServ. At present,

however, the trend is to use DiffServ supplemented by some of the resource

reservation capabilities of RSVP at the edges. At the edges of the network,

resources tend to be more limited and there are not so many streams to maintain.


The dissertation will be carried out at Intracom S.A. (www.intracom.gr),

located in Peania (near Athens) in Greece. Intracom is one of the largest

telecommunications companies in Greece. According to Intracom’s guidelines,

the survey part of the dissertation will include:

Study of the mechanisms, algorithms, protocols and technologies that

support QoS implementation, including the work of several Internet

Engineering Task Force (IETF) working groups as well as relevant papers.

Investigation of the current status of implementation of such capabilities in

network devices (routers, switches etc.), focusing mainly on Cisco®

Systems products.

Intracom has already identified key technical issues in their outline. The

basic literature survey outline proposed by Intracom will consist of the following:

What is QoS

QoS parameters and levels

Applications that need QoS

QoS Frameworks:

• Integrated Services (IntServ)

• Differentiated Services (DiffServ)

Relevant technologies:

• Asynchronous Transfer Mode (ATM)

• Multiprotocol Label Switching (MPLS)

• Routing, constraint-based routing and traffic engineering

Network element mechanisms for QoS

• Packet marking, classification, policing, shaping


• Resource allocation, scheduling (Round-Robin (RR), Modified

Deficit Round-Robin (MDRR), Weighted Fair Queuing (WFQ),

Class-Based Queuing (CBQ))

• Queue management (Random Early Detection (RED), Weighted

Random Early Detection (WRED))

A preliminary search for relevant papers, documents and books has yielded promising results, and there is indeed a very broad variety of material on this topic, both on the Internet and in print. Some essential websites and books that should prove valuable are presented below (books are presented together with their descriptions and/or summaries):

Cisco website at www.cisco.com (product documentations, technical

guides and white papers).

ITpapers at www.itpapers.com (large collection of IT-related white papers

and technical documents).

IETF website at www.ietf.org (documents and progress reports of IETF

working groups).

Grenville Armitage, “Quality of Service in IP Networks: Foundations for a

Multi-Service Internet”, MacMillan Technical Publishing, April 2000.

This book provides readers with information to evaluate existing IP networks and determine how to improve upon the traditional best-effort service. It covers the following:

• Pros and cons of different service models and schemes for

classification, queue management, and scheduling.

• The impact of existing access technologies, such as dial-up

modems, ISDN, ADSL and cable modems, on QoS.


• The role of WAN technologies, such as ATM, IP over ATM,

Packet over SONET/SDH and WDM, in providing QoS.

• The impact of current signaling protocols upon QoS-sensitive

applications such as voice and video.

Cisco Networking Essentials, “IP Quality of Service”, Cisco Press,

January 23rd, 2001, 1st Edition.

This book helps readers understand and deploy IP QoS in Cisco networks. Its key features are the following:

• QoS fundamentals and the need for IP QoS.

• DiffServ QoS architecture and its QoS functionality.

• IntServ QoS model and its QoS functions.

• ATM, Frame Relay and IEEE 802.1p/802.1Q QoS technologies

and how they interwork with IP QoS.

• MPLS and MPLS VPN QoS and how they interwork with IP QoS.

• MPLS traffic engineering

• Routing policies, general IP QoS functions and other

miscellaneous QoS information.

Mike Flannagan, “Administering Cisco QoS for IP Networks”, 1st Edition,

Syngress Media, March 2001.

This book discusses IP QoS and how it applies to enterprise and service

provider environments. The author reviews routing protocols and QoS

mechanisms available on Cisco network devices, such as routers and switches.

Alistair A. Croll & Eric Packman, “Managing Bandwidth: Deploying QOS

In Enterprise Networks”, Prentice-Hall, 2000.

This book is a guide to QoS techniques for cutting costs, enhancing

performance, deploying next-generation applications, handling peak loads and


maximizing the value of enterprise networks. It contains comparisons of the latest

QoS options, workarounds for the limitations of today's standards and real-world

case studies of bandwidth-managed networks, ranging from single-site small

businesses to global financial institutions. Furthermore, it features:

• Various QoS approaches, including media prioritization, IntServ

and DiffServ models.

• Load-balancing and caching alternatives that mitigate server load

and increase end-to-end application performance.

• Implementations of policy systems and service level agreements

for scalable distribution and tracking of QoS rules.

• QoS Integration with enterprise directory services such as

Directory-Enabled Networks and directory access protocols like

LDAP.

Zheng Wang, “Internet QoS: Architectures and Mechanisms for Quality of

Service”, Morgan Kaufmann Publishers, 2001.

This book is an up-to-date review of the aspects relevant to Internet-specific QoS. It makes a consistent case for what the important elements of Internet QoS are. The main part revolves around IntServ and DiffServ, but it separates into sub-sections the necessary mechanisms from the concepts and policies used by IntServ and DiffServ. In the case of IntServ, after a motivation based on real-time applications, the intended architecture and the service models, three

crucial mechanisms are introduced: resource reservation as featured by RSVP,

flow identification schemes and packet scheduling. IntServ over specific data link

layers is also summarized. The chapter on DiffServ describes the framework,

traffic classification and conditioning, the assured forwarding and expedited

forwarding services, along with the mechanisms for the differentiated services


field, packet classification, and traffic policing algorithms. In addition, the issues

of interoperability with non-DiffServ-compliant networks, end-to-end resource

management and relevant performance issues are discussed. The chapter on

MPLS covers the historical perspective, proceeds to different proposals for label

switching and provides essential concepts of the MPLS architecture.


3. AIMS AND OBJECTIVES

Again, Intracom has clearly defined the aims and objectives of this

dissertation. According to the outline, the objectives of the dissertation are the

following:

• To investigate the mechanisms, algorithms, protocols and technologies

that have been defined for implementing QoS in IP networks.

• To identify current state-of-the-art, with regard to implementation,

availability and operation of such features and capabilities in current

commercial network devices.

• To experiment with such capabilities of network devices in a networking

laboratory environment by designing, implementing and validating a

network provided with QoS mechanisms.


4. METHODS

The practical part of the dissertation will include the design of QoS

mechanisms in an IP network and their implementation using the networking

laboratory of the Intracom Information Systems Department.

Firstly, a network topology and QoS framework will be identified and the

functions required for each network element will be selected. Then, the network

elements will be configured appropriately and finally, tests will be run for

validating and assessing the offered QoS. To this end, some traffic generation and analysis tools will be needed. These will be installed and configured on workstations.

Below is a complete detailed outline of the key activities and tasks that

will lead to the achievement of the desired objectives:

Design and implementation of a QoS-enabled network at Intracom IS Lab,

consisting of Cisco network devices, using technologies such as DiffServ,

MPLS, ATM. This will be carried out on an existing network infrastructure; the network does not need to be designed from scratch. The relevant sub-tasks are the following:

• Identification of applications (e.g. Virtual Private Networks

(VPNs), Voice-over-IP (VoIP), Internet access etc.)

• Definition of QoS classes

• Identification and classification of network topology

• Types of network elements (edge routers, core routers, switches

etc.)

• Protocols and QoS mechanisms for each network element type per

QoS class


• Configuration of network elements

Measurement Infrastructure for validating QoS of a network. Sub-tasks

consist of the following:

• Applications, traffic generation and measurement tools

• Installation/configuration/development of traffic generation and analysis tools (TCPDUMP, MGEN, LANtraffic, TG, etc.) on appropriate workstations (Linux or Windows 2000 PCs); a minimal illustrative generator is sketched after this list.

• Design of validation tests

• Execution of tests, collection and analysis of results

• Possible fine-tuning of QoS mechanisms

Simple tests with advanced technologies for QoS. Sub-tasks are:

• MPLS Traffic Engineering

• MPLS Traffic Engineering and DiffServ (DS-TE: DiffServ aware

traffic engineering)
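For illustration only, the sketch below shows the kind of minimal UDP traffic generator meant by the traffic-generation item above; it is not MGEN or LANtraffic, and the address, port and rate values are placeholders:

import socket
import time

def generate(dest_ip, dest_port, pps, size, seconds):
    """Send fixed-size UDP datagrams at roughly a fixed packet rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    payload = b"\x00" * size
    deadline = time.time() + seconds
    sent = 0
    while time.time() < deadline:
        sock.sendto(payload, (dest_ip, dest_port))
        sent += 1
        time.sleep(1.0 / pps)                 # crude pacing; real tools shape more precisely
    print("sent", sent, "datagrams of", size, "bytes")

# Example with placeholder values: generate("192.0.2.10", 5001, pps=50, size=200, seconds=10)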


5. TIME PLAN

The dissertation will commence on June 17th, 2002 at the main Intracom

installation in Peania, Greece. Submission deadline is September 30th, 2002. This

leaves three months for the completion of the dissertation. To be exact, the

project start date is June 17th and the projected date of completion is September

13th. This leaves one week for additional review and corrections. The final

dissertation will be sent either by mail or by courier to the department. According

to the outline proposed by Intracom, there will be three main project stages. All

three major project stages will consist of several smaller sub-tasks that will be

carried out throughout the project. These are outlined below:

• Design and implementation of a QoS enabled network

• Measurement infrastructure for validating QoS of a network

• Simple tests with advanced technologies for QoS

Since all experiments will be carried out on an existing network

infrastructure and all required equipment will be readily available, the project can

be completed on time. Below is a table outlining the projected progress of the

dissertation:


PROJECT STAGE START FINISH

1. DESIGN & IMPLEMENTATION OF QOS-ENABLED NETWORK 17/06/2002 24/07/2002

• Identification of applications 17/06/2002 21/06/2002

• Definition of QoS classes 18/06/2002 25/06/2002

• Identification and classification of network topology 24/06/2002 10/07/2002

• Types of network elements 08/07/2002 17/07/2002

• Protocols and QoS mechanisms for each network element type 12/07/2002 19/07/2002

• Configuration of network elements 15/07/2002 24/07/2002

2. MEASUREMENT INFRASTRUCTURE FOR VALIDATING QOS OF A NETWORK 25/07/2002 30/08/2002

• Applications, traffic generation and measurement tools 25/07/2002 02/08/2002

• Installation/configuration/development of traffic generation and analysis tools 01/08/2002 09/08/2002

• Design of validation tests 12/08/2002 16/08/2002

• Execution of tests, collection and analysis of results 19/08/2002 23/08/2002

• Possible fine-tuning of QoS mechanisms 26/08/2002 30/08/2002

3. SIMPLE TESTS WITH ADVANCED TECHNOLOGIES FOR QOS 02/09/2002 13/09/2002

• MPLS traffic engineering 02/09/2002 06/09/2002

• MPLS traffic engineering and DiffServ 09/09/2002 13/09/2002

FINAL REVIEW & CORRECTIONS 16/09/2002 20/09/2002

SUBMISSION 23/09/2002 23/09/2002

Please see appendix for the relevant Gantt chart.


6. DELIVERABLES

The expected outcome of this project is to broaden and expand the

knowledge on the specific topic and give possible solutions for future use and

future applications. Quality of Service in IP networks is a quickly emerging topic

and will dominate the future of the networking arena.

The project will lead to results and conclusions regarding this technology,

possible advantages and disadvantages and future trends. This will be achieved by

investigating and evaluating the technology’s mechanisms and how it can be

applied using current hardware.

Ultimately, the project will contribute to the investigation of this topic and

will provide a basis for future experiments and investigations. The main

deliverables of this project are:

A fully working QoS-enabled LAN infrastructure, on which all tests will

be carried out.

Results of various experiments, which have been outlined previously,

together with critical evaluation.

An extensive report based on the performed experiments and results.

The current report.


7. BIBLIOGRAPHY/REFERENCES

• M. Potts, NGN Initiative, “NGN-I Briefing Paper on QoS for IP

Networks”, February 2002.

• Intracom S.A., Dissertation Guidelines and Objectives, March 2002.

• The whatis.com Website at www.whatis.com.

• Ashley Stevenson, “QoS: The IP Solution – Delivering End-to-End

Quality of Service for the Future of IP”, White Paper, Lucent

Technologies, December 1999.

• Xipeng Xiao, “Providing Quality of Service in the Internet”, Dissertation,

2000.

• Peng Hwa Ang & Berlinda Nadarajan, “Issues in the Regulation of

Internet Quality of Service”, The Internet Society Website at

www.isoc.org, 2000.


PART II – INTERIM REPORT GANTT CHART

PART III – FINAL PROJECT TIME PLAN

PROJECT STAGE START FINISH

1. LITERATURE SURVEY 17/6/2002 27/6/2002

Identification of relevant bibliography 17/6/2002 21/6/2002

Definition of terms 20/6/2002 27/6/2002

2. DESIGN & IMPLEMENTATION OF A QOS-ENABLED NETWORK 27/6/2002 5/8/2002

Identification of applications 27/6/2002 6/7/2002

Definition of QoS classes 1/7/2002 13/7/2002

Network topology 8/7/2002 20/7/2002


PART IV – FINAL PROJECT GANTT CHART


PART V – IETF RFC SPECIFICATIONS

Below are all relevant IETF RFC (Request For Comments) specifications,

which can be found at the IETF website (http://www.ietf.org).

J. Postel, “Internet Protocol”, STD 5, RFC 791, September 1981.

J. Postel, “Transmission Control Protocol”, STD 7, RFC 793, September

1981.

J. Postel, “IP Service Mappings”, RFC 795, September 1981.

P. Almquist, “Type of Service in the Internet Protocol Suite”, RFC 1349,

July 1992.

R. Braden, L. Zhang, S. Berson, S. Herzog and S. Jamin, “Resource

ReSerVation Protocol (RSVP) – Version 1 Functional Specification”, RFC

2205, September 1997.

J. Wroclawski, “The Use of RSVP with IETF Integrated Services”, RFC

2210, September 1997.

J. Wroclawski, “Specification of the Controlled-Load Network Element

Service”, RFC 2211, September 1997.

S. Shenker, C. Partridge and R. Guerin, “Specification of Guaranteed

Quality of Service”, RFC 2212, September 1997.

S. Shenker and J. Wroclawski, “General Characterization Parameters for

Integrated Service Network Elements”, RFC 2215, September 1997.

B. Braden et al., “Recommendation on Queue Management and

Congestion Avoidance in the Internet”, RFC 2309, April 1998.


K. Nichols, S. Blake, F. Baker and D. Black, “Definition of the

Differentiated Services Field (DS Field) in the IPv4 and IPv6 Headers”,

RFC 2474, December 1998.

S. Blake, D. Black, M. Carlson, E. Davies, Z. Wang, and W. Weiss, “An

Architecture for Differentiated Services”, RFC 2475, December 1998.

J. Heinanen, F. Baker, W. Weiss and J. Wroclawski, “Assured Forwarding

PHB Group”, RFC 2597, June 1999.

V. Jacobson, K. Nichols and K. Poduri, “An Expedited Forwarding PHB”,

RFC 2598, June 1999.

E. Rosen, A. Viswanathan and R. Callon, “Multiprotocol Label Switching

Architecture”, Internet draft, August 1999.


All brand names contained within this dissertation

are property of their respective owners