8/6/2019 Phase2 Report
1/32
CHAPTER 1
INTRODUCTION
In a world of fast-changing technology, there is a rising need for people to communicate, stay connected, and have appropriate and timely access to information regardless of the location of the individuals or the information. The increasing demands on wireless communication systems, and the push for their ubiquity, have led to the need for a better understanding of fundamental issues in communication theory and electromagnetics, and of their implications for the design of highly capable wireless systems. As mobile environments continue to develop, the major service providers in the wireless market keep monitoring the growth of fourth-generation (4G) mobile technology. 2G and 3G are well established as the mainstream mobile technologies around the world; 3G is struggling to gain market share for a variety of reasons, while 4G is gaining confidence.
In today's Internet, real-time applications such as VoIP, videoconferencing and online gaming mostly use RTP over UDP, or UDP alone, to transport data. Because these protocols are unresponsive to congestion events, the growing popularity of applications that use them endangers the stability of the Internet. So, for real-time applications to be widely adopted, common congestion control mechanisms suitable for real-time multimedia are expected to be deployed. In addition, a variety of wireless and wired technologies have been developed in the past years. The vision for the next generation of mobile communications networks is to have these technologies integrated, with handovers between them occurring seamlessly. These handovers may cause the bandwidth available during a connection to vary by one or more orders of magnitude. More volatile scenarios, such as ad hoc or sensor networks, are also expected. Most probably, next-generation terminals will be multi-homed and will act as mobile routers. For these reasons, the control of real-time flows in 4G networks is still an unsolved issue. New solutions are required so that network stability is maintained even when conditions vary abruptly, and the quality perceived by interactive real-time applications is not degraded by the mechanisms controlling the flow.
1.1 4G Network Architecture
The figure below shows the widely accepted 4G network structure, with IP as the core network used for communication, integrating the 2G, 3G and 4G technologies using a convergence layer.
Fig. Architecture of 4G Network
The 4G architecture will provide access through a collection of radio interfaces, seamless roaming/handover, and best-connected service, combining multiple radio access interfaces (such as WLAN, Bluetooth and GPRS) into a single network that subscribers may use. It allows any mobile device to roam seamlessly across different wireless technologies, automatically using the best connection available for the intended use. Users will have access to different services, increased coverage, the convenience of a single device, one bill with reduced total access cost, and more reliable wireless access even with the failure or loss of one or more networks.
In the 4G architecture, a single physical communication device with multiple interfaces is used to access services on different wireless networks. The multimode device architecture may improve call completion and expand the effective coverage area. The device itself incorporates most of the additional complexity, without requiring wireless network modification or interworking devices. Each network can deploy a database that keeps track of user location, device capabilities, network conditions, and user preferences. This allows a user to connect to the other members of the network without any modification of his or her infrastructure, applications, services, or communication architecture.
1.2 Issues in 4G Networks
Some of the issues in 4G networks are:
1. Multimode user terminal:
A multimode user terminal is a device that works in different modes, supporting a wide variety of 4G services and wireless networks by reconfiguring itself to adapt to different wireless networks. Such terminals face several design issues, including limitations in device size, cost, power consumption, and backward compatibility with existing systems.
2. Wireless network discovery:
Availing 4G services requires the multimode user terminal to discover and select the preferred wireless network. Service discovery in 4G will be much more challenging than in 3G because of the heterogeneity of the networks and their access protocols.
3. Wireless network selection:
4G will give users a choice of wireless network, providing optimized performance and high QoS for a particular place, time, and desired service (communication, multimedia). But which parameters define high QoS and optimized performance at a particular instant needs to be clearly specified to make the network selection procedure efficient and transparent to the end user.
Possible considerations are the available network resources, the service types the network supports, cost, and user preference.
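These criteria can, for example, be combined into a weighted utility score per candidate network. The sketch below is purely illustrative; the attribute names and weights are assumptions, not values from this report:

```python
# Hypothetical sketch of weighted network selection: the report lists
# available resources, supported services, cost, and user preference as
# inputs; the fields and weights here are illustrative only.

def score_network(net, weights):
    """Higher is better: reward free bandwidth and preference, penalize cost."""
    return (weights["bandwidth"] * net["free_bandwidth_mbps"]
            - weights["cost"] * net["cost_per_mb"]
            + weights["preference"] * net["user_preference"])

def select_network(candidates, weights):
    """Pick the candidate network with the highest score."""
    return max(candidates, key=lambda net: score_network(net, weights))

if __name__ == "__main__":
    candidates = [
        {"name": "WLAN", "free_bandwidth_mbps": 20.0, "cost_per_mb": 0.0, "user_preference": 1.0},
        {"name": "GPRS", "free_bandwidth_mbps": 0.05, "cost_per_mb": 0.1, "user_preference": 0.2},
    ]
    weights = {"bandwidth": 1.0, "cost": 10.0, "preference": 5.0}
    print(select_network(candidates, weights)["name"])  # WLAN
```

In practice the weights themselves would be derived from user preferences and operator policy, which is part of what makes transparent network selection hard.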
4. Terminal mobility:
Terminal mobility is an essential characteristic for fulfilling the anytime-anywhere promise of 4G. It allows mobile users to roam across the geographic boundaries of wireless networks. The two main issues in terminal mobility are location management and handoff management. Location management involves tracking the location of mobile users and maintaining information such as authentication data, QoS capabilities, and the original and current cell locations. Handoff management maintains the ongoing communication when the terminal roams. A handoff can be horizontal or vertical, depending on whether the user moves from one cell to another within the same wireless system or across different wireless systems (e.g., WLAN to GSM). The handoff process faces several challenges, such as maintaining QoS and system performance across different systems, deciding the correct handoff time, designing the correct handoff mechanism, packet losses, handover latency, and increased system load.
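One of the challenges listed, deciding the correct handoff time, is commonly addressed with a hysteresis margin so that the terminal does not ping-pong between networks. A minimal sketch, assuming a signal-strength trigger and an illustrative 5 dB margin (neither is from this report):

```python
# Illustrative handoff decision with hysteresis: switch to the candidate
# network only when its signal exceeds the serving network's by a margin,
# which limits "ping-pong" handoffs near a coverage boundary.

HYSTERESIS_DB = 5.0  # assumed margin; real systems tune this per deployment

def should_handoff(serving_rssi_dbm, candidate_rssi_dbm, margin_db=HYSTERESIS_DB):
    """True when the candidate is stronger than the serving network plus margin."""
    return candidate_rssi_dbm > serving_rssi_dbm + margin_db

print(should_handoff(-80.0, -78.0))  # within the margin: stay put
print(should_handoff(-80.0, -70.0))  # clearly stronger: hand off
```

A vertical handoff would replace the raw signal comparison with a multi-criteria score (as in the network selection sketch above), but the hysteresis idea carries over.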
5. Network infrastructure and QoS support:
Unlike previous-generation networks (2G and 3G), 4G is an integration of IP and non-IP-based systems. Prior to 4G, QoS designs were made with a particular wireless system in mind. In 4G networks, QoS designs should instead consider the integration of different wireless networks in order to guarantee QoS for end-to-end services.
6. Security:
Most of the security schemes and encryption/decryption protocols of current-generation networks were designed only for specific services. They are too inflexible to be used across the heterogeneous architecture of 4G, which needs dynamically reconfigurable, adaptive, and lightweight security mechanisms.
7. Fault tolerance:
Wireless networks are characterized by a tree-like topology. A failure at any level can affect all
the network elements at the levels below. This problem can be further aggravated by the presence of multiple tree topologies. Adequate research is required to devise a strategy for fault tolerance in wireless networks.
8. Convergence services:
Convergence means creating an environment that can eventually provide seamless, highly reliable, high-quality broadband mobile communication services and ubiquitous services through converged wired and wireless networks, free of spatial and terrestrial limitations, by means of ubiquitous connectivity. Convergence among industries is also accelerated by the formation of alliances through participation in various projects to provide convergence services. 4G mobile systems will mainly be characterized by a horizontal communication model, where different access technologies such as cellular, cordless, wireless-LAN-type systems, short-range wireless connectivity, and wired systems are combined on a common platform to complement each other in the best possible way for different service requirements and radio environments. This development is expected to move progressive information technologies away from the current technical focus and toward fully mobile and widespread convergence of media. Trends from the service perspective include integration of services and convergence of service delivery mechanisms. In accordance with these trends, mobile network architecture will become flexible and versatile, and new services will be easy to deploy.
9. Broadband Services:
Broadband is a basis for enabling multimedia communications, including video service, which requires transmission of large amounts of data; it naturally brings in a media convergence aspect, based on packet transport, advocating the integration of various media of different qualities. The increasing penetration of broadband services such as Asymmetric Digital Subscriber Line (ADSL), optical fiber access systems, and office or home LANs is expected to lead to a demand for similar services in the mobile communication environment. The characteristics of 4G service applications will give broadband service the following advantages:
Low cost: To make broadband services available for users to exchange various kinds of information, charges must be lowered considerably, keeping the cost at or below the cost of existing services.
Coverage of wide area: One feature of mobile communications is its availability and omnipresence. That advantage is important for future mobile communication as well. In particular, it is important to maintain the service area in which terminals of the new system can be used during the transition from an existing system to the new one.
Wide variety of service capability: Mobile communication serves various types of users. In the future, advanced system performance and functionality are expected to enable a variety of services beyond ordinary telephony. Those services must be easy for anyone to use.
10. Interaction with home-networking, telemetric, and sensor-network services:
As technologies become more collaborative and essential, the evolution of all network services toward an All-IP network is needed for more converged services. An IP-based unified network for high-quality convergence services through active access is what the broadband convergence network is all about. All-IP, or Next Generation Network, is the IP-based convergence of wired and wireless backbone networks, and may be the most rapidly deployed case of convergence.
All-IP networking and IP multimedia services are the major trends in wired and wireless networks. The idea of the broadband convergence network (BcN) lies in the provision of a common, unified, and flexible service architecture that can support multiple types of services and management applications over multiple types of transport networks. The primary purpose of putting 4G service applications into a more interaction-driven broadband convergence network is their applicability to home-networking, telemetric, and sensor-network services. A collaborative converged network will give more beneficial services and applications, especially in broadband computing, to users and providers. To give more emphasis to this service
application, one example is home networking, whose applicability serves to give more advantage to users and society in terms of broadband connectivity. Beyond broadband convergence network applications, telemetric applications will put a more tangible emphasis on 4G mobile technology.
11. Flexibility and personalized service:
The key concern in security designs for 4G networks is flexibility. 4G systems will support comprehensive personalized services while providing stable system performance and quality of service. To support multimedia services, high-data-rate services with good system reliability will be provided, while at the same time a low transmission cost will be maintained. To meet the demands of these diverse users, service providers should design personal and customized services for them. Personal mobility is a concern in mobility management: it concentrates on the movement of users instead of users' terminals, and involves the provision of personal communications and personalized operating environments.
1.3 Congestion Control in 4G Heterogeneous Networks
Congestion control over networks, for all types of media traffic, has been an active area of research in the last decade, owing to the rapid increase in audiovisual traffic brought by digital convergence. A variety of network applications are built on the capability of streaming media either in real time or on demand, such as video streaming and conferencing, voice over IP (VoIP), and video on demand (VoD). The number of users of these network applications is continuously growing, resulting in congestion.
Not all network applications use TCP, and those that do not need not share the available bandwidth fairly. Until now, this unfairness of non-TCP applications has had little impact, because most of the traffic in the network uses TCP-based protocols. However, the number of audio/video streaming applications, such as Internet audio and video players, video conferencing, and similar real-time applications, is increasing steadily, and the proportion of non-TCP traffic is soon expected to grow. Since these applications commonly do not incorporate TCP-compatible congestion control mechanisms, they treat competing TCP flows unfairly: all TCP flows reduce their data rates in an attempt to dissolve the congestion, while the non-TCP flows continue to send at their original rate. This highly unfair condition can lead to starvation of TCP traffic, i.e., congestion collapse, the undesirable situation in which the available bandwidth of a network is almost entirely occupied by packets that are discarded because of congestion before they reach their destination. For this reason, it is desirable to define suitable congestion control mechanisms for non-TCP traffic that are compatible with the rate-adaptation mechanism of TCP. Such mechanisms make non-TCP applications TCP-friendly, and thus lead to a fair distribution of bandwidth.
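Mechanisms in this family, such as TCP-friendly rate control (TFRC), cap a non-TCP sender's rate using the standard TCP throughput equation. The sketch below evaluates that equation; the t_RTO = 4*RTT simplification is a common approximation, not a value taken from this report:

```python
# Sketch of the TCP throughput equation used by TCP-friendly rate control
# (TFRC) to bound a non-TCP sender's rate, with one ACKed packet per ACK
# and t_RTO approximated as 4 * RTT (a common simplification).

from math import sqrt

def tcp_friendly_rate(s, rtt, p):
    """Approximate TCP-fair sending rate in bytes/second.

    s: packet size in bytes; rtt: round-trip time in seconds;
    p: loss event probability (0 < p <= 1).
    """
    t_rto = 4.0 * rtt
    denom = (rtt * sqrt(2.0 * p / 3.0)
             + t_rto * 3.0 * sqrt(3.0 * p / 8.0) * p * (1.0 + 32.0 * p ** 2))
    return s / denom

# Higher loss probability yields a lower permitted rate, mirroring TCP's
# own backoff, which is exactly what makes the flow "TCP-friendly".
print(tcp_friendly_rate(1460, 0.1, 0.01) > tcp_friendly_rate(1460, 0.1, 0.05))  # True
```

A TFRC sender measures RTT and the loss event rate from receiver feedback and continually adjusts its sending rate to this bound.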
1.4 Problem Definition
Since IP was designed to be a protocol of integration, i.e., to interconnect various networks (which may or may not use different transmission technologies), its essential concern has been robustness and scalability. So, regardless of the technology on which a network is built and the size of its growth, the protocol is able to absorb them and keep delivering packets in the best possible manner.
The emergence of multimedia, real-time, and mission-critical applications, and the overwhelming presence of the Internet, has created the need to depart from the original "all packets are created equal" paradigm and to look into some traffic differentiation and discrimination. Simply put, while the best-effort model should be preserved for most traffic flows, some applications, for instance voice and video, require special treatment or guarantees with respect to bandwidth, reliability, delay, delay variation, and priority for processing at the routers. Congestion works against the stability and efficiency of networks: the more congested the network, the less bandwidth there is for the flows, not to mention the lower effective throughput.
Congestion control is a set of procedures and mechanisms whose primary function is to either prevent congestion or rectify its consequences. In general, congestion control schemes are used to maintain network operation at acceptable performance levels when congestion occurs. Some of the main reasons behind congestion are:
Slow links,
Limited processing power at end and intermediate systems, and
Shortage of buffer space.
Solving the congestion problem is not a simple case of just adding new resources or extending the capabilities of the old ones. For example, sending data at a high rate through a high-speed LAN may be a problem for the gateway linking the network to the outside. Due to the high volume of data arriving in a short time interval, i.e., a burst, the buffer will eventually overflow. In this case, having a larger buffer will most likely just produce a larger loss, since bursts are likely to challenge any reasonable buffer capacity.
Occasionally, the complexity of TCP works against itself: not all applications on the Internet have the same requirements concerning reliability, delay, or flow control. Reliability, which is based on redundancy or retransmission of delayed or lost packets, is counterproductive in real-time applications. The same is true for multimedia applications, where the main concerns are the available bandwidth, small variations in delay, and guarantees that sustain the transmission quality over a certain time interval.
In order to avoid using TCP as the main transport vehicle for all applications on the Internet, a simpler protocol, UDP, was designed and implemented. It transports data at high speed with low overhead. Unlike TCP, UDP is not aware of congestion and does not react if it occurs. The protocol pumps data into the network as fast as possible and consequently, within a reasonable time, induces congestion. The first sign is usually a dramatic drop in the performance of TCP, which in the presence of congestion slows down and eventually halts transporting segments.
Most of the applications on the Internet, or at least those that have been widely used so far, such as mail exchange, FTP, and web browsing, employ TCP as the transport protocol. The initial procedures built into TCP to control congestion were rather elementary and restricted to preventing an overflow of the destination buffer; they did not deal with the routers at all. This problem was behind the series of congestion collapses at the end of the 1980s, and behind the surge of research into possible modifications and extensions of the protocol to meet the challenges of new transmission technologies and the explosive growth of networking and the Internet.
Indeed, the last fifteen years have witnessed extensive and meritorious research into the nature of congestion and how to control it. Two types of mechanisms address network congestion: congestion avoidance and congestion control. Congestion avoidance allows a network to operate in the optimal region of low delay and high throughput, thus preventing the network from entering the congested state. Traditional congestion control facilitates network recovery from congestion, i.e., from high delay and low throughput, back to a normal operating state.
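The distinction can be illustrated with the classic additive-increase/multiplicative-decrease (AIMD) rule used by TCP: additive growth probes for bandwidth while the network is uncongested, and a multiplicative cut recovers once congestion is signalled. A minimal sketch; the parameters alpha = 1 and beta = 0.5 are TCP's classic values, and the loss pattern is invented for illustration:

```python
# Minimal AIMD sketch: additive increase (congestion avoidance) probes
# for bandwidth; multiplicative decrease recovers from congestion.

def aimd_step(cwnd, congested, alpha=1.0, beta=0.5):
    """One RTT of TCP-style congestion-window adaptation (in segments)."""
    return cwnd * beta if congested else cwnd + alpha

cwnd = 10.0
trace = []
for rtt in range(5):
    congested = (rtt == 2)  # pretend a loss is detected on the third RTT
    cwnd = aimd_step(cwnd, congested)
    trace.append(cwnd)
print(trace)  # [11.0, 12.0, 6.0, 7.0, 8.0]
```

The resulting sawtooth is what the later discussion of fairness refers to: all AIMD flows back off together on loss, so a flow that does not is at an automatic advantage.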
While trying to preserve the end-to-end semantics inherent in the way TCP was conceived and operates, there are two ways to approach congestion. The first avenue is the host-centric one, where the source host responds to congestion by reducing the load it injects into the network. The other avenue, the router-centric one, is to deal with congestion at the intermediate nodes by using queue scheduling and active queue management of the routers' buffers. Finally, there is a blend of the two: in essence a host-centric management that requires assistance from the network, where the routers provide explicit information
about their own state in the form of feedback to the host, which consequently reduces the load.
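As a concrete example of the router-centric avenue, the following is a hedged sketch of random early detection (RED), a well-known active queue management scheme: the early-drop probability ramps linearly between a minimum and maximum queue threshold. The threshold and max_p values are illustrative, not a tuned configuration:

```python
# Sketch of RED (random early detection), a representative active queue
# management scheme: packets are dropped probabilistically before the
# buffer is full, signalling congestion to responsive senders early.

import random

def red_drop_probability(avg_queue, min_th=5.0, max_th=15.0, max_p=0.1):
    """Early-drop probability as a function of the average queue length."""
    if avg_queue < min_th:
        return 0.0          # queue short: never drop early
    if avg_queue >= max_th:
        return 1.0          # queue long: drop every arriving packet
    return max_p * (avg_queue - min_th) / (max_th - min_th)

def red_should_drop(avg_queue, rng=random):
    return rng.random() < red_drop_probability(avg_queue)

print(red_drop_probability(3.0))   # 0.0  (below min threshold)
print(red_drop_probability(10.0))  # 0.05 (halfway between thresholds)
print(red_drop_probability(20.0))  # 1.0  (above max threshold)
```

Real RED also maintains an exponentially weighted moving average of the queue length rather than using the instantaneous value, which this sketch omits.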
The number of TCP modifications and variants based on the host-centric and router-centric schemes is substantial, yet each of them has some limitations. The end-to-end congestion control schemes operate rather well, but they are limited to TCP flows. Some of them have problems with fairness, i.e., the proportionate usage of the network resources by the majority of the flows. The fairness problem can be somewhat fixed with router-centric congestion control schemes. One problem that appears in this case is that packet drops lead to low throughput and wasted resources, since the dropped packets have already reached the router and consumed network resources along the way. Network-assisted schemes are less prone to packet loss than the router-centric ones, but they only work with TCP. An additional concern is that both router-centric and network-assisted congestion control schemes require modifications of the router architecture, and sometimes imply a modification of the TCP packet structure. Moreover, the router itself then does something with the packet beyond its original function of routing packets in the most efficient manner.
Let us turn our attention to non-TCP, or unresponsive, flows (such as UDP) that do not recognize the state of congestion. As Floyd writes, the contribution of unresponsive flows to congestion is becoming increasingly significant. One way to approach this problem is to move congestion control for unresponsive flows to the application layer. If all applications that use UDP had some kind of end-to-end congestion control mechanism, the problem might be resolved, but this is hardly feasible: there are no standard mechanisms for congestion control at the application layer, and it is not pragmatic to expect application designers to take care of issues that should not be their concern. Many multimedia applications do
not use end-to-end congestion control at all; they actually increase their sending rate in response to increased loss, to compensate for the errors. Traffic on the Internet, and on networks in general, is becoming intensive and mixed, comprising both responsive and unresponsive flows. The primary research question is how to make these different flows, the socially responsible and the irresponsible, work together, exhibit fairness, and submit to congestion control. The corollary is whether or not it is possible to come up with a mechanism that handles congestion induced by non-responsive flows in a way similar to the one that works for TCP flows.
1.5 Objectives of Congestion Control in 4G Heterogeneous Networks
For any congestion control mechanism, the most fundamental design objectives are stability and scalability. However, achieving both properties is very challenging in an environment as heterogeneous as the Internet. From the end users' perspective, heterogeneity arises because different flows have different routing paths and therefore different communication delays, which can significantly affect the stability of the entire system.
Congestion can be defined as a state or condition that occurs when network resources are overloaded, resulting in impairments for network users as objectively measured by the probability of loss and/or delay. Congestion control is a (typically distributed) algorithm to share network resources among competing traffic sources.
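One common way to make "sharing network resources among competing sources" precise is the max-min fairness criterion: capacity is divided equally, and any flow demanding less than its share frees the remainder for the others. The sketch below computes such an allocation for a single shared link; it is an illustrative model, not a mechanism from this report:

```python
# Illustrative max-min fair allocation on one shared link: serve flows in
# order of increasing demand, giving each the smaller of its demand and an
# equal share of the capacity still available.

def max_min_fair(capacity, demands):
    """Return per-flow allocations (same order as `demands`)."""
    alloc = [0.0] * len(demands)
    remaining = sorted(range(len(demands)), key=lambda i: demands[i])
    cap = float(capacity)
    while remaining:
        share = cap / len(remaining)      # equal share of what is left
        i = remaining.pop(0)              # smallest remaining demand first
        alloc[i] = min(demands[i], share) # small flows keep their demand
        cap -= alloc[i]                   # leftover goes to larger flows
    return alloc

# Flows demanding 2, 8, and 10 Mb/s on a 15 Mb/s link.
print(max_min_fair(15, [2, 8, 10]))  # [2.0, 6.5, 6.5]
```

Distributed congestion control schemes can be viewed as approximating such an allocation without any node knowing all the demands, which is where the stability and scalability difficulties come from.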
The Internet encompasses a large variety of heterogeneous IP networks realized by a multitude of technologies, which results in a tremendous variety of link and path characteristics: capacity may be scarce on very slow radio links (several kb/s), or there may be an
abundant supply on high-speed optical links (several gigabits per second). Concerning latency, scenarios range from local interconnects (much less than a millisecond) to certain wireless and satellite links with very large latencies (up to or over a second); even higher latencies can occur in space communication. As a consequence, both the available bandwidth and the end-to-end delay in the Internet may vary over many orders of magnitude, and the range of parameters is likely to increase further in the future.
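These ranges can be made concrete with the bandwidth-delay product, the volume of in-flight data a congestion controller must manage. The link figures below are illustrative picks from the ranges quoted above, not measurements from this report:

```python
# Bandwidth-delay product for two extremes of the link spectrum quoted
# above: a slow radio link with ~1 s latency versus a fast local optical
# link with ~1 ms latency.

def bdp_bytes(bandwidth_bps, rtt_seconds):
    """Bandwidth-delay product in bytes (bandwidth given in bits/s)."""
    return bandwidth_bps * rtt_seconds / 8.0

slow_radio = bdp_bytes(9.6e3, 1.0)    # 9.6 kb/s radio link, 1 s latency
fast_optic = bdp_bytes(10e9, 0.001)   # 10 Gb/s optical link, 1 ms latency
print(slow_radio, fast_optic)         # 1200.0 1250000.0
```

A single mechanism must therefore keep anywhere from about a kilobyte to over a megabyte of data in flight, which is one way to see why parameter tuning that works on one link class fails on another.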
1.6 Organisation of the report
Chapter 2 discusses the papers referred to. Chapter 3 presents the proposed work, including the flowchart and algorithm of the project. Chapter 4 describes the simulation model, and Chapter 5 the simulation results. Chapter 6 presents the conclusion.
CHAPTER 2
RELATED WORK
In paper [1], the authors compare the feedback-based and reservation-based congestion control approaches and focus on the former, evaluating several mechanisms with respect to media friendliness, scalability, and dynamic behavior. They also present a set of requirements for the ideal congestion control mechanism for real-time flows in 4G networks.
Paper [2] considers the potentially negative impacts of an increasing deployment of non-congestion-controlled best-effort traffic on the Internet. These impacts range from extreme unfairness against competing TCP traffic to the potential for congestion collapse. To promote the inclusion of end-to-end congestion control in the design of future protocols using best-effort traffic, the authors argue that router mechanisms are needed to identify and restrict the bandwidth of selected high-bandwidth best-effort flows in times of congestion. They discuss several general approaches for identifying flows suitable for bandwidth regulation: identifying a high-bandwidth flow in times of congestion as unresponsive, as not TCP-friendly, or as simply using disproportionate bandwidth. A flow that is not TCP-friendly is one whose long-term arrival rate exceeds that of any conformant TCP in
the same circumstances. An unresponsive flow is one failing to reduce its offered load at a router
in response to an increased packet drop rate, and a disproportionate-bandwidth flow is one that
uses considerably more bandwidth than other flows in a time of congestion.
In paper [3], the author describes the main ideas behind some of the most important router congestion feedback (RCF) approaches based on network-information sharing (NIS). In addition, the properties, functionalities, and expected performance gains of these RCF approaches are compared, and their applicability in the current Internet environment is investigated. The aim of the paper is to find potential RCF candidates that can be used to improve congestion control in the current Internet as well as in future IP-based networks, where diverse wired and wireless access technologies are used in parallel.
The purpose of paper [4] is to analyze and compare the different congestion control and avoidance mechanisms that have been proposed for TCP/IP, namely Tahoe, Reno, New-Reno, TCP Vegas, and SACK. TCP's robustness is a result of its reactive behavior in the face of congestion and of the fact that reliability is ensured by retransmissions. All the above-mentioned implementations suggest mechanisms for determining when a segment should be retransmitted, how the sender should behave when it encounters congestion, and what pattern of transmissions it should follow to avoid congestion. The paper discusses how the different mechanisms affect the throughput and efficiency of TCP and how they compare with TCP Vegas in terms of performance.
Paper [5] uses simulations to explore the benefits of adding selective acknowledgments (SACK) and selective repeat to TCP. The authors compare Tahoe and Reno TCP, the two most common reference implementations of TCP, with two modified versions of Reno TCP. The first is New-Reno TCP, a modified version of TCP without SACK that avoids some of Reno TCP's performance problems when multiple packets are dropped from a window of data. The second is SACK TCP, a conservative extension of Reno TCP modified to use the SACK option proposed in the Internet Engineering Task Force (IETF). They describe the congestion control algorithms in their simulated implementation of SACK TCP and show that
while selective acknowledgments are not required to solve Reno TCP's performance problems when multiple packets are dropped, the absence of selective acknowledgments does impose limits on TCP's ultimate performance. In particular, they show that without selective acknowledgments, TCP implementations are constrained either to retransmit at most one dropped packet per round-trip time, or to retransmit packets that might already have been successfully delivered.
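That limitation can be seen in a small sketch: with only cumulative ACKs the sender learns about one hole per round trip, while SACK-style information exposes every hole at once. The data structures here are simplified stand-ins for sequence-number ranges, not the actual TCP option format:

```python
# Simplified contrast between cumulative ACKs and SACK information when
# several segments from one window are lost.

def cumulative_ack_hole(sent, received):
    """Without SACK, the sender learns only the first gap per RTT."""
    for seq in sorted(sent):
        if seq not in received:
            return seq
    return None

def missing_segments(sent, sacked):
    """With SACK, every missing segment is known at once."""
    return sorted(set(sent) - set(sacked))

sent = [1, 2, 3, 4, 5]
received = [1, 3, 5]                        # segments 2 and 4 were dropped
print(cumulative_ack_hole(sent, received))  # 2      (one hole per RTT)
print(missing_segments(sent, received))     # [2, 4] (all holes at once)
```

This is why, as the paper shows, a non-SACK sender needs roughly one round trip per recovered loss, while a SACK sender can repair a whole window of losses in one.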
Indirect TCP (or I-TCP), which is described in paper [6], is based on an indirect protocol model.
In this approach, an end-to-end TCP connection between a fixed host and a mobile host is split
into two separate connections: 1) a regular TCP connection between the fixed host and the
mobility support router (base station) currently serving the mobile host and 2) a wireless TCP
connection between the mobility support router and the mobile host. Use of mediation by the
mobility support router (or indirection) at the transport layer allows special treatment of mobile
hosts communicating over wireless links so as to address the problems mentioned earlier without
sacrificing compatibility with existing fixed network protocols.
In paper [7], a recent surge of interest in congestion control that relies on single-router feedback suggests that such systems may offer certain benefits over traditional models of additive packet loss. Besides topology-independent stability and faster convergence to efficiency/fairness, it was recently shown that any stable single-router system with a symmetric Jacobian tolerates arbitrary fixed, as well as time-varying, feedback delays. Although delay independence is an appealing characteristic, the previously developed EMKC system exhibits undesirable equilibrium properties and slow convergence behavior. To overcome these drawbacks, the authors propose a new method called JetMax and show that it admits a low-overhead implementation inside routers (three additions per packet), an overshoot-free transient and steady state, tunable link utilization, and delay-insensitive flow dynamics. The proposed framework also provides capacity-independent convergence time, with fairness and utilization reached in the same number of RTT steps for a link of any bandwidth. Given a 1 Mb/s, 10 Gb/s, or even googol (10^100) b/s link, the method converges to within 1% of the stationary
state in 6 control intervals. They conclude the paper by comparing JetMax's performance to
that of existing methods in ns2 simulations and discussing its Linux implementation.
The paper [8] examines the problem of congestion control evaluation in dynamic networks.
The authors identify a source of deficiencies in existing metrics of congestion control
performance: the existing metrics are defined with respect to ideal allocations that do not
represent short-term efficiency and fairness of network usage in dynamic environments. They
have introduced the concept of an effair allocation, a dynamic ideal allocation that specifies
optimal efficiency and fairness at every timescale. This concept has general applicability; in
particular, it applies to networks that provide both unicast and multicast services. Another
desirable property of the effair allocation is its dependence on the communication needs and
capabilities of applications. They have designed an algorithm that accounts for network delays
and computes the effair allocation as a series of static ideal allocations. Using the notion of effair
allocation as a foundation, they define a new metric of effairness that shows how closely the
actual delivery times match the delivery times under the effair allocation.
In paper [9] the authors present a new implementation of TCP that is better suited to today's
Internet than TCP Reno or Tahoe. The implementation of TCP, which they call TCP Santa Cruz,
is designed to work with path asymmetries, out-of-order packet delivery, and networks with
lossy links, limited bandwidth and dynamic changes in delay. The new congestion-control and
error-recovery mechanisms in TCP Santa Cruz are based on: using estimates of delay along the
forward path, rather than the round-trip delay; reaching a target operating point for the number of
packets in the bottleneck of the connection, without congesting the network; and making resilient
use of any acknowledgments received over a window, rather than increasing the congestion
window by counting the number of returned acknowledgments. They compared TCP Santa Cruz
with the Reno and Vegas implementations using the ns2 simulator. The simulation experiments
show that TCP Santa Cruz achieves significantly higher throughput, smaller delays, and smaller
delay variances than Reno and Vegas. TCP Santa Cruz is also shown to prevent the swings in the
size of the congestion window that typify TCP Reno and Tahoe traffic, and to determine the
direction of congestion in the network and isolate the forward throughput from events on the
reverse path.
In paper [10] the authors describe how, when heterogeneous congestion control protocols that react to different
pricing signals share the same network, the resulting equilibrium may no longer be interpreted as
a solution to the standard utility maximization problem. They prove the existence of equilibrium
under mild assumptions. Then they show that multi-protocol networks whose equilibria are
locally non-unique or infinite in number can only form a set of measure zero. Multiple locally
unique equilibria can arise in two ways. First, unlike in the single-protocol case, the set of
bottleneck links can be non-unique with heterogeneous protocols even when the routing matrix
has full row rank. The equilibria associated with different sets of bottleneck links are necessarily
distinct. Second, even when there is a unique set of bottleneck links, network equilibrium can
still be non-unique, but is always finite and odd in number. These equilibria cannot all be locally stable
unless the equilibrium is globally unique. Finally, they provide various sufficient conditions for global
uniqueness. Numerical examples are used throughout the paper to illustrate these results.
In paper [11] the authors note that TCP (Transmission Control Protocol) is a feedback-based congestion control
algorithm in which each sending host determines its window size independently according to
timeouts and the receipt of duplicate acknowledgments (ACKs). Since this blind rate
adaptation mechanism led to multiple packet losses and a global synchronization problem, Floyd
and Jacobson proposed the random early detection (RED) algorithm. RED tries to detect the
beginning of congestion by monitoring the average queue length at the router, and informs
the sending hosts by dropping packets. An ECN (explicit congestion notification) algorithm has
been proposed to avoid the throughput degradation due to unnecessary packet drops by the RED
algorithm. The idea of ECN is to notify sending hosts explicitly of congestion occurrence in
the network instead of dropping packets. Since these congestion control mechanisms operate in
an end-to-end fashion, it is impossible to guarantee max-min fair sharing of the
bandwidth due to a lack of explicit information on the network state. To solve the TCP fairness
problem, packet buffering and scheduling algorithms were proposed. However, these algorithms
required per-connection state information at each router and did not guarantee max-min fair
sharing of the bandwidth among the active connections. In related work, an algorithm was proposed to
eliminate packet loss using IPv6 optional fields. Congestion window control algorithms
for TCP with ECN were presented to achieve fairness and stability. However, they were limited
to a single bottleneck link. In this paper, the authors propose a modified window control
algorithm that guarantees TCP fairness. They use successive ECN congestion indications and
obtain network information. Using the obtained network information and the modified RED
algorithm, they develop a window control algorithm to achieve fair sharing of the available
bandwidth in an ECN capable TCP network where each connection has a different propagation
delay and traverses multiple bottleneck links.
In paper [12], the authors note that recent research has indicated that knowledge of
Round Trip Time (RTT) and available bandwidth is crucial for efficient network control. In this
contribution they discuss the problem of estimating these quantities. Based on a simple
aggregated model of the network, an algorithm combining a Kalman filter and a change detection
algorithm (CUSUM) is proposed for RTT estimation. It is illustrated on real data that this
algorithm provides estimates of significantly better accuracy than the RTT estimator
currently used in TCP, especially in scenarios where new cross-traffic flows cause bottleneck
queues to build up rapidly, which in turn induces rapid changes of the RTT. They also analyze
how wireless links affect the RTT distribution. It is well known that link retransmissions induce
delays which do not conform to the assumptions on which the transport protocol is based. This
causes undesired TCP control actions which reduce throughput. A link layer solution is
proposed to counter this problem: carefully selected (artificial) delays are added to packets
retransmitted on the link, which makes the delay distribution TCP-friendly. The information
required for this algorithm is readily available at the link and consists of the actual delay
distribution induced by the link. The added delays are obtained from a non-convex program
which, due to its low complexity, is easy to solve.
In paper [13] the authors present and develop a novel delay-based AIMD congestion control
algorithm. The main features of the proposed solution include: (1) low standing queues and
delay in homogeneous environments (with delay-based flows only); (2) fair coexistence of
delay- and loss-based flows in heterogeneous environments; (3) delay-based flows behave as
loss-based flows when loss-based flows are present in the network; otherwise they revert to
delay-based operation. It is also shown that these properties can be achieved without any
appreciable increase in network loss rate over that which would be present in a comparable
network of standard TCP flows (loss-based AIMD). To demonstrate the potential of the presented
algorithm, both analytical and simulation results are provided in a range of different network
scenarios. These include stability and convergence results in general multiple-bottleneck
networks, and a number of simulation scenarios to demonstrate the utility of the proposed
scheme. In particular, they show that networks employing the algorithm have the features of
networks in which RED AQMs are deployed. Furthermore, in a wide range of situations
(including high-speed scenarios), they show that low delay is achieved irrespective of the
queuing algorithm employed in the network, with only sender-side modification to the basic
AIMD algorithm.
In this paper [14] the authors discuss how, when heterogeneous congestion control
protocols that react to different pricing signals (these could be different types of signals such as
packet loss, queuing delay etc. or different values of the same type of signal such as different
ECN marking values based on the same actual link congestion level) share the same network, the
current theory based on utility maximization fails to predict the network behavior. Unlike in a
homogeneous network, the bandwidth allocation now depends on router parameters and flow
arrival patterns. It can be nonunique, suboptimal and unstable. In [36], existence and uniqueness
of equilibrium of heterogeneous protocols are investigated. This paper extends the study with
two objectives: analyze the optimality and stability of such networks and design control schemes
to improve them. First, they demonstrate the intricate behavior of a heterogeneous network
through simulations and present a framework to help understand its equilibrium properties.
Second, they propose a simple source-based algorithm to decouple bandwidth allocation from
router parameters and flow arrival patterns by updating only a linear parameter in the sources'
algorithms on a slow timescale. It is used to steer a network to the unique optimal equilibrium.
The scheme can be deployed incrementally as the existing protocol needs no change and only the
new protocols need to adopt the slow timescale.
In paper [15] the authors observe that the classical TCP/IP layered protocol architecture is
beginning to show signs of age. In order to cope with problems such as the poor
performance of wireless links and mobile terminals, including the high error rate of wireless
network interfaces, power saving requirements, quality of service, and an increasingly dynamic
network environment, a protocol architecture that considers cross-layer interactions seems to be
required. This article describes a framework for further enhancements of the traditional IP-based
protocol stack to meet current and future requirements. Known problems associated with the
strictly layered protocol architecture are summarized and classified, and a first solution involving
cross-layer design is proposed.
In paper [16] the authors explain that designing a multimedia transport protocol for a
heterogeneous wired-cum-wireless environment faces great challenges because of two
contradictory objectives. On the one hand, the multimedia application requires a smooth transfer
rate, i.e., the stability objective; on the other hand, vertical handoff in heterogeneous networks
requires fast response in the transfer rate, i.e., the flexibility objective. To address this problem, this
paper proposes to use passive bandwidth measurement at the receiver in the design of the rate
control algorithm for a multimedia transport protocol.
Moreover, a window based exponentially weighted moving average (EWMA) filter with two
weights is introduced to achieve stability and flexibility at the same time. Based on these
considerations, a multimedia transport protocol (MMTP) is proposed. Its stability and flexibility
as well as its fairness are verified by simulations.
In this paper [17] the authors present a new implementation of TCP that is better suited to
today's Internet than TCP Reno or Tahoe. The implementation, which they call TCP
Santa Cruz, is designed to work with path asymmetries, out-of-order packet delivery, and
networks with lossy links, limited bandwidth and dynamic changes in delay. The new
congestion-control and error-recovery mechanisms in TCP Santa Cruz are based on: using
estimates of delay along the forward path, rather than the round-trip delay; reaching a target
operating point for the number of packets in the bottleneck of the connection, without congesting
the network; and making resilient use of any acknowledgments received over a window, rather
than increasing the congestion window by counting the number of returned acknowledgments.
They compare TCP Santa Cruz with the Reno and Vegas implementations using the ns2
simulator. The simulation experiments show that TCP Santa Cruz achieves significantly higher
throughput, smaller delays, and smaller delay variances than Reno and Vegas. TCP Santa Cruz is
also shown to prevent the swings in the size of the congestion window that typify TCP Reno and
Tahoe traffic, and to determine the direction of congestion in the network and isolate the forward
throughput from events on the reverse path.
In paper [18] the authors discuss how today's wireless networks are highly heterogeneous,
with mobile devices containing multiple wireless network interfaces (WNICs). Since battery
lifetime is limited, power management of the interfaces has become essential, together with a
flexible and open architecture capable of supporting various types of networks, terminals and
applications. However, how to integrate protocols to suit heterogeneous network environments
becomes a significant challenge in fourth-generation wireless networks. Adaptive protocols
are proposed to solve the heterogeneity problem in future wireless networks. This paper
discusses the RCP protocol and the feasibility of applying RCP to manage power
efficiently and to perform adaptive congestion control in heterogeneous wireless networks.
In this paper [19] the authors present a new queue-length-based Internet congestion control
protocol which is shown through simulations to work effectively. The control objective is to
regulate the queue size at each link so that it tracks a reference queue size chosen by the
designer. To achieve this, the protocol implements at each link a certainty-equivalent
proportional controller which utilizes estimates of the effective number of users utilizing the
link. These estimates are generated online using a novel estimation algorithm based on online
parameter identification techniques. The protocol utilizes an explicit multibit feedback scheme
and does not require maintenance of per-flow state within the network. Extensive simulations
indicate that the protocol is able to guide the network to a stable equilibrium which is
characterized by max-min fairness, high utilization, queue sizes close to the reference value and
no observable packet drops. In addition, it is found to be scalable with respect to changing
bandwidths, delays and numbers of users utilizing the network. The protocol also exhibits nice
transient properties such as smooth responses with no oscillations and fast convergence.
In this paper [20] the authors describe the recent trend of mobile Internet service being
offered over an integration of various wireless networks. In such heterogeneous networks,
vertical handover is an increasingly common and important handover technology. But during
vertical handover, standard TCP experiences many problems such as multiple packet losses,
packet reordering, and under-utilization due to drastic changes of the Bandwidth-Delay Product
(BDP) and the network transmission delay (Round Trip Time, RTT). In this paper, they propose
an enhanced TCP congestion control scheme with RTT inflation and the measured RTT of the
new network for seamless soft vertical handover, and evaluate it by OPNET simulation. The
proposed scheme assumes a cross-layer design in the TCP receiver and the TCP timestamp
option. OPNET simulation results show that the proposed scheme achieves better TCP
performance than other handover congestion control schemes such as Freeze-TCP or SSTCP
during vertical handover.
In this paper [21], the authors develop a novel analytical framework for modeling and
quantifying the performance of window-controlled multimedia flows in a hybrid wireless/wired
network. The framework captures the traffic characteristics of window-controlled flows and is
applicable to various wireless links and packet transmission schemes.
They show analytically the relationship between the sender window size, the wireless link
throughput distribution, and the delay distribution. They then substantiate the analysis by
demonstrating how to statistically bound the end-to-end delay of flows controlled by a TCP-like
Datagram Congestion Control Protocol (DCCP) over an M-state Markovian wireless link.
Simulation results validate the analysis and demonstrate the effectiveness and efficiency of the
proposed delay control scheme. The scheme can also be applied to other window-based transport
layer protocols.
In this paper [22] the authors present a new protocol proposed to enhance TCP/IP's versatility as the
main protocol for wireless data transmission. TCP/IP has shown its superiority in the selection of
protocol for establishing wired networks. Unfortunately, its superiority cannot be extended to
wireless networks. However, they believe that the integration of several types of networks would
take place. The 4th Generation (4G) wireless mobile internet networks will merge the current
existing cellular networks (i.e., CDMA2000, WCDMA and TD_SCDMA) and Wi-Fi networks
(i.e., Wireless LAN) with the fixed internet to support wireless mobile internet. This integration
would provide the same quality of service as the fixed internet. Each of the networks has its own
specified protocols, disparate frequencies, maximum data speeds and cost characteristics.
TCP/IP suite protocols were successful in web applications on the fixed internet, but exhibit
limitations when working on the combined networks. Two research directions are available: replacement
and improvement. Microsoft has issued a new protocol suite for replacement. In this paper, they
propose a new protocol to improve TCP/IP suite protocols. This new protocol addresses the
limitation of TCP/IP suite so that it can work on both cellular network and Wi-Fi network
simultaneously; sending data requests through cellular network and getting reply from Wi-Fi
network. The ns2 Java version (Java Network Simulator) was chosen to simulate the new protocol
because of its feasibility. In this paper, they present the results and discussion of their simulation.
In paper [23] the authors discuss various congestion control algorithms, using network
awareness as a criterion to categorize the different approaches. The first category ("the box is
black") consists of a group of algorithms that consider the network as a black box, assuming no
knowledge of its state other than the binary feedback upon congestion. The second category
("the box is grey") groups approaches that use measurements to estimate available bandwidth,
level of contention or even the temporary characteristics of congestion. Due to the possibility of
wrong estimations and measurements, the network is considered a "grey" box. The third category
("the box is green") contains bimodal congestion control, which calculates the fair share
explicitly, as well as network-assisted control, where the network communicates its state to
the transport layer; the box now is becoming "green". They go beyond a description of the different
approaches to discuss the tradeoffs of network parameters, the accuracy of congestion control
models and the impact of network and application heterogeneity on congestion itself.
In paper [24] the authors explain that modern telecommunication and computer networks,
both wired and wireless communications including the Internet, are being designed for fast
transmission of large amounts of data, for which congestion control is very important. Without a
proper congestion control mechanism, congestion collapse of such networks would become
highly likely. Congestion control for streamed media traffic over a network is a challenge due
to the delay sensitivity of such traffic. This challenge has motivated researchers over the
last decade to develop a number of congestion control protocols and mechanisms that suit such
traffic and provide fair treatment for both unicast and multicast communications. This paper
gives a brief survey of the major congestion control mechanisms and their categorization
characteristics, elaborates the TCP-friendliness concept, and then presents a state of the art for
congestion control mechanisms designed for networks. The paper points out the pros and cons
of each congestion control mechanism and evaluates their characteristics.
CHAPTER 4
SIMULATION
4.1 Simulation
In communication and computer network research, network simulation is a technique where a
program models the behavior of a network, either by calculating the interactions between the
different network entities (hosts/routers, data links, packets, etc.) using mathematical formulas,
or by actually capturing and playing back observations from a production network. The behavior
of the network and the various applications and services it supports can then be observed in a
test lab; various attributes of the environment can also be modified in a controlled manner to
assess how the network would behave under different conditions.
4.2 Simulator
A network simulator is a software program that imitates the working of a computer network. In
simulators, the computer network is typically modelled with devices, traffic etc. and the
performance is analysed. Typically, users can then customize the simulator to fulfil their
specific analysis needs. Simulators typically come with support for the most popular protocols in
use today, such as WLAN, WiMAX, UDP, and TCP. We have used OMNeT++ as the simulator
for our project.
4.2.1 OMNeT++
OMNeT++ is an object-oriented modular discrete event network simulator. The simulator can be
used for:
traffic modeling of telecommunication networks
protocol modeling
modeling queuing networks
modeling multiprocessors and other distributed hardware systems
validating hardware architectures
evaluating performance aspects of complex software systems
modeling any other system where the discrete event approach is suitable.
An OMNeT++ model consists of hierarchically nested modules. The depth of module nesting is
not limited, which allows the user to reflect the logical structure of the actual system in the
model structure. Modules communicate through message passing. Messages can contain
arbitrarily complex data structures. Modules can send messages either directly to their
destination or along a predefined path, through gates and connections. Modules can have their
own parameters. Parameters can be used to customize module behavior and to parameterize the
model's topology. Modules at the lowest level of the module hierarchy encapsulate behavior.
These modules are termed simple modules, and they are programmed in C++ using the
simulation library. OMNeT++ simulations can feature varying user interfaces for different
purposes: debugging, demonstration and batch execution. Advanced user interfaces make the
inside of the model visible to the user, allow control over simulation execution, and let the user intervene
by changing variables/objects inside the model. This is very useful in the
development/debugging phase of the simulation project. User interfaces also facilitate
demonstration of how a model works.
The simulator as well as user interfaces and tools are portable: they are known to work on
Windows and on several Unix flavors, using various C++ compilers. OMNeT++ also supports
parallel distributed simulation. OMNeT++ can use several mechanisms for communication
between partitions of a parallel distributed simulation, for example MPI or named pipes. The
parallel simulation algorithm can easily be extended or new ones plugged in. Models do not need
any special instrumentation to be run in parallel; it is just a matter of configuration. OMNeT++
can even be used for classroom presentation of parallel simulation algorithms, because
simulations can be run in parallel even under the GUI, which provides detailed feedback on what
is going on. OMNEST is the commercially supported version of OMNeT++. OMNeT++ is
free only for academic and non-profit use; for commercial purposes one needs to obtain
OMNEST licenses from Omnest Global, Inc.
4.2.2 Modeling concepts
OMNeT++ provides efficient tools for the user to describe the structure of the actual system.
Some of the main features are:
hierarchically nested modules
modules are instances of module types
modules communicate with messages through channels
flexible module parameters
topology description language
A. Hierarchical modules
An OMNeT++ model consists of hierarchically nested modules, which communicate by passing
messages to each another. OMNeT++ models are often referred to as networks. The top level
module is the system module. The system module contains submodules, which can also contain
submodules themselves (Fig. 4.1). The depth of module nesting is not limited; this allows the
user to reflect the logical structure of the actual system in the model structure. Model structure is
described in OMNeT++'s NED language.
Modules that contain submodules are termed compound modules, as opposed to simple modules,
which are at the lowest level of the module hierarchy. Simple modules contain the algorithms in
the model. The user implements the simple modules in C++, using the OMNeT++ simulation
class library.
Fig 4.1 Simple and Compound Modules
B. Module types
Both simple and compound modules are instances of module types. While describing the model,
the user defines module types; instances of these module types serve as components for more
complex module types. Finally, the user creates the system module as an instance of a previously
defined module type; all modules of the network are instantiated as submodules and sub-
submodules of the system module. When a module type is used as a building block, there is no
distinction whether it is a simple or a compound module. This allows the user to split a simple
module into several simple modules embedded into a compound module, or vice versa,
aggregate the functionality of a compound module into a single simple module, without affecting
existing users of the module type. Module types can be stored in files separately from the place
of their actual usage. This means that the user can group existing module types and create
component libraries.
C. Messages, gates, links
Modules communicate by exchanging messages. In an actual simulation, messages can represent
frames or packets in a computer network, jobs or customers in a queuing network or other types
of mobile entities. Messages can contain arbitrarily complex data structures. Simple modules can
send messages either directly to their destination or along a predefined path, through gates and
connections.
The "local simulation time" of a module advances when the module receives a message. The
message can arrive from another module or from the same module (self-messages are used to
implement timers).
Gates are the input and output interfaces of modules; messages are sent out through output gates
and arrive through input gates.
Each connection (also called link) is created within a single level of the module hierarchy: within
a compound module, one can connect the corresponding gates of two submodules, or a gate of
one submodule and a gate of the compound module (Fig. 4.2).
Fig 4.2 Connections
Due to the hierarchical structure of the model, messages typically travel through a series of
connections, to start and arrive in simple modules. Such series of connections that go from
simple module to simple module are called routes. Compound modules act as "cardboard boxes"
in the model, transparently relaying messages between their inside and the outside world.
D. Modeling of packet transmissions
Connections can be assigned three parameters, which facilitate the modeling of communication
networks, but can be useful in other models too: propagation delay, bit error rate and data rate,
all three being optional. One can specify link parameters individually for each connection, or
define link types and use them throughout the whole model.
Propagation delay is the amount of time by which the arrival of the message is delayed as it
travels through the channel. Bit error rate specifies the probability that a bit is transmitted
incorrectly, and allows for simple noisy-channel modelling. Data rate is specified in
bits/second, and it is used for calculating the transmission time of a packet. When data rates are
in use, the sending of the message in the model corresponds to the transmission of the first bit,
and the arrival of the message corresponds to the reception of the last bit. This model is not
always applicable; for example, protocols like Token Ring and FDDI do not wait for the frame
to arrive in its entirety, but rather start repeating its first bits soon after they arrive -- in other
words, frames "flow through" the stations, being delayed by only a few bits. If you want to
model such networks, the data rate modeling feature of OMNeT++ cannot be used.