
Issues in TCP Vegas over Optical Burst Switching Networks

Olga León and Sebastià Sallent Telematics Engineering Department

Technical University of Catalonia (UPC) Castelldefels (Barcelona), Spain

Abstract—In this paper we study, by means of simulation, the effect of varying the burstification period in OBS networks on the performance of TCP Vegas.

I. INTRODUCTION

Optical Burst Switching (OBS) [1] is a promising technology which combines the benefits of optical circuit and packet switching and is expected to satisfy the huge demand for bandwidth in the future Internet backbone while optical technology matures and optical packet switching becomes feasible. In OBS networks, packets are assembled into bursts at edge nodes according to their destination address and class of service, and are transmitted through the core network. Packets are buffered at the ingress nodes until burst generation is triggered, either when the burst size exceeds a certain threshold or when a timer set on the arrival of the first packet expires. Given that TCP is the predominant protocol in the Internet, there is a need to evaluate its performance over OBS networks. Although previous work has been done in this area, most of it has focused on TCP variants such as Reno or SACK.
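For concreteness, the following Python sketch (ours, not part of the paper; parameter values are purely illustrative) mimics such a hybrid assembler: packets are buffered per burst, and the burst is emitted as soon as either the size threshold is reached or the timer started at the first arrival expires.

class BurstAssembler:
    """Toy hybrid burst assembler (size- or timer-triggered). Illustrative only."""

    def __init__(self, now, max_burst_bytes=80_000, timeout=0.001):
        self.now = now                          # clock function returning seconds
        self.max_burst_bytes = max_burst_bytes  # size threshold (bytes)
        self.timeout = timeout                  # assembly timer Tb (seconds)
        self.packets, self.bytes, self.deadline = [], 0, None

    def on_packet(self, size):
        """Buffer a packet; return the finished burst or None."""
        if not self.packets:                    # timer starts with the first packet
            self.deadline = self.now() + self.timeout
        self.packets.append(size)
        self.bytes += size
        return self._emit() if self.bytes >= self.max_burst_bytes else None

    def on_timer(self):
        """Poll the assembly timer; return the finished burst or None."""
        if self.packets and self.now() >= self.deadline:
            return self._emit()
        return None

    def _emit(self):
        burst, self.packets, self.bytes, self.deadline = self.packets, [], 0, None
        return burst

A real edge node would keep one such assembler per destination and class of service, as described above.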

This paper aims at studying the effect of the aggregation of packets into bursts on TCP Vegas throughput. To this end, we present results obtained with the network simulator ns-2 [2].

II. TCP VEGAS BACKGROUND

Unlike most TCP variants, TCP Vegas [3] does not rely on packet loss to detect network congestion. Instead, it uses round-trip time (RTT) measurements to estimate the available bandwidth of the network and control the congestion window accordingly. Every RTT, it records the average RTT and sets BaseRTT to the minimum round-trip time measured during the connection. Then it computes the difference between the expected rate and the actual rate, as indicated by expression (1), which gives an estimate of the amount of extra data the connection keeps queued in the network.

Diff = ∆ = (Expected − Actual) × BaseRTT        (1)

During the slow start phase, Vegas doubles its congestion window only every other RTT, in order to detect incipient congestion and avoid losses. When ∆ > γ, it leaves the slow start phase, reduces its congestion window by 1/8 and enters the congestion avoidance phase.

While in congestion avoidance, it increases or decreases the congestion window by one segment if ∆ < α or ∆ > β, respectively. Otherwise, it leaves the congestion window untouched.
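The control rules above can be condensed into a short sketch. This is our own paraphrase in Python, with variable names chosen for clarity; it should not be read as the reference Vegas implementation.

def vegas_window_update(cwnd, base_rtt, rtt, alpha, beta, gamma,
                        in_slow_start, rtt_round):
    """One per-RTT TCP Vegas window update (simplified).

    cwnd is in segments, base_rtt and rtt in seconds.
    Returns (new_cwnd, still_in_slow_start).
    """
    expected = cwnd / base_rtt               # rate the path could sustain
    actual = cwnd / rtt                      # rate actually achieved
    diff = (expected - actual) * base_rtt    # expression (1), in segments

    if in_slow_start:
        if diff > gamma:
            return cwnd - cwnd / 8.0, False  # back off by 1/8, leave slow start
        # double the window only every other RTT
        return (2 * cwnd if rtt_round % 2 == 0 else cwnd), True

    if diff < alpha:
        return cwnd + 1, False               # spare capacity: grow linearly
    if diff > beta:
        return cwnd - 1, False               # too much queued data: shrink
    return cwnd, False                       # between alpha and beta: hold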

III. TCP VEGAS PERFORMANCE OVER OBS NETWORKS

In this section we identify some of the key parameters that affect the performance of TCP Vegas over an OBS network. Some problems common to all TCP variants, such as the impact of burst losses, have been addressed in previous work [4][5]. However, this study focuses on the specific problems that arise with Vegas due to its delay-based congestion control scheme.

A. Impact of high bandwidth-delay product networks

As mentioned in Section II, Vegas doubles its congestion window only every other RTT while in the slow start phase. During the “active round” in which Vegas increases its congestion window, it sends two segments for each incoming ACK, causing a temporary queue buildup which leads to an overestimation of the RTT. The delay introduced by the buffering of segments is interpreted by TCP Vegas as a signal of congestion, ending in an early termination of slow start. This problem, reported in [6], results in a long convergence delay to the equilibrium window. In [7], the authors propose a method based on pacing to eliminate this problem.
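A toy calculation makes the buildup explicit. This is our own simplification (an ACK-clocked sender and a bottleneck that drains one segment per incoming ACK), not the model used in [6].

def active_round_queue(acks_in_round):
    """Extra segments queued at the bottleneck after one 'active' round."""
    queue = 0
    for _ in range(acks_in_round):
        queue += 2   # Vegas sends two segments per ACK while doubling
        queue -= 1   # the bottleneck drains one segment in the same interval
    return queue

# With a previous window of, say, 32 segments, the queue peaks at about 32
# segments, so RTT samples near the end of the round are inflated by roughly
# 32 service times and Diff easily exceeds gamma, ending slow start early.
print(active_round_queue(32))   # -> 32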

B. Impact of burstification on TCP Vegas

In an OBS network, packets are aggregated into bursts at the ingress nodes according to their destination. This mechanism, named burstification, adds an additional delay due to the waiting time until a burst is complete. An incoming TCP segment may wait between 0 and Tb seconds until the burst is transmitted and, since the returning ACK may be delayed by up to Tb again on the reverse path, its RTT may increase by at most 2Tb. This overestimation of the RTT considerably degrades Vegas performance.

As explained before, TCP Vegas computes the difference between the expected and the actual rate, which depend, respectively, on the minimum RTT observed over the connection, named BaseRTT, and on the estimated RTT. Since different segments may experience different delays due to the burstification process, the RTT estimated by TCP Vegas will be, on average, RTT0 + Tb, where RTT0 denotes the RTT in the absence of the burstifier. On the other hand, the minimum RTT will be the one measured for the last segment assembled into a burst, that is, a segment arriving just as the assembler timer expires or as the burst size exceeds the given threshold. The difference between BaseRTT and the estimated RTT is interpreted by TCP Vegas as a signal of buffers filling within the network. Thus, depending on the value of Tb, the burstification delay will force TCP Vegas to stop increasing its congestion window prematurely.
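To make the effect concrete, the sketch below evaluates expression (1) when the estimated RTT carries the average assembly delay Tb while BaseRTT has already converged to a sample taken without that delay. The code and the numbers are ours and purely illustrative; RTT0 is not reported in this excerpt.

def vegas_diff(cwnd, base_rtt, rtt):
    """Expression (1): Diff = (Expected - Actual) x BaseRTT, in segments."""
    return (cwnd / base_rtt - cwnd / rtt) * base_rtt

rtt0 = 0.010   # hypothetical RTT without burstification (s)
tb = 0.001     # burst assembly timeout Tb (s)
beta = 3       # Vegas upper threshold (segments)

for cwnd in (20, 50, 100, 250):
    diff = vegas_diff(cwnd, base_rtt=rtt0, rtt=rtt0 + tb)
    print(cwnd, round(diff, 1), diff > beta)
# Diff grows as cwnd * Tb / (RTT0 + Tb), so with these values any window
# above about 33 segments already pushes Diff beyond beta.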

IV. PERFORMANCE EVALUATION

In this section, we evaluate the performance of TCP Vegas via simulation using the Network Simulator (ns-2) [2].

A. Simulation Model and Results

We have considered a simple model with only one TCP connection through a lossless path, as shown in Figure 1. Fiber links consist of two bidirectional wavelength channels, one for data transmission and the other used as a control channel. Both OBS edge nodes use a hybrid burst assembler which transmits a burst either when the amount of incoming data exceeds a given threshold or when a timer expires. The LAUC-VF algorithm is used to schedule the transmission of bursts, with an offset time of 2 µs, and the processing time within OBS nodes is set to 1 µs. The TCP sender is fed by a 700 Mbps CBR source and generates segments with a fixed size of 4000 bytes. The TCP Vegas parameters are set to α=1 and β=3, and the slow start threshold to γ=1. Since the main objective is to determine the effect of burstification on Vegas throughput, no background traffic has been considered, so that spurious effects such as contention and burst losses are avoided.

Accordingly, we have run a set of simulations varying the burst timeout value. Figure 2 shows the simulation results for a maximum burst size of 80 Kbytes and burst timeouts Tb ranging from 0.01 ms to 10 ms. In all cases, TCP Vegas exits slow start prematurely due to the queue buildup effect explained in Section III. For timer values that are small compared to RTT0, Vegas behaves much as if there were no burstifier: after exiting slow start, the congestion window converges linearly to the optimal value (around 250 segments) by means of the congestion avoidance mechanism. However, for larger burst timeouts the congestion window stabilizes far from the target value due to the effect of burstification.
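A back-of-the-envelope estimate, ours rather than the paper's, reproduces this trend: with BaseRTT ≈ RTT0 and an estimated RTT ≈ RTT0 + Tb, expression (1) gives Diff ≈ W·Tb/(RTT0 + Tb), so the window stops growing roughly once it reaches β(RTT0 + Tb)/Tb. The value of RTT0 below is hypothetical, chosen only to illustrate the dependence on Tb.

beta = 3            # Vegas upper threshold used in the simulations
rtt0 = 0.010        # assumed RTT without the burstifier (s), illustrative only
target = 250        # optimal window reported above (segments)

for tb in (0.00001, 0.0001, 0.001, 0.01):   # 0.01 ms .. 10 ms
    cap = beta * (rtt0 + tb) / tb           # window where Diff reaches beta
    print(f"Tb = {tb * 1e3:g} ms -> window caps near {cap:.0f} segments "
          f"(target ~ {target})")
# Small Tb leaves the cap far above the 250-segment target (no visible effect),
# whereas Tb = 1 ms or 10 ms caps the window at a few tens of segments or less.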

Figure 3 depicts the evolution of the estimated RTT and BaseRTT for a maximum burst size of 80 Kbytes and a burst timeout of 1 ms. We can observe that at some point in time BaseRTT decreases sharply, causing the Diff parameter to exceed β and thus leading to a reduction of the congestion window. In conventional networks, BaseRTT is generally the RTT measured for the first segment of the connection, but in an OBS network the minimum RTT of the connection is measured when, upon the arrival of a segment, the burst is transmitted immediately. In a scenario like the one presented in Figure 1, this is only possible when the congestion window is equal to or larger than the burst size (with 4000-byte segments and an 80 Kbyte maximum burst size, this means a window of roughly 20 segments).

Figure 1. Simulation Model of TCP Vegas over OBS

Figure 2. Congestion Window Evolution for different timeout values

Figure 3. Estimated RTT and BaseRTT for a burst timeout of 1 ms

V. SUMMARY

We have analyzed via simulation the impact of burstification on TCP Vegas. The results show its poor performance in terms of throughput and indicate that the level of degradation is determined by the relation between the maximum delay introduced by the burstifier, Tb, and the RTT of the connection in the absence of the former. We are currently working on a method, based on a more accurate estimation of the RTT and its variance, to overcome this problem.

This work was supported by the Spanish “Ministerio de Educación y Ciencia” and FEDER under project TSI2006-12507-C03-03.

VI. REFERENCES

[1] C. Qiao and M. Yoo, “Optical Burst Switching (OBS) – a new paradigm for an optical Internet”, Journal of High Speed Networks, vol. 8, no. 1, pp. 69-84, 1999.
[2] The Network Simulator ns-2. www.isi.edu/nsnam/ns
[3] L. Brakmo, S. O’Malley and L. Peterson, “TCP Vegas: new techniques for congestion detection and avoidance”, in Proceedings of ACM SIGCOMM ’94, pp. 24-35.
[4] A. Detti and M. Listanti, “Impact of Segments Aggregation on TCP Reno Flows in Optical Burst Switching Networks”, in Proceedings of IEEE INFOCOM 2002, vol. 3, pp. 1803-1812.
[5] X. Yu, C. Qiao, Y. Liu and D. Towsley, “TCP Implementations and False Time Out Detection in OBS Networks”, in Proceedings of IEEE INFOCOM 2004, pp. 774-784.
[6] U. Hengartner, J. Bolliger and T. Gross, “TCP Vegas Revisited”, in Proceedings of IEEE INFOCOM 2000, vol. 3, pp. 1546-1555.
[7] S. Lee, B. G. Kim and Y. Choi, “TCP-Vegas Slow Start Performance in Large Bandwidth Delay Network”, ICOIN 2002, LNCS 2343, pp. 394-406.
