The Effect of TCP Variants on the Coexistence of MMORPG and Best-Effort Traffic
NIME 2012, Munich, 30th July 2012
Jose Saldana (1), Mirko Suznjevic (2), Luis Sequeira (1), Julián Fernández-Navajas (1), Maja Matijasevic (2), José Ruiz-Mas (1)
(1) Communication Technologies Group (GTC), Aragon Institute of Engineering Research (I3A), University of Zaragoza. (2) Faculty of Electrical Engineering and Computing, University of Zagreb
Index
1. Introduction
2. Related Works
3. Test Methodology
4. Results
5. Conclusions
Introduction
Growing popularity of online games: the number of titles and the number of players keep growing, especially in Asia
MMORPG (Massively Multiplayer Online Role-Playing Game): a genre with a set of shared characteristics
MMORPGs:
Long-term avatars
Missions, objects
Soft real-time
Fights, powers
Levels
[Figure: MMO subscription charts, from http://designcult.org/designcult/2010/08/mmo-subscription-charts.html]
The most popular one: World of Warcraft (video)
Other games (e.g., First-Person Shooters) use UDP, because they prioritize interactivity
But MMORPGs prioritize reliable transmission of information: you can miss a shot in an FPS, but you cannot miss a player buying a new sword in an MMORPG
Real-time requirements are looser (although they exist)
They use TCP instead of UDP
But they are interactive:
The speed of the player matters
We have a real-time service using TCP
Consequences of using TCP:
Retransmission when a packet is lost
Dependence on the TCP variant, and on the TCP stack present on the player's machine
The game relies on the OS's ability to deliver packets
More overhead (40 bytes of TCP/IP headers instead of 28 for UDP/IP)
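The header figures above can be checked with a short sketch (IPv4 without options; the 50-byte payload is only an illustrative value, small payloads being typical of MMORPG traffic):

```python
# Per-packet header overhead for a small game payload (IPv4, no options).
UDP_HEADERS = 20 + 8    # IPv4 + UDP = 28 bytes
TCP_HEADERS = 20 + 20   # IPv4 + TCP = 40 bytes

def overhead_pct(payload: int, headers: int) -> float:
    """Headers as a percentage of the total packet size."""
    return 100.0 * headers / (payload + headers)

payload = 50  # bytes (illustrative small game payload)
print(f"UDP: {overhead_pct(payload, UDP_HEADERS):.1f}% overhead")
print(f"TCP: {overhead_pct(payload, TCP_HEADERS):.1f}% overhead")
```

For small packets the 12 extra bytes of TCP are significant: with a 50-byte payload, headers grow from about 36% to about 44% of the packet.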
TCP traffic of the game has to share:
Access network
Core network
with other TCP traffic (e-mail, FTP, etc.)
In this paper we explore the effect of TCP variants on this coexistence
Related Works
Modelling the traffic of MMORPGs
Some statistical models have been developed for MMORPGs, following two approaches:
Characterization of packet size and inter-packet time
Characterization of APDU (Application Protocol Data Unit) size and inter-arrival time
Example: an APDU of 1600 bytes is split by the TCP stack of the computer into Packet 1 (1460 bytes of payload) and Packet 2 (140 bytes of payload)
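The split in the example is plain segmentation at the Maximum Segment Size; a minimal sketch, assuming the usual MSS of 1460 bytes:

```python
def segment_apdu(apdu_size: int, mss: int = 1460) -> list[int]:
    """Split an APDU into TCP segment payloads of at most `mss` bytes."""
    segments = []
    while apdu_size > 0:
        take = min(apdu_size, mss)
        segments.append(take)
        apdu_size -= take
    return segments

print(segment_apdu(1600))  # [1460, 140], as in the slide's example
```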
We follow the second approach, since it is independent of the underlying technology
It is also more adequate for modeling the behaviour of different TCP variants (a trace captured with one TCP variant is not valid for the rest)
Characteristics of MMORPG traffic:
TCP
Small packets, especially client-to-server
A lot of ACKs
Traffic varies with the player's activity: Trading, Questing, Dungeons, Player vs. Player, etc.
TCP variants
Some variants have been deployed in order to solve specific problems (e.g., TCP Hybla to solve RTT unfairness)
We use three common TCP variants:
TCP New Reno
TCP SACK
TCP Vegas
Other studies have addressed the problem of real-time vs. best-effort traffic, but mainly comparing UDP with TCP
In this work, we compare:
TCP used for the MMORPG
TCP used for FTP
Test Methodology
Network scenario: a WoW session competes with an FTP upload; the main problem is the uplink
[Scenario diagram: the WoW client and the FTP source share an access link (512 kbps uplink, 6 Mbps downlink) towards the WoW server; Tdejitter denotes the dejitter buffer]
Bandwidth: corresponding to an xDSL connection
Router buffer: 20 and 200 packets
OWD (one-way delay): 80 ms (inter-region scenario)
Traffic of the MMORPG:
Two flows: client-to-server and server-to-client
We use the Questing activity, since it is the most common one
Rate: 10 to 15 kbps
Small packets (the game sets the PUSH bit to 1 in order to send them as soon as possible)
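As an aside, the application-level analogue of flushing small packets immediately is disabling Nagle's algorithm on the socket; a minimal Python sketch (not from the paper, shown only to illustrate the standard socket option):

```python
import socket

# Latency-sensitive applications typically disable Nagle's algorithm so each
# small write is sent immediately, analogous to the game setting the PUSH
# flag on its small packets.
sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
nodelay = sock.getsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY)
print("TCP_NODELAY set:", bool(nodelay))
sock.close()
```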
An ns-2 script generates the scenario and the traffic:
WoW: 2 flows of TCP SACK (the variant commonly found on players' machines)
FTP: ns-2 implementations of TCP New Reno, SACK and Vegas
1000 seconds of simulation time
Client-to-server packets
[Figures: Inter-Packet Time CDF (ms) and Packet Size CDF (bytes), client to server]
Server-to-client packets
[Figures: Inter-Packet Time CDF (ms) and Packet Size CDF (bytes), server to client]
Results
Game and FTP flows, buffer of 200 packets (FTP variants: SACK, New Reno, Vegas)
[Figures: TCP window size (packets) vs. time (seconds) for the game flows and for the FTP flow with SACK, New Reno and Vegas; router buffer of 200 packets]
Game and FTP flows, buffer of 20 packets (FTP variants: SACK, New Reno, Vegas)
[Figures: TCP window size (packets) vs. time (seconds) for the game flows and for the FTP flow with SACK, New Reno and Vegas; router buffer of 20 packets]
The sending window of WoW behaves differently from that of FTP:
The game does not try to consume as much available bandwidth as possible (as FTP does)
It only has to send a continuous flow of small packets, at a rate below 10 kbps
We now discuss each buffer size
Added queuing delays
[Figure: average queuing delay (ms) for SACK, New Reno and Vegas, with router buffers of 200 and 20 packets. Annotation: 300 ms of queuing delay]
With the 20-packet buffer, only a small queuing delay is added.
[Figure: TCP window size (packets) vs. time (seconds), router buffer of 20 packets, game and FTP flows]
And this is bad for the game
Conclusions
TCP Vegas is able to maintain a constant rate while competing with the game traffic, since it prevents packet loss by avoiding the growth of the sending window.
TCP SACK and TCP New Reno tend to keep increasing the window size, thus adding undesired delays to the game traffic.
Smaller buffers have been shown to be better for TCP-based MMORPGs, since larger buffers cause higher delays.
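To see why buffer size matters, a back-of-the-envelope sketch of the worst-case drain time of a full router buffer (assuming full-size 1500-byte packets and the 512 kbps uplink of the test scenario; the average delays measured in the tests are lower, since the buffer is not always full):

```python
def max_queuing_delay_ms(buffer_pkts: int, pkt_bytes: int, link_bps: int) -> float:
    """Worst-case time to drain a completely full router buffer, in ms."""
    return buffer_pkts * pkt_bytes * 8 * 1000 / link_bps

# 512 kbps uplink, 1500-byte packets (assumed full-size FTP segments)
for buf in (20, 200):
    delay = max_queuing_delay_ms(buf, 1500, 512_000)
    print(f"buffer of {buf} packets -> up to {delay:.0f} ms of queuing delay")
```

A 200-packet buffer can hold several seconds of data at this rate, while a 20-packet buffer bounds the added delay to well under half a second, which is consistent with the trend observed in the results.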
Thank you very much