
1

SABRE: A client based technique for mitigating the buffer bloat effect of adaptive video flows

Ahmed Mansy, Mostafa Ammar (Georgia Tech)
Bill Ver Steeg (Cisco)

2

What is buffer bloat?
Significantly high queuing delays from TCP & large buffers

[Diagram: client and server connected through a bottleneck of C bps, with round-trip time RTT]

• TCP sender tries to fill the pipe by increasing the sender window (cwnd)
• Ideally, cwnd should grow to BDP = C x RTT
• TCP uses packet loss to detect congestion, and then it reduces its rate
• Large buffers increase queuing delays and also delay loss events
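
As a quick sanity check on these numbers, here is a back-of-the-envelope BDP calculation for the 6 Mbps / 100 ms testbed link described later (the 1500-byte packet size is an assumption):

```python
# Bandwidth-delay product of the testbed bottleneck (6 Mbps, 100 ms RTT).
link_rate_bps = 6_000_000
rtt_s = 0.100
bdp_bytes = link_rate_bps / 8 * rtt_s
print(bdp_bytes)            # 75000.0 bytes
print(bdp_bytes / 1500)     # ~50 full-size (1500-byte) packets
# A 256-packet tail-drop queue is roughly 5x this BDP, so a full queue adds
# far more delay than the path's propagation delay.
```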

3

DASH: Dynamic Adaptive Streaming over HTTP

• Video is split into short segments, each available at several bitrates (350, 600, 900, 1200 kbps)
• A manifest on the HTTP server describes the available segments and bitrates
• The DASH client requests segments over HTTP and picks a bitrate per segment

[Plot: download rate and playout buffer level over time. After an initial buffering phase the buffer reaches 100% and the client enters a steady state of On/Off downloads]

S. Akhshabi et al., "An experimental evaluation of rate-adaptation algorithms in adaptive streaming over HTTP", MMSys '11
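
As a rough illustration of that steady-state On/Off cycle, a minimal sketch (the helper names and the 30-second buffer target are assumptions for the sketch, not values from the talk):

```python
import time

PLAYOUT_TARGET_S = 30.0   # assumed playout buffer target, in seconds of video

def steady_state_on_off(player):
    """Typical DASH steady state: when the playout buffer has room, fetch the
    next segment as fast as TCP allows (On); otherwise sit idle (Off)."""
    while True:
        if player.playout_buffer_seconds() < PLAYOUT_TARGET_S:
            bitrate = player.pick_bitrate()     # rate-adaptation decision
            player.fetch_segment(bitrate)       # the On period: one TCP burst
        else:
            time.sleep(0.5)                     # the Off period: the link sits idle
```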

4

Problem description

[Diagram: a DASH flow and a VoIP flow sharing the same bottleneck link]

• Does DASH cause buffer bloat?
• Will the quality of VoIP calls be affected by DASH flows?
• And if so, how can we solve this problem?

5

Our approach

• To answer the first two questions, we perform experiments on a lab testbed to measure the buffer bloat effect of DASH flows
• We developed SABRE (Smooth Adaptive BitRatE), a scheme to mitigate this problem
• We use the same testbed to evaluate our solution

6

Measuring the buffer bloat effect

Testbed:
• HTTP video server -> bottleneck emulator -> DASH client
• Server-side link: 1 Gbps; bottleneck: 6 Mbps (DSL-like) with a 256-packet tail-drop queue; RTT: 100 ms
• An iPerf client/server pair sends 80 kbps UDP traffic (150-byte packets) across the same bottleneck to emulate OTT VoIP traffic

Adaptive HTTP video flows have a significant effect on VoIP traffic
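
To put the VoIP flow and the queue in perspective, a quick calculation (assuming the queue fills with 1500-byte DASH packets):

```python
# The emulated VoIP flow: 80 kbps of 150-byte UDP packets.
voip_rate_bps, voip_pkt_bytes = 80_000, 150
print(voip_rate_bps / (voip_pkt_bytes * 8))   # ~66.7 packets/s, one every ~15 ms

# Worst-case queuing delay behind a full 256-packet tail-drop queue at 6 Mbps,
# assuming 1500-byte packets from the DASH flow fill the queue.
queue_pkts, mtu_bytes, link_bps = 256, 1500, 6_000_000
print(queue_pkts * mtu_bytes * 8 / link_bps)  # ~0.51 s of added one-way delay
```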


7

Understanding the problem – Why do we get large bursts?

[Diagram: a 1 Gbps server link feeding a 6 Mbps bottleneck]

TCP is bursty: after an idle (Off) period the server can send up to a full window back-to-back at 1 Gbps, and those packets pile up in the queue in front of the 6 Mbps bottleneck

8

Possible solutions

• Middlebox techniques
  – Active Queue Management (AQM): RED, BLUE, CoDel, etc.
  – RED is on every router but is hard to tune
• Server techniques
  – Rate limiting at the server to reduce burst size
• Our solution: smooth download driven by the client

9

Some hidden details

[Diagram: two data channels on the client. The HTTP GET (1) pulls a segment from the server into the OS socket buffer, and recv (2) moves data from the socket buffer into the DASH player's playout buffer]

• In traditional DASH players: while(true) recv
• (1) and (2) are coupled: the socket buffer is drained as fast as data arrives
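
A minimal sketch of this coupled download loop (the socket handling is an illustrative assumption, not VLC's actual code):

```python
import socket

def download_segment_traditional(sock: socket.socket) -> bytes:
    """Traditional DASH client behavior: recv in a tight loop.
    The socket buffer is drained as fast as data arrives, so the
    advertised receive window (rwnd) stays large and the server
    is free to send big bursts."""
    chunks = []
    while True:
        data = sock.recv(65536)     # pull everything out immediately
        if not data:                # end of segment / connection closed
            break
        chunks.append(data)
    return b"".join(chunks)
```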

10

Smooth download to eliminate bursts

[Diagram: the rwnd advertised by the client equals the empty space in its socket buffer]

Idea:
• TCP can send a burst of at most min(rwnd, cwnd)
• Since we cannot control cwnd, control rwnd
• rwnd is a function of the empty space in the receiver's socket buffer

Two objectives:
1. Keep the socket buffer almost full all the time
2. Do not starve the playout buffer
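
To make the idea concrete, a toy calculation of the largest burst the server can put on the wire (the buffer and window sizes are illustrative assumptions):

```python
def max_burst_bytes(rcvbuf_bytes: int, unread_bytes: int, cwnd_bytes: int) -> int:
    """TCP can send at most min(rwnd, cwnd) of new data. The receiver
    advertises rwnd = free space in its socket buffer, so a nearly full
    socket buffer caps the burst no matter how large cwnd has grown."""
    rwnd = rcvbuf_bytes - unread_bytes
    return min(rwnd, cwnd_bytes)

# On/Off player: socket buffer drained immediately -> large rwnd, large bursts.
print(max_burst_bytes(256 * 1024, unread_bytes=0, cwnd_bytes=512 * 1024))           # 262144
# SABRE: socket buffer kept almost full -> rwnd, and hence the burst, stays small.
print(max_burst_bytes(256 * 1024, unread_bytes=240 * 1024, cwnd_bytes=512 * 1024))  # 16384
```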

11

Keeping the socket buffer full – Controlling recv rate

On/Off (traditional): while(1) recv
• The socket buffer is drained immediately, so it sits empty between GETs and each segment arrives as a large burst

SABRE: while(timer) recv
• recv is driven by a timer, so data is pulled from the socket buffer at a controlled rate and the buffer stays close to full

[Diagram: client/server exchanges (GET S1 / S1, GET S2 / S2) and the resulting download-rate and socket/playout buffer timelines under both schemes]
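
A minimal sketch of the timer-driven recv loop, assuming reads are paced toward a target rate (the target_rate_bps parameter and chunk sizing are assumptions; SABRE's actual pacing also reacts to the socket buffer level, as later slides describe):

```python
import socket
import time

def paced_recv(sock: socket.socket, target_rate_bps: float, chunk_bytes: int = 16 * 1024) -> bytes:
    """SABRE-style 'while(timer) recv': pull a small chunk from the socket
    buffer on every timer tick instead of draining it in a tight loop.
    Data left sitting in the socket buffer keeps the advertised window
    small, so the server cannot send large bursts."""
    interval_s = chunk_bytes * 8 / target_rate_bps   # time budget per chunk
    chunks = []
    while True:
        tick = time.monotonic()
        data = sock.recv(chunk_bytes)
        if not data:
            break
        chunks.append(data)
        # Sleep away the rest of this tick so the long-run read rate
        # stays close to target_rate_bps.
        time.sleep(max(0.0, interval_s - (time.monotonic() - tick)))
    return b"".join(chunks)
```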

12

Keeping the socket buffer full – HTTP pipelining

• With one request at a time, the socket buffer drains during the gap between finishing one segment and issuing the next GET (the Off periods reappear)
• SABRE pipelines requests so the server always has data to send:

    #segments = 1 + (socket buffer size / segment size)

[Diagram: pipelined requests (GET S1, S2 ... GET S3 ... GET S4) compared with one-at-a-time GETs, and their effect on the socket and playout buffers]

The socket buffer is always full, so rwnd stays small
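
Plugging in illustrative numbers (the 256 KB socket buffer and 2-second, 1200 kbps segments are assumptions, not values from the talk):

```python
import math

socket_buffer_bytes = 256 * 1024          # assumed receive socket buffer size
segment_bytes = 1_200_000 // 8 * 2        # a 2 s segment at 1200 kbps = 300 kB
segments_outstanding = 1 + math.ceil(socket_buffer_bytes / segment_bytes)
print(segments_outstanding)   # 2: one segment being received plus one more pipelined
```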

13

Still one more problem

• The socket buffer level drops temporarily when the available bandwidth drops
• This results in larger values of rwnd, which can lead to large bursts and hence delay spikes
• Continuous monitoring of the socket buffer level can help

[Plot: socket buffer level dipping when the available bandwidth falls below the video bitrate]
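
A sketch of the monitoring rule, assuming the player can observe its socket buffer occupancy and adjust its paced recv rate (the watermark and back-off factor are arbitrary illustration values, not SABRE's):

```python
def adjust_recv_rate(current_rate_bps: float,
                     buffer_level: float,        # fraction of the socket buffer filled, 0..1
                     low_watermark: float = 0.7) -> float:
    """Continuously monitor the socket buffer: if its level drops (available
    bandwidth fell below the read rate), slow the paced reads down so the
    buffer refills and rwnd stays small instead of opening up."""
    if buffer_level < low_watermark:
        return current_rate_bps * 0.8    # back off until the buffer recovers
    return current_rate_bps
```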

14

Experimental results

Testbed (same as before): 1 Gbps server link, 6 Mbps (DSL-like) bottleneck with a 256-packet tail-drop queue, RTT 100 ms, plus the 80 kbps / 150-byte-packet UDP iPerf flow emulating OTT VoIP traffic

We implemented SABRE in the VLC DASH player

15

Single DASH flow – constant available bandwidth

[Plots: queuing delay over time, SABRE vs. On/Off]

• On/Off: delay > 200 ms about 40% of the time
• SABRE: delay < 50 ms 100% of the time

16

Video adaptation: how does SABRE react to variable bandwidth?

[Plot: available bandwidth and selected video bitrate over time, annotated with the socket buffer state]

• Socket buffer gets drained: reduce the recv rate and down-shift to a lower bitrate
• Socket buffer is full: the player cannot estimate the available bandwidth
• The player tries to up-shift to a higher bitrate, but cannot sustain it
• If the player can support the current bitrate, it shoots for a higher one
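
A sketch of this adaptation logic, with a hypothetical bitrate ladder and buffer-state inputs (an illustration of the behavior described on the slide, not the player's actual code):

```python
BITRATES_KBPS = [350, 600, 900, 1200]   # ladder from the DASH slide

def adapt(current_idx: int, buffer_draining: bool, probe_failed: bool) -> int:
    """SABRE-style adaptation as described above:
    - socket buffer draining -> available bandwidth dropped, shift down;
    - buffer full (not draining) -> bandwidth cannot be estimated, so probe
      the next higher bitrate and fall back if the probe cannot be sustained."""
    if buffer_draining:
        return max(current_idx - 1, 0)                      # down-shift
    if probe_failed:
        return max(current_idx - 1, 0)                      # probe did not hold, drop back
    return min(current_idx + 1, len(BITRATES_KBPS) - 1)     # shoot for a higher bitrate
```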

17

Single DASH flow – variable available bandwidth

[Plot: On/Off and SABRE as the available bandwidth switches between 6 Mbps and 3 Mbps at T=180 s and T=380 s]

18

Two clients

[Plots: two On/Off clients vs. two SABRE clients (C1 and C2) downloading from the same server over the shared bottleneck]

19

Summary

• The On/Off behavior of adaptive video players can have a significant buffer bloat effect
• A single On/Off client significantly increases queuing delays
• We designed and implemented a client-based technique to mitigate this problem
• Future work:
  – Improve SABRE's adaptation logic for the case of a mix of On/Off and SABRE clients
  – Investigate DASH-aware middlebox and server based techniques

20

Questions?

Thank you!

21

Backup slides

22

Random Early Detection: Can RED help?

[Plot: RED drop probability vs. average queue size, rising from P=0 below the min threshold to P=1 at the max threshold]

Once the burst is on the wire, not much can be done!
How can we eliminate large bursts?
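
For reference, the drop-probability curve sketched above can be written as follows (the threshold values are illustrative; standard RED usually caps the linear region at a max_p well below 1, whereas the slide's simplified curve rises all the way to 1):

```python
def red_drop_probability(avg_queue: float,
                         min_th: float = 50.0,
                         max_th: float = 150.0,
                         max_p: float = 1.0) -> float:
    """RED drop probability as a function of the average queue size:
    0 below min_th, rising linearly to max_p at max_th, then 1."""
    if avg_queue < min_th:
        return 0.0
    if avg_queue >= max_th:
        return 1.0
    return max_p * (avg_queue - min_th) / (max_th - min_th)
```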

23

Single DASH flow – constant available bandwidth

[Plot: SABRE under constant available bandwidth]

24

Single DASH flow – constant available bandwidth

[Plots: queuing delay over time, SABRE vs. On/Off]

• On/Off: delay > 200 ms about 40% of the time
• SABRE: delay < 50 ms 100% of the time

25

Single DASH flow – variable available bandwidth

[Plot: On/Off and SABRE as the available bandwidth switches between 6 Mbps and 3 Mbps at T=180 s and T=380 s]

26

Single ABR flow – variable available bandwidth

[Plots: SABRE vs. On/Off under variable available bandwidth]

27

Two clients

At least one On/Off DASH client significantly increases queuing delays

28

Two clients