Periodic Broadcast and Patching Services - Implementation,
Measurement, and Analysis in an Internet Streaming Video
Testbed
Michael K. Bradshaw, Bing Wang, Subhabrata Sen , Lixin Gao, Jim Kurose,
Prashant Shenoy, and Don Towsley
ACM Multimedia 2001
Introduction
Multimedia streaming places significant load on both server and network resources. Multicast-based approaches: Batching, Periodic Broadcast, Patching.
Issues: control/signaling overhead, the interaction between disk and CPU scheduling, multicast join/leave times.
Batching
The server batches requests that arrive close together in time and multicasts the stream to the set of batched clients. A drawback is that client playback latency increases with an increasing amount of client request aggregation.
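A minimal sketch of the batching idea in Python (illustrative only; the batching window, class name, and multicast callback are assumptions, not the paper's code):

import threading

BATCH_WINDOW = 5.0  # seconds; hypothetical aggregation window

class Batcher:
    """Collects requests for a video that arrive within BATCH_WINDOW and
    serves them all with a single multicast stream."""

    def __init__(self, start_multicast_stream):
        self.start_multicast_stream = start_multicast_stream  # hypothetical callback
        self.pending = {}            # video_id -> list of client addresses
        self.lock = threading.Lock()

    def request(self, video_id, client_addr):
        with self.lock:
            if video_id not in self.pending:
                # First request for this video: open a batch and schedule its flush.
                self.pending[video_id] = [client_addr]
                threading.Timer(BATCH_WINDOW, self._flush, args=(video_id,)).start()
            else:
                # Later request joins the existing batch; it waits until the batch
                # is flushed (this wait is the added playback latency).
                self.pending[video_id].append(client_addr)

    def _flush(self, video_id):
        with self.lock:
            clients = self.pending.pop(video_id, [])
        if clients:
            # One multicast transmission serves the whole batch.
            self.start_multicast_stream(video_id, clients)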
Periodic Broadcast
The server divides a video object into multiple segments and continuously broadcasts the segments over a set of multicast addresses. Earlier portions are broadcast more frequently than later ones to limit playback startup latency. Clients simultaneously listen to multiple addresses, storing future segments for later playback.
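A conceptual client-side sketch of periodic broadcast, assuming hypothetical join/receive/play helpers; it only illustrates listening to every segment's address at once and buffering future segments until their playback point:

def pb_client(addresses, join, receive, play):
    """Conceptual periodic-broadcast client (illustrative sketch, not the
    paper's implementation). The client subscribes to every segment's
    multicast address, buffers segments as they arrive, and plays them in order."""
    for addr in addresses:
        join(addr)                     # hypothetical multicast join helper
    buffered = {}                      # segment index -> data
    next_to_play = 0
    while next_to_play < len(addresses):
        idx, data = receive()          # a segment, from any of the addresses
        buffered[idx] = data           # store future segments for later playback
        while next_to_play in buffered:
            play(buffered.pop(next_to_play))
            next_to_play += 1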
Patching (stream tapping)
The server streams the entire video sequentially to the very first client. Client-side workahead buffering allows a later-arriving client to receive its future playback data by listening to an existing, ongoing transmission of the same video. The server need only additionally transmit those earlier frames that the later-arriving client missed.
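A sketch of the server-side patching decision, with hypothetical start_full_stream/start_patch helpers; it captures only the idea of reusing the ongoing multicast plus a unicast patch for the missed prefix:

def handle_request(video, now, ongoing_start, start_full_stream, start_patch):
    """Conceptual patching decision at the server (illustrative only).
    `ongoing_start` is the start time of the newest full multicast of `video`,
    or None if there is no ongoing transmission."""
    if ongoing_start is None:
        # First client: stream the whole video on a new multicast channel.
        start_full_stream(video, start_time=now)
        return now
    missed = now - ongoing_start      # seconds of the video already transmitted
    # The client joins the ongoing multicast for its future frames (workahead
    # buffering), while the server additionally sends only the missed prefix
    # (the "patch") of length `missed` seconds.
    start_patch(video, duration=missed)
    return ongoing_start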
Server and Client Architecture
Server Architecture
Server Control Engine (SCE): one listener thread; a pool of free scheduler threads; one transmission schedule per video.
Server Data Engine (SDE): a global buffer cache manager; a disk thread (DT) with round length δ; a network thread (NT) with round length τ.
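A rough sketch of how the two SDE threads could be structured, assuming the round lengths suggested by the benchmark slide later in the talk (DT round δ = 1 s, NT round τ = 33 ms, one frame time at 30 fps); the schedule and cache interfaces are hypothetical:

import time

DISK_ROUND = 1.0    # delta: disk thread round length (1 s, per the benchmark slide)
NET_ROUND  = 0.033  # tau: network thread round length (33 ms, one frame time)

def disk_thread(schedule, cache, stop):
    """Each round, read from disk the data the network thread will need in
    upcoming rounds and place it in the shared buffer cache."""
    while not stop.is_set():
        start = time.monotonic()
        for item in schedule.due_for_disk(DISK_ROUND):     # hypothetical interface
            cache.insert(item.key, item.read_from_disk())
        time.sleep(max(0.0, DISK_ROUND - (time.monotonic() - start)))

def network_thread(schedule, cache, send, stop):
    """Each round, pull the frames due this round from the buffer cache and
    transmit them on their multicast addresses."""
    while not stop.is_set():
        start = time.monotonic()
        for item in schedule.due_for_network(NET_ROUND):   # hypothetical interface
            send(item.multicast_addr, cache.get(item.key))
        time.sleep(max(0.0, NET_ROUND - (time.monotonic() - start)))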
Schedule Data Structure
Signaling between Server and Client
Testbed (1)
100 Mbps switched Ethernet LAN. Three machines (server, workload generator, and client), each with a Pentium-II 400 MHz CPU and 400 MB RAM, running Linux. The workload generator produces a background load of client requests according to a Poisson process and logs the timing information for each request served.
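A sketch of the workload generator's role, assuming a hypothetical send_request helper; it issues requests with exponential interarrival times (a Poisson process) and logs their timing:

import random, time

def generate_requests(rate_per_min, send_request, log, duration_s):
    """Issue client requests at `rate_per_min` requests per minute with
    exponentially distributed interarrival times, logging each request's
    send time and the server's reply for later analysis.
    Illustrative sketch, not the testbed's actual generator."""
    lam = rate_per_min / 60.0                    # requests per second
    t_end = time.monotonic() + duration_s
    while time.monotonic() < t_end:
        time.sleep(random.expovariate(lam))      # exponential interarrival gap
        t_sent = time.monotonic()
        reply = send_request()                   # hypothetical request helper
        log(t_sent, reply)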
Testbed (2)
Periodic broadcast: L. Gao, J. Kurose, and D. Towsley.
Efficient schemes for broadcasting popular videos (Greedy Disk-conserving Broadcasting segmentation scheme)
l-GDB: the initial segment is l seconds long; subsequent segments have nominal length 2^(i-1) · l for 1 < i ≤ ⌈log₂ L⌉ (L measured in units of the initial segment length), with the final segment truncated to the remaining video length.
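A small sketch of this segmentation, assuming the final segment is simply truncated to the remaining video length; for the ~900 s Blade2 video it reproduces the segment lengths tabulated on the next slide:

def gdb_segments(l, L):
    """Segment lengths (seconds) for l-GDB applied to a video of length L
    seconds: nominal lengths l, 2l, 4l, ..., with the final segment truncated
    so the lengths sum to L. A sketch consistent with the segment table, not
    necessarily the paper's exact construction."""
    segments, total, i = [], 0.0, 1
    while total < L:
        seg = min(l * 2 ** (i - 1), L - total)   # 2^(i-1) * l, truncated at the end
        segments.append(seg)
        total += seg
        i += 1
    return segments

# Example: gdb_segments(3, 899.5) -> [3, 6, 12, 24, 48, 96, 192, 384, 134.5],
# matching the 3-GDB row of the segment table.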
Testbed (3)
Sample videos for the experiments
Video    Format   Length (min)   Frame rate (fps)   Bandwidth (Mbps)   File size (MB)   # of RTP pkts
Blade1   MPEG-1   12             30                 1.99               180.1            155146
Blade2   MPEG-1   15             30                 3                  337              296706
Demo     MPEG-2   2.7            30                 2                  40.6             35138

3 Mbps, 15-min MPEG-1 Blade2 video:
Scheme    Segs.   Segment lengths (sec)
3-GDB     9       3, 6, 12, 24, 48, 96, 192, 384, 134.5 (768)
10-GDB    7       10, 20, 40, 80, 160, 320, 270.9 (640)
30-GDB    5       30, 60, 120, 240, 450.9 (480)
Testbed (4)
Patching algorithm: L. Gao and D. Towsley.
Supplying instantaneous video-on-demand services using controlled multicast. (Threshold-based Controlled Multicast scheme)
When the client arrival rate for a video is Poisson with parameter λ and the length of the video is L seconds, the threshold is chosen to be (sqrt(2Lλ + 1) − 1)/λ seconds.
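The threshold formula as a small helper (parameter names are illustrative):

import math

def patching_threshold(L, lam):
    """Threshold (seconds) of the threshold-based controlled multicast scheme:
    a new full multicast stream is started only if the newest ongoing one is
    older than this threshold; otherwise the client is patched onto it.
    L is the video length in seconds, lam the Poisson arrival rate (req/sec)."""
    return (math.sqrt(2.0 * L * lam + 1.0) - 1.0) / lam

# Example (illustrative numbers): a 900 s video with 1 request per minute
# gives patching_threshold(900, 1/60) ≈ 274 s.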
Performance Metrics
Server side: System Read Load (SRL), Server Network Throughput (SNT), Deadline Conformance Percentage (DCP)
Client side: Client Frame Interarrival Time (CFIT), Reception Schedule Latency (RSL)
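A sketch of how two of these metrics can be computed from logs (definitions paraphrased; function names are assumptions):

def frame_interarrival_times(arrival_times):
    """Client Frame Interarrival Time (CFIT): the gaps between consecutive
    frame arrival times observed at a client. For a 30 fps video the gaps
    should cluster around 33 ms; a wide spread indicates jitter."""
    return [t2 - t1 for t1, t2 in zip(arrival_times, arrival_times[1:])]

def deadline_conformance(sent_times, deadlines):
    """Deadline Conformance Percentage (DCP): the percentage of data units the
    server transmitted no later than their scheduled deadlines."""
    on_time = sum(1 for s, d in zip(sent_times, deadlines) if s <= d)
    return 100.0 * on_time / len(sent_times)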
Caching Implications (1)
PB:
Caching Implications (2)
Patching:
Caching Implications (3)
SRL for patching and 10-GDB with LFU caching
Component Benchmarks

Configuration   # Videos   # Addresses per Video   Bandwidth per Video   NT completion time   DT completion time
I               3          8                       16 Mbits              1.60 ms / 33 ms      6.16 ms / 1 s
II              1          24                      48 Mbits              5.08 ms / 33 ms      8.39 ms / 1 s

(Completion times are shown relative to the round lengths: 33 ms NT rounds and 1 s DT rounds.)
End-End Performance (1)
Client Frame Interarrival Time (CFIT) histogram under 3-GDB, 10-GDB, and 30-GDB at 600 requests per minute.
PB:
End-End Performance (2)
Patching:

Request rate    Network load        CFIT                    DCP
1 per minute    20.85 Mbps          Similar to the 30-GDB   99.9%
5 per minute    55.27 Mbps          Similar to the 30-GDB   99.9%
Higher rates    Bottleneck occurs   -                       -
Scheduling Among Videos
Conclusions
Network bandwidth, rather than server resources, is likely to be the bottleneck. PB: 600 requests per minute; Patching: fully loading a 100 Mbps network.
An initial client startup delay of less than 1.5 sec is sufficient to handle startup signaling and absorb data jitter. Dramatic reductions in server disk read load can be gained via application-level data caching with an LFU replacement policy.