1
Video Delivery Techniques
2
Server Channels
• Videos are delivered to clients as a continuous stream.
• Server bandwidth determines the number of video streams that can be supported simultaneously.
• Server bandwidth can be organized and managed as a collection of logical channels.
• These channels can be scheduled to deliver various videos.
3
Using Dedicated Channels
[Figure: a video server delivers a dedicated stream to each client. Too expensive!]
4
“Video on Demand” Quiz
1. Video-on-demand technology has many applications:
Electronic commerce / Digital libraries / Distance learning / News on demand / Entertainment / All of these applications
2. Broadcast can be used to substantially reduce the demand on server bandwidth ? True / False
3. Broadcast cannot deliver videos on demand ? True / False
5
Push Technologies
If your answer to Question 3 was “True”, you are wrong:
• Broadcast technologies can deliver videos on demand.
• The requirement on server bandwidth is independent of the number of users the system is designed to support.
Less expensive & more scalable !!
6
Simple Periodic Broadcast (Staggered Broadcast Protocol)
• A new stream is started every interval for each video.
• The worst service latency is the broadcast period.
[Figure: staggered broadcast schedule over time. Several channels repeatedly broadcast video i, each offset from the previous one by one interval W; further channels carry video j in the same staggered pattern. With 4 channels per video, W = L/4.]
W = L/N, where L is the video length and N is the number of channels dedicated to the video.
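The staggered-broadcast relation can be checked with a few lines of code (a sketch; the function name is illustrative):

```python
def staggered_latency(video_len, channels):
    # Worst-case wait equals the full stagger interval W = L / N
    return video_len / channels

# A 120-minute video on 4 channels: worst-case wait W = 30 minutes
assert staggered_latency(120, 4) == 30.0
# Halving the latency requires doubling the channels (linear scaling)
assert staggered_latency(120, 8) == 15.0
```

The linear scaling visible here is exactly the limitation the later slides improve on.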
7
Simple Periodic Broadcast
• A new stream is started every interval for each video.
• The worst service latency is the broadcast interval.
Advantage: The bandwidth requirement is proportional to the number of videos (not the number of users.)
Can we do better ?
8
Limitation of Simple Periodic Broadcast
• Access latency can be improved only linearly with increases to the server bandwidth.
• Substantial improvement can be achieved if we allow the client to preload data
9
Pyramid Broadcasting – Segmentation
[Viswanathan95]
• Each data segment Di is made α times the size of Di−1, for all i.
α = B / (K · M), where B is the system bandwidth, M is the number of videos, and K is the number of server channels. The optimal α is approximately 2.72 (Euler's number e).
[Figure: channel i repeatedly broadcasts the i-th segment of all M videos (here videos 1–4); segment sizes increase geometrically, so the broadcast interval of channel 1 is the shortest and each later channel's interval is α times longer.]
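The geometric segmentation can be sketched in code (an illustration, assuming segment sizes are normalized to sum to the video length; names are illustrative):

```python
import math

def pyramid_segments(video_len, alpha, k):
    # Split a video into k segments whose sizes grow geometrically by a
    # factor alpha; the sizes sum to the full video length.
    unit = video_len / sum(alpha ** i for i in range(k))
    return [unit * alpha ** i for i in range(k)]

segs = pyramid_segments(120.0, math.e, 4)  # optimal alpha ~ e, 4 channels
assert abs(sum(segs) - 120.0) < 1e-9
assert all(abs(b / a - math.e) < 1e-9 for a, b in zip(segs, segs[1:]))
```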
10
Pyramid Broadcasting – Download & Playback Strategy
• Server bandwidth is evenly divided among the channels, each much faster than the playback rate.
• Client software has two loaders:
– Begin downloading the first data segment at the first occurrence, and start consuming it concurrently.
– Download the next data segment at the earliest possible time after beginning to consume the current data segment.
11
Disadvantages of Pyramid Broadcasting
• The channel bandwidth is substantially larger than the playback rate
• Huge storage space is required to buffer the preloaded data
• It requires substantial client bandwidth
Client bandwidth is typically the most expensive component of a VOD system
12
Permutation-Based Pyramid Broadcasting (PPB) [Aggarwal96]
• PPB further partitions each logical channel in PB scheme into P subchannels.
• A replica of each video fragment is broadcast on P different subchannels with a uniform phase delay.
[Figure: a logical channel Ci is divided into subchannels; replicas of each fragment of videos V1 and V2 are broadcast on the subchannels with a uniform phase delay. The client begins downloading from one subchannel, pauses to allow the playback to catch up, then resumes downloading.]
13
Advantages and Disadvantages of PPB
• Requirement on client bandwidth is substantially less than in PB
• Storage requirement is also reduced significantly (about 50% of the video size)
• The synchronization is difficult to implement since the client needs to tune to an appropriate point within a broadcast
14
Skyscraper Broadcasting [Hua97]
• Each video is fragmented into K segments, each repeatedly broadcast on a dedicated channel at the playback rate.
• The sizes of the K segments follow the series:
[1, 2, 2, 5, 5, 12, 12, 25, 25, …, W, W, …, W]
• The sizes of the larger segments are constrained to W (the width of the “skyscraper”); the segments alternate between an even group and an odd group.
• Service latency is at most the length of the first segment, i.e., (video length) / (sum of the segment sizes in the series).
15
Generating Function
The broadcast series is generated using the following recursive function:

f(n) = 1               if n = 1
       2               if n = 2 or 3
       2·f(n−1) + 1    if n mod 4 = 0
       f(n−1)          if n mod 4 = 1
       2·f(n−1) + 2    if n mod 4 = 2
       f(n−1)          if n mod 4 = 3
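The recursive definition translates directly into code; a quick sketch that also applies the W cap from the segmentation slide:

```python
def skyscraper_series(n, w=float("inf")):
    # Skyscraper broadcast series, with segment sizes capped at width w.
    def f(k):
        if k == 1:
            return 1
        if k in (2, 3):
            return 2
        r = k % 4
        if r == 0:
            return 2 * f(k - 1) + 1
        if r == 2:
            return 2 * f(k - 1) + 2
        return f(k - 1)  # r == 1 or r == 3
    return [min(f(k), w) for k in range(1, n + 1)]

assert skyscraper_series(9) == [1, 2, 2, 5, 5, 12, 12, 25, 25]
assert skyscraper_series(9, w=12) == [1, 2, 2, 5, 5, 12, 12, 12, 12]
```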
16
Skyscraper Broadcasting Playback Procedure
• The Odd Loader and the Even Loader download the odd groups and the even groups, respectively.
• The W-segments are downloaded sequentially using only one loader.
• As the loaders fill the buffer, the Video Player consumes the data in the buffer.
17
Advantages of Skyscraper Broadcasting
• Since the first segment is very short, service latency is excellent.
• Since the W-segments are downloaded sequentially, buffer requirement is minimal.
[Figure: segment-size profiles of Pyramid vs. Skyscraper broadcasting.]
18
SB Example
Blue clients share the 2nd and 3rd fragments, and the 6th, 7th, and 8th fragments, with Red clients.
[Figure: two clients arriving at different times download overlapping broadcasts of fragments 1–8.]
19
[Figure: another approach. Sixteen broadcast channels each repeatedly carry one segment (2, 3, 4, 5, 6, 7, …, 16); the playback schedule shows Loader 1 and Loader 2 filling the client buffer from the broadcast channels while the Video Player consumes the data.]
20
CCA Broadcasting
• Server broadcasts each segment at the playback rate.
• Clients use c loaders.
• Each loader downloads its streams sequentially; e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, …
• Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: channels 1 through K, with segments arranged in groups of c (C = 3: clients have three loaders); the sizes of the largest segments are capped at W.]
The segment sizes are generated by the following recursive function (reconstructed from a garbled slide):

f(n) = 1           if n = 1
       2·f(n−1)    if n mod c ≠ 1
       f(n−1)      if n mod c = 1
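The CCA segment sizes can be generated iteratively; this sketch is based on a reconstruction of the slide's garbled generating function, so treat the exact form with some caution:

```python
def cca_series(n, c):
    # Segment-size series for CCA with c client loaders (reconstructed).
    sizes = [1]
    for k in range(2, n + 1):
        sizes.append(sizes[-1] if k % c == 1 else 2 * sizes[-1])
    return sizes

# With c = 3 loaders, the last segment of each group of c segments has
# the same size as the first segment of the next group:
assert cca_series(7, 3) == [1, 2, 4, 4, 8, 16, 16]
```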
21
Advantages of CCA
• It has the advantages of Skyscraper Broadcasting.
• It can leverage client bandwidth to improve performance.
22
Cautious Harmonic Broadcasting
(Segmentation Design)
• A video is partitioned into n equally-sized segments.
• The first channel repeatedly broadcasts the first segment S1 at the playback rate.
• The second channel alternately broadcasts S2 and S3 repeatedly at the playback rate.
• Each remaining segment Si is repeatedly broadcast on its dedicated channel at 1/(i−1) times the playback rate.
23
Cautious Harmonic Broadcasting
(Playback Strategy)
• The client can start the playback as soon as it can download the first segment.
• Once the client starts receiving the first segment, the client will also start receiving every other segment.
24
Cautious Harmonic Broadcasting
Advantage: Better than SB in terms of service latency.
Disadvantage: Requires about three times more receiving bandwidth compared to SB.
Implementation Problem:
• The client must receive data from many channels simultaneously (e.g., 240 channels are required for a 2-hour video if the desired latency is 30 seconds).
• No practical storage subsystem can move its read heads fast enough to multiplex among so many concurrent streams.
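The 240-channel figure can be reproduced with a back-of-the-envelope calculation (a sketch; the cost model simply sums the per-channel rates described in the segmentation design):

```python
def chb_cost(video_len_s, latency_s):
    # Number of equal segments and total server bandwidth, in units of
    # the playback rate: channel 1 carries S1 at rate 1, channel 2
    # alternates S2/S3 at rate 1, and each Si (i >= 4) has a channel
    # at rate 1/(i-1).
    n = int(video_len_s // latency_s)
    return n, 2 + sum(1 / (i - 1) for i in range(4, n + 1))

n, bw = chb_cost(2 * 3600, 30)  # 2-hour video, 30-second latency
assert n == 240                 # 240 segments, one channel each
assert 6 < bw < 7               # total bandwidth stays modest
```

The harmonic sum is what keeps the total bandwidth to only a handful of playback rates even with hundreds of channels.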
25
Pagoda Broadcasting – Download and Playback Strategy
• Each channel broadcasts data at the playback rate
• The client receives data from all channels simultaneously.
• It starts the playback as soon as it can download the first segment.
26
Pagoda Broadcasting – Advantage & Disadvantage
Advantage: Required server bandwidth is low compared to Skyscraper Broadcasting
Disadvantage: Required client bandwidth is many times higher than Skyscraper Broadcasting
Achieving a maximum delay of 138 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate, e.g., approximately 20 Mbps for MPEG-2
System cost is significantly more expensive
27
New Pagoda Broadcasting [Paris99]
• New Pagoda Broadcasting improves on the original Pagoda Broadcasting.
• Required client bandwidth remains very high
Example: Achieving a maximum delay of 110 seconds for a 2-hour video requires each client to have a bandwidth five times the playback rate.
Approximately 20 Mbps for MPEG-2
System cost is very expensive
28
Limitations of Periodic Broadcast
• Periodic broadcast is only good for very popular videos
• It is not suitable for a changing workload
• It can only offer near-on-demand services
29
Batching
• FCFS (First Come First Served)
• MQL (Maximum Queue Length First)
• MFQ (Maximum Factored Queue Length First)
[Figure: new requests join per-video waiting queues that compete for server resources. FCFS serves the oldest pending request; MQL serves the video with the longest queue; MFQ serves the video whose queue length, factored by its access frequency, is the largest.]
Still only near VoD !
Can multicast provide true VoD ?
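The three batching policies can be sketched as queue-selection rules (the MFQ weighting shown here, queue length divided by the square root of the access frequency, is one common formulation and may differ in detail from the slides'):

```python
import math

def pick_video(queues, freqs, policy):
    # queues: {video_id: [arrival_time, ...]}; freqs: {video_id: access freq}
    if policy == "FCFS":
        return min(queues, key=lambda v: min(queues[v]))   # oldest request
    if policy == "MQL":
        return max(queues, key=lambda v: len(queues[v]))   # longest queue
    # MFQ: "factored" queue length
    return max(queues, key=lambda v: len(queues[v]) / math.sqrt(freqs[v]))

q = {"a": [1.0, 2.0, 3.0], "b": [0.5]}
f = {"a": 0.9, "b": 0.1}
assert pick_video(q, f, "FCFS") == "b"  # b's request is the oldest
assert pick_video(q, f, "MQL") == "a"   # a has the longest queue
```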
30
Current Hybrid Approaches
• FCFS-n: First Come First Served for unpopular videos; n channels are reserved for popular videos.
• MQL-n: Maximum Queue Length policy for unpopular videos; n channels are reserved for popular videos.
• Performance is limited.
31
New Hybrid Approach
Skyscraper Broadcasting scheme (SB) + Largest Aggregated Waiting Time First (LAW)
(Periodic Broadcast + Scheduled Multicast)
32
LAW (Largest Aggregated Waiting Time First)
• MFQ, which weights the queue lengths q1, q2, q3, q4, … by the access frequencies f1, f2, f3, f4, … as q1/f1, q2/f2, q3/f3, q4/f4, …, tends toward MQL, losing fairness.
• Whenever a stream becomes available, schedule the video with the maximum value of Si:
Si = c · m − (ai1 + ai2 + … + aim),
where c is the current time, m is the total number of requests for video i, and aij is the arrival time of the j-th request for video i. (Si is the sum of each request's waiting time in the queue.)
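The Si score is a one-line computation; using the arrival times from the worked example:

```python
def law_score(now, arrivals):
    # S_i = c*m - sum of arrival times = total waiting time in the queue
    return now * len(arrivals) - sum(arrivals)

# Values from the LAW example: current time c = 128
s1 = law_score(128, [107, 111, 115, 121, 126])
s2 = law_score(128, [112, 119, 122, 127])
assert (s1, s2) == (60, 32)  # LAW selects video 1
```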
33
LAW (Example)
• By MFQ, q1·t1 = 5 · (128 − 106) = 110 and q2·t2 = 4 · (128 − 100) = 112, so video 2 is selected by MFQ.
• The average waiting times, however, are 12 and 8 time units, respectively.
• By LAW, S1 = 128 · 5 − (107 + 111 + 115 + 121 + 126) = 60 and S2 = 128 · 4 − (112 + 119 + 122 + 127) = 32, so video 1 is selected by LAW.
[Figure: video 1's last multicast was at time 106, with requests R11–R15 arriving at 107, 111, 115, 121, 126; video 2's last multicast was at time 100, with requests R21–R24 arriving at 112, 119, 122, 127; the current time is 128.]
34
AHA (Adaptive Hybrid Approach)
• Popularity is re-evaluated periodically.
• If a video is popular and currently being broadcast by SB, go to Case 1; otherwise, go to Case 2.
Case 1 (video currently broadcast by SB): if the video is still popular, continue the broadcast; otherwise, terminate the SB broadcast after all the dependent playbacks end, mark the waiting queue as an LAW queue, and return the channels to the channel pool.
Case 2 (video currently served from an LAW queue): if the video has become popular and K channels are available, initiate the SB broadcast (the video is assumed to require K logical channels); otherwise, mark the waiting queue as an LAW queue.
35
Performance Model
• 100 videos (120 min. each)
• Client behavior follows:
– the Zipf distribution (z = 0.1 ~ 0.9) for the choice of videos,
– the Poisson distribution for arrival times,
– popularity changing gradually every 5 min. in the dynamic environment,
– for tolerable waiting time, a mean of 5 min. and a standard deviation of 1 min.
• Performance metrics:
– Defection rate,
– Average access latency,
– Fairness, and
– Throughput.
36
LAW vs. MFQ
[Figure: three plots of unfairness comparing MFQ and LAW, (a) varying the request arrival rate (5–30 requests/min.), (b) varying the server capacity (300–900 channels), and (c) varying the skew factor Z (0.1–0.5).]
37
AHA vs. MFQ-SB-n
[Figure: four plots comparing MFQ-SB-n and AHA as server capacity varies from 600 to 1,800 channels: average latency (min.), throughput, defection rate (%), and unfairness.]
38
Challenges – Conflicting Goals
• Low Latency: requests must be served immediately.
• Highly Efficient: each multicast must still be able to serve a large number of clients.
39
Some Solutions
• Application level:
– Piggybacking
– Patching
– Chaining
• Network level:
– Caching Multicast Protocol (Range Multicast)
40
Piggybacking [Golubchik96]
[Figure: three in-progress streams C, B, A; a later stream (new arrivals) is sped up by 5% and an earlier one (departures) slowed by 5% until they merge.]
• Slow down an earlier service and speed up the new one to merge them into one stream.
• Limited efficiency due to long catch-up delay.
• Implementation is complicated.
41
Patching
[Figure: a regular multicast of the video serves client A.]
42
Proposed Technique: Patching
[Figure: client B arrives t time units into the regular multicast; B receives the missed prefix on a patching stream while buffering the ongoing regular multicast in the video player buffer. The skew point is the offset t between the two streams.]
43
Proposed Technique: Patching
[Figure: at time 2t, client B plays from its buffer while still receiving the regular multicast; the skew point is absorbed by the client buffer.]
44
Client Design
[Figure: the video server transmits a regular multicast and a patching multicast. Client A uses one data loader (Lr) for the regular stream; clients B and C each use two loaders (Lr and Lp) plus a buffer, feeding the Video Player from the regular and patching streams.]
45
Server Design
Server must decide when to schedule a regular stream or a patching stream
[Figure: timeline of requests A(r), B(p), C(p), D(p), E(r), F(p), G(p); each regular stream (r) and the patching streams (p) that follow it form one multicast group.]
46
Two Simple Approaches
• If no regular stream for the same video exists, a new regular stream is scheduled
• Otherwise, two policies can be used to make decision: Greedy Patching and Grace Patching
47
Greedy Patching
Patching stream is always scheduled
[Figure: under Greedy Patching, clients B and C always patch against A's regular stream; as the skew grows beyond the buffer size, the shared data shrink relative to the video length.]
48
Grace Patching
If the client buffer is large enough to absorb the skew, a patching stream is scheduled; otherwise, a new regular stream is scheduled.
[Figure: client B, arriving within the buffer size, patches against A's regular stream and shares data with it; client C, arriving later, starts a new regular stream.]
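The Grace Patching decision reduces to a simple threshold test; a sketch (names are illustrative):

```python
def grace_schedule(request_time, last_regular_start, buffer_len):
    # Patch only if the client's buffer can absorb the skew between the
    # request and the ongoing regular stream; otherwise start fresh.
    if last_regular_start is None:
        return "regular"
    skew = request_time - last_regular_start
    return "patch" if skew <= buffer_len else "regular"

assert grace_schedule(3, 0, 5) == "patch"    # skew of 3 fits a 5-min buffer
assert grace_schedule(8, 0, 5) == "regular"  # skew too large; new stream
assert grace_schedule(8, None, 5) == "regular"
```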
49
Local Distribution Technologies
[Figure: video servers on an ATM or SONET backbone network connect through switches to local distribution networks serving clients.]
– ADSL (Asymmetric Digital Subscriber Line): currently 8 Mbps in one direction, and eventually speeds as high as 50 Mbps.
– HFC (Hybrid Fiber Coax): current 300–450 MHz coax cables are replaced by 750 MHz coax cable to achieve a total of 2 Gbps.
50
Performance Study
• Compared with conventional batching
• Maximum Factored Queue (MFQ) is used
• Two scenarios are studied:
– No defection: average latency
– Defection allowed: average latency, defection rate, and unfairness
51
Simulation Parameters

Parameter                      Default   Range
Number of videos               100       N/A
Video length (minutes)         90        N/A
Server bandwidth (streams)     1,200     400–1,800
Client buffer (min. of data)   5         0–10
Request rate (requests/min.)   50        10–90
Video access skew factor       0.7       N/A
Number of requests             200,000   N/A
52
Effect of Server Bandwidth
[Figure: average latency (seconds) vs. server communication bandwidth (400–1,800 streams) for Conventional Batching, Greedy Patching, and Grace Patching; client buffer 5 minutes, request rate 50 arrivals/minute, no defection.]
53
Effect of Client Buffer
[Figure: average latency (seconds) vs. client buffer size (0–10 minutes of data) for Conventional Batching, Greedy Patching, and Grace Patching; server bandwidth 1,200 streams, request rate 50 arrivals/minute, no defection.]
54
Effect of Request Rate
[Figure: average latency (seconds) vs. request rate (10–110 requests/minute) for Conventional Batching, Greedy Patching, and Grace Patching; server bandwidth 1,200 streams, client buffer 5 minutes, no defection.]
55
Optimal Patching
[Figure: timeline of requests A(r), B(p), C(p), D(p), E(r), F(p), G(p); the patching window bounds which patching requests join each multicast group.]
What is the optimal patching window ?
56
Optimal Patching Window
• D is the mean total amount of data transmitted by a multicast group.
• Minimize the server bandwidth requirement, D/W, under various values of the patching window W.
[Figure: regular stream A spans the video length; patching streams scheduled within window W are bounded by the client buffer size.]
57
Optimal Patching Window
• Compute D, the mean amount of data transmitted for each multicast group.
• Determine the average time duration of a multicast group.
• The server bandwidth requirement is D divided by this average duration, which is a function of the patching window.
• Find the patching window that minimizes the bandwidth requirement.
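The procedure can be illustrated with a deliberately simplified cost model (one regular stream of length L per group, plus roughly rate·W patches of mean length W/2, over a group lifetime of about W + 1/rate; this toy model is an assumption for illustration, not the exact derivation):

```python
def bandwidth(w, video_len, rate):
    # Mean server bandwidth (in streams) for a patching window w under
    # the simplified Poisson-arrival cost model described above.
    data = video_len + rate * w * (w / 2)
    return data / (w + 1 / rate)

video_len, rate = 90.0, 50.0  # 90-minute video, 50 requests/minute
best_w = min((w / 10 for w in range(1, 901)),
             key=lambda w: bandwidth(w, video_len, rate))
assert 0 < best_w < video_len                  # the optimum is interior
assert bandwidth(best_w, video_len, rate) < bandwidth(video_len, video_len, rate)
```

Even in this toy model the optimal window comes out small relative to the video length, matching the intuition that patches should stay short.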
58
Candidates for Optimal Patching Window
59
Concluding Remarks
• Unlike conventional multicast, requests can be served immediately under patching
• Patching makes multicast more efficient by dynamically expanding the multicast tree
• Patching streams usually deliver only the first few minutes of video data
• Patching is very simple and requires no specialized hardware
60
Patching on the Internet
• Problem:
– The current Internet does not support multicast
• A Solution:
– Deploying an overlay of software routers on the Internet
– Multicast is implemented on this overlay using only IP unicast
61
Content Routing
[Figure: an overlay of software routers (Root Router, Routers A–E) rooted at the server. A client's Find message travels through the overlay (Find (1), Find (2), …) until it reaches a router on an existing distribution path, which then forwards the video stream to the client.]
Each router forwards its Find messages to other routers in a round-robin manner.
62
Removal of an Overlay Node
[Figure: overlay tree rooted at the server with routers A–G and a client; before adjustment, node F sits between its parent and its children; after adjustment, F's children are attached directly to F's parent.]
Inform the child nodes to reconnect to the grandparent.
63
Failure of Parent Node
[Figure: overlay tree before and after adjustment; parent node F fails and its subtree reconnects.]
– Data stop coming from the parent.
– Reconnect to the server.
64
Slow Incoming Stream
[Figure: overlay tree before and after adjustment; a node receiving a slow incoming stream reconnects.]
Reconnect upward to the grandparent.
65
Downward Reconnection
[Figure: overlay tree of routers A–G before and after adjusting a slow link.]
• When reconnection reaches the server, future reconnection of this link goes downward.
• Downward reconnection is done through a sibling node selected in a round-robin manner.
• When downward reconnection reaches a leaf node, future reconnection of this link goes upward again.
66
Limitation of Patching
• The performance of Patching is limited by the server bandwidth.
• Can we scale the application beyond the physical limitation of the server ?
67
Chaining
• Using a hierarchy of multicasts.
• Clients multicast data to other clients downstream.
• Demand on the server bandwidth is substantially reduced.
[Figure: with dedicated channels, the video server needs 7 video streams; with multicast, 3 video streams serve batches 1–3; with chaining, only one video stream leaves the server, and each client acts as a network cache for a virtual batch of clients downstream.]
68
Chaining
– Highly scalable and efficient
– But implementation is a challenge
[Figure: the video server streams to client A, which caches the data on disk and forwards it to client B; B in turn forwards to client C. Each client renders the video to its own screen.]
69
Scheduling Multicasts
• Conventional Multicast
I State: The video has no pending requests.
Q State: The video has at least one pending request.
• Chaining
C State: Until the first frame is dropped from the multicast tree, the tree continues to grow and the video stays in the C state.
[State diagrams: a request arriving in I moves the video to Q; granting resources returns it to I (conventional multicast) or moves it to C (chaining); requests arriving in C keep it there, and dropping the first frame returns the video to I.]
70
Enhancement
• When resources become available, the service begins for all the pending requests except for the “youngest” one.
• As long as new requests continue to arrive, the video remains in the E state.
• If the arrival of the requests momentarily discontinues for an extended period of time, the video transits into the C state after initiating the service for the last pending request.
E State:
[State diagram: I moves to Q on a request arrival; Q moves to E when resources are granted; E stays in E while requests continue to arrive; E moves to C after the last pending request is served; C returns to I when the first frame is dropped.]
• This strategy returns to the I state much less frequently.
• It is less demanding on the server bandwidth.
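The enhanced state machine can be written out as a transition table (a sketch driven by the events named above):

```python
# States: I (idle), Q (queued), E (enhanced serving), C (chaining tree grows)
TRANSITIONS = {
    ("I", "request"): "Q",
    ("Q", "request"): "Q",
    ("Q", "grant"): "E",              # serve all but the youngest request
    ("E", "request"): "E",
    ("E", "serve_last"): "C",         # arrivals paused: serve the last one
    ("C", "request"): "C",
    ("C", "drop_first_frame"): "I",   # tree can no longer be joined
}

def step(state, event):
    return TRANSITIONS[(state, event)]

s = "I"
for ev in ("request", "grant", "request", "serve_last", "drop_first_frame"):
    s = step(s, ev)
assert s == "I"  # the full cycle returns to idle
```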
71
Advantages of Chaining
• Requests do not have to wait for the next multicast.
– Better service latency
• Clients can receive data from the expanding multicast hierarchy instead of the server.
– Less demanding on server bandwidth
• Every client that uses the service contributes its resources to the distributed environment.
– Scalable
72
Chaining is Expensive ?
• Each receiving end must have caching space.
• 56 Mbytes can cache five minutes of MPEG-1 video
• The additional cost can easily pay for itself in a short time.
73
Limitation of Chaining
• It only works for a collaborating environment
i.e., the receiving nodes are on all the time
• It conserves server bandwidth, but not network bandwidth.
74
Another Challenge
• Can a multicast deliver the entire video to all the receivers who may subscribe to the multicast at different times ?
• If we can achieve the above capability, we would not need to multicast too frequently.
75
Range Multicast [Hua02]
• Deploying an overlay of software routers on the Internet
• Video data are transmitted to clients through these software routers
• Each router caches a prefix of the video streams passing through.
• This cached prefix may be used to provide the entire video content to subsequent clients arriving within a buffer-size period.
76
Range Multicast Group – Caching Multicast Protocol (CMP)
[Figure: clients C1–C4 join the server stream through routers R1–R8 (rooted at the video server) at times 0, 7, 8, and 11; each router caches a prefix of the stream, so every client is served the full video from a nearby cache.]
• Four clients join the same server stream at different times without delay.
• Each client sees the entire video.
Buffer Size: Each router can cache 10 time units of video data.
Assumption: No transmission delay
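The example can be checked with a tiny simulation (a sketch under the slide's assumptions; each successful join leaves a fresh prefix cache behind in some router along its path):

```python
def simulate_joins(join_times, cache_units):
    # A join succeeds if some cached prefix is recent enough to still
    # cover position 0 of the video; the first join starts the stream.
    cached_starts = []
    served = []
    for t in sorted(join_times):
        ok = not cached_starts or any(t - s <= cache_units for s in cached_starts)
        served.append(ok)
        if ok:
            cached_starts.append(t)
    return served

# Joins at times 0, 7, 8, 11 with 10-unit router caches: all succeed,
# and every client receives the entire video.
assert simulate_joins([0, 7, 8, 11], 10) == [True, True, True, True]
```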
77
Multicast Range
• All members of a conventional multicast group share the same play point at all times.
– They must join at the multicast time.
• Members of a range multicast group can have a range of different play points.
– They can join at their own time. Multicast range at time 11: [0, 11].
[Figure: the same topology as the previous slide; clients joining at times 0, 7, 8, and 11 hold play points spanning the range [0, 11].]
78
Network Cache Management
• Initially, a cache chunk is free.
• When a free chunk is dispatched for a new stream, the chunk becomes busy.
• A busy chunk becomes hot if its content matches a new service request.
[State diagram: a free chunk becomes busy when dispatched for a new stream; a busy chunk becomes hot if a service request arriving before the chunk is full matches its content; a hot chunk returns to free when the last service ends; a busy chunk returns to free when replaced by a new stream.]
79
CMP vs. Chaining
[Figure: the same overlay of ten routers (R1–R10) serving clients C1–C4, which join at times 0, 7, 8, and 11, under Chaining and under CMP.]
Assumption: Each router has one chunk of storage space capable of caching 10 time units of video.
80
CMP vs. Proxy Servers
• Proxy servers are placed at the edge of the network to serve local users; CMP routers are located throughout the network for all users to share.
• Proxy servers are managed autonomously; the CMP router caches are seen collectively as a single unit.
81
CMP vs. Proxy Servers
• With proxy servers, popular data are heavily duplicated if we cache long videos; CMP routers cache only a small leading portion of the video passing through.
• With proxy servers, caching long videos is not advisable, and much data must still be obtained from the server; with CMP, the majority of the data are obtained from the network.
82
VCR-Like Interactivity
• Continuous Interactive functions– Fast forward– Fast rewind– Pause
• Discontinuous Interactive functions– Jump forward– Jump backward
Useful for many VoD applications
VCR Interaction Using Client Buffer
[Figure: the client buffer holds a 20-unit window of the video stream around the play point (N to N+20). Play advances the window (N+2 to N+22, N+4 to N+24); Pause holds the play point while the window fills; 4X Fast Forward and Jump Backward move the play point within the buffered window (N+6 to N+26).]
84
Interaction Using Batching [Almeroth96]
• Requests arriving during a time slot form a multicast group
• Jump operations can be realized by switching to an appropriate multicast group
• Use an emergency stream if a destination multicast group does not exist
[Figure: requests arriving in each batching period form multicast groups over time; a Jump switches the client to another group, with an emergency stream bridging the gap when no destination group exists.]
Continuous Interactivity under Batching
• Pause:
– Stop the display.
– Return to normal play as in Jump.
• Fast Forward:
– Fast forward the video frames in the buffer.
– When the buffer is exhausted, return to normal play as in Jump.
• Fast Rewind:
– Same as fast forward, but in the reverse direction.
SAM (Split and Merge) Protocol [Liao97]
• Uses 2 types of streams: S streams for normal multicast and I streams for interactivity.
• When a user initiates an interactive operation:
– Use an I channel to interact with the video.
– When done, use the I channel as a patching stream to join an existing multicast.
– Return the I channel.
Advantage: Unrestricted fast forward and rewind
Disadvantage: I streams require substantial bandwidth
87
Resuming Normal Play in SAM
[Figure: candidate S streams at different offsets from the client's resume point. A stream that has not yet broadcast segment 6 is ineligible (segment 6 is in the future); a stream whose skew d exceeds the buffer size is ineligible (patching cannot help); the targeted S stream leaves enough buffer to cache segments 8 and 9.]
• Use the I stream to download segments 6 and 7, and render them onto the screen.
• At the same time, join the target multicast and cache the data, starting from segment 8, in a local buffer.
88
Interaction with Broadcast Video
• The interactive techniques developed for Batching can also be used for Staggered Broadcast
• However, Staggered Broadcast does not perform well
89
Client Centric Approach (CCA)
• Server broadcasts each segment at the playback rate.
• Clients use c loaders.
• Each loader downloads its streams sequentially; e.g., the i-th loader is responsible for segments i, i+c, i+2c, i+3c, …
• Only one loader is used to download all the equal-size W-segments sequentially.
[Figure: channels 1 through K, with segments arranged in groups of c (C = 3: clients have three loaders); the sizes of the largest segments are capped at W.]
The segment sizes are generated by the following recursive function (reconstructed from a garbled slide):

f(n) = 1           if n = 1
       2·f(n−1)    if n mod c ≠ 1
       f(n−1)      if n mod c = 1
90
CCA is Good for Interactivity
• Segments in the same group are downloaded at the same time
– Facilitate fast forward
• The last segment of a group is of the same size as the first segment of the next group
– Ensure smooth continuous playback after interactivity
[Figure: after a Jump, Skyscraper Broadcasting (segments of sizes 1, 2, 2, 5, 5, 12, 12 in groups Gr1–Gr4, downloaded by the Odd and Even Loaders) can land the client at an actual destination point far from the desired destination point, with missing data in between: it does not guarantee smooth playback. CCA (groups 1–3 downloaded by Loaders 1–3) always guarantees smooth playback; the actual destination point falls on a group boundary near the desired destination.]
91
Broadcast-based Interactive Technique (BIT) [Hua02]
[Figure: regular channels Cr1 … CrKr broadcast the video's segments in Ki groups of width W; for each group, an interactive channel (Ci1 … CiKi) broadcasts a compressed version of the data in the group.]
92
BIT
• Two buffers:
– Normal Buffer
– Interactive Buffer
• When the Interactive Buffer is exhausted, the client must resume normal play.
[Flowchart: during normal play, the client renders the next frame from the Normal Buffer and loads the appropriate group to keep the resume point near the “middle” of the Normal Buffer, until the end of the video. When an interaction is initiated, a continuous operation renders frames from the Interactive Buffer until the user resumes normal play or the Interactive Buffer is exhausted; a jump whose destination point is in the Normal Buffer goes there directly, otherwise it is handled through the Interactive Buffer.]
93
BIT – Resume-Play Operation
[Figure: eight cases relating the broadcast points of segments i, i+1, and i+2 to the desired and actual destination points.]
• Three segments are being downloaded simultaneously.
• The actual destination point is chosen from among the frames at the broadcast point to ensure continuous playback.
94
BIT - User Behavior Model
[Figure: a state model with states Play (mean duration mp), Fast Reverse (mfr), Fast Forward (mff), Pause (mpause), Jump Forward (mjf), and Jump Backward (mjb); from Play, each interaction x is entered with probability Px, and every interaction returns to Play with probability 1.]
• mx: duration of action x
• Px: probability of issuing action x
• Pi: probability of issuing an interaction
• mi: duration of the interaction
• mff = mfr = mpause = mjf = mjb
• Ppause = Pff = Pfr = Pjf = Pjb = Pi/5
• dr = mi/mp: the interaction duration ratio
Performance Metrics
• Percentage of Unsuccessful Actions
– An interaction fails if the buffer cannot accommodate the operation.
– E.g., a long-duration fast forward pushes the play point off the Interactive Buffer.
• Average Percentage of Completion
– Measure the degree of incompleteness
– E.g., if a 20-second fast forward is forced to resume normal play after 15 seconds, the Percentage of Completion is 15/20, or 75%.
96
BIT – Simulation Results
[Figure: four plots comparing Active Buffer Management (A.B.M.) and BIT: Average Percentage of Completion and Percentage of Unsuccessful Actions vs. duration ratio (0.5–3.5), and the same two metrics vs. regular buffer size (1–7) for d_ratio = 1 and d_ratio = 1.5.]
97
Support Client Heterogeneity
• Using multi-resolution encoding
• Bandwidth Adaptor
• HeRO Broadcasting
98
Multi-resolution Encoding
• Encode the video data as a series of layers.
• A user can individually mould its service to fit its capacity.
• A user keeps adding layers until congestion occurs, then drops the highest layer.
Drawback: Compromises the display quality.
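The receiver-driven add/drop loop is straightforward to sketch:

```python
def adapt_layers(current, congested, max_layers):
    # Add one layer while uncongested; drop the highest layer otherwise.
    if congested:
        return max(current - 1, 1)
    return min(current + 1, max_layers)

layers = 1
for congested in (False, False, False, True):
    layers = adapt_layers(layers, congested, max_layers=5)
assert layers == 3  # climbed to 4 layers, then dropped the highest one
```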
99
Bandwidth Adaptors
[Figure: a server-end adaptor reduces the video server's output to the average bandwidth; client-end adaptors further reduce it step by step (less bandwidth, even less bandwidth) for less capable clients.]
Advantage: All clients enjoy the same quality display
100
Requirements for an Adaptor
• An adaptor dynamically transforms a given broadcast into another less demanding one
• The segmentation scheme must allow easy transformation of a broadcast into another
• CCA segmentation technique has this property
101
Two Segmentation Examples
102
Adaptation (1)
[Figure: the server runs one sender routine per segment (1 … Ks), each broadcasting its segment on a channel. The adaptor's Loader Routine 1 receives chunks 69, 70, 71, … of segment 1 into its buffer space; for each arriving chunk (e.g., chunk 68) it calls insertChunk to decide whether to buffer the chunk (Yes) or ignore it (No).]
The adaptor downloads from all broadcast channels simultaneously.
103
Adaptation (2)
[Figure: an adaptor sender routine for segment Ka retrieves chunks 367, 368, 369, … from the buffer and broadcasts them downstream; for each chunk it calls deleteChunk (e.g., deleteChunk(370)) to decide whether to just send the chunk or to send it and delete it from the buffer.]
• Each sender routine retrieves data chunks from the buffer and broadcasts them to the downstream.
• For each chunk, the sender routine calls deleteChunk to decide if the chunk can be deleted from the buffer.
104
Buffer Management
• insertChunk implements an As Late As Possible policy, i.e.,
– If another occurrence of this chunk will be available from the server before it is needed, then ignore this one, else buffer it.
• deleteChunk implements an As Soon As Possible policy, i.e.,
– Determine the next time when the chunk will need to be broadcast to the downstream.
– If this moment comes before the availability of the chunk at the server, then keep it in storage, else delete it.
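The two policies are simple predicates; a sketch (function and parameter names are illustrative):

```python
def should_insert(next_server_broadcast, needed_downstream_at):
    # As-Late-As-Possible insertion: buffer the chunk only if the server
    # will NOT rebroadcast it in time for the downstream schedule.
    return next_server_broadcast > needed_downstream_at

def should_keep(needed_downstream_at, next_server_broadcast):
    # As-Soon-As-Possible deletion: keep the chunk only while it is
    # needed downstream before the server offers it again.
    return needed_downstream_at < next_server_broadcast

assert should_insert(50, 40)       # server copy arrives too late: buffer it
assert not should_insert(30, 40)   # server rebroadcast is in time: ignore it
assert should_keep(30, 50)         # needed before the server copy: keep it
```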
105
The Adaptor Buffer
• Computation is not intensive.
• It is only performed for the first chunk of the segment, i.e.,
– If this initial chunk is marked for caching, so will be the rest of the segment.
• Same thing goes for deletion.
106
The Start-up Delay
The start-up delay is the broadcast period of the first segment on the server.
107
HeRO – Heterogeneous Receiver-Oriented Broadcasting
• Allows receivers of various communication capabilities to share the same periodic broadcast.
• All receivers enjoy the same video quality.
• Bandwidth adaptors are not used.
108
HeRO – Data Segmentation
• The size of the i-th segment is 2^(i-1) times the size of the first segment
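With this geometric progression, n segments cover a video 2^n - 1 times the length of the first segment. A small sketch (a hypothetical helper for illustration, not part of HeRO itself):

```python
# Sketch of HeRO's geometric segmentation: segment i is 2**(i-1) times
# the length of the first segment, so n segments cover 2**n - 1 units.

def hero_segment_sizes(n_segments, first_len=1):
    return [first_len * 2 ** (i - 1) for i in range(1, n_segments + 1)]

# e.g., 6 segments have relative sizes 1, 2, 4, 8, 16, 32 (63 units total),
# and the start-up delay is bounded by the first segment's broadcast period.
```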
109
HeRO – Download Strategy
• The number of channels needed depends on the time slot in which the service request arrives
• Loader i downloads segments i, i+C, i+2C, i+3C, etc. sequentially, where C is the number of loaders available.
[Figure: the HeRO broadcast schedule over one global period]
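The round-robin assignment above can be sketched as follows (a hypothetical helper for illustration):

```python
# Sketch of HeRO's download strategy: with C loaders, loader i fetches
# segments i, i+C, i+2C, ... in order.

def loader_schedule(loader, n_segments, n_loaders):
    return list(range(loader, n_segments + 1, n_loaders))

# e.g., with 6 segments and 2 loaders, loader 1 fetches segments 1, 3, 5
# and loader 2 fetches segments 2, 4, 6.
```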
110
HeRO – Regular Channels
• The first user can download from six channels simultaneously
Request 1
111
HeRO – Regular Channels
• The second user can download from two channels simultaneously
Request 2
112
Worst-Case for Clients with 2 loaders
• The worst-case latency is 11 time units
• The worst case appears because the broadcast periods coincide at the end of the global period
[Figure: Request 2 arrives where the broadcast periods coincide, causing an 11-time-unit wait]
113
Worst-Case for Clients with 3 loaders
• The worst-case latency is 5 time units
• The worst case appears because the broadcast periods coincide at the end of the global period
[Figure: a request arrives where the broadcast periods coincide, causing a 5-time-unit wait]
114
Observations of Worst-Cases
• For a client with a given bandwidth, the time slots at which it can start the video are not uniformly distributed over the global period.
• The non-uniformity varies over the global period depending on the degree of coincidence among the broadcast periods of various segments.
115
Observations of Worst-Cases (cont…)
• The worst non-uniformity occurs at the end of each global period when the broadcast periods of all segments coincide.
• The non-uniformity causes long service delays for clients with less bandwidth.
We need to minimize this coincidence to improve the worst case.
116
• We broadcast the last segment on one more channel, but with a time shift of half its size.
• This offers more opportunities to download the last segment and, above all, eliminates every coincidence with the previous segments.
[Figure: the regular group of channels plus one shifted channel (adding one more channel)]
117
Shifted Channels
• To reduce service latency for less capable clients, broadcast the longest segments on a second channel with a phase offset of half their size.
[Figure: the HeRO broadcast schedule with time unit t = D1 over slots 0–32. Channels 1–4 carry segments 1–4; segment 5 is carried on channels 5a and 5b (5b shifted by D5/2) and segment 6 on channels 6a and 6b (6b shifted by D6/2). The per-slot row reads: 4 1 2 2 3 2 2 3 4 2 2 2 3 2 2 3 4 1 2 2 3 2 2 3 4 2 2 2 3 2 2 3.]
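The effect of the phase shift can be sketched numerically: broadcasting the same segment on a second channel, offset by half its length, doubles its start opportunities. The helper below is illustrative only, not part of HeRO.

```python
# Sketch of the shifted-channel idea: a segment of length seg_len restarts
# every seg_len units; a second channel offset by seg_len // 2 interleaves
# extra start times between them.

def broadcast_starts(seg_len, horizon, shift=0):
    starts, t = [], shift
    while t < horizon:
        starts.append(t)
        t += seg_len
    return starts

# e.g., for a segment of length 32 over a 64-unit window:
# channel a starts at 0 and 32; channel b (shift 16) starts at 16 and 48,
# so the segment can be picked up every 16 units instead of every 32.
```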
118
HeRO – Experimental Results
• Under a homogeneous environment, HeRO is
– very competitive in service latency compared with the best protocols to date
– the most efficient protocol in terms of client buffer space
• HeRO is the first periodic broadcast technique designed to address heterogeneity in receiver bandwidth
• Less capable clients enjoy the same playback quality
119
2-Phase Service Model (2PSM)
Browsing Videos in a Low Bandwidth Environment
120
Search Model
• Use similarity matching (e.g., keyword search) to look for the candidate videos.
• Preview some of the candidates to identify the desired video.
• Apply VCR-style functions to search for the video segments.
121
Conventional Approach
Advantage: Reduces wait time
1. Download S0
2. Download S1 while playing S0
3. Download S2 while playing S1
...
Disadvantage: Unsuitable for video libraries
[Figure: the server streams segments S0, S1, S2, S3, ... to the client, which displays S0, then S1, then S2 in a pipeline over time]
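The pipelined delivery in steps 1–3 can be sketched as a schedule (an illustrative helper, not a real player):

```python
# Sketch of conventional pipelined delivery: after the initial download of
# S0, each segment i+1 is fetched while segment i plays, so the client's
# wait is only the download time of S0.

def pipeline_schedule(n_segments):
    events = ["download S0"]
    for i in range(n_segments - 1):
        events.append(f"download S{i + 1} while playing S{i}")
    events.append(f"play S{n_segments - 1}")
    return events
```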
122
Search Techniques
• Use extra preview files to support the preview function.
– Requires more storage space.
– Downloading the preview file adds delay to the service.
• Use separate fast-forward and fast-reverse files to provide the VCR-style operations.
– Requires more storage space.
– The server can become a bottleneck.
123
Challenges
How to download the preview frames for FREE?
– No additional delay
– No additional storage requirement
How to support VCR operations without VCR files?
– No overhead for the server
– No additional storage requirement
124
2PSM – Preview Phase
[Figure: a grid of GOF numbers 0–191 for the video, annotated with the step (1, 2, 3, or 4) during which each GOF's preview frames are downloaded. GOFs spanning the whole video are available for previewing after 3 steps.]
The preview quality improves gradually.
125
2PSM – Playback Phase
[Figure: server and client timelines. The video consists of playback units PU0–PU6, each with a left part Li and a right part Ri. Some parts are downloaded during the initialization phase and the remainder during the playback phase, while the client displays PU0, PU1, ..., PU6 in sequence over time t.]
126
Remarks
1. It requires no extra files to provide the preview feature.
2. Downloading the preview frames is free.
3. It requires no extra files to support the VCR functionality.
4. Each client manages its own VCR-style interaction; the server is not involved.