Experiences with High-Definition Video Conferencing


Transcript of Experiences with High-Definition Video Conferencing

Page 1: Experiences with High-Definition Video Conferencing

Experiences with High-Definition Video Conferencing

Colin Perkins, Alvaro Saurin
University of Glasgow, Department of Computing Science

Ladan Gharai, Tom Lehman
University of Southern California, Information Sciences Institute

Page 2: Experiences with High-Definition Video Conferencing

Copyright © 2006 University of Glasgow. All rights reserved.

Talk Outline

• Scaling multimedia conferencing
• The UltraGrid system
  – Hardware requirements
  – Software architecture
• Experimental performance
• Challenges in congestion control
• Conclusions

Page 3: Experiences with High-Definition Video Conferencing


Scaling Multimedia Conferencing

• Given advances in system power, network bandwidth and video cameras, why are video conferencing environments so limited?

Why are we stuck with low quality images…?

[Image: a 352×288 (CIF) video frame]

Page 4: Experiences with High-Definition Video Conferencing


Why do conferencing systems look like this?

Page 5: Experiences with High-Definition Video Conferencing


…and not like this?

Page 6: Experiences with High-Definition Video Conferencing


Research Objectives

• To explore the problems inherent in delivering high definition interactive multimedia over IP:
  – Related to the protocols
  – Related to the network
  – Related to the end-system

• To push the limits of:
  – Image resolution, frame rate and quality
  – Network and end system capacity

• To demonstrate the ability of best effort IP networks to support high quality, high rate media with effective congestion control

Page 7: Experiences with High-Definition Video Conferencing


UltraGrid: High Definition Conferencing

Timeline:
• 1999: HDTV work at ISI starts
• Nov. 2001: Demo at SC'01, Denver (24 bit colour, 45 fps ⇒ 650 Mbps)
• Jan. 2002: Public code release (BSD-style open source license)
• Nov. 2002: Demo at SC'02, Baltimore (24 bit colour, 60 fps ⇒ 1.0 Gbps)
• Apr. 2005: Full uncompressed HDTV (30 bit colour, 60 fps ⇒ 1.2 Gbps)
• Sep. 2005: RFC 4175; demo at iGrid'05, San Diego
• Nov. 2005: Demo at SC'05, Seattle

Build an HDTV conferencing demonstrator:
• Standard protocols
  – RTP over UDP/IP
  – HDTV payload formats (RFC 4175, sketched below) & TFRC profile
  – Best effort congestion controlled delivery; no additional QoS
• Commodity networks
  – High performance IP networks: OC-48 or higher, competing with other IP traffic
  – Local area up to 10 gigabit Ethernet
• Commodity end systems
  – PC or similar workstation
  – HDTV capture and display

UltraGrid: the first HDTV conferencing system using commodity hardware
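For concreteness, a minimal sketch of the RFC 4175 payload header that carries uncompressed video over RTP; the header layout is from the RFC, but the packing choices in the example are illustrative and not taken from the UltraGrid source.

```python
import struct

def rfc4175_header(ext_seq_num, segments):
    """Build the RFC 4175 payload header that follows the RTP header.

    segments: one (length, field, line_no, offset) tuple per (partial)
    scan line carried in the packet: length in bytes, field 0/1 for
    interlaced video, line number and pixel offset within the line.
    """
    hdr = struct.pack("!H", ext_seq_num & 0xFFFF)   # extended seq. number
    for i, (length, field, line_no, offset) in enumerate(segments):
        cont = 1 if i < len(segments) - 1 else 0    # more headers follow?
        hdr += struct.pack("!HHH",
                           length,
                           (field << 15) | (line_no & 0x7FFF),
                           (cont << 15) | (offset & 0x7FFF))
    return hdr

# Three full 720p scan lines of 8-bit 4:2:2 video (2 bytes/pixel,
# 2560 bytes each) fit in one of the 8800-byte jumbo frames used in
# the experiments reported later:
hdr = rfc4175_header(0, [(2560, 0, n, 0) for n in (100, 101, 102)])
```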

Page 8: Experiences with High-Definition Video Conferencing


Media Formats and Equipment

• Capture and transmit a range of video formats:
  – Standard definition video: IEEE 1394 + DV camera
  – High definition video: DVS HDstation or Centaurus capture card
    • 100 MHz PCI-X
    • 720p/1080i HDTV capture from SMPTE-292M
    • Approx. $6,000
• Video data rates up to 1.2 Gbps (see the arithmetic below)
• Chelsio T110 10-gigabit Ethernet
• Dual processor Linux 2.6 system

[Image: frame sizes compared — CIF (352×288), PAL (720×576), HDTV 720p (1280×720)]
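The quoted data rates follow from simple arithmetic. A sketch, assuming 4:2:2 chroma subsampling (two 8- or 10-bit components per pixel on average) and reading the 60 fps demos as 1080i, i.e. 60 fields and 30 full frames per second:

```python
def raw_rate_gbps(width, height, frames_per_sec, bits_per_pixel):
    """Uncompressed video payload rate, ignoring RTP/UDP/IP overhead."""
    return width * height * frames_per_sec * bits_per_pixel / 1e9

# With 4:2:2 sampling, "24 bit colour" (8 bits/component) averages
# 16 bits/pixel on the wire, and "30 bit colour" averages 20 bits/pixel.
print(raw_rate_gbps(1280, 720, 45, 16))    # SC'01 720p:   ~0.66 Gbps
print(raw_rate_gbps(1920, 1080, 30, 16))   # 8-bit 1080i:  ~1.0 Gbps
print(raw_rate_gbps(1920, 1080, 30, 20))   # 10-bit 1080i: ~1.24 Gbps
```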

Page 9: Experiences with High-Definition Video Conferencing


Media Formats and Equipment

• A variety of HDTV cameras are now available:
  – Broadcast quality cameras: generally expensive, ~$20,000
    • Panasonic AJ-HDC27F
    • Thomson LDK 6000
    • SMPTE-292M output ⇒ connect directly to UltraGrid, low latency
  – Consumer grade cameras: prices in the $3,000–5,000 range
    • Sony HVR-Z1E, HDR-FX1
    • JVC GY-HD-100U HDV Pro
    • No SMPTE-292M output ⇒ converter needed (e.g. AJA HD10A), higher latency
• Displays must accommodate:
  – 16:9 aspect ratio
  – 1280×720 progressive or 1920×1080 interlaced

Page 10: Experiences with High-Definition Video Conferencing


Software Architecture

• Classical media tool architecture
  – Video capture and display
  – Video codecs (DV and M-JPEG only at present; others can be added)
  – RTP
  – Adaptive video playout buffer (see the jitter estimation sketch below)
• Two interesting additions:
  – Congestion control over RTP
  – Sophisticated video sending buffer
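An adaptive playout buffer of this kind is typically driven by the RFC 3550 interarrival jitter estimate. A minimal sketch of that estimator; the class and the playout rule in the closing comment are illustrative, not UltraGrid's actual API:

```python
class JitterEstimator:
    """RFC 3550 interarrival jitter, in RTP timestamp units."""

    def __init__(self):
        self.jitter = 0.0
        self.prev_transit = None

    def update(self, rtp_timestamp, arrival_timestamp):
        # Relative transit time; sender and receiver clocks need not be
        # synchronised, since only differences between packets matter.
        transit = arrival_timestamp - rtp_timestamp
        if self.prev_transit is not None:
            d = abs(transit - self.prev_transit)
            self.jitter += (d - self.jitter) / 16.0   # EWMA, gain 1/16
        self.prev_transit = transit
        return self.jitter

# A playout buffer can then schedule frames at, for example,
#   playout_time = capture_time + base_delay + k * jitter
# trading added latency against the long tail of network timing noise.
```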

Page 11: Experiences with High-Definition Video Conferencing


Experimental Performance

[Diagram: wide area testbed — UltraGrid sender/receiver pairs in Seattle, WA (SC 2005) and Arlington, VA (ISI-East), fed by LDK 6000 and AJ-HDC27F cameras; 10 Gb/s Ethernet at the edges, OC-192 SONET/SDH backbone via Los Angeles, Houston and Chicago]

• Wide area HDTV tests on the Internet2 backbone
  – ISI-East ⇔ ISI-West
  – ISI-East ⇔ Denver (SC'01)
  – ISI-East ⇔ Seattle (SC'05)
• Demonstrated interactive low-latency uncompressed HDTV conferencing between ISI-East and Seattle at SC'05
  – Gigabit rate bi-directional video flows (tested using both HOPI and Abilene)
• Ongoing low-rate tests between ISI-East and Glasgow using 25 Mbps DV format video

Page 12: Experiences with High-Definition Video Conferencing


Page 13: Experiences with High-Definition Video Conferencing


[Graph: inter-packet interval, measured at receiver]

Experimental Performance

• Environment:
  – Seattle ⇔ ISI-East over Abilene; 14–18 November 2005
  – Best effort IP service, non-QoS enabled, shared with production traffic
  – 8,800 byte packets; 10 gigabit Ethernet w/jumbo frames; OC-192 WAN
• Packet loss:
  – Overwhelming majority of RTCP reports showed no packet loss
  – Occasional transient loss (≤0.04%) observed due to cross traffic (see the note below)
• Inter-packet interval:
  – Inter-packet interval (jitter) shows the expected sharp peak with a long tail
  – Network disrupts packet timing: not significant for the application
  – Playout jitter buffer compensates
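As a note on how loss this small shows up in RTCP: the 8-bit loss fraction field has a resolution of 1/256 ≈ 0.4%, so ≤0.04% loss is visible mainly through the cumulative lost-packet count rather than the per-report fraction. A sketch of the RFC 3550 loss fraction computation:

```python
def rtcp_loss_fraction(expected_interval, received_interval):
    """Loss fraction carried in RTCP receiver reports (RFC 3550):
    packets lost since the previous report, as an 8-bit fixed point
    fraction (units of 1/256) of those expected."""
    lost = expected_interval - received_interval
    if expected_interval == 0 or lost <= 0:
        return 0
    return (lost << 8) // expected_interval

# 0.04% loss rounds down to zero in this field...
print(rtcp_loss_fraction(100_000, 99_960))   # -> 0
# ...while the 24-bit cumulative lost count still records it.
```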

Page 14: Experiences with High-Definition Video Conferencing


Deployment Issues

• Good performance on Internet2 – usable today
  – Observe occasional loss due to transient congestion
• HDTV flows not TCP-friendly; cause transient disruption during loss periods
  – Cannot support large numbers of uncompressed HDTV flows
• But an active user community exists in well provisioned regions of the network (UltraGrid nodes in US, Canada, Korea, Spain, Czech Republic...)
• Two approaches to wider deployment:
  – Optical network provisioning and/or quality of service
    • E.g. Internet2 hybrid optical packet network (HOPI), also used for some tests
    • Possible, and solves the problem, but expensive and hard to deploy widely
    • Necessary for guaranteed-use deployments
  – Congestion control
    • Adapt the video transmission rate to match network capacity
    • Preferred end-to-end approach for incremental, on demand deployment
    • Necessary for safety, even if a QoS provisioned network is available

Page 15: Experiences with High-Definition Video Conferencing


Congestion Control for Interactive Video

• TCP not suitable for interactive video
  – Abrupt variations in sending rate
  – Couples congestion control and reliability
  – Too slow
• Obvious alternative: TCP-Friendly Rate Control (TFRC)
  – Well specified, widely studied rate-based congestion control
  – Aims to provide relatively smooth variations in sending rate
  – Doesn't couple congestion response and reliability
  – See the rate equation sketch below
• Two implementation choices:
  – Use DCCP with CCID 3
    • DCCP implementations not mature
    • Deployment challenges due to firewalls
    • Not feasible to use at this time
  – Use an RTP profile for TFRC
    • Can be deployed in end systems only (running over UDP)
    • Easy to develop, deploy, debug and experiment with code
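TFRC sets its allowed sending rate from the TCP throughput equation of RFC 3448. A sketch; the parameter values in the example are illustrative, not measurements from these experiments:

```python
from math import sqrt

def tfrc_rate(segment_size, rtt, p, b=1):
    """Allowed sending rate in bytes/second from the TCP throughput
    equation used by TFRC (RFC 3448, section 3.1).

    segment_size -- packet size in bytes
    rtt          -- round-trip time in seconds
    p            -- loss event rate, 0 < p <= 1
    b            -- packets acknowledged per ACK (1 for TFRC)
    """
    if p <= 0:
        raise ValueError("equation only defined for p > 0")
    t_rto = 4 * rtt   # simplification recommended by RFC 3448
    denom = (rtt * sqrt(2 * b * p / 3)
             + t_rto * 3 * sqrt(3 * b * p / 8) * p * (1 + 32 * p * p))
    return segment_size / denom

# With 8800-byte packets, 100 ms RTT and 0.04% loss, the equation
# allows only ~43 Mbit/s -- one illustration of why gigabit-rate
# uncompressed HDTV flows are not TCP-friendly.
print(8 * tfrc_rate(8800, 0.100, 0.0004) / 1e6)
```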

Page 16: Experiences with High-Definition Video Conferencing


TFRC Implementation

• Rate based algorithm, clocking packets from the sending buffer (see the pacing sketch below)
• Sending buffer size chosen to respect the 150 ms one-way latency constraint (⇒ a couple of frames)
• Rate based control driving a queuing system:
  – Widely spaced (16 ms) bursts of data from the codec
  – Fast, smoothly paced transmission (~70 µs packet spacing)
• Mismatched adaptation rates:
  – TFRC ⇒ O(round-trip time)
  – Codec ⇒ O(inter-frame time)
  – Relies on buffering to align rates, varies codec rate ⇒ problematic for stability
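A minimal sketch of such a sending buffer, with the frame bound and pacing behaviour described above. The class is illustrative, not UltraGrid's implementation, and a real sender needs finer-grained timing than time.sleep to hit ~70 µs gaps (e.g. busy-waiting on a high resolution clock):

```python
import time
from collections import deque

class PacedSender:
    """Frames arrive in ~16 ms bursts from the codec; packets leave
    smoothly at the rate allowed by congestion control."""

    def __init__(self, send_packet, max_frames=2):
        self.send_packet = send_packet   # callback taking one packet
        self.frames = deque()
        self.max_frames = max_frames     # bounds queuing delay

    def enqueue_frame(self, packets):
        # Drop the oldest frame rather than queue more: stale video is
        # useless within a 150 ms one-way interactive latency budget.
        if len(self.frames) >= self.max_frames:
            self.frames.popleft()
        self.frames.append(list(packets))

    def send_next_frame(self, rate_bytes_per_sec):
        if not self.frames:
            return
        for pkt in self.frames.popleft():
            self.send_packet(pkt)
            # 8800 B at 1 Gbit/s => ~70 us between packets.
            time.sleep(len(pkt) / rate_bytes_per_sec)
```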

Page 17: Experiences with High-Definition Video Conferencing


TFRC Performance

[Graph: throughput with varying RTT]
• Transport protocol stable on large RTT paths, less stable on shorter paths

[Graph: desired vs. actual sending rate]
• Video rate can follow the congestion control rate, provided frame rate and RTT are similar

(Tests in dummynet: 100 ms RTT, 800 kbps bottleneck, 10 fps M-JPEG)

Page 18: Experiences with High-Definition Video Conferencing


Implications and Conclusions

• Well engineered IP networks can support very high performance interactive multimedia applications
  – The current Internet2 best effort IP service provides real-time performance suitable for gigabit rate interactive video when shared with other traffic
  – Transient congestion causes occasional transient packet loss, but recall that we added a gigabit rate real-time video flow to an existing network without re-engineering that network to support it
• Initial congestion control experiments raise more questions than they answer
  – Possible to implement, but more sophisticated codecs needed
  – Difficult to match codec and network rates; causes bursty behaviour
  – Impact on perceptual quality due to the implied quality variation unclear
  – Likely easier as video quality, frame-rate, and network bandwidth increase

Page 19: Experiences with High-Definition Video Conferencing


UltraGrid: A High Definition Collaboratory

http://ultragrid.dcs.gla.ac.uk/