THE CURRENT LANDSCAPE:


Eric Bailey, Peter Worth

ED 229 C Seminar in Learning, Design and Technology, Stanford University School of Education, January 21, 2004

THE CURRENT LANDSCAPE: Media Streams, a video annotation and editing system designed by Marc Davis, Brian Williams, and Golan Levin (1991-1997, Machine Understanding Group of the MIT Media Laboratory and Interval Research Corporation).


THE CURRENT LANDSCAPE: Media Streams is a system for annotating, retrieving, repurposing, and automatically assembling digital video.


WHAT ARE THEY SOLVING?


WHAT ARE THEY SOLVING? The problem of finding video information in a large and growing archive: examining how to annotate and describe video data in a way that is comprehensible to all people and searchable by computer.


WHAT ARE THEY SOLVING? Their intent is to give users who want to repurpose or recompose video materials access to them.


WHAT ARE THEY SOLVING? Current annotation is limited by the language of keyword choice, resulting in missed search opportunities. Often, keywords are not specific enough for searching video.


WHAT ARE THEY SOLVING? Current annotation has no universal guidelines. Annotation “language” is personal and varies from person to person.


HOW ARE THEY SOLVING IT?


HOW ARE THEY SOLVING IT? Video-annotation software that allows multiple annotations of the same clip, including annotations whose spans overlap to varying degrees.
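Read as a data model, this means a clip carries not one description but a set of time-spanned annotations that are free to overlap. A minimal Python sketch of that idea follows; the class and field names are our illustrative assumptions, not Media Streams' own data structures.

from dataclasses import dataclass, field

@dataclass
class Annotation:
    start: float   # seconds from the start of the clip
    end: float     # seconds from the start of the clip
    label: str     # what this span is said to depict

@dataclass
class Clip:
    name: str
    annotations: list = field(default_factory=list)

    def annotations_at(self, t: float):
        # Every annotation whose span covers time t; overlaps are allowed.
        return [a for a in self.annotations if a.start <= t < a.end]

clip = Clip("street-scene", [
    Annotation(0.0, 12.0, "setting: city street"),
    Annotation(3.0, 8.0, "character: cyclist"),   # overlaps the setting span
    Annotation(5.0, 8.0, "action: waving"),       # overlaps the cyclist span
])
print(clip.annotations_at(6.0))   # all three annotations apply at t = 6 s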


HOW ARE THEY SOLVING IT? To make search and annotation more reliable, Media Streams uses a system of visual icons that represent what is depicted in a video clip. The system can both read and write these computer-generated icons.


“…Media Timeline, on which iconic annotations of video are temporally indexed. Each stream in the Media Timeline contains annotations about a unique aspect of video content, such as settings, characters, objects, actions, camera motions, etc.”

Golan Levin, Principal Designer of Icon Visual Language
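One way to picture the Media Timeline described above is as a set of named streams, each holding temporally indexed icon annotations for one aspect of content. The sketch below is our illustration of that structure, not Media Streams' actual implementation; the stream names and icon labels are invented for the example.

from collections import defaultdict

class TimelineSketch:
    def __init__(self):
        # stream name -> list of (start, end, icon) tuples
        self.streams = defaultdict(list)

    def annotate(self, stream, start, end, icon):
        self.streams[stream].append((start, end, icon))

    def at(self, t):
        # What each stream says is depicted at time t.
        return {name: [icon for (s, e, icon) in entries if s <= t < e]
                for name, entries in self.streams.items()}

tl = TimelineSketch()
tl.annotate("settings", 0, 30, "kitchen")
tl.annotate("characters", 2, 20, "adult-female")
tl.annotate("actions", 5, 9, "pouring")
tl.annotate("camera", 0, 30, "static-medium-shot")
print(tl.at(6))   # every stream reports what it depicts at t = 6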


HOW ARE THEY SOLVING IT? Users select an individual icon or a combination of icons (a compound) to annotate a clip. Icons represent what is visually depicted in a scene, not the meaning of the scene.


“…Icon Space, an atemporal, hierarchically-indexed "dictionary" of iconic descriptors. The Icon Space incorporates utilities for icon construction and search.”

Golan Levin, Principal Designer of Icon Visual Language
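Putting the two slides above together: primitive icons sit in an atemporal hierarchy, compounds are built by combining primitives, and a search can match an annotation against a query icon or anything indexed beneath it. The hierarchy slice and helper names below are assumptions made for illustration only, not the Icon Space's real vocabulary.

class IconNode:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

    def descendants(self):
        # This icon plus everything indexed under it in the hierarchy.
        found = {self.name}
        for child in self.children:
            found |= child.descendants()
        return found

# A tiny slice of a hierarchy like the one the quotes describe.
animals = IconNode("animal", [IconNode("dog"), IconNode("cat")])

def compound(*icon_names):
    # A compound icon is an ordered combination of primitive icons.
    return tuple(icon_names)

annotation = compound("dog", "running")             # "a dog running"
matches = set(annotation) & animals.descendants()   # query: any animal
print(bool(matches))   # True: "dog" is indexed under "animal"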


HOW ARE THEY SOLVING IT? Searching the archive lets users set parameters for what they want to appear in a clip. Media Streams locates existing footage matching that description and can then recompose existing shots to create a clip that meets the desired parameters.
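As a rough picture of that retrieval-and-recomposition loop: the user states which icons should appear, matching shots are pulled from the annotated archive, and several shots can be strung together when no single one covers everything. The greedy sketch below illustrates the idea only; it is not the retrieval algorithm Media Streams actually uses, and the archive entries are invented.

archive = [
    {"clip": "shot-014", "icons": {"city-street", "cyclist", "daytime"}},
    {"clip": "shot-022", "icons": {"kitchen", "adult-female", "pouring"}},
    {"clip": "shot-031", "icons": {"city-street", "rain", "night"}},
]

def search(archive, wanted):
    # Shots whose icon annotations contain at least one wanted icon.
    return [shot["clip"] for shot in archive if wanted & shot["icons"]]

def recompose(archive, wanted):
    # Greedily collect shots until every wanted icon is covered.
    sequence, remaining = [], set(wanted)
    for shot in archive:
        covered = remaining & shot["icons"]
        if covered:
            sequence.append(shot["clip"])
            remaining -= covered
        if not remaining:
            break
    return sequence

print(search(archive, {"city-street"}))              # ['shot-014', 'shot-031']
print(recompose(archive, {"city-street", "rain"}))   # ['shot-014', 'shot-031']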


“…cross-section of the icon hierarchies, including: Historic Period, Calendar Time, Time of Day, Functional Building Space, Topological Relationships, … Character Body Types, Occupations, Tools, Food, Animals, Weather, and a variety of other objects and cinematographic relationships.”

Golan Levin, Principal Designer of Icon Visual Language


WHAT KEY ISSUES WERE FOUND?


WHAT KEY ISSUES WERE FOUND? Syntax and semantics: the meaning of video information is constructed from its relationship to the surrounding shots. Annotating by physical description is effective; annotating by more complex meaning does not hold up.


HOW DOES IT RELATE?


HOW DOES IT RELATE? Although Media Streams is solving a different problem than ours, the issue of examining and creating meaning from video texts is significant.


HOW DOES IT RELATE? The Media Timeline visually displays an entire clip, which is an interesting model for visualizing a film. Users can quickly scan it to find a relevant point instead of remembering time codes.


HOW DOES IT RELATE? The Media Timeline makes visible the context of single clips within an entire work. Users can mark clips while recognizing and understanding their syntax.


HOW DOES IT RELATE? Icons offer an interesting way of quickly flagging types of clips and could add meaning to the process of marking them. They could also be useful for some analytical tasks, such as identifying elements of film grammar.


REFERENCES: Davis, Marc. "Media Streams: An Iconic Visual Language for Video Representation." In Readings in Human-Computer Interaction: Toward the Year 2000, edited by Ronald M. Baecker, Jonathan Grudin, William A. S. Buxton, and Saul Greenberg, 854-866. 2nd ed. San Francisco: Morgan Kaufmann Publishers, Inc., 1995.

http://acg.media.mit.edu/people/golan/mediastreams/


FIN