Pittsburgh Brain Activity Interpretation Competition: Inferring Experience-Based Cognition from fMRI


Transcript of the competition overview presentation

1

2

Pittsburgh Brain Activity Interpretation Competition Inferring Experience Based Cognition from fMRI

Welcome to the presentation of the Competition Overview

This is a short narrated graphical overview of the competition.

For details, please see our web page at:

http://www.ebc.pitt.edu/competition.html

The guidebook is available at: http://www.ebc.pitt.edu/Competition/PittsburghBrainInterpretationCompetitionGuide2006.pdf

Speaker: Walter Schneider

University of Pittsburgh

3

Goals of Competition

• Advance the understanding of how the brain represents and manipulates information

• Enable scientists from many disciplines and nations to develop new brain interpretation methodologies

• Provide focus, data, and educational materials to expand research in this area

• Give top groups visibility for advances

4

Overview of competition

• Who can compete

– People from many disciplines, nations, and types of positions (students, faculty, scientists, engineers)

– Individuals or groups

– Cross-disciplinary teams

– Classes

(note only one cash prize per institution)

5

Overview of what to do

• You will examine the brain activity and feature ratings of 3 people watching three 20-minute videos from a TV series.

• Develop classifier systems that, for Movie1 and Movie2, predict the related feature rating data.

• Apply that classifier to the Movie3 brain activity data to predict the ratings of the same people watching Movie3.

6

Brain Activity and Eye Movement Data Collection

• Data was collected at the Brain Imaging Research Center in Pittsburgh http://www.birc.pitt.edu/

• Data were collected from 3 subjects viewing three videos, with eye movement data collected concurrently.

• Rating data were collected on 13 features rated by each subject, plus 7 actor ratings, 7 location ratings, and some processed properties of the video (e.g., RMS amplitude of the sound track).


7

Diagram of Video Runs

Drawings by: Sue Schneider

Brain activation data: 34x64x64x860 (one volume per 1.75 s)

Subject and expert rating data: 23x860 (1.75 s per sample), after hemodynamic lag
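As a sense of scale, the data dimensions above can be sketched in Python with NumPy. The arrays here are placeholders only (the real data come from the competition downloads); reshaping the 4D volume into a time x voxels matrix is a common first step for data-mining methods:

```python
import numpy as np

# Shapes follow the slide: 34 slices x 64 x 64 voxels x 860 time points (TR = 1.75 s).
# Placeholder arrays; the actual data come from the competition downloads.
bold = np.zeros((34, 64, 64, 860), dtype=np.float32)   # brain activation data
ratings = np.zeros((23, 860), dtype=np.float32)        # 23 rating vectors x 860 time points

# Flatten space so each time point is one row: a (time x voxels) design matrix
n_voxels = int(np.prod(bold.shape[:3]))
X = bold.reshape(n_voxels, 860).T
print(X.shape)  # (860, 139264)
```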

8

Basic Analysis Steps

1. Load MovieRun2 Sub1 4D functional data
2. Identify movie-active Regions of Interest (ROIs, provided)
3. Create ROI x Time table (provided)
4. Regress the ROI table to feature hemodynamics
5. Do post-process clean-up
6. Submit data
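The "Create ROI x Time table" step might look like the following sketch, assuming an integer ROI mask aligned with the functional volume. The competition provides the actual ROIs and table, so this is purely illustrative:

```python
import numpy as np

def roi_time_table(bold, roi_labels):
    """Average the BOLD signal over the voxels of each ROI.

    bold       : (slices, x, y, time) functional data
    roi_labels : (slices, x, y) integer mask; 0 = background, 1..k = ROIs
    returns    : (k, time) ROI x Time table
    """
    n_time = bold.shape[-1]
    rois = [r for r in np.unique(roi_labels) if r != 0]
    table = np.empty((len(rois), n_time))
    for i, r in enumerate(rois):
        # Boolean indexing selects the ROI's voxels; mean over voxels per time point
        table[i] = bold[roi_labels == r].mean(axis=0)
    return table

# Toy example: 2 ROIs in a tiny volume
bold = np.random.rand(4, 8, 8, 10)
labels = np.zeros((4, 8, 8), dtype=int)
labels[0, :2, :2] = 1
labels[1, :3, :3] = 2
print(roi_time_table(bold, labels).shape)  # (2, 10)
```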

9

Example Linear Prediction Approach

FEATURE(Time) = ACTIVITY(ROI, Time) x BETA(ROI)

To predict each feature, calculate the betas that linearly predict feature strength from the activation table (ROI, Time). For a linear system you can solve for the betas by taking the inverse, where n is the number of time points and k is the number of brain areas.

Note: this approach is meant as an illustration only and can be done as an exercise to learn to work with the data.

[Figure: example BETA(ROI) bar chart for 8 ROIs, values between 0 and 0.8]
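The linear illustration above amounts to ordinary least squares. A minimal NumPy sketch with synthetic data (not the competition's actual data or a recommended method):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 860, 8                              # n time points, k brain areas (ROIs)
activity = rng.standard_normal((n, k))     # ACTIVITY(Time, ROI)
true_beta = rng.standard_normal(k)
feature = activity @ true_beta             # FEATURE(Time); noiseless for illustration

# Solve FEATURE = ACTIVITY @ BETA by least squares (pseudoinverse)
beta, *_ = np.linalg.lstsq(activity, feature, rcond=None)
print(np.allclose(beta, true_beta))  # True

# Apply the trained betas to new (e.g., Movie3) activity to predict the feature
predicted = activity @ beta
```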

10

Some Example Post-Processing Steps

• Predict a new feature: Is Movie Playing.

• If the movie is not playing, set the feature predictions to the not-active state.
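A minimal sketch of this gating step, assuming predictions arrive as a features x time array and a boolean "Is Movie Playing" prediction per time point (the array names are hypothetical):

```python
import numpy as np

def gate_predictions(preds, movie_playing, inactive_value=0.0):
    """Set all feature predictions to the inactive value at time points
    where the 'Is Movie Playing' feature predicts the movie is off."""
    preds = preds.copy()                 # leave the original predictions intact
    preds[:, ~movie_playing] = inactive_value
    return preds

preds = np.ones((13, 6))                 # 13 features x 6 time points
playing = np.array([1, 1, 0, 0, 1, 1], dtype=bool)
print(gate_predictions(preds, playing)[0])  # [1. 1. 0. 0. 1. 1.]
```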

11

Developing/Training Techniques

• Use the data from Movie1 & Movie2 to develop the ability to go from the brain activation data to the behavioral rating data.

Drawings by: Sue Schneider

Movie1: brain activation data 34x64x64x860 (1.75 s)

Movie2: brain activation data 34x64x64x860 (1.75 s)

12

Prediction of Rating Data for Movie3

• You do not know what scenes were presented.

• Use your techniques to predict the subject and expert rating data 23x860 (1.75 s, after hemodynamic lag) from the brain activation data 34x64x64x860 (1.75 s).

• Optional: prediction of the eye movement data, x, y, pupil 3x9000 (0.016 s).

Drawings by: Sue Schneider

13

Example Advanced Analysis

1. Create local flat or structure maps
2. For the top ROIs of each feature, use advanced techniques to predict the feature
3. Regress the advanced data to feature hemodynamics
4. Do post-process clean-up
5. Submit data

14

Example: Identify Areas Active When Hearing Speech

Formisano et al. (in preparation)

15

Actor: Identifying the Speaker

Formisano et al. (in preparation)

16

Create Feature-Specific Prediction

Example: prediction of an individual actor's speech

Predicted Feature(A1) = Distance(A1) - Average (Distance(A2,A3,A4))

Predict Actor = Max of (A1, A2, A3, A4)
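The two formulas above can be sketched directly. The interpretation of Distance as a per-actor discriminant value (larger meaning more evidence for that actor) is an assumption here:

```python
import numpy as np

def actor_scores(distance):
    """distance : (4, time) per-actor discriminant values.
    Each actor's score is its own value minus the mean of the other three,
    per the slide: Predicted Feature(A1) = Distance(A1) - Average(Distance(A2, A3, A4))."""
    total = distance.sum(axis=0)
    other_mean = (total - distance) / (distance.shape[0] - 1)
    return distance - other_mean

def predict_actor(distance):
    # Predict Actor = Max of (A1, A2, A3, A4)
    return actor_scores(distance).argmax(axis=0)

# Toy discriminant values: 4 actors x 2 time points
d = np.array([[0.9, 0.1],
              [0.2, 0.8],
              [0.1, 0.1],
              [0.3, 0.2]])
print(predict_actor(d))  # [0 1]
```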

17

Keep this a Collegial Competition

• You might work with multiple people at your site.

• We will be providing resources (e.g., readings, additional formats, routines) that people wish to share.

• If you have a processing step you would like others to try and comment on, please contribute it.

• We will be doing special events (web conferences) if requested.

• Make suggestions for this and next year's competition.

18

What are the Feature Vectors

• Subject-rated vectors (all scored in competition)

– Content: Body Parts, Environmental Sounds, Faces, Food, Language, Laughter, Motion, Music, Tools

– Reaction: Amusement, Arousal, Attention, Sadness

• Content questions (optionally scored)

– Actors

– Locations

• Quantitative measurements of content (not scored)

– Auditory RMS amplitude

– Visual mean pixel brightness

19

[Figure: Subject 1, Movie 1 feature rating data (Season 1), shown raw and as hemodynamic-lag output. Features plotted: Amusement, Attention, Arousal, Body Parts, Environmental Sounds, Faces, Food, Language, Laughter, Motion, Music, Sadness, Tools, Blank. Time points 0-2000; vertical axis 0-16.]

20

Blood Oxygenation Level Dependent (BOLD) contrast fMRI

Neural pathway -> Hemodynamics -> MR scanner

[Figure: percent signal change of the BOLD response to a stimulus over time (s): onset delay of roughly 0.5-2 s, peak near 4 s, return to baseline by about 10 s.]
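This lag is why the rating vectors are aligned "after hemodynamic lag". A sketch of the effect, using an assumed gamma-shaped HRF (the competition does not prescribe an HRF model; the shape parameter here is illustrative):

```python
import numpy as np

TR = 1.75  # seconds per volume, matching the competition data

def gamma_hrf(tr=TR, duration=30.0, shape=6.0):
    """A simple gamma-shaped hemodynamic response function (an assumption,
    not the competition's model). Peaks near (shape - 1) seconds."""
    t = np.arange(0.0, duration, tr)
    h = t ** (shape - 1) * np.exp(-t)
    return h / h.sum()          # normalize to unit area

# A brief stimulus block, sampled at the TR
stimulus = np.zeros(40)
stimulus[5:10] = 1.0

# The measured BOLD signal lags the stimulus: convolve with the HRF
bold = np.convolve(stimulus, gamma_hrf())[:len(stimulus)]
print(int(bold.argmax()) > int(stimulus.argmax()))  # True: BOLD peaks after the stimulus
```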

21

Pre/Post Hemodynamic Lag Effect

• Examples:

– Luminance effect

– Auditory RMS effect

22

Keep in Contact

• We will provide postings on the discussion board and send major notices via email.

• We will be posting updates on procedures and corrections of documents.

23

Submit Predicted Movie3 data to competition before May 1, 2006

• Note you are only allowed to submit the Movie3 predictions 3 times.

• Test out the methods on Movie1 and Movie2 data sets.

24

Comment on Selection of Video

The video was selected with the intent to provide a test bed for data mining methods. We used excerpts from the Home Improvement TV series by Disney Studios (http://www.ultimatedisney.com/tvshows.html, http://tvplex.go.com/touchstone/homeimprovement/; for a description of the series see http://www.morepower.com/). You can obtain the source video from local stores or Amazon.com (http://www.amazon.com/gp/product/B000AJJNIQ/102-8197023-4912152?n=130). The series provided 36 hours of material to choose from, with fairly well-segmented scenes in a common professional video recording. This TV video provided long shots and repeated use of a small number of actors in a small number of sets, allowing common elements to reoccur. Also, the materials (character types, settings, events, objects) are typical of what the subjects would be expected to have experience with (in contrast to, for example, weapons in action videos). The specific scenes were selected through a series of rating efforts that rank-ordered clips presenting the categories uniquely and in combination, and the top clips were selected to make Movie1, Movie2, and Movie3.

Home Improvement Season 1

Home Improvement Season 2

Home Improvement Season 3

25

Example functional data sets

DICOM slices Extracted Brain Inflated Brain Flat Mapped Brain

Raw Data Pre-processed data

We provide multiple formats to minimize start-up time and allow those without specific brain imaging experience to benefit from expert preprocessing. For example, some computer science data-mining students who enter the competition may have no brain imaging experience. Given that, we will make the data available preprocessed (and raw) so they do not have to create software for all the data handling. We will also provide raw data, allowing participants to apply whatever methods they wish to analyze the data.

DICOM, Analyze or Brain Voyager formats provided

26

Suggested Entry Methods

• We recommend that you start by predicting a single feature (e.g., faces) for a single subject (subject 1) on a given movie (Movie2), developing methods and then adding additional features.

• Test the methods developed on Movie2 at predicting the Movie1 data, improving your methods.

• Notes

– You can submit entries with subsets of the features predicted (though the accuracy on the others will be treated as zero).

– Teams can divide the features and submit as a joint effort (e.g., each member works on 4 features but you submit a combined effort). Note: the team will need to create a joint document detailing the methods and determine how to share a prize if awarded.

– Watch the Competition Message Board for updates of materials, readings, and suggestions.

27

Competition Board of Scientists

28

Credit & Contact

• For questions, e-mail [email protected]

• This research is being managed by the Experience Based Cognition project at the University of Pittsburgh. Grant number: N00014-05-1-0881

• Experience Based Cognition team members:

– PI: Walter Schneider; Co-PIs: Greg Siegle, Mark Wheeler, and Kwan-Jin Jung (University of Pittsburgh)

– Rainer Goebel and Elia Formisano (Maastricht University, Netherlands)

– Rob Goldberg (University of Pennsylvania)

– Tom Landauer and Peter Foltz (Pearson Knowledge Technologies)

– Daniel Levin (Vanderbilt University)

• Pittsburgh technical staff: Maureen McHugo, Ikumi Suzuki, Lori Koerbel, Lena Gemmer, Kate Fissell, Dan Jones, Sharanya Nama

Presentation includes images from Elia Formisano & Rainer Goebel Maastricht University, Netherlands